Knee point search using cascading top-k sorting with minimized time complexity.
Wang, Zheng; Tseng, Shian-Shyong
2013-01-01
Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using the cascading top-k sorting when a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps. And in each step an optimization problem of the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step is dependent on that of the afterwards steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
NMR diffusion simulation based on conditional random walk.
Gudbjartsson, H; Patz, S
1995-01-01
The authors introduce here a new, very fast, simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p.10). In earlier NMR-diffusion simulation methods, such as the finite difference method (FD), the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, although, in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step the computation time can therefore be reduced. Finally the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
NASA Technical Reports Server (NTRS)
Eppink, Jenna L.
2017-01-01
Stereo particle image velocimetry measurements were performed downstream of a forward-facing step in a stationary-crossflow dominated flow. Three different step heights were studied with the same leading-edge roughness configuration to determine the effect of the step on the evolution of the stationary-crossflow. Above the critical step height, which is approximately 68% of the boundary-layer thickness at the step, the step caused a significant increase in the growth of the stationary crossflow. For the largest step height studied (68%), premature transition occurred shortly downstream of the step. The stationary crossflow amplitude only reached approximately 7% of U(sub e) in this case, which suggests that transition does not occur via the high-frequency secondary instabilities typically associated with stationary crossflow transition. The next largest step of 60% delta still caused a significant impact on the growth of the stationary crossflow downstream of the step, but the amplitude eventually returned to that of the baseline case, and the transition front remained the same. The smallest step height (56%) only caused a small increase in the stationary crossflow amplitude and no change in the transition front. A final case was studied in which the roughness on the leading edge of the model was enhanced for the lowest step height case to determine the impact of the stationary crossflow amplitude on transition. The stationary crossflow amplitude was increased by approximately four times, which resulted in premature transition for this step height. However, some notable differences were observed in the behavior of the stationary crossflow mode, which indicate that the interaction mechanism which results in the increased growth of the stationary crossflow downstream of the step may be different in this case compared to the larger step heights.
Impact of SCBA size and fatigue from different firefighting work cycles on firefighter gait.
Kesler, Richard M; Bradley, Faith F; Deetjen, Grace S; Angelini, Michael J; Petrucci, Matthew N; Rosengren, Karl S; Horn, Gavin P; Hsiao-Wecksler, Elizabeth T
2018-04-04
Risk of slips, trips and falls in firefighters maybe influenced by the firefighter's equipment and duration of firefighting. This study examined the impact of a four self-contained breathing apparatus (SCBA) three SCBA of increasing size and a prototype design and three work cycles one bout (1B), two bouts with a five-minute break (2B) and two bouts back-to-back (BB) on gait in 30 firefighters. Five gait parameters (double support time, single support time, stride length, step width and stride velocity) were examined pre- and post-firefighting activity. The two largest SCBA resulted in longer double support times relative to the smallest SCBA. Multiple bouts of firefighting activity resulted in increased single and double support time and decreased stride length, step width and stride velocity. These results suggest that with larger SCBA or longer durations of activity, firefighters may adopt more conservative gait patterns to minimise fall risk. Practitioner Summary: The effects of four self-contained breathing apparatus (SCBA) and three work cycles on five gait parameters were examined pre- and post-firefighting activity. Both SCBA size and work cycle affected gait. The two largest SCBA resulted in longer double support times. Multiple bouts of activity resulted in more conservative gait patterns.
NASA Astrophysics Data System (ADS)
Mikkili, Suresh; Panda, Anup Kumar; Prattipati, Jayanthi
2015-06-01
Nowadays the researchers want to develop their model in real-time environment. Simulation tools have been widely used for the design and improvement of electrical systems since the mid twentieth century. The evolution of simulation tools has progressed in step with the evolution of computing technologies. In recent years, computing technologies have improved dramatically in performance and become widely available at a steadily decreasing cost. Consequently, simulation tools have also seen dramatic performance gains and steady cost decreases. Researchers and engineers now have the access to affordable, high performance simulation tools that were previously too cost prohibitive, except for the largest manufacturers. This work has introduced a specific class of digital simulator known as a real-time simulator by answering the questions "what is real-time simulation", "why is it needed" and "how it works". The latest trend in real-time simulation consists of exporting simulation models to FPGA. In this article, the Steps involved for implementation of a model from MATLAB to REAL-TIME are provided in detail.
Short‐term time step convergence in a climate model
Rasch, Philip J.; Taylor, Mark A.; Jablonowski, Christiane
2015-01-01
Abstract This paper evaluates the numerical convergence of very short (1 h) simulations carried out with a spectral‐element (SE) configuration of the Community Atmosphere Model version 5 (CAM5). While the horizontal grid spacing is fixed at approximately 110 km, the process‐coupling time step is varied between 1800 and 1 s to reveal the convergence rate with respect to the temporal resolution. Special attention is paid to the behavior of the parameterized subgrid‐scale physics. First, a dynamical core test with reduced dynamics time steps is presented. The results demonstrate that the experimental setup is able to correctly assess the convergence rate of the discrete solutions to the adiabatic equations of atmospheric motion. Second, results from full‐physics CAM5 simulations with reduced physics and dynamics time steps are discussed. It is shown that the convergence rate is 0.4—considerably slower than the expected rate of 1.0. Sensitivity experiments indicate that, among the various subgrid‐scale physical parameterizations, the stratiform cloud schemes are associated with the largest time‐stepping errors, and are the primary cause of slow time step convergence. While the details of our findings are model specific, the general test procedure is applicable to any atmospheric general circulation model. The need for more accurate numerical treatments of physical parameterizations, especially the representation of stratiform clouds, is likely common in many models. The suggested test technique can help quantify the time‐stepping errors and identify the related model sensitivities. PMID:27660669
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Jianxin; Mei, Deqing, E-mail: meidq-127@zju.edu.cn; Yang, Keji
2014-08-14
In existing ultrasonic transportation methods, the long-range transportation of micro-particles is always realized in step-by-step way. Due to the substantial decrease of the driving force in each step, the transportation is lower-speed and stair-stepping. To improve the transporting velocity, a non-stepping ultrasonic transportation approach is proposed. By quantitatively analyzing the acoustic potential well, an optimal region is defined as the position, where the largest driving force is provided under the condition that the driving force is simultaneously the major component of an acoustic radiation force. To keep the micro-particle trapped in the optimal region during the whole transportation process, anmore » approach of optimizing the phase-shifting velocity and phase-shifting step is adopted. Due to the stable and large driving force, the displacement of the micro-particle is an approximately linear function of time, instead of a stair-stepping function of time as in the existing step-by-step methods. An experimental setup is also developed to validate this approach. Long-range ultrasonic transportations of zirconium beads with high transporting velocity were realized. The experimental results demonstrated that this approach is an effective way to improve transporting velocity in the long-range ultrasonic transportation of micro-particles.« less
Monte Carlo Sampling in Fractal Landscapes
NASA Astrophysics Data System (ADS)
Leitão, Jorge C.; Lopes, J. M. Viana Parente; Altmann, Eduardo G.
2013-05-01
We design a random walk to explore fractal landscapes such as those describing chaotic transients in dynamical systems. We show that the random walk moves efficiently only when its step length depends on the height of the landscape via the largest Lyapunov exponent of the chaotic system. We propose a generalization of the Wang-Landau algorithm which constructs not only the density of states (transient time distribution) but also the correct step length. As a result, we obtain a flat-histogram Monte Carlo method which samples fractal landscapes in polynomial time, a dramatic improvement over the exponential scaling of traditional uniform-sampling methods. Our results are not limited by the dimensionality of the landscape and are confirmed numerically in chaotic systems with up to 30 dimensions.
Monte, Andrea; Muollo, Valentina; Nardello, Francesca; Zamparo, Paola
2017-02-01
The purpose of this study was to investigate the changes in selected biomechanical variables in 80-m maximal sprint runs while imposing changes in step frequency (SF) and to investigate if these adaptations differ based on gender and training level. A total of 40 athletes (10 elite men and 10 women, 10 intermediate men and 10 women) participated in this study; they were requested to perform 5 trials at maximal running speed (RS): at the self-selected frequency (SF s ) and at SF ±15% and ±30%SF s . Contact time (CT) and flight time (FT) as well as step length (SL) decreased with increasing SF, while k vert increased with it. At SF s , k leg was the lowest (a 20% decrease at ±30%SF s ), while RS was the largest (a 12% decrease at ±30%SF s ). Only small changes (1.5%) in maximal vertical force (F max ) were observed as a function of SF, but maximum leg spring compression (ΔL) was largest at SF s and decreased by about 25% at ±30%SF s . Significant differences in F max , Δy, k leg and k vert were observed as a function of skill and gender (P < 0.001). Our results indicate that RS is optimised at SF s and that, while k vert follows the changes in SF, k leg is lowest at SF s .
Mehdizadeh, Sina; Sanjari, Mohammad Ali
2017-11-07
This study aimed to determine the effect of added noise, filtering and time series length on the largest Lyapunov exponent (LyE) value calculated for time series obtained from a passive dynamic walker. The simplest passive dynamic walker model comprising of two massless legs connected by a frictionless hinge joint at the hip was adopted to generate walking time series. The generated time series was used to construct a state space with the embedding dimension of 3 and time delay of 100 samples. The LyE was calculated as the exponential rate of divergence of neighboring trajectories of the state space using Rosenstein's algorithm. To determine the effect of noise on LyE values, seven levels of Gaussian white noise (SNR=55-25dB with 5dB steps) were added to the time series. In addition, the filtering was performed using a range of cutoff frequencies from 3Hz to 19Hz with 2Hz steps. The LyE was calculated for both noise-free and noisy time series with different lengths of 6, 50, 100 and 150 strides. Results demonstrated a high percent error in the presence of noise for LyE. Therefore, these observations suggest that Rosenstein's algorithm might not perform well in the presence of added experimental noise. Furthermore, findings indicated that at least 50 walking strides are required to calculate LyE to account for the effect of noise. Finally, observations support that a conservative filtering of the time series with a high cutoff frequency might be more appropriate prior to calculating LyE. Copyright © 2017 Elsevier Ltd. All rights reserved.
Crenshaw, Jeremy R; Rosenblatt, Noah J; Hurt, Christopher P; Grabiner, Mark D
2012-01-03
This study evaluated the discriminant capability of stability measures, trunk kinematics, and step kinematics to classify successful and failed compensatory stepping responses. In addition, the shared variance between stability measures, step kinematics, and trunk kinematics is reported. The stability measures included the anteroposterior distance (d) between the body center of mass and the stepping limb toe, the margin of stability (MOS), as well as time-to-boundary considering velocity (TTB(v)), velocity and acceleration (TTB(a)), and MOS (TTB(MOS)). Kinematic measures included trunk flexion angle and angular velocity, step length, and the time after disturbance onset of recovery step completion. Fourteen young adults stood on a treadmill that delivered surface accelerations necessitating multiple forward compensatory steps. Thirteen subjects fell from an initial disturbance, but recovered from a second, identical disturbance. Trunk flexion velocity at completion of the first recovery step and trunk flexion angle at completion of the second step had the greatest overall classification of all measures (92.3%). TTB(v) and TTB(a) at completion of both steps had the greatest classification accuracy of all stability measures (80.8%). The length of the first recovery step (r ≤ 0.70) and trunk flexion angle at completion of the second recovery step (r ≤ -0.54) had the largest correlations with stability measures. Although TTB(v) and TTB(a) demonstrated somewhat smaller discriminant capabilities than trunk kinematics, the small correlations between these stability measures and trunk kinematics (|r| ≤ 0.52) suggest that they reflect two important, yet different, aspects of a compensatory stepping response. Copyright © 2011 Elsevier Ltd. All rights reserved.
Application of largest Lyapunov exponent analysis on the studies of dynamics under external forces
NASA Astrophysics Data System (ADS)
Odavić, Jovan; Mali, Petar; Tekić, Jasmina; Pantić, Milan; Pavkov-Hrvojević, Milica
2017-06-01
Dynamics of driven dissipative Frenkel-Kontorova model is examined by using largest Lyapunov exponent computational technique. Obtained results show that besides the usual way where behavior of the system in the presence of external forces is studied by analyzing its dynamical response function, the largest Lyapunov exponent analysis can represent a very convenient tool to examine system dynamics. In the dc driven systems, the critical depinning force for particular structure could be estimated by computing the largest Lyapunov exponent. In the dc+ac driven systems, if the substrate potential is the standard sinusoidal one, calculation of the largest Lyapunov exponent offers a more sensitive way to detect the presence of Shapiro steps. When the amplitude of the ac force is varied the behavior of the largest Lyapunov exponent in the pinned regime completely reflects the behavior of Shapiro steps and the critical depinning force, in particular, it represents the mirror image of the amplitude dependence of critical depinning force. This points out an advantage of this technique since by calculating the largest Lyapunov exponent in the pinned regime we can get an insight into the dynamics of the system when driving forces are applied. Additionally, the system is shown to be not chaotic even in the case of incommensurate structures and large amplitudes of external force, which is a consequence of overdampness of the model and the Middleton's no passing rule.
OZONE MONITORING, MAPPING, AND PUBLIC OUTREACH ...
The U.S. EPA had developed a handbook to help state and local government officials implement ozone monitoring, mapping, and outreach programs. The handbook, called Ozone Monitoring, Mapping, and Public Outreach: Delivering Real-Time Ozone Information to Your Community, provides step-by-step instructions on how to: Design, site, operate, and maintain an ozone monitoring network. Install, configure, and operate the Automatic Data Transfer System Use MapGen software to create still-frame and animated ozone maps. Develop and outreach plan to communicate information about real-time ozone levels and their health effects to the public.This handbook was developed by EPA's EMPACT program. The program takes advantage of new technologies that make it possible to provide environmental information to the public in near real time. EMPACT is working with the 86 largest metropolitan areas of the country to help communities in these areas: Collect, manage and distribute time-relevant environmental information. Provide their residents with easy-to-understand information they can use in making informed, day-to-day decisions. Information
Lowry, Kristin A; Carrel, Andrew J; McIlrath, Jessica M; Smiley-Oyen, Ann L
2010-04-01
To determine if gait stability, as measured by harmonic ratios (HRs) derived from trunk accelerations, is improved during 3 amplitude-based cueing strategies (visual cues, lines on the floor 20% longer than preferred step length; verbal cues, experimenter saying "big step" every third; cognitive cues, participants think "big step") in people with Parkinson's disease. Gait analysis with a triaxial accelerometer. University research laboratory. A volunteer sample of persons with Parkinson's disease (N=7) (Hoehn and Yahr stages 2-3). Not applicable Gait stability was quantified by anterior-posterior (AP), vertical, and mediolateral (ML) HRs; higher ratios indicated improved gait stability. Spatiotemporal parameters assessed were walking speed, stride length, cadence, and the coefficient of variation for stride time. Of the amplitude-based cues, verbal and cognitive resulted in the largest improvements in the AP HR (P=.018) with a trend in the vertical HR as well as the largest improvements in both stride length and velocity. None of the cues positively affected stability in the ML direction. Descriptively, all participants increased speed and stride length, but only those in Hoehn and Yahr stage 2 (not Hoehn and Yahr stage 3) showed improvements in HRs. Cueing for "big steps" is effective for improving gait stability in the AP direction with modest improvements in the vertical direction, but it is not effective in the ML direction. These data support the use of trunk acceleration measures in assessing the efficacy of common therapeutic interventions. Copyright 2010 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Analysis of Partitioned Methods for the Biot System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bukac, Martina; Layton, William; Moraiti, Marina
2015-02-18
In this work, we present a comprehensive study of several partitioned methods for the coupling of flow and mechanics. We derive energy estimates for each method for the fully-discrete problem. We write the obtained stability conditions in terms of a key control parameter defined as a ratio of the coupling strength and the speed of propagation. Depending on the parameters in the problem, give the choice of the partitioned method which allows the largest time step. (C) 2015 Wiley Periodicals, Inc.
Studying relaxation phenomena via effective master equations
NASA Astrophysics Data System (ADS)
Chan, David; Wan, Jones T. K.; Chu, L. L.; Yu, K. W.
2000-04-01
The real-time dynamics of various relaxation phenomena can be conveniently formulated by a master equation with the enumeration of transition rates between given classes of conformations. To study the relaxation time towards equilibrium, it suffices to solve for the second largest eigenvalue of the resulting eigenvalue equation. Generally speaking, there is no analytic solution for the dynamic equation. Mean-field approaches generally yield misleading results while the presumably exact Monte-Carlo methods require prohibitive time steps in most real systems. In this work, we propose an exact decimation procedure for reducing the number of conformations significantly, while there is no loss of information, i.e., the reduced (or effective) equation is an exact transformed version of the original one. However, we have to pay the price: the initial Markovianity of the evolution equation is lost and the reduced equation contains memory terms in the transition rates. Since the transformed equation has significantly reduced number of degrees of freedom, the systems can readily be diagonalized by iterative means, to obtain the exact second largest eigenvalue and hence the relaxation time. The decimation method has been applied to various relaxation equations with generally desirable results. The advantages and limitations of the method will be discussed.
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in multiplication of a vector by a matrix were recorded to three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second best iteration on data took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
Krajczár, Károly; Tóth, Vilmos; Nyárády, Zoltán; Szabó, Gyula
2005-06-01
The aim of the authors' study was to compare the remaining root canal wall thickness and the preparation time of root canals, prepared either with step-back technique, or with GT Rotary File, an engine driven nickel-titanium rotary instrument system. Twenty extracted molars were decoronated. Teeth were divided in two groups. In Group 1 root canals were prepared with step-back technique. In Group 2 GT Rotary File System was utilized. Preoperative vestibulo-oral X-ray pictures were taken from all teeth with radiovisiograph (RVG). The final preparations at the mesiobuccal canals (MB) were performed with size #30 and palatinal/distal canals with size #40 instruments. Postoperative RVG pictures were taken ensuring the preoperative positioning. The working time was measured in seconds during each preparation. The authors also assessed the remaining root canal wall thickness at 3, 6 and 9 mm from the radiological apex, comparing the width of the canal walls of the vestibulo-oral projections on pre- and postoperative RVG pictures both mesially and buccally. The ratios of the residual and preoperative root canal wall thickness were calculated and compared. The largest difference was found at the MB canals of the coronal and middle third level of the root, measured on the distal canal wall. The ratio of the remaining dentin wall thickness at the coronal and the middle level in the case of step-back preparation was 0.605 and 0.754, and 0.824 and 0.895 in the cases of GT files respectively. The preparation time needed for GT Rotary File System was altogether 68.7% (MB) and 52.5% (D/P canals) of corresponding step-back preparation times. The use of GT Rotary File with comparison of standard step-back method resulted in a shortened preparation time and excessive damage of the coronal part of the root canal could be avoided.
Validity of Activity Monitor Step Detection Is Related to Movement Patterns.
Hickey, Amanda; John, Dinesh; Sasaki, Jeffer E; Mavilia, Marianna; Freedson, Patty
2016-02-01
There is a need to examine step-counting accuracy of activity monitors during different types of movements. The purpose of this study was to compare activity monitor and manually counted steps during treadmill and simulated free-living activities and to compare the activity monitor steps to the StepWatch (SW) in a natural setting. Fifteen participants performed laboratory-based treadmill (2.4, 4.8, 7.2 and 9.7 km/h) and simulated free-living activities (eg, cleaning room) while wearing an activPAL, Omron HJ720-ITC, Yamax Digi- Walker SW-200, 2 ActiGraph GT3Xs (1 in "low-frequency extension" [AGLFE] and 1 in "normal-frequency" mode), an ActiGraph 7164, and a SW. Participants also wore monitors for 1-day in their free-living environment. Linear mixed models identified differences between activity monitor steps and the criterion in the laboratory/free-living settings. Most monitors performed poorly during treadmill walking at 2.4 km/h. Cleaning a room had the largest errors of all simulated free-living activities. The accuracy was highest for forward/rhythmic movements for all monitors. In the free-living environment, the AGLFE had the largest discrepancy with the SW. This study highlights the need to verify step-counting accuracy of activity monitors with activities that include different movement types/directions. This is important to understand the origin of errors in step-counting during free-living conditions.
A Comparison of Six Repair Scheduling Policies for the P3 Aircraft.
1988-03-01
each type component i: RHO(i) = LAMBDA(i) / SRATE(i) LINEUPti) - RHO(i) x COUNT(i) Step 14c: Sort components by LINEUP (i), reorder position in line in...favor of the largest LINEUP (i). Return to step 7. Dynamic 3 Model Modifications: Step 14a: Count the number of operating parts of each component i...STOCK(i)). Step 14b: Assign a priority to each component type based on the count of current stock in step 14a: LINEUP (i) < LINEUP (J) iff STOCK(i
Capillary fluctuations of surface steps: An atomistic simulation study for the model Cu(111) system
NASA Astrophysics Data System (ADS)
Freitas, Rodrigo; Frolov, Timofey; Asta, Mark
2017-10-01
Molecular dynamics (MD) simulations are employed to investigate the capillary fluctuations of steps on the surface of a model metal system. The fluctuation spectrum, characterized by the wave number (k ) dependence of the mean squared capillary-wave amplitudes and associated relaxation times, is calculated for 〈110 〉 and 〈112 〉 steps on the {111 } surface of elemental copper near the melting temperature of the classical potential model considered. Step stiffnesses are derived from the MD results, yielding values from the largest system sizes of (37 ±1 ) meV/A ˚ for the different line orientations, implying that the stiffness is isotropic within the statistical precision of the calculations. The fluctuation lifetimes are found to vary by approximately four orders of magnitude over the range of wave numbers investigated, displaying a k dependence consistent with kinetics governed by step-edge mediated diffusion. The values for step stiffness derived from these simulations are compared to step free energies for the same system and temperature obtained in a recent MD-based thermodynamic-integration (TI) study [Freitas, Frolov, and Asta, Phys. Rev. B 95, 155444 (2017), 10.1103/PhysRevB.95.155444]. Results from the capillary-fluctuation analysis and TI calculations yield statistically significant differences that are discussed within the framework of statistical-mechanical theories for configurational contributions to step free energies.
Baco, Eduard; Ukimura, Osamu; Rud, Erik; Vlatkovic, Ljiljana; Svindland, Aud; Aron, Manju; Palmer, Suzanne; Matsugasumi, Toru; Marien, Arnaud; Bernhard, Jean-Christophe; Rewcastle, John C; Eggesbø, Heidi B; Gill, Inderbir S
2015-04-01
Prostate biopsies targeted by elastic fusion of magnetic resonance (MR) and three-dimensional (3D) transrectal ultrasound (TRUS) images may allow accurate identification of the index tumor (IT), defined as the lesion with the highest Gleason score or the largest volume or extraprostatic extension. To determine the accuracy of MR-TRUS image-fusion biopsy in characterizing ITs, as confirmed by correlation with step-sectioned radical prostatectomy (RP) specimens. Retrospective analysis of 135 consecutive patients who sequentially underwent pre-biopsy MR, MR-TRUS image-fusion biopsy, and robotic RP at two centers between January 2010 and September 2013. Image-guided biopsies of MR-suspected IT lesions were performed with tracking via real-time 3D TRUS. The largest geographically distinct cancer focus (IT lesion) was independently registered on step-sectioned RP specimens. A validated schema comprising 27 regions of interest was used to identify the IT center location on MR images and in RP specimens, as well as the location of the midpoint of the biopsy trajectory, and variables were correlated. The concordance between IT location on biopsy and RP specimens was 95% (128/135). The coefficient for correlation between IT volume on MRI and histology was r=0.663 (p<0.001). The maximum cancer core length on biopsy was weakly correlated with RP tumor volume (r=0.466, p<0.001). The concordance of primary Gleason pattern between targeted biopsy and RP specimens was 90% (115/128; κ=0.76). The study limitations include retrospective evaluation of a selected patient population, which limits the generalizability of the results. Use of MR-TRUS image fusion to guide prostate biopsies reliably identified the location and primary Gleason pattern of the IT lesion in >90% of patients, but showed limited ability to predict cancer volume, as confirmed by step-sectioned RP specimens. Biopsies targeted using magnetic resonance images combined with real-time three-dimensional transrectal ultrasound allowed us to reliably identify the spatial location of the most important tumor in prostate cancer and characterize its aggressiveness. Copyright © 2014 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Solar forcing of the stream flow of a continental scale South American river.
Mauas, Pablo J D; Flamenco, Eduardo; Buccino, Andrea P
2008-10-17
Solar forcing on climate has been reported in several studies although the evidence so far remains inconclusive. Here, we analyze the stream flow of one of the largest rivers in the world, the Paraná in southeastern South America. For the last century, we find a strong correlation with the sunspot number, in multidecadal time scales, and with larger solar activity corresponding to larger stream flow. The correlation coefficient is r=0.78, significant to a 99% level. In shorter time scales we find a strong correlation with El Niño. These results are a step toward flood prediction, which might have great social and economic impacts.
Evaluation of subgrid-scale turbulence models using a fully simulated turbulent flow
NASA Technical Reports Server (NTRS)
Clark, R. A.; Ferziger, J. H.; Reynolds, W. C.
1977-01-01
An exact turbulent flow field was calculated on a three-dimensional grid with 64 points on a side. The flow simulates grid-generated turbulence from wind tunnel experiments. In this simulation, the grid spacing is small enough to include essentially all of the viscous energy dissipation, and the box is large enough to contain the largest eddy in the flow. The method is limited to low-turbulence Reynolds numbers, in our case R sub lambda = 36.6. To complete the calculation using a reasonable amount of computer time with reasonable accuracy, a third-order time-integration scheme was developed which runs at about the same speed as a simple first-order scheme. It obtains this accuracy by saving the velocity field and its first-time derivative at each time step. Fourth-order accurate space-differencing is used.
Combating the Reliability Challenge of GPU Register File at Low Supply Voltage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Jingweijia; Song, Shuaiwen; Yan, Kaige
Supply voltage reduction is an effective approach to significantly reduce GPU energy consumption. As the largest on-chip storage structure, the GPU register file becomes the reliability hotspot that prevents further supply voltage reduction below the safe limit (Vmin) due to process variation effects. This work addresses the reliability challenge of the GPU register file at low supply voltages, which is an essential first step for aggressive supply voltage reduction of the entire GPU chip. We propose GR-Guard, an architectural solution that leverages long register dead time to enable reliable operations from unreliable register file at low voltages.
Elite sprinting: are athletes individually step-frequency or step-length reliant?
Salo, Aki I T; Bezodis, Ian N; Batterham, Alan M; Kerwin, David G
2011-06-01
The aim of this study was to investigate the step characteristics among the very best 100-m sprinters in the world to understand whether the elite athletes are individually more reliant on step frequency (SF) or step length (SL). A total of 52 male elite-level 100-m races were recorded from publicly available television broadcasts, with 11 analyzed athletes performing in 10 or more races. For each run of each athlete, the average SF and SL over the whole 100-m distance was analyzed. To determine any SF or SL reliance for an individual athlete, the 90% confidence interval (CI) for the difference between the SF-time versus SL-time relationships was derived using a criterion nonparametric bootstrapping technique. Athletes performed these races with various combinations of SF and SL reliance. Athlete A10 yielded the highest positive CI difference (SL reliance), with a value of 1.05 (CI = 0.50-1.53). The largest negative difference (SF reliance) occurred for athlete A11 as -0.60, with the CI range of -1.20 to 0.03. Previous studies have generally identified only one of these variables to be the main reason for faster running velocities. However, this study showed that there is a large variation of performance patterns among the elite athletes and, overall, SF or SL reliance is a highly individual occurrence. It is proposed that athletes should take this reliance into account in their training, with SF-reliant athletes needing to keep their neural system ready for fast leg turnover and SL-reliant athletes requiring more concentration on maintaining strength levels.
Measurement of Ohms Law and Transport with Two Interacting Flux Ropes
NASA Astrophysics Data System (ADS)
Gekelman, Walter; Dehaas, Tim; Vincena, Steve; Daughton, Bill
2016-10-01
Two flux ropes, which are kink unstable, and repeatedly collide, were generated in a laboratory magnetoplasma. All the electric field terms in Ohms law: - ∇ ϕ -∂/A-> ∂ t ,1/ne , J-> × B-> , -1/ne ∇ P , u-> × B-> were measured at 48,000 spatial locations and thousands of time steps. All quantities oscillate at the flux rope collision frequency. The resistivity was derived from these quantities and could locally be 30 times the classical value. The resistivity, which was evaluated by integrating the electric field and current along 3D magnetic field is not largest at the quasi-seperatrix layer (QSL) where reconnection occurs. The relative size and spatial distribution of the Ohms law terms will be presented. The reconnection rate, Ξ = ∫ E-> . dl-> was largest near the QSL and could be positive or negative. Regions of negative resistivity exists (the volume integrated resistivity is positive) indicating dynamo action or the possibility of a non-local Ohms law. Volumetric temperature and density measurements are used to estimate electron heat transport and particle diffusion across the magnetic field. Work supported by UC office of the President (LANL-UCLA Grant) and done at the BAPSF which is supported by NSF-DOE.
Heintze, Siegward D; Forjanic, Monika; Roulet, François-Jean
2007-08-01
Using an optical sensor, to automatically evaluate the marginal seal of restorations placed with 21 adhesive systems of all four adhesive categories in cylindrical cavities of bovine dentin applying different outcome variables, and to evaluate their discriminatory power. Twenty-one adhesive systems were evaluated: three 3-step etch-and-rinse systems, three 2-step etch-and-rinse systems, five 2-step self-etching systems, and ten 1-step self-etching systems. All adhesives were applied in cylindrical cavities in bovine dentin together with Tetric Ceram (n=8). In the control group, no adhesive system was used. After 24 h of storage in water at 37 degrees C, the surface was polished with 4000-grit SiC paper, and epoxy resin replicas were produced. An optical sensor (FRT MicroProf) created 100 profiles of the restoration margin, and an algorithm detected gaps and calculated their depths and widths. The following evaluation criteria were used: percentage of specimens without gaps, the percentage of gap-free profiles in relation to all profiles per specimen, mean gap width, mean gap depth, largest gap, modified marginal integrity index MI. The statistical analysis was carried out on log-transformed data for all variables with ANOVA and post-hoc Tukey's test for multiple comparisons. The correlation between the variables was tested with regression analysis, and the pooled data accordingto the four adhesive categories were compared by applying the Mann-Whitney nonparametric test (p < 0.05). For all the variables that characterized the marginal adaptation, there was a great variation from material to material. In general, the etch-and-rinse adhesive systems demonstrated the best marginal adaptation, followed by the 2-step self-etching and the 1-step self-etching adhesives; the latter showed the highest variability in test results between materials and within the same material. The only exception to this rule was Xeno IV, which showed a marginal adaptation that was comparable to that of the best 3-step etch-and-rinse systems. Except for the variables "largest gap" and "mean gap depth", all the other variables had a similar ability to discriminate between materials. Pooled data according to the four adhesive categories revealed statistically significant differences between the one-step self-etching systems and the other three systems as well as between two-step self-etching and three-step etch-and-rinse systems. With one exception, the one-step self-etching systems yielded the poorest marginal adaptation results and the highest variability between materials and within the same material. Except for the variable "largest gap", the percentage of continuous margin, mean gap width, mean gap depth, and the marginal integrity index MI were closely related to one another and showed--with the exception of "mean gap depth"--similar discriminatory power.
Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale
NASA Astrophysics Data System (ADS)
Sobolev, S. V.; Muldashev, I. A.
2015-12-01
Subduction is an essentially multi-scale process with time-scales spanning from geological to earthquake scale with the seismic cycle in-between. Modelling of such process constitutes one of the largest challenges in geodynamic modelling today.Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes moment of instability and drops the integration time step to its minimum value of 40 sec during the earthquake. The time step is then gradually increased to its maximal value of 5 yr, following decreasing displacement rates during the postseismic relaxation. Efficient implementation of numerical techniques allows long-term simulations with total time of millions of years. This technique allows to follow in details deformation process during the entire seismic cycle and multiple seismic cycles. We observe various deformation patterns during modelled seismic cycle that are consistent with surface GPS observations and demonstrate that, contrary to the conventional ideas, the postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after the great (M>9) earthquakes. Interestingly, in our model an average slip velocity at the fault closely follows hyperbolic decay law. In natural observations, such deformation is interpreted as an afterslip, while in our model it is caused by the viscoelastic relaxation of mantle wedge with viscosity strongly varying with time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-year time range. We will also present results of the modeling of deformation of the upper plate during multiple earthquake cycles at times of hundred thousand and million years and discuss effect of great earthquakes in changing long-term stress field in the upper plate.
Maximum magnitude in the Lower Rhine Graben
NASA Astrophysics Data System (ADS)
Vanneste, Kris; Merino, Miguel; Stein, Seth; Vleminckx, Bart; Brooks, Eddie; Camelbeeck, Thierry
2014-05-01
Estimating Mmax, the assumed magnitude of the largest future earthquakes expected on a fault or in an area, involves large uncertainties. No theoretical basis exists to infer Mmax because even where we know the long-term rate of motion across a plate boundary fault, or the deformation rate across an intraplate zone, neither predict how strain will be released. As a result, quite different estimates can be made based on the assumptions used. All one can say with certainty is that Mmax is at least as large as the largest earthquake in the available record. However, because catalogs are often short relative to the average recurrence time of large earthquakes, larger earthquakes than anticipated often occur. Estimating Mmax is especially challenging within plates, where deformation rates are poorly constrained, large earthquakes are rarer and variable in space and time, and often occur on previously unrecognized faults. We explore this issue for the Lower Rhine Graben seismic zone where the largest known earthquake, the 1756 Düren earthquake, has magnitude 5.7 and should occur on average about every 400 years. However, paleoseismic studies suggest that earthquakes with magnitudes up to 6.7 occurred during the Late Pleistocene and Holocene. What to assume for Mmax is crucial for critical facilities like nuclear power plants that should be designed to withstand the maximum shaking in 10,000 years. Using the observed earthquake frequency-magnitude data, we generate synthetic earthquake histories, and sample them over shorter intervals corresponding to the real catalog's completeness. The maximum magnitudes appearing most often in the simulations tend to be those of earthquakes with mean recurrence time equal to the catalog length. Because catalogs are often short relative to the average recurrence time of large earthquakes, we expect larger earthquakes than observed to date to occur. In a next step, we will compute hazard maps for different return periods based on the synthetic catalogs, in order to determine the influence of underestimating Mmax.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.
2005-08-01
The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a communitymore » model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the ''once-through'' time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the ''once-through'' time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase ''inner loop'' iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.« less
NASA 2nd Generation RLV Program Introduction, Status and Future Plans
NASA Technical Reports Server (NTRS)
Dumbacher, Dan L.; Smith, Dennis E. (Technical Monitor)
2002-01-01
The Space Launch Initiative (SLI), managed by the Second Generation Reusable Launch Vehicle (2ndGen RLV) Program, was established to examine the possibility of revolutionizing space launch capabilities, define conceptual architectures, and concurrently identify the advanced technologies required to support a next-generation system. Initial Program funds have been allocated to design, evaluate, and formulate realistic plans leading to a 2nd Gen RLV full-scale development (FSD) decision by 2006. Program goals are to reduce both risk and cost for accessing the limitless opportunities afforded outside Earth's atmosphere fo civil, defense, and commercial enterprises. A 2nd Gen RLV architecture includes a reusable Earth-to-orbit launch vehicle, an on-orbit transport and return vehicle, ground and flight operations, mission planning, and both on-orbit and on-the-ground support infrastructures All segments of the architecture must advance in step with development of the RLV if a next-generation system is to be fully operational early next decade. However, experience shows that propulsion is the single largest contributor to unreliability during ascent, requires the largest expenditure of time for maintenance, and takes a long time to develop; therefore, propulsion is the key to meeting safety, reliability, and cost goals. For these reasons, propulsion is SLI's top technology investment area.
Gait impairment precedes clinical symptoms in spinocerebellar ataxia type 6.
Rochester, Lynn; Galna, Brook; Lord, Sue; Mhiripiri, Dadirayi; Eglon, Gail; Chinnery, Patrick F
2014-02-01
Spinocerebellar ataxia type 6 (SCA6) is an inherited ataxia with no established treatment. Gait ataxia is a prominent feature causing substantial disability. Understanding the evolution of the gait disturbance is a key step in developing treatment strategies. We studied 9 gait variables in 24 SCA6 (6 presymptomatic; 18 symptomatic) and 24 controls and correlated gait with clinical severity (presymptomatic and symptomatic). Discrete gait characteristics precede symptoms in SCA6 with significantly increased variability of step width and step time, whereas a more global gait deficit was evident in symptomatic individuals. Gait characteristics discriminated between presymptomatic and symptomatic individuals and were selectively associated with disease severity. This is the largest study to include a detailed characterization of gait in SCA6, including presymptomatic subjects, allowing changes across the disease spectrum to be compared. Selective gait disturbance is already present in SCA6 before clinical symptoms appear and gait characteristics are also sensitive to disease progression. Early gait disturbance likely reflects primary pathology distinct from secondary changes. These findings open the opportunity for early evaluation and sensitive measures of therapeutic efficacy using instrumented gait analysis which may have broader relevance for all degenerative ataxias. © 2013 Movement Disorder Society.
NASA Astrophysics Data System (ADS)
Ertsen, M. W.; Murphy, J. T.; Purdue, L. E.; Zhu, T.
2014-04-01
When simulating social action in modeling efforts, as in socio-hydrology, an issue of obvious importance is how to ensure that social action by human agents is well-represented in the analysis and the model. Generally, human decision-making is either modeled on a yearly basis or lumped together as collective social structures. Both responses are problematic, as human decision-making is more complex and organizations are the result of human agency and cannot be used as explanatory forces. A way out of the dilemma of how to include human agency is to go to the largest societal and environmental clustering possible: society itself and climate, with time steps of years or decades. In the paper, another way out is developed: to face human agency squarely, and direct the modeling approach to the agency of individuals and couple this with the lowest appropriate hydrological level and time step. This approach is supported theoretically by the work of Bruno Latour, the French sociologist and philosopher. We discuss irrigation archaeology, as it is in this discipline that the issues of scale and explanatory force are well discussed. The issue is not just what scale to use: it is what scale matters. We argue that understanding the arrangements that permitted the management of irrigation over centuries requires modeling and understanding the small-scale, day-to-day operations and personal interactions upon which they were built. This effort, however, must be informed by the longer-term dynamics, as these provide the context within which human agency is acted out.
NASA Astrophysics Data System (ADS)
Ertsen, M. W.; Murphy, J. T.; Purdue, L. E.; Zhu, T.
2013-11-01
When simulating social action in modeling efforts, as in socio-hydrology, an issue of obvious importance is how to ensure that social action by human agents is well-represented in the analysis and the model. Generally, human decision-making is either modeled on a yearly basis or lumped together as collective social structures. Both responses are problematic, as human decision making is more complex and organizations are the result of human agency and cannot be used as explanatory forces. A way out of the dilemma how to include human agency is to go to the largest societal and environmental clustering possible: society itself and climate, with time steps of years or decades. In the paper, the other way out is developed: to face human agency squarely, and direct the modeling approach to the human agency of individuals and couple this with the lowest appropriate hydrological level and time step. This approach is supported theoretically by the work of Bruno Latour, the French sociologist and philosopher. We discuss irrigation archaeology, as it is in this discipline that the issues of scale and explanatory force are well discussed. The issue is not just what scale to use: it is what scale matters. We argue that understanding the arrangements that permitted the management of irrigation over centuries, requires modeling and understanding the small-scale, day-to-day operations and personal interactions upon which they were built. This effort, however, must be informed by the longer-term dynamics as these provide the context within which human agency, is acted out.
Physical activity in low-income postpartum women.
Wilkinson, Susan; Huang, Chiu-Mieh; Walker, Lorraine O; Sterling, Bobbie Sue; Kim, Minseong
2004-01-01
To validate the 7-day physical activity recall (PAR), including alternative PAR scoring algorithms, using pedometer readings with low-income postpartum women, and to describe physical activity patterns of a low-income population of postpartum women. Forty-four women (13 African American, 19 Hispanic, and 12 White) from the Austin New Mothers Study (ANMS) were interviewed at 3 months postpartum. Data were scored alternatively according to the Blair (sitting treated as light activity) and Welk (sitting excluded from light activity and treated as rest) algorithms. Step counts based on 3 days of wearing pedometers served as the validation measure. Using the Welk algorithm, PAR components significantly correlated with step counts were: minutes spent in light activity, total activity (sum of light to very hard activity), and energy expenditure. Minutes spent in sitting were negatively correlated with step counts. No PAR component activities derived with the Blair algorithm were significantly related to step counts. The largest amount of active time was spent in light activity: 384.4 minutes with the Welk algorithm. Mothers averaged fewer than 16 minutes per day in moderate or high intensity activity. Step counts measured by pedometers averaged 6,262 (SD = 2,712) per day. The findings indicate support for the validity of the PAR as a measure of physical activity with low-income postpartum mothers when scored according to the Welk algorithm. On average, low-income postpartum women in this study did not meet recommendations for amount of moderate or high intensity physical activity.
Role of delay-based reward in the spatial cooperation
NASA Astrophysics Data System (ADS)
Wang, Xu-Wen; Nie, Sen; Jiang, Luo-Luo; Wang, Bing-Hong; Chen, Shi-Ming
2017-01-01
Strategy selection in games, a typical decision making, usually brings noticeable reward for players which have discounted value if the delay appears. The discounted value is measure: earning sooner with a small reward or later with a delayed larger reward. Here, we investigate effects of delayed rewards on the cooperation in structured population. It is found that delayed reward supports the spreading of cooperation in square lattice, small-world and random networks. In particular, intermediate reward differences between delays impel the highest cooperation level. Interestingly, cooperative individuals with the same delay time steps form clusters to resist the invasion of defects, and cooperative individuals with lowest delay reward survive because they form the largest clusters in the lattice.
Yan, Zheping; Li, Jiyun; Zhang, Gengshi; Wu, Yi
2018-01-01
A novel real-time reaction obstacle avoidance algorithm (RRA) is proposed for autonomous underwater vehicles (AUVs) that must adapt to unknown complex terrains, based on forward looking sonar (FLS). To accomplish this algorithm, obstacle avoidance rules are planned, and the RRA processes are split into five steps Introduction only lists 4 so AUVs can rapidly respond to various environment obstacles. The largest polar angle algorithm (LPAA) is designed to change detected obstacle’s irregular outline into a convex polygon, which simplifies the obstacle avoidance process. A solution is designed to solve the trapping problem existing in U-shape obstacle avoidance by an outline memory algorithm. Finally, simulations in three unknown obstacle scenes are carried out to demonstrate the performance of this algorithm, where the obtained obstacle avoidance trajectories are safety, smooth and near-optimal. PMID:29393915
Yan, Zheping; Li, Jiyun; Zhang, Gengshi; Wu, Yi
2018-02-02
A novel real-time reaction obstacle avoidance algorithm (RRA) is proposed for autonomous underwater vehicles (AUVs) that must adapt to unknown complex terrains, based on forward looking sonar (FLS). To accomplish this algorithm, obstacle avoidance rules are planned, and the RRA processes are split into five steps Introduction only lists 4 so AUVs can rapidly respond to various environment obstacles. The largest polar angle algorithm (LPAA) is designed to change detected obstacle's irregular outline into a convex polygon, which simplifies the obstacle avoidance process. A solution is designed to solve the trapping problem existing in U-shape obstacle avoidance by an outline memory algorithm. Finally, simulations in three unknown obstacle scenes are carried out to demonstrate the performance of this algorithm, where the obtained obstacle avoidance trajectories are safety, smooth and near-optimal.
Airport Catchment Area- Example Warsaw Modlin Airport
NASA Astrophysics Data System (ADS)
Błachut, Jakub
2017-10-01
The form and functions of airports change over time, just like the form and function of cities. Historically, airports are understood as places of aircraft landing, control towers operation and location of other facilities used for communication and transport. This traditional model is giving way to the concept of so-called Airport Cities, based on the assumption that, in addition to its infrastructure and air services, also non-air services are performed, constituting a source of income. At the same time, their reach and impact on the economy of the areas around the airport are expanding. Idea City Airport appeared in the United States in the late twentieth century. The author is J. D. Kasarda, he believes that it is around these big air ports that airport cities develop. In the world, there are currently 45 areas which can be classified in this category, out of which 12 are located in Europe. Main air traffic hubs in Europe are not only the most important passenger traffic junctions, but also largest centres dispatching goods (cargo). It can be said that, among the 30 largest airports, 24 are the largest in terms of both passenger and freight traffic. These airports cover up to 89.9% of the total freight transport of all European airports. At the same time, they serve 56.9% of all passengers in Europe. Based on the concept of Airport City was developed document THE INTEGRATED REGIONAL POLYCENTRIC DEVELOPMENT PLANS FOR THE WARSAW MODLIN AIRPORT CATCHMENT AREA. The plan developed takes into account the findings of the Mazovian voivodeship spatial development plan, specifying the details of its provisions where possible. The development is the first step for the implementation of the concept of the Modlin Airport City. The accomplishment of this ambitious vision will only be possible with hard work of a number of entities, as well as taking into account the former Modlin Fortress, currently under revitalisation, in concepts and plans.
Blanco, Elias; Foster, Christopher W; Cumba, Loanda R; do Carmo, Devaney R; Banks, Craig E
2016-04-25
In this paper the effect of solvent induced chemical surface enhancements upon graphitic screen-printed electrodes (SPEs) is explored. Previous literature has indicated that treating the working electrode of a SPE with the solvent N,N-dimethylformamide (DMF) offers improvements within the electroanalytical response, resulting in a 57-fold increment in the electrode surface area compared to their unmodified counterparts. The protocol involves two steps: (i) the SPE is placed into DMF for a selected time, and (ii) it is cured in an oven at a selected time and temperature. Beneficial electroanalytical outputs are reported to be due to the increased surface area attributed to the binder within the bulk surface of the SPEs dissolving out during the immersion step (step i). We revisit this exciting concept and explore these solvent induced chemical surface enhancements using edge- and basal-plane like SPEs and a new bespoke SPE, utilising the solvent DMF and explore, in detail, the parameters utilised in steps (i) and (ii). The electrochemical performance following steps (i) and (ii) is evaluated using the outer-sphere redox probe hexaammineruthenium(iii) chloride/0.1 M KCl, where it is found that the largest improvement is obtained using DMF with an immersion time of 10 minutes and a curing time of 30 minutes at 100 °C. Solvent induced chemical surface enhancement upon the electrochemical performance of SPEs is also benchmarked in terms of their electroanalytical sensing of NADH (dihydronicotinamide adenine dinucleotide reduced form) and capsaicin both of which are compared to their unmodified SPE counterparts. In both cases, it is apparent that a marginal improvement in the electroanalytical sensitivity (i.e. gradient of calibration plots) of 1.08-fold and 1.38-fold are found respectively. Returning to the original exciting concept, interestingly it was found that when a poor experimental technique was employed, only then significant increases within the working electrode area are evident. In this case, the insulating layer that defines the working electrode surface, which was not protected from the solvent (step (i)) creates cracks within the insulating layer exposing the underlying carbon connections and thus increasing the electrode area by an unknown quantity. We infer that the origin of the response reported within the literature, where an extreme increase in the electrochemical surface area (57-fold) was reported, is unlikely to be solely due to the binder dissolving but rather poor experimental control over step (i).
Asbestos Abatement: Start to Finish.
ERIC Educational Resources Information Center
Makruski, Edward D.
1984-01-01
An EPA survey of the largest school districts in the nation revealed that over 50 percent have not inspected for asbestos and two-thirds have failed to notify parents adequately. Seven steps are therefore provided for successful asbestos abatement, in anticipation of tougher regulations now under consideration. (TE)
Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man
Bremen, Peter; Massoudi, Rooholla; Van Wanrooij, Marc M.; Van Opstal, A. J.
2017-01-01
The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain. PMID:29238295
Bremen, Peter; Massoudi, Rooholla; Van Wanrooij, Marc M; Van Opstal, A J
2017-01-01
The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain.
NASA Astrophysics Data System (ADS)
Krider, E. P.; Baffou, G.; Murray, N. D.; Willett, J. C.
2004-12-01
We have analyzed the shapes and other characteristics of the electric field, E, and dE/dt waveforms that were radiated by leader steps just before the first return stroke in cloud-to-ocean lightning. dE/dt waveforms were recorded using an 8-bit digitizer sampling at 100 MHz, and an integrated waveform, Eint, was computed by numerically integrating dE/dt and comparing the result with an analog E waveform digitized at 10 MHz. All signals were recorded under conditions where the lightning locations were known and there was minimal distortion in the fields due to the effects of ground-wave propagation. The dE/dt waveforms radiated by leader steps tend to fall into three categories: (1) "simple" - an isolated negative peak that is immediately followed by a positive overshoot (where negative polarity follows the normal physics convention), (2) "double" - two simple waveforms that occur at almost the same time, and (3) "burst" - a complex cluster of pulses with a total duration of about one microsecond. In this paper, we will give examples of each of these waveform types, and we will summarize their characteristics on a submicrosecond time-scale. For example, in an interval starting 9 μ s before to 4 μ s before the largest, negative (dominant) peak in dE/dt peak in the return stroke, 131 first strokes produced a total of 296 impulses with a peak amplitude greater than 10% of the dominant peak, and the average amplitude of these pulses was 0.21 of the dominant peak. The last leader step in a 12 μ s interval before the dominant peak was a simple waveform in 51 first strokes, and in these cases, the average time-interval between the peak dE/dt of the step and the dominant peak of the stroke was 5.8 ± 1.7 μ s, a value that is in good agreement with prior measurements. The median full-width-at-half-maximum (FWHM) of 274 simple Eint signatures was 141 ns, and the associated mean and standard deviation were 187 ± 131 ns.
Emboldened, the FTC Seems Ready to Fight More Mergers.
Kirkner, Richard Mark
2016-11-01
Because hospital spending makes up the largest piece of U.S. health care spending-32%, according to data from CMS-any judicial rulings or legislative move to curb a hospital's market dominance are key to controlling overall health care costs. The FTC is stepping in.
DEFINING THE MANDATE OF PROTEOMICS IN THE POST-GENOMIC ERA: WORKSHOP REPORT
Research in proteomics is the next step after genomics in understanding life processes at the molecular level. In the largest sense proteomics encompasses knowledge of the structure, function and expression of all proteins in the biochemical or biological contexts of all organism...
Seeing Red Over Those Black Marks on Your Floors?
ERIC Educational Resources Information Center
Rittner-Heir, Robbin
1999-01-01
Describes one custodian's cost-effective approach for resolving black scuff marks on flooring and staving off their return. An eight-step cleaning program is detailed. Cautions that the largest obstacle to proper handling of floor problems lies in the lack of a proper maintenance schedule. (GR)
Pedometer-determined segmented physical activity patterns of fourth- and fifth-grade children.
Brusseau, Timothy A; Kulinna, Pamela H; Tudor-Locke, Catrine; Ferry, Matthew; van der Mars, Hans; Darst, Paul W
2011-02-01
The need to understand where and how much physical activity (PA) children accumulate has become important in assisting the development, implementation, and evaluation of PA interventions. The purpose of this study was to describe the daily PA patterns of children during the segmented school-week. 829 children participated by wearing pedometers (Yamax-Digiwalker SW-200) for 5 consecutive days. Students recorded their steps at arrival/departure from school, Physical Education (PE), recess, and lunchtime. Boys took significantly more steps/day than girls during most PA opportunities; recess, t(440)=8.80, P<.01; lunch, t(811)=14.57, P<.01; outside of school, t(763)=5.34, P<.01; school, t(811)=10.61, P<.01; and total day, t(782)=7.69, P<.01. Boys and girls accumulated a similar number of steps t(711) .69, P=.09 during PE. For boys, lunchtime represented the largest single source of PA (13.4%) at school, followed by PE (12.7%) and recess (9.5%). For girls, PE was the largest (14.3%), followed by lunchtime (11.7%) and recess (8.3%). An understanding of the contributions of the in-school segments can serve as baseline measures for practitioners and researchers to use in school-based PA interventions.
Tracking cohesive subgroups over time in inferred social networks
NASA Astrophysics Data System (ADS)
Chin, Alvin; Chignell, Mark; Wang, Hao
2010-04-01
As a first step in the development of community trackers for large-scale online interaction, this paper shows how cohesive subgroup analysis using the Social Cohesion Analysis of Networks (SCAN; Chin and Chignell 2008) and Data-Intensive Socially Similar Evolving Community Tracker (DISSECT; Chin and Chignell 2010) methods can be applied to the problem of identifying cohesive subgroups and tracking them over time. Three case studies are reported, and the findings are used to evaluate how well the SCAN and DISSECT methods work for different types of data. In the largest of the case studies, variations in temporal cohesiveness are identified across a set of subgroups extracted from the inferred social network. Further modifications to the DISSECT methodology are suggested based on the results obtained. The paper concludes with recommendations concerning further research that would be beneficial in addressing the community tracking problem for online data.
Survey Methods to Optimize Response Rate in the National Dental Practice-Based Research Network.
Funkhouser, Ellen; Vellala, Kavya; Baltuck, Camille; Cacciato, Rita; Durand, Emily; McEdward, Deborah; Sowell, Ellen; Theisen, Sarah E; Gilbert, Gregg H
2017-09-01
Surveys of health professionals typically have low response rates, and these rates have been decreasing in the recent years. We report on the methods used in a successful survey of dentist members of the National Dental Practice-Based Research Network. The objectives were to quantify the (1) increase in response rate associated with successive survey methods, (2) time to completion with each successive step, (3) contribution from the final method and personal contact, and (4) differences in response rate and mode of response by practice/practitioner characteristics. Dentist members of the network were mailed an invitation describing the study. Subsequently, up to six recruitment steps were followed: initial e-mail, two e-mail reminders at 2-week intervals, a third e-mail reminder with postal mailing a paper questionnaire, a second postal mailing of paper questionnaire, and staff follow-up. Of the 1,876 invited, 160 were deemed ineligible and 1,488 (87% of 1,716 eligible) completed the survey. Completion by step: initial e-mail, 35%; second e-mail, 15%; third e-mail, 7%; fourth e-mail/first paper, 11%; second paper, 15%; and staff follow-up, 16%. Overall, 76% completed the survey online and 24% on paper. Completion rates increased in absolute numbers and proportionally with later methods of recruitment. Participation rates varied little by practice/practitioner characteristics. Completion on paper was more likely by older dentists. Multiple methods of recruitment resulted in a high participation rate: Each step and method produced incremental increases with the final step producing the largest increase.
Seals/Secondary Fluid Flows Workshop 1997; Volume II: HSR Engine Special Session
NASA Technical Reports Server (NTRS)
Hendricks, Robert C. (Editor)
2006-01-01
The High Speed Civil Transport (HSCT) will be the largest engine ever built and operated at maximum conditions for long periods of time. It is being developed collaboratively with NASA, FAA, Boeing-McDonnell Douglas, Pratt & Whitney, and General Electric. This document provides an initial step toward defining high speed research (HSR) sealing needs. The overview for HSR seals includes defining objectives, summarizing sealing and material requirements, presenting relevant seal cross-sections, and identifying technology needs. Overview presentations are given for the inlet, turbomachinery, combustor and nozzle. The HSCT and HSR seal issues center on durability and efficiency of rotating equipment seals, structural seals and high speed bearing and sump seals. Tighter clearances, propulsion system size and thermal requirements challenge component designers.
Massively Multithreaded Maxflow for Image Segmentation on the Cray XMT-2
Bokhari, Shahid H.; Çatalyürek, Ümit V.; Gurcan, Metin N.
2014-01-01
SUMMARY Image segmentation is a very important step in the computerized analysis of digital images. The maxflow mincut approach has been successfully used to obtain minimum energy segmentations of images in many fields. Classical algorithms for maxflow in networks do not directly lend themselves to efficient parallel implementations on contemporary parallel processors. We present the results of an implementation of Goldberg-Tarjan preflow-push algorithm on the Cray XMT-2 massively multithreaded supercomputer. This machine has hardware support for 128 threads in each physical processor, a uniformly accessible shared memory of up to 4 TB and hardware synchronization for each 64 bit word. It is thus well-suited to the parallelization of graph theoretic algorithms, such as preflow-push. We describe the implementation of the preflow-push code on the XMT-2 and present the results of timing experiments on a series of synthetically generated as well as real images. Our results indicate very good performance on large images and pave the way for practical applications of this machine architecture for image analysis in a production setting. The largest images we have run are 320002 pixels in size, which are well beyond the largest previously reported in the literature. PMID:25598745
Considering dominance in reduced single-step genomic evaluations.
Ertl, J; Edel, C; Pimentel, E C G; Emmerling, R; Götz, K-U
2018-06-01
Single-step models including dominance can be an enormous computational task and can even be prohibitive for practical application. In this study, we try to answer the question whether a reduced single-step model is able to estimate breeding values of bulls and breeding values, dominance deviations and total genetic values of cows with acceptable quality. Genetic values and phenotypes were simulated (500 repetitions) for a small Fleckvieh pedigree consisting of 371 bulls (180 thereof genotyped) and 553 cows (40 thereof genotyped). This pedigree was virtually extended for 2,407 non-genotyped daughters. Genetic values were estimated with the single-step model and with different reduced single-step models. Including more relatives of genotyped cows in the reduced single-step model resulted in a better agreement of results with the single-step model. Accuracies of genetic values were largest with single-step and smallest with reduced single-step when only the cows genotyped were modelled. The results indicate that a reduced single-step model is suitable to estimate breeding values of bulls and breeding values, dominance deviations and total genetic values of cows with acceptable quality. © 2018 Blackwell Verlag GmbH.
ERIC Educational Resources Information Center
Samuels, Christina A.
2009-01-01
The nation's largest school district is engaged in a fierce debate over the merits and drawbacks of mayoral control as a legislative deadline looms for renewing the governance arrangement. The 2002 law that gave New York City's mayor authority over the school system will "sunset" on June 30 unless state lawmakers step in, as they are…
Associative Networks on a Massively Parallel Computer.
1985-10-01
lgbt (as a group of numbers, in this case), but this only leads to sensible queries when a statistical function is applied: "What is the largest salary...34.*"* . •.,. 64 the siW~pe operations being used during ascend, each movement step costs the same as executing an operation
Readers of Largest U.S. History Textbooks Discover a Storehouse of Misinformation.
ERIC Educational Resources Information Center
Putka, Gary
1992-01-01
Reports that a Texas advocacy group discovered thousands of errors in U.S. history textbooks. Notes that the books underwent the review after drawing favorable reactions from Texas education officials. Identifies possible explanations for the errors and steps being taken to reduce errors in the future. (SG)
ERIC Educational Resources Information Center
Weinstein, Margery
2011-01-01
For The PNC Financial Services Group, Inc., last year proved the perfect backdrop for meeting learning and development goals as the company completed the largest acquisition in its history. While training and development have always been a priority for PNC, in 2010 the company climbed one step higher. The acquisition of National City…
Wang, Xiao-Ye; Zhuang, Fang-Dong; Wang, Rui-Bo; Wang, Xin-Chang; Cao, Xiao-Yu; Wang, Jie-Yu; Pei, Jian
2014-03-12
A straightforward strategy has been used to construct large BN-embedded π-systems simply from azaacenes. BN heterosuperbenzene derivatives, the largest BN heteroaromatics to date, have been synthesized in three steps. The molecules exhibit curved π-surfaces, showing two different conformations which are self-organized into a sandwich structure and further packed into a π-stacking column. The assembled microribbons exhibit good charge transport properties and photoconductivity, representing an important step toward the optoelectronic applications of BN-embedded aromatics.
van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.
2018-01-01
The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80, 000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620
NASA Astrophysics Data System (ADS)
Rengers, F. K.; McGuire, L. A.; Ebel, B. A.; Tucker, G. E.
2018-05-01
The transition of a colluvial hollow to a fluvial channel with discrete steps was observed after two landscape-scale disturbances. The first disturbance, a high-severity wildfire, changed the catchment hydrology to favor overland flow, which incised a colluvial hollow, creating a channel in the same location. This incised channel became armored with cobbles and boulders following repeated post-wildfire overland flow events. Three years after the fire, a record rainstorm produced regional flooding and generated sufficient fluvial erosion and sorting to produce a fluvial channel with periodically spaced steps. An analysis of the step spacing shows that after the flood, newly formed steps retained a similar spacing to the topographic roughness spacing in the original colluvial hollow (prior to channelization). This suggests that despite a distinct change in channel form roughness and bedform morphology, the endogenous roughness periodicity was conserved. Variations in sediment erodibility helped to create the emergent steps as the largest particles (>D84) remained immobile, becoming step features, and downstream soil was easily winnowed away.
Rengers, Francis K.; McGuire, Luke; Ebel, Brian A.; Tucker, G. E.
2018-01-01
The transition of a colluvial hollow to a fluvial channel with discrete steps was observed after two landscape-scale disturbances. The first disturbance, a high-severity wildfire, changed the catchment hydrology to favor overland flow, which incised a colluvial hollow, creating a channel in the same location. This incised channel became armored with cobbles and boulders following repeated post-wildfire overland flow events. Three years after the fire, a record rainstorm produced regional flooding and generated sufficient fluvial erosion and sorting to produce a fluvial channel with periodically spaced steps. An analysis of the step spacing shows that after the flood, newly formed steps retained a similar spacing to the topographic roughness spacing in the original colluvial hollow (prior to channelization). This suggests that despite a distinct change in channel form roughness and bedform morphology, the endogenous roughness periodicity was conserved. Variations in sediment erodibility helped to create the emergent steps as the largest particles ( >D84) remained immobile, becoming step features, and downstream soil was easily winnowed away.
BIG CITY SCHOOL DESEGREGATION--TRENDS AND METHODS.
ERIC Educational Resources Information Center
DENTLER, ROBERT A.; ELSBERY, JAMES
THE CONCERNS OF THIS SPEECH ARE THE EXTENT OF SCHOOL SEGREGATION IN THE NATION'S 20 LARGEST CITIES, THE STEPS WHICH HAVE BEEN AND MIGHT BE TAKEN TO DESEGREGATE THEIR SCHOOL SYSTEMS, AND THE STRATEGIES NECESSARY TO EFFECTIVELY IMPLEMENT SCHOOL DESEGREGATION PLANS. THERE IS ALMOST TOTAL RESIDENTIAL SEGREGATION IN 13 OF THESE CITIES. SEVENTY PERCENT…
Economic Crisis in Asia: The Impact on Enrollment in 4 Countries.
ERIC Educational Resources Information Center
Desruisseaux, Paul
1998-01-01
A survey of United States colleges and universities that enroll the largest numbers of students from Indonesia, Malaysia, South Korea, and Thailand, which have experienced currency devaluations and economic uncertainty, found a less than 10% drop in those enrollments, a much lower rate than anticipated. Institutions have taken steps to ease the…
SWAP-Assembler 2: Optimization of De Novo Genome Assembler at Large Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Jintao; Seo, Sangmin; Balaji, Pavan
2016-08-16
In this paper, we analyze and optimize the most time-consuming steps of the SWAP-Assembler, a parallel genome assembler, so that it can scale to a large number of cores for huge genomes with the size of sequencing data ranging from terabyes to petabytes. According to the performance analysis results, the most time-consuming steps are input parallelization, k-mer graph construction, and graph simplification (edge merging). For the input parallelization, the input data is divided into virtual fragments with nearly equal size, and the start position and end position of each fragment are automatically separated at the beginning of the reads. Inmore » k-mer graph construction, in order to improve the communication efficiency, the message size is kept constant between any two processes by proportionally increasing the number of nucleotides to the number of processes in the input parallelization step for each round. The memory usage is also decreased because only a small part of the input data is processed in each round. With graph simplification, the communication protocol reduces the number of communication loops from four to two loops and decreases the idle communication time. The optimized assembler is denoted as SWAP-Assembler 2 (SWAP2). In our experiments using a 1000 Genomes project dataset of 4 terabytes (the largest dataset ever used for assembling) on the supercomputer Mira, the results show that SWAP2 scales to 131,072 cores with an efficiency of 40%. We also compared our work with both the HipMER assembler and the SWAP-Assembler. On the Yanhuang dataset of 300 gigabytes, SWAP2 shows a 3X speedup and 4X better scalability compared with the HipMer assembler and is 45 times faster than the SWAP-Assembler. The SWAP2 software is available at https://sourceforge.net/projects/swapassembler.« less
High levels of melatonin generated during the brewing process.
Garcia-Moreno, H; Calvo, J R; Maldonado, M D
2013-08-01
Beer is a beverage consumed worldwide. It is produced from cereals (barley or wheat) and contains a wide array of bioactive phytochemicals and nutraceutical compounds. Specifically, high melatonin concentrations have been found in beer. Beers with high alcohol content are those that present the greatest concentrations of melatonin and vice versa. In this study, gel filtration chromatography and ELISA were combined for melatonin determination. We brewed beer to determine, for the first time, the beer production steps in which melatonin appears. We conclude that the barley, which is malted and ground in the early process, and the yeast, during the second fermentation, are the largest contributors to the enrichment of the beer with melatonin. © 2012 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, C; Kumarasiri, A; Chetvertkov, M
2015-06-15
Purpose: Accurate deformable image registration (DIR) between CT and CBCT in H&N is challenging. In this study, we propose a practical hybrid method that uses not only the pixel intensities but also organ physical properties, structure volume of interest (VOI), and interactive local registrations. Methods: Five oropharyngeal cancer patients were selected retrospectively. For each patient, the planning CT was registered to the last fraction CBCT, where the anatomy difference was largest. A three step registration strategy was tested; Step1) DIR using pixel intensity only, Step2) DIR with additional use of structure VOI and rigidity penalty, and Step3) interactive local correction.more » For Step1, a public-domain open-source DIR algorithm was used (cubic B-spline, mutual information, steepest gradient optimization, and 4-level multi-resolution). For Step2, rigidity penalty was applied on bony anatomies and brain, and a structure VOI was used to handle the body truncation such as the shoulder cut-off on CBCT. Finally, in Step3, the registrations were reviewed on our in-house developed software and the erroneous areas were corrected via a local registration using level-set motion algorithm. Results: After Step1, there were considerable amount of registration errors in soft tissues and unrealistic stretching in the posterior to the neck and near the shoulder due to body truncation. The brain was also found deformed to a measurable extent near the superior border of CBCT. Such errors could be effectively removed by using a structure VOI and rigidity penalty. The rest of the local soft tissue error could be corrected using the interactive software tool. The estimated interactive correction time was approximately 5 minutes. Conclusion: The DIR using only the image pixel intensity was vulnerable to noise and body truncation. A corrective action was inevitable to achieve good quality of registrations. We found the proposed three-step hybrid method efficient and practical for CT/CBCT registrations in H&N. My department receives grant support from Industrial partners: (a) Varian Medical Systems, Palo Alto, CA, and (b) Philips HealthCare, Best, Netherlands.« less
Rep. Waters, Maxine [D-CA-35
2009-10-20
House - 12/08/2009 Referred to the Subcommittee on Higher Education, Lifelong Learning, and Competitiveness. (All Actions) Tracker: This bill has the status IntroducedHere are the steps for Status of Legislation:
Sen. Nelson, Ben [D-NE
2009-04-23
Senate - 10/21/2009 Resolution agreed to in Senate without amendment and with a preamble by Unanimous Consent. (All Actions) Tracker: This bill has the status Agreed to in SenateHere are the steps for Status of Legislation:
Towards a Curriculum Typology for Australian Generalist Arts Degree Programmes
ERIC Educational Resources Information Center
Gannaway, Deanne
2010-01-01
The Bachelor of Arts (BA) degree is arguably one of the longest-established and largest degree programmes in the Australian higher education system. Traditionally, the BA programme is a liberal arts degree that is considered the first step in the lifelong journey of learning and that is frequently marketed as such. Yet, in an increasingly…
Couriers in the Inca Empire: Getting Your Message Across. [Lesson Plan].
ERIC Educational Resources Information Center
2002
This lesson shows how the Inca communicated across the vast stretches of their mountain realm, the largest empire of the pre-industrial world. The lesson explains how couriers carried messages along mountain-ridge roads, up and down stone steps, and over chasm-spanning footbridges. It states that couriers could pass a message from Quito (Ecuador)…
USDA-ARS?s Scientific Manuscript database
Introduction: It is important to understand effective strategies to reach and treat individuals who lack awareness of or have uncontrolled hypertension (HTN). The objectives of this secondary analysis from a community-based participatory research initiative, HUB City Steps, were to quantify the pre...
Ankışhan, Haydar; Yılmaz, Derya
2013-01-01
Snoring, which may be decisive for many diseases, is an important indicator especially for sleep disorders. In recent years, many studies have been performed on the snore related sounds (SRSs) due to producing useful results for detection of sleep apnea/hypopnea syndrome (SAHS). The first important step of these studies is the detection of snore from SRSs by using different time and frequency domain features. The SRSs have a complex nature that is originated from several physiological and physical conditions. The nonlinear characteristics of SRSs can be examined with chaos theory methods which are widely used to evaluate the biomedical signals and systems, recently. The aim of this study is to classify the SRSs as snore/breathing/silence by using the largest Lyapunov exponent (LLE) and entropy with multiclass support vector machines (SVMs) and adaptive network fuzzy inference system (ANFIS). Two different experiments were performed for different training and test data sets. Experimental results show that the multiclass SVMs can produce the better classification results than ANFIS with used nonlinear quantities. Additionally, these nonlinear features are carrying meaningful information for classifying SRSs and are able to be used for diagnosis of sleep disorders such as SAHS. PMID:24194786
Waaijman, Roelof; Keukenkamp, Renske; de Haart, Mirjam; Polomski, Wojtek P; Nollet, Frans; Bus, Sicco A
2013-06-01
Prescription custom-made footwear can only be effective in preventing diabetic foot ulcers if worn by the patient. Particularly, the high prevalence of recurrent foot ulcers focuses the attention on adherence, for which objective data are nonexisting. We objectively assessed adherence in patients with high risk of ulcer recurrence and evaluated what determines adherence. In 107 patients with diabetes, neuropathy, a recently healed plantar foot ulcer, and custom-made footwear, footwear use was measured during 7 consecutive days using a shoe-worn, temperature-based monitor. Daily step count was measured simultaneously using an ankle-worn activity monitor. Patients logged time away from home. Adherence was calculated as the percentage of steps that prescription footwear was worn. Determinants of adherence were evaluated in multivariate linear regression analysis. Mean ± SD adherence was 71 ± 25%. Adherence at home was 61 ± 32%, over 3,959 ± 2,594 steps, and away from home 87 ± 26%, over 2,604 ± 2,507 steps. In 35 patients with low adherence (<60%), adherence at home was 28 ± 24%. Lower BMI, more severe foot deformity, and more appealing footwear were significantly associated with higher adherence. The results show that adherence to wearing custom-made footwear is insufficient, particularly at home where patients exhibit their largest walking activity. This low adherence is a major threat for reulceration. These objective findings provide directions for improvement in adherence, which could include prescribing specific off-loading footwear for indoors, and they set a reference for future comparative research on footwear adherence in diabetes.
Pellerin, Brian A.; Bergamaschi, Brian A.; Gilliom, Robert J.; Crawford, Charles G.; Saraceno, John F.; Frederick, C. Paul; Downing, Bryan D.; Murphy, Jennifer C.
2014-01-01
Accurately quantifying nitrate (NO3–) loading from the Mississippi River is important for predicting summer hypoxia in the Gulf of Mexico and targeting nutrient reduction within the basin. Loads have historically been modeled with regression-based techniques, but recent advances with high frequency NO3– sensors allowed us to evaluate model performance relative to measured loads in the lower Mississippi River. Patterns in NO3– concentrations and loads were observed at daily to annual time steps, with considerable variability in concentration-discharge relationships over the two year study. Differences were particularly accentuated during the 2012 drought and 2013 flood, which resulted in anomalously high NO3– concentrations consistent with a large flush of stored NO3– from soil. The comparison between measured loads and modeled loads (LOADEST, Composite Method, WRTDS) showed underestimates of only 3.5% across the entire study period, but much larger differences at shorter time steps. Absolute differences in loads were typically greatest in the spring and early summer critical to Gulf hypoxia formation, with the largest differences (underestimates) for all models during the flood period of 2013. In additional to improving the accuracy and precision of monthly loads, high frequency NO3– measurements offer additional benefits not available with regression-based or other load estimation techniques.
Peng, Xing; Shi, GuoLiang; Liu, GuiRong; Xu, Jiao; Tian, YingZe; Zhang, YuFen; Feng, YinChang; Russell, Armistead G
2017-02-01
Heavy metals (Cr, Co, Ni, As, Cd, and Pb) can be bound to PM adversely affecting human health. Quantifying the source impacts on heavy metals can provide source-specific estimates of the heavy metal health risk (HMHR) to guide effective development of strategies to reduce such risks from exposure to heavy metals in PM 2.5 (particulate matter (PM) with aerodynamic diameter less than or equal to 2.5 μm). In this study, a method combining Multilinear Engine 2 (ME2) and a risk assessment model is developed to more effectively quantify source contributions to HMHR, including heavy metal non-cancer risk (non-HMCR) and cancer risk (HMCR). The combined model (called ME2-HMHR) has two steps: step1, source contributions to heavy metals are estimated by employing the ME2 model; step2, the source contributions in step 1 are introduced into the risk assessment model to calculate the source contributions to HMHR. The approach was applied to Huzou, China and five significant sources were identified. Soil dust is the largest source of non-HMCR. For HMCR, the source contributions of soil dust, coal combustion, cement dust, vehicle, and secondary sources are 1.0 × 10 -4 , 3.7 × 10 -5 , 2.7 × 10 -6 , 1.6 × 10 -6 and 1.9 × 10 -9 , respectively. The soil dust is the largest contributor to HMCR, being driven by the high impact of soil dust on PM 2.5 and the abundance of heavy metals in soil dust. Copyright © 2016 Elsevier Ltd. All rights reserved.
ICESat Observations of Arctic Sea Ice: A First Look
NASA Technical Reports Server (NTRS)
Kwok, Ron; Zwally, H. Jay; Yi, Dong-Hui
2004-01-01
Analysis of near-coincident ICESat and RADARSAT imagery shows that the retrieved elevations from the laser altimeter are sensitive to new openings (containing thin ice or open water) in the sea ice cover as well as to surface relief of old and first-year ice. The precision of the elevation estimates, measured over relatively flat sea ice, is approx. 2 cm Using the thickness of thin-ice in recent openings to estimate sea level references, we obtain the sea-ice free-board along the altimeter tracks. This step is necessitated by the large uncertainties in the time-varying sea surface topography compared to that required for accurate determination of free-board. Unknown snow depth introduces the largest uncertainty in the conversion of free-board to ice thickness. Surface roughness is also derived, for the first time, from the variability of successive elevation estimates along the altimeter track Overall, these ICESat measurements provide an unprecedented view of the Arctic Ocean ice cover at length scales at and above the spatial dimension of the altimeter footprint.
Geng, Xiaohua; Podlaha, Elizabeth J
2016-12-14
A new methodology is reported to shape template-assisted electrodeposition of Fe-rich, Fe-Ni-Co nanowires to have a thin nanowire segment using a coupled displacement reaction with a more noble elemental ion, Cu(II), and at the same time dealloying predominantly Fe from Fe-Ni-Co by the reduction of protons (H + ), followed by a subsequent etching step. The displacement/dealloyed layer was sandwiched between two trilayers of Fe-Ni-Co to facilitate the characterization of the reaction front, or penetration length. The penetration length region was found to be a function of the ratio of proton and Cu(II) concentration, and a ratio of 0.5 was found to provide the largest penetration rate, and hence the larger thinned length of the nanowire. Altering the etching time affected the diameter of the thinned region. This methodology presents a new way to thin nanowire segments connected to larger nanowire sections and also introduces a way to study the propagation of a reaction front into a nanowire.
Stent-protected carotid angioplasty using a membrane stent: a comparative cadaver study.
Müller-Hülsbeck, Stefan; Gühne, Albrecht; Tsokos, Michael; Hüsler, Erhard J; Schaffner, Silvio R; Paulsen, Friedrich; Hedderich, Jürgen; Heller, Martin; Jahnke, Thomas
2006-01-01
To evaluate the performance of a prototype membrane stent, MembraX, in the prevention of acute and late embolization and to quantify particle embolization during carotid stent placement in human carotid explants in a proof of concept study. Thirty human carotid cadaveric explants (mild stenoses 0-29%, n = 23; moderate stenoses 30-69%, n = 3; severe stenoses 70-99%, n = 2) that included the common, internal and external carotid arteries were integrated into a pulsatile-flow model. Three groups were formed according to the age of the donors (mean 58.8 years; sample SD 15.99 years) and randomized to three test groups: (I) MembraX, n = 9; (II) Xpert bare stent, n = 10; (III) Xpert bare stent with Emboshield protection device, n = 9. Emboli liberated during stent deployment (step A), post-dilatation (step B), and late embolization (step C) were measured in 100 microm effluent filters. When the Emboshield was used, embolus penetration was measured during placement (step D) and retrieval (step E). Late embolization was simulated by compressing the area of the stented vessel five times. Absolute numbers of particles (median; >100 microm) caught in the effluent filter were: (I) MembraX: A = 7, B = 9, C = 3; (II) bare stent: A = 6.5, B = 6, C = 4.5; (III) bare stent and Emboshield: A = 7, B = 7, C.=.5, D = 8, E = 10. The data showed no statistical differences according to whether embolic load was analyzed by weight or mean particle size. When summing all procedural steps, the Emboshield caused the greatest load by weight (p = 0.011) and the largest number (p = 0.054) of particles. On the basis of these limited data neither a membrane stent nor a protection device showed significant advantages during ex vivo carotid angioplasty. However, the membrane stent seems to have the potential for reducing the emboli responsible for supposed late embolization, whereas more emboli were observed when using a protection device. Further studies are necessary and warranted.
Physical activity levels early after lung transplantation.
Wickerson, Lisa; Mathur, Sunita; Singer, Lianne G; Brooks, Dina
2015-04-01
Little is known of the early changes in physical activity after lung transplantation. The purposes of this study were: (1) to describe physical activity levels in patients up to 6 months following lung transplantation and (2) to explore predictors of the change in physical activity in that population. This was a prospective cohort study. Physical activity (daily steps and time spent in moderate-intensity activity) was measured using an accelerometer before and after transplantation (at hospital discharge, 3 months, and 6 months). Additional functional measurements included submaximal exercise capacity (measured with the 6-Minute Walk Test), quadriceps muscle torque, and health-related quality of life (measured with the Medical Outcomes Study 36-Item Short-Form Health Survey 36 [SF-36] and the St George's Respiratory Questionnaire). Thirty-six lung transplant recipients (18 men, 18 women; mean age=49 years, SD=14) completed posttransplant measurements. Before transplant, daily steps were less than a third of the general population. By 3 months posttransplant, the largest improvement in physical activity had occurred, and level of daily steps reached 55% of the general population. The change in daily steps (pretransplant to 3 months posttransplant) was inversely correlated with pretransplant 6-minute walk distance (r=-.48, P=.007), daily steps (r=-.36, P=.05), and SF-36 physical functioning (SF-36 PF) score (r=-.59, P=.0005). The SF-36 PF was a significant predictor of the change in physical activity, accounting for 35% of the variation in change in daily steps. Only individuals who were ambulatory prior to transplant and discharged from the hospital in less than 3 months were included in the study. Physical activity levels improve following lung transplantation, particularly in individuals with low self-reported physical functioning. However, the majority of lung transplant recipients remain sedentary between 3 to 6 months following transplant. The role of exercise training, education, and counseling in further improving physical activity levels in lung transplant recipients should be further explored. © 2015 American Physical Therapy Association.
Effective precipitation duration for runoff peaks based on catchment modelling
NASA Astrophysics Data System (ADS)
Sikorska, A. E.; Viviroli, D.; Seibert, J.
2018-01-01
Despite precipitation intensities may greatly vary during one flood event, detailed information about these intensities may not be required to accurately simulate floods with a hydrological model which rather reacts to cumulative precipitation sums. This raises two questions: to which extent is it important to preserve sub-daily precipitation intensities and how long does it effectively rain from the hydrological point of view? Both questions might seem straightforward to answer with a direct analysis of past precipitation events but require some arbitrary choices regarding the length of a precipitation event. To avoid these arbitrary decisions, here we present an alternative approach to characterize the effective length of precipitation event which is based on runoff simulations with respect to large floods. More precisely, we quantify the fraction of a day over which the daily precipitation has to be distributed to faithfully reproduce the large annual and seasonal floods which were generated by the hourly precipitation rate time series. New precipitation time series were generated by first aggregating the hourly observed data into daily totals and then evenly distributing them over sub-daily periods (n hours). These simulated time series were used as input to a hydrological bucket-type model and the resulting runoff flood peaks were compared to those obtained when using the original precipitation time series. We define then the effective daily precipitation duration as the number of hours n, for which the largest peaks are simulated best. For nine mesoscale Swiss catchments this effective daily precipitation duration was about half a day, which indicates that detailed information on precipitation intensities is not necessarily required to accurately estimate peaks of the largest annual and seasonal floods. These findings support the use of simple disaggregation approaches to make usage of past daily precipitation observations or daily precipitation simulations (e.g. from climate models) for hydrological modeling at an hourly time step.
The Amazon, measuring a mighty river
,
1967-01-01
The Amazon, the world's largest river, discharges enough water into the sea each day to provide fresh water to the City of New York for over 9 years. Its flow accounts for about 15 percent of all the fresh water discharged into the oceans by all the rivers of the world. By comparison, the Amazon's flow is over 4 times that of the Congo River, the world's second largest river. And it is 10 times that of the Mississippi, the largest river on the North American Continent.
Automated identification and modeling aseismic slip events on Kilauea Volcano, Hawaii
NASA Astrophysics Data System (ADS)
Desmarais, E. K.; Segall, P.; Miklius, A.
2006-12-01
Several aseismic slip events have been observed on the south flank of Kilauea volcano, Hawaii (Cervelli et al., Nature, 2002; Brooks et al., EPSL, 2006; Segall et al., Nature, 2006). These events are identified as spatially coherent offsets in GPS time series. We have interpreted the events as slip on a sub-horizontal surface at depths consistent with a decollement under Kilauea's south flank. In order to determine whether smaller slow slip events are present in the time series, we developed an algorithm that searches for coherent displacement patterns similar to the known slow slip events. We compute candidate displacements by taking a running difference of the mean position 6 days before and after a window of 6 days centered on the candidate time step. The candidate displacements are placed in a 3N dimensional data vector, where N is the number of stations. We then compute the angle, in the 3N dimensional data space, between the candidate displacement and a reference vector at each time step. The reference vector is a stack of displacements due to the four largest known slow slip events. Small angles indicate similar displacement patterns, regardless of amplitude. The algorithm strongly identifies four events (September 20, 1998, November 9, 2000, December 16, 2002, and January 26, 2005), each separated by approximately 2.11 years. The algorithm also identified one smaller event (March 3, 1998) that preceeded the September 1998 event by ~ 200 days, and another event (July 4, 2003) that followed the December 2002 event by ~ 200 days. These smaller, 'paired' events appear to alternate rupturing of the eastern and western parts of the south flank. Each of the slow slip events is correlated with an increase, sometimes slight, in microseismicity on the south flank of Kilauea. The temporal evolution of the microseismicity for the 2005 event is well explained by increased stress due to the slow slip (Segall et al., Nature, 2006). The microearthquakes, at depths of 6.5 - 8.5 km, thus constrain the depth of the slow earthquakes to comparable depths. The triggering of microearthquakes implies that there is finite probability that a larger earthquake could be triggered, given appropriate stress conditions. In order to better constrain the locations of the slow slip events based solely on geodetic observations, we expand on the simple uniform slip models by adding the effects of distributed slip, layered elastic structure, and topography. There are many difficulties in observing slow slip events on Kilauea volcano. The GPS network only provides displacements on land, which is primarily to the north of the largest slip. The vertical displacement field is essential to understanding the northward extent of the slip, however, the GPS observations of slow slip events are primarily observed in the horizontal component, which have smaller noise levels (~ 3 mm). The maximum vertical deformation from the largest event (2005) was very small (± 9 mm), about the same size as the typical vertical noise. We are exploring the possibility that tiltmeters will allow sufficiently accurate measurements to help identify the northern extent of the slip surface.
Surface Plasmon Resonance Imaging of the Enzymatic Degradation of Cellulose Microfibrils
NASA Astrophysics Data System (ADS)
Reiter, Kyle; Raegen, Adam; Clarke, Anthony; Lipkowski, Jacek; Dutcher, John
2012-02-01
As the largest component of biomass on Earth, cellulose represents a significant potential energy reservoir. Enzymatic hydrolysis of cellulose into fermentable sugars, an integral step in the production of biofuel, is a challenging problem on an industrial scale. More efficient conversion processes may be developed by an increased understanding of the action of the cellulolytic enzymes involved in cellulose degradation. We have used our recently developed quantitative, angle-scanning surface plasmon resonance imaging (SPRi) device to study the degradation of cellulose microfibrils upon exposure to cellulosic enzymes. In particular, we have studied the action of individual enzymes, and combinations of enzymes, from the Hypocrea Jecorina cellulase system on heterogeneous, industrially-relevant cellulose substrates. This has allowed us to define a characteristic time of action for the enzymes for different degrees of surface coverage of the cellulose microfibrils.
Charter Schools Don't Serve Black Children Well: An Interview with Julian Vasquez Heilig
ERIC Educational Resources Information Center
Richardson, Joan
2017-01-01
The NAACP, nation's largest civil rights organization, steps up its opposition to charter schools just as a president and new education secretary appear ready to kick the sector into high gear. In 2016, the NAACP passed a resolution calling on a moratorium on the expansion of charter schools, citing concerns about transparency and accountability,…
Satellite-Based Fusion of Image/Inertial Sensors for Precise Geolocation
2009-03-01
largest contributor and is a valid approximation of orbital position prediction [15]. According to Newton, the gravitational force of the Earth onto an...steps in developing an image-aided navigation system for an orbiting satellite is the understanding of the satellite’s trajectory around the Earth . This...Development . . . . . . . . . . . . . . . . . . . . . . . . 77 4.2 Low Earth Orbit Simulation . . . . . . . . . . . . . . . . . . . . . . . 78 4.3 High Earth
Wang, Yaqiong; Ma, Hong
2015-09-01
Proteins often function as complexes, yet little is known about the evolution of dissimilar subunits of complexes. DNA-directed RNA polymerases (RNAPs) are multisubunit complexes, with distinct eukaryotic types for different classes of transcripts. In addition to Pol I-III, common in eukaryotes, plants have Pol IV and V for epigenetic regulation. Some RNAP subunits are specific to one type, whereas other subunits are shared by multiple types. We have conducted extensive phylogenetic and sequence analyses, and have placed RNAP gene duplication events in land plant history, thereby reconstructing the subunit compositions of the novel RNAPs during land plant evolution. We found that Pol IV/V have experienced step-wise duplication and diversification of various subunits, with increasingly distinctive subunit compositions. Also, lineage-specific duplications have further increased RNAP complexity with distinct copies in different plant families and varying divergence for subunits of different RNAPs. Further, the largest subunits of Pol IV/V probably originated from a gene fusion in the ancestral land plants. We propose a framework of plant RNAP evolution, providing an excellent model for protein complex evolution. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
Binding Isotherms and Time Courses Readily from Magnetic Resonance.
Xu, Jia; Van Doren, Steven R
2016-08-16
Evidence is presented that binding isotherms, simple or biphasic, can be extracted directly from noninterpreted, complex 2D NMR spectra using principal component analysis (PCA) to reveal the largest trend(s) across the series. This approach renders peak picking unnecessary for tracking population changes. In 1:1 binding, the first principal component captures the binding isotherm from NMR-detected titrations in fast, slow, and even intermediate and mixed exchange regimes, as illustrated for phospholigand associations with proteins. Although the sigmoidal shifts and line broadening of intermediate exchange distorts binding isotherms constructed conventionally, applying PCA directly to these spectra along with Pareto scaling overcomes the distortion. Applying PCA to time-domain NMR data also yields binding isotherms from titrations in fast or slow exchange. The algorithm readily extracts from magnetic resonance imaging movie time courses such as breathing and heart rate in chest imaging. Similarly, two-step binding processes detected by NMR are easily captured by principal components 1 and 2. PCA obviates the customary focus on specific peaks or regions of images. Applying it directly to a series of complex data will easily delineate binding isotherms, equilibrium shifts, and time courses of reactions or fluctuations.
NASA Astrophysics Data System (ADS)
Li, Yun; Qiu, Shi; Shi, Lihua; Huang, Zhengyu; Wang, Tao; Duan, Yantao
2017-12-01
The time resolved three-dimensional (3-D) spatial reconstruction of lightning channels using high-speed video (HSV) images and VHF broadband interferometer (BITF) data is first presented in this paper. Because VHF and optical radiations in step formation process occur with time separation no more than 1 μs, the observation data of BITF and HSV at two different sites provide the possibility of reconstructing the time resolved 3-D channel of lightning. With the proposed procedures for 3-D reconstruction of leader channels, dart leaders as well as stepped leaders with complex multiple branches can be well reconstructed. The differences between 2-D speeds and 3-D speeds of leader channels are analyzed by comparing the development of leader channels in 2-D and 3-D space. Since return stroke (RS) usually follows the path of previous leader channels, the 3-D speeds of the return strokes are first estimated by combination with the 3-D structure of the preceding leaders and HSV image sequences. For the fourth RS, the ratios of the 3-D to 2-D RS speeds increase with height, and the largest ratio of the 3-D to 2-D return stroke speeds can reach 2.03, which is larger than the result of triggered lightning reported by Idone. Since BITF can detect lightning radiation in a 360° view, correlated BITF and HSV observations increase the 3-D detection probability than dual-station HSV observations, which is helpful to obtain more events and deeper understanding of the lightning process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Hui; Rasch, Philip J.; Zhang, Kai
2014-09-08
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivities studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivitymore » of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.« less
Understanding how biodiversity unfolds through time under neutral theory.
Missa, Olivier; Dytham, Calvin; Morlon, Hélène
2016-04-05
Theoretical predictions for biodiversity patterns are typically derived under the assumption that ecological systems have reached a dynamic equilibrium. Yet, there is increasing evidence that various aspects of ecological systems, including (but not limited to) species richness, are not at equilibrium. Here, we use simulations to analyse how biodiversity patterns unfold through time. In particular, we focus on the relative time required for various biodiversity patterns (macroecological or phylogenetic) to reach equilibrium. We simulate spatially explicit metacommunities according to the Neutral Theory of Biodiversity (NTB) under three modes of speciation, which differ in how evenly a parent species is split between its two daughter species. We find that species richness stabilizes first, followed by species area relationships (SAR) and finally species abundance distributions (SAD). The difference in timing of equilibrium between these different macroecological patterns is the largest when the split of individuals between sibling species at speciation is the most uneven. Phylogenetic patterns of biodiversity take even longer to stabilize (tens to hundreds of times longer than species richness) so that equilibrium predictions from neutral theory for these patterns are unlikely to be relevant. Our results suggest that it may be unwise to assume that biodiversity patterns are at equilibrium and provide a first step in studying how these patterns unfold through time. © 2016 The Author(s).
Understanding how biodiversity unfolds through time under neutral theory
2016-01-01
Theoretical predictions for biodiversity patterns are typically derived under the assumption that ecological systems have reached a dynamic equilibrium. Yet, there is increasing evidence that various aspects of ecological systems, including (but not limited to) species richness, are not at equilibrium. Here, we use simulations to analyse how biodiversity patterns unfold through time. In particular, we focus on the relative time required for various biodiversity patterns (macroecological or phylogenetic) to reach equilibrium. We simulate spatially explicit metacommunities according to the Neutral Theory of Biodiversity (NTB) under three modes of speciation, which differ in how evenly a parent species is split between its two daughter species. We find that species richness stabilizes first, followed by species area relationships (SAR) and finally species abundance distributions (SAD). The difference in timing of equilibrium between these different macroecological patterns is the largest when the split of individuals between sibling species at speciation is the most uneven. Phylogenetic patterns of biodiversity take even longer to stabilize (tens to hundreds of times longer than species richness) so that equilibrium predictions from neutral theory for these patterns are unlikely to be relevant. Our results suggest that it may be unwise to assume that biodiversity patterns are at equilibrium and provide a first step in studying how these patterns unfold through time. PMID:26977066
Waaijman, Roelof; Keukenkamp, Renske; de Haart, Mirjam; Polomski, Wojtek P.; Nollet, Frans; Bus, Sicco A.
2013-01-01
OBJECTIVE Prescription custom-made footwear can only be effective in preventing diabetic foot ulcers if worn by the patient. Particularly, the high prevalence of recurrent foot ulcers focuses the attention on adherence, for which objective data are nonexisting. We objectively assessed adherence in patients with high risk of ulcer recurrence and evaluated what determines adherence. RESEARCH DESIGN AND METHODS In 107 patients with diabetes, neuropathy, a recently healed plantar foot ulcer, and custom-made footwear, footwear use was measured during 7 consecutive days using a shoe-worn, temperature-based monitor. Daily step count was measured simultaneously using an ankle-worn activity monitor. Patients logged time away from home. Adherence was calculated as the percentage of steps that prescription footwear was worn. Determinants of adherence were evaluated in multivariate linear regression analysis. RESULTS Mean ± SD adherence was 71 ± 25%. Adherence at home was 61 ± 32%, over 3,959 ± 2,594 steps, and away from home 87 ± 26%, over 2,604 ± 2,507 steps. In 35 patients with low adherence (<60%), adherence at home was 28 ± 24%. Lower BMI, more severe foot deformity, and more appealing footwear were significantly associated with higher adherence. CONCLUSIONS The results show that adherence to wearing custom-made footwear is insufficient, particularly at home where patients exhibit their largest walking activity. This low adherence is a major threat for reulceration. These objective findings provide directions for improvement in adherence, which could include prescribing specific off-loading footwear for indoors, and they set a reference for future comparative research on footwear adherence in diabetes. PMID:23321218
NASA Astrophysics Data System (ADS)
Weaver, Matthew L.; Qiu, S. Roger; Hoyer, John R.; Casey, William H.; Nancollas, George H.; De Yoreo, James J.
2007-08-01
Pathological mineralization is a common phenomenon in broad range of plants and animals. In humans, kidney stone formation is a well-known example that afflicts approximately 10% of the population. Of the various calcium salt phases that comprise human kidney stones, the primary component is calcium oxalate monohydrate (COM). Citrate, a naturally occurring molecule in the urinary system and a common therapeutic agent for treating stone disease, is a known inhibitor of COM. Understanding the physical mechanisms of citrate inhibition requires quantification of the effects of both background electrolytes and citrate on COM step kinetics. Here we report the results of an in situ AFM study of these effects, in which we measure the effect of the electrolytes LiCl, NaCl, KCl, RbCl, and CsCl, and the dependence of step speed on citrate concentration for a range of COM supersaturations. We find that varying the background electrolyte results in significant differences in the measured step speeds and in step morphology, with KCl clearly producing the smallest impact and NaCl the largest. The kinetic coefficient for the former is nearly three times larger than for the latter, while the steps change from smooth to highly serrated when KCl is changed to NaCl. The results on the dependence of step speed on citrate concentration show that citrate produces a dead zone whose width increases with citrate concentration as well as a continual reduction in kinetic coefficient with increasing citrate level. We relate these results to a molecular-scale view of inhibition that invokes a combination of kink blocking and step pinning. Furthermore, we demonstrate that the classic step-pinning model of Cabrera and Vermilyea (C-V model) does an excellent job of predicting the effect of citrate on COM step kinetics provided the model is reformulated to more realistically account for impurity adsorption, include an expression for the Gibbs-Thomson effect that is correct for all supersaturations, and take into account a reduction in kinetic coefficient through kink blocking. The detailed derivation of this reformulated C-V model is presented and the underlying materials parameters that control its impact are examined. Despite the fact that the basic C-V model was proposed nearly 50 years ago and has seen extensive theoretical treatment, this study represents the first quantitative and molecular scale experimental confirmation for any crystal system.
Localization of beta and high-frequency oscillations within the subthalamic nucleus region.
van Wijk, B C M; Pogosyan, A; Hariz, M I; Akram, H; Foltynie, T; Limousin, P; Horn, A; Ewert, S; Brown, P; Litvak, V
2017-01-01
Parkinsonian bradykinesia and rigidity are typically associated with excessive beta band oscillations in the subthalamic nucleus. Recently another spectral peak has been identified that might be implicated in the pathophysiology of the disease: high-frequency oscillations (HFO) within the 150-400 Hz range. Beta-HFO phase-amplitude coupling (PAC) has been found to correlate with severity of motor impairment. However, the neuronal origin of HFO and its usefulness as a potential target for deep brain stimulation remain to be established. For example, it is unclear whether HFO arise from the same neural populations as beta oscillations. We intraoperatively recorded local field potentials from the subthalamic nucleus while advancing DBS electrodes in 2 mm steps from 4 mm above the surgical target point until 2 mm below, resulting in 4 recording sites. Data from 26 nuclei from 14 patients were analysed. For each trajectory, we identified the recording site with the largest spectral peak in the beta range (13-30 Hz), and the largest peak in the HFO range separately. In addition, we identified the recording site with the largest beta-HFO PAC. Recording sites with largest beta power and largest HFO power coincided in 50% of cases. In the other 50%, HFO was more likely to be detected at a more superior recording site in the target area. PAC followed more closely the site with largest HFO (45%) than beta power (27%). HFO are likely to arise from spatially close, but slightly more superior neural populations than beta oscillations. Further work is necessary to determine whether the different activities can help fine-tune deep brain stimulation targeting.
ERIC Educational Resources Information Center
National Association of Latino Elected and Appointed Officials Education Fund, Washington, DC.
This paper reports on a 1989 survey of publicly funded amnesty class capacity in the chief metropolitan areas of the four states outside of California that have the largest populations of legalization applicants. The areas covered are Chicago (Illinois), Houston (Texas), Miami (Florida), and New York (New York). The study sought to determine if…
USDA-ARS?s Scientific Manuscript database
Clinical mastitis (CM) is one of the health disorders with largest impacts on dairy farming profitability and animal welfare. Previous studies have consistently shown that CM is under genetic control but knowledge about regions of the genome associated with resistance to CM in US Holstein is lacking...
ERIC Educational Resources Information Center
Wang, Jia; Schweig, Jonathan D.; Herman, Joan L.
2014-01-01
Magnet schools are one of the largest sectors of choice schools in the United States. In this study, we explored whether there is heterogeneity in magnet school effects on student achievement by examining the effectiveness of 24 recently funded magnet schools in 5 school districts across 4 states. We used a two-step analysis: First, separate…
ERIC Educational Resources Information Center
Barnett, Claire L.
2005-01-01
This report makes the case that no one is in charge of protecting children from harmful environmental exposures at school and recommends steps at the federal and in New York State to begin to address this hidden world. With information gleaned from adult occupational health experts, from new national studies and reports, and from the reports of…
Rep. Bass, Karen [D-CA-37
2014-07-31
House - 09/08/2014 Referred to the Subcommittee on Africa, Global Health, Global Human Rights and International Organizations. (All Actions) Tracker: This bill has the status IntroducedHere are the steps for Status of Legislation:
Extreme events as foundation of Lévy walks with varying velocity
NASA Astrophysics Data System (ADS)
Kutner, Ryszard
2002-11-01
In this work we study the role of extreme events [E.W. Montroll, B.J. West, in: J.L. Lebowitz, E.W. Montrell (Eds.), Fluctuation Phenomena, SSM, vol. VII, North-Holland, Amsterdam, 1979, p. 63; J.-P. Bouchaud, M. Potters, Theory of Financial Risks from Statistical Physics to Risk Management, Cambridge University Press, Cambridge, 2001; D. Sornette, Critical Phenomena in Natural Sciences. Chaos, Fractals, Selforganization and Disorder: Concepts and Tools, Springer, Berlin, 2000] in determining the scaling properties of Lévy walks with varying velocity. This model is an extension of the well-known Lévy walks one [J. Klafter, G. Zumofen, M.F. Shlesinger, in M.F. Shlesinger, G.M. Zaslavsky, U. Frisch (Eds.), Lévy Flights and Related Topics ion Physics, Lecture Notes in Physics, vol. 450, Springer, Berlin, 1995, p. 196; G. Zumofen, J. Klafter, M.F. Shlesinger, in: R. Kutner, A. Pȩkalski, K. Sznajd-Weron (Eds.), Anomalous Diffusion. From Basics to Applications, Lecture Note in Physics, vol. 519, Springer, Berlin, 1999, p. 15] introduced in the context of chaotic dynamics where a fixed value of the walker velocity is assumed for simplicity. Such an extension seems to be necessary when the open and/or complex system is studied. The model of Lévy walks with varying velocity is spanned on two coupled velocity-temporal hierarchies: the first one consisting of velocities and the second of corresponding time intervals which the walker spends between the successive turning points. Both these hierarchical structures are characterized by their own self-similar dimensions. The extreme event, which can appear within a given time interval, is defined as a single random step of the walker having largest length. By finding power-laws which describe the time-dependence of this displacement and its statistics we obtained two independent diffusion exponents, which are related to the above-mentioned dimensions and which characterize the extreme event kinetics. In this work we show the principal influence of extreme events on the basic quantities (one-step distributions and moments as well as two-step correlation functions) of the continuous-time random walk formalism. Besides, we construct both the waiting-time distribution and sojourn probability density directly in a real space and time in the scaling form by proper component analysis which takes into account all possible fluctuations of the walker steps in contrast to the extreme event analysis. In this work we pay our attention to the basic quantities, since the summarized multi-step ones were already discussed earlier [Physica A 264 (1999) 107; Comp. Phys. Commun. 147 (2002) 565]. Moreover, we study not only the scaling phenomena but also, assuming a finite number of hierarchy levels, the breaking of scaling and its dependence on control parameters. This seems to be important for studying empirical systems the more so as there are still no closed formulae describing this phenomenon except the one for truncated Lévy flights [Phys. Rev. Lett. 73 (1994) 2946]. Our formulation of the model made possible to develop an efficient Monte Carlo algorithm [Physica A 264 (1999) 107; Comp. Phys. Commun. 147 (2002) 565] where no MC step is lost.
Initial Breakdown Pulse Amplitudes in Intracloud and Cloud-to-Ground Lightning Flashes
NASA Astrophysics Data System (ADS)
Marshall, T. C.; Smith, E. M.; Stolzenburg, M.; Karunarathne, S.; Siedlecki, R. D., II
2017-12-01
This study analyzes the largest initial breakdown (IB) pulse in flashes from three storms in Florida. The study was motivated in part by the possibility that IB pulses of IC flashes may cause of terrestrial gamma-ray flashes (TGFs). The range-normalized, zero-to-peak amplitude of the largest IB pulse within each flash was determined along with its altitude, duration, and occurrence time in the flash. Appropriate data were available for 40 intracloud (IC) and 32 cloud-to-ground (CG) flashes. Histograms of the magnitude of the largest IB pulse amplitude by flash type were similar, with mean (median) values of 1.49 (1.05) V/m for IC flashes and -1.35 (-0.87) V/m for CG flashes. The mean amplitude of the largest IC IB pulses are substantially smaller (roughly an order of magnitude smaller) than the few known pulse amplitudes of TGF events and TGF candidate events. The largest IB pulse in 30 IC flashes showed a weak inverse relation between pulse amplitude and altitude. Amplitude of the largest IB pulse for 25 CG flashes showed no altitude correlation. Duration of the largest IB pulse in ICs averaged twice as long as in CGs (96 μs versus 46 μs); all of the CG durations were <100 μs. Among the ICs, there is a positive relation between largest IB pulse duration and amplitude; the linear correlation coefficient is 0.385 with outliers excluded. The largest IB pulse in IC flashes typically occurred at a longer time after the first IB pulse (average 4.1 ms) than was the case in CG flashes (average 0.6 ms). In both flash types, the largest IB pulse was the first IB pulse in about 30% of the cases.
Partitioning the Fitness Components of RNA Populations Evolving In Vitro
Díaz Arenas, Carolina; Lehman, Niles
2013-01-01
All individuals in an evolving population compete for resources, and their performance is measured by a fitness metric. The performance of the individuals is relative to their abilities and to the biotic surroundings – the conditions under which they are competing – and involves many components. Molecules evolving in a test tube can also face complex environments and dynamics, and their fitness measurements should reflect the complexity of various contributing factors as well. Here, the fitnesses of a set of ligase ribozymes evolved by the continuous in vitro evolution system were measured. During these evolution cycles there are three different catalytic steps, ligation, reverse transcription, and forward transcription, each with a potential differential influence on the total fitness of each ligase. For six distinct ligase ribozyme genotypes that resulted from continuous evolution experiments, the rates of reaction were measured for each catalytic step by tracking the kinetics of enzymes reacting with their substrates. The reaction products were analyzed for the amount of product formed per time. Each catalytic step of the evolution cycle was found to have a differential incidence in the total fitness of the ligases, and therefore the total fitness of any ligase cannot be inferred from only one catalytic step of the evolution cycle. Generally, the ribozyme-directed ligation step tends to impart the largest effect on overall fitness. Yet it was found that the ligase genotypes have different absolute fitness values, and that they exploit different stages of the overall cycle to gain a net advantage. This is a new example of molecular niche partitioning that may allow for coexistence of more than one species in a population. The dissection of molecular events into multiple components of fitness provides new insights into molecular evolutionary studies in the laboratory, and has the potential to explain heretofore counterintuitive findings. PMID:24391957
Observation of Spectral Signatures of 1/f Dynamics in Avalanches on Granular Piles
NASA Astrophysics Data System (ADS)
Kim, Yong W.; Nishino, Thomas K.
1997-03-01
Granular piles of monodisperse glass spheres, 0.46+0.03 mm in diameter, have been studied. The base diameter of the pile has been varied from 3/8" to 2" in 1/8" increments. A single-grain dispenser with greater than 95consisting of a stepping motor-actuated reciprocating arm with a single-grain scoop. Each grain is dropped on the apex of the pile with lowest possible landing velocity at intervals at least 30longer than the duration of largest avalanches for each given pile. Each grain being added and being lost in avalanches from the pile is optically detected and recorded. The power spectrum of the net addition of grains to the pile as a function of time is found to be robustly 1/f for all base sizes. A wide variety of dynamical properties of 1/f systems, as obtained from the high precision data, will be presented.
Initial Breakdown Pulse Parameters in Intracloud and Cloud-to-Ground Lightning Flashes
NASA Astrophysics Data System (ADS)
Smith, E. M.; Marshall, T. C.; Karunarathne, S.; Siedlecki, R.; Stolzenburg, M.
2018-02-01
This study analyzes the largest initial breakdown (IB) pulse in flashes from four storms in Florida; data from three sensor arrays are used. The range-normalized, zero-to-peak amplitude of the largest IB pulse was determined along with its altitude, duration, and timing within each flash. Appropriate data were available for 40 intracloud (IC) and 32 cloud-to-ground (CG) flashes. Histograms of amplitude of the largest IB pulse by flash type were similar, with mean (median) values of 1.49 (1.05) V/m for IC flashes and -1.35 (-0.87) V/m for CG flashes. The largest IB pulse in 30 IC flashes showed a weak inverse relation between pulse amplitude and altitude. Amplitude of the largest IB pulse for 25 CG flashes showed no altitude correlation. Duration of the largest IB pulse in ICs averaged twice as long as in CGs (96 μs versus 46 μs), and all of the CG durations were <100 μs. Among the ICs, there is a positive relation between largest IB pulse duration and amplitude; the linear correlation coefficient is 0.385 with outliers excluded. The largest IB pulse in IC flashes typically occurred at a longer time after the first IB pulse (average 4.1 ms) than was the case in CG flashes (average 0.6 ms). In both flash types, the largest IB pulse was the first IB pulse in about 30% of the cases. In one storm all 42 IC flashes with triggered data had IB pulses.
International Space Station exhibit
NASA Technical Reports Server (NTRS)
2000-01-01
The International Space Station (ISS) exhibit in StenniSphere at John C. Stennis Space Center in Hancock County, Miss., gives visitors an up-close look at the largest international peacetime project in history. Step inside a module of the ISS and glimpse how astronauts will live and work in space. Currently, 16 countries contribute resources and hardware to the ISS. When complete, the orbiting research facility will be larger than a football field.
Development of a method to analyze orthopaedic practice expenses.
Brinker, M R; Pierce, P; Siegel, G
2000-03-01
The purpose of the current investigation was to present a standard method by which an orthopaedic practice can analyze its practice expenses. To accomplish this, a five-step process was developed to analyze practice expenses using a modified version of activity-based costing. In this method, general ledger expenses were assigned to 17 activities that encompass all the tasks and processes typically performed in an orthopaedic practice. These 17 activities were identified in a practice expense study conducted for the American Academy of Orthopaedic Surgeons. To calculate the cost of each activity, financial data were used from a group of 19 orthopaedic surgeons in Houston, Texas. The activities that consumed the largest portion of the employee work force (person hours) were service patients in office (25.0% of all person hours), maintain medical records (13.6% of all person hours), and resolve collection disputes and rebill charges (12.3% of all person hours). The activities that comprised the largest portion of the total expenses were maintain facility (21.4%), service patients in office (16.0%), and sustain business by managing and coordinating practice (13.8%). The five-step process of analyzing practice expenses was relatively easy to perform and it may be used reliably by most orthopaedic practices.
Maximum relative speeds of living organisms: Why do bacteria perform as fast as ostriches?
NASA Astrophysics Data System (ADS)
Meyer-Vernet, Nicole; Rospars, Jean-Pierre
2016-12-01
Self-locomotion is central to animal behaviour and survival. It is generally analysed by focusing on preferred speeds and gaits under particular biological and physical constraints. In the present paper we focus instead on the maximum speed and we study its order-of-magnitude scaling with body size, from bacteria to the largest terrestrial and aquatic organisms. Using data for about 460 species of various taxonomic groups, we find a maximum relative speed of the order of magnitude of ten body lengths per second over a 1020-fold mass range of running and swimming animals. This result implies a locomotor time scale of the order of one tenth of second, virtually independent on body size, anatomy and locomotion style, whose ubiquity requires an explanation building on basic properties of motile organisms. From first-principle estimates, we relate this generic time scale to other basic biological properties, using in particular the recent generalisation of the muscle specific tension to molecular motors. Finally, we go a step further by relating this time scale to still more basic quantities, as environmental conditions at Earth in addition to fundamental physical and chemical constants.
Demonstration of Wavelet Techniques in the Spectral Analysis of Bypass Transition Data
NASA Technical Reports Server (NTRS)
Lewalle, Jacques; Ashpis, David E.; Sohn, Ki-Hyeon
1997-01-01
A number of wavelet-based techniques for the analysis of experimental data are developed and illustrated. A multiscale analysis based on the Mexican hat wavelet is demonstrated as a tool for acquiring physical and quantitative information not obtainable by standard signal analysis methods. Experimental data for the analysis came from simultaneous hot-wire velocity traces in a bypass transition of the boundary layer on a heated flat plate. A pair of traces (two components of velocity) at one location was excerpted. A number of ensemble and conditional statistics related to dominant time scales for energy and momentum transport were calculated. The analysis revealed a lack of energy-dominant time scales inside turbulent spots but identified transport-dominant scales inside spots that account for the largest part of the Reynolds stress. Momentum transport was much more intermittent than were energetic fluctuations. This work is the first step in a continuing study of the spatial evolution of these scale-related statistics, the goal being to apply the multiscale analysis results to improve the modeling of transitional and turbulent industrial flows.
The largest Lyapunov exponent of gait in young and elderly individuals: A systematic review.
Mehdizadeh, Sina
2018-02-01
The largest Lyapunov exponent (LyE) is an accepted method to quantify gait stability in young and old adults. However, a range of LyE values has been reported in the literature for healthy young and elderly adults in normal walking. Therefore, it has been impractical to use the LyE as a clinical measure of gait stability. The aims of this systematic review were to summarize different methodological approaches of quantifying LyE, as well as to classify LyE values of different body segments and joints in young and elderly individuals during normal walking. The Pubmed, Ovid Medline, Scopus and ISI Web of Knowledge databases were searched using keywords related to gait, stability, variability, and LyE. Only English language articles using the Lyapunov exponent to quantify the stability of healthy normal young and old subjects walking on a level surface were considered. 102 papers were included for full-text review and data extraction. Data associated with the walking surface, data recording method, sampling rate, walking speed, body segments and joints, number of strides/steps, variable type, filtering, time-normalizing, state space dimension, time delay, LyE algorithm, and the LyE values were extracted. The disparity in implementation and calculation of the LyE was from, (i) experiment design, (ii) data pre-processing, and (iii) LyE calculation method. For practical implementation of LyE as a measure of gait stability in clinical settings, a standard and universally accepted approach of calculating LyE is required. Therefore, future studies should look for a standard and generalized procedure to apply and calculate LyE. Copyright © 2017 Elsevier B.V. All rights reserved.
Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.
2016-01-01
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 104 neurons and 5.1 × 107 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the super computer system uses approximately 45× more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061
Moving through Life-Space Areas and Objectively Measured Physical Activity of Older People.
Portegijs, Erja; Tsai, Li-Tang; Rantanen, Taina; Rantakokko, Merja
2015-01-01
Physical activity-an important determinant of health and function in old age-may vary according to the life-space area reached. Our aim was to study how moving through greater life-space areas is associated with greater physical activity of community-dwelling older people. The association between objectively measured physical activity and life-space area reached on different days by the same individual was studied using one-week longitudinal data, to provide insight in causal relationships. One-week surveillance of objectively assessed physical activity of community-dwelling 70-90-year-old people in central Finland from the "Life-space mobility in old age" cohort substudy (N = 174). In spring 2012, participants wore an accelerometer for 7 days and completed a daily diary including the largest life-space area reached (inside home, outside home, neighborhood, town, and beyond town). The daily step count, and the time in moderate (incl. walking) and low activity and sedentary behavior were assessed. Differences in physical activity between days on which different life-space areas were reached were tested using Generalized Estimation Equation models (within-group comparison). Participants' mean age was 80.4±4.2 years and 63.5% were female. Participants had higher average step counts (p < .001) and greater moderate and low activity time (p < .001) on days when greater life-space areas were reached, from the home to the town area. Only low activity time continued to increase when moving beyond the town. Community-dwelling older people were more physically active on days when they moved through greater life-space areas. While it is unknown whether physical activity was a motivator to leave the home, intervention studies are needed to determine whether facilitation of daily outdoor mobility, regardless of the purpose, may be beneficial in terms of promoting physical activity.
Stent-Protected Carotid Angioplasty Using a Membrane Stent: A Comparative Cadaver Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller-Huelsbeck, Stefan, E-mail: muehue@rad.uni-kiel.de; Guehne, Albrecht; Tsokos, Michael
2006-08-15
Purpose. To evaluate the performance of a prototype membrane stent, MembraX, in the prevention of acute and late embolization and to quantify particle embolization during carotid stent placement in human carotid explants in a proof of concept study. Methods. Thirty human carotid cadaveric explants (mild stenoses 0-29%, n = 23; moderate stenoses 30-69%, n = 3; severe stenoses 70-99%, n = 2) that included the common, internal and external carotid arteries were integrated into a pulsatile-flow model. Three groups were formed according to the age of the donors (mean 58.8 years; sample SD 15.99 years) and randomized to three testmore » groups: (I) MembraX, n 9; (II) Xpert bare stent, n = 10; (III) Xpert bare stent with Emboshield protection device, n = 9. Emboli liberated during stent deployment (step A), post-dilatation (step B), and late embolization (step C) were measured in 100 {mu}m effluent filters. When the Emboshield was used, embolus penetration was measured during placement (step D) and retrieval (step E). Late embolization was simulated by compressing the area of the stented vessel five times. Results. Absolute numbers of particles (median; >100 {mu}m) caught in the effluent filter were: (I) MembraX: A = 7, B = 9, C = 3; (II) bare stent: A 6.5, B = 6, C = 4.5; (III) bare stent and Emboshield: A = 7, B = 7, C.=.5, D = 8, E = 10. The data showed no statistical differences according to whether embolic load was analyzed by weight or mean particle size. When summing all procedural steps, the Emboshield caused the greatest load by weight (p 0.011) and the largest number (p = 0.054) of particles. Conclusions. On the basis of these limited data neither a membrane stent nor a protection device showed significant advantages during ex vivo carotid angioplasty. However, the membrane stent seems to have the potential for reducing the emboli responsible for supposed late embolization, whereas more emboli were observed when using a protection device. Further studies are necessary and warranted.« less
Process Waste Assessment - Paint Shop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, N.M.
1993-06-01
This Process Waste Assessment was conducted to evaluate hazardous wastes generated in the Paint Shop, Building 913, Room 130. Special attention is given to waste streams generated by the spray painting process because it requires a number of steps for preparing, priming, and painting an object. Also, the spray paint booth covers the largest area in R-130. The largest and most costly waste stream to dispose of is {open_quote}Paint Shop waste{close_quotes} -- a combination of paint cans, rags, sticks, filters, and paper containers. These items are compacted in 55-gallon drums and disposed of as solid hazardous waste. Recommendations are mademore » for minimizing waste in the Paint Shop. Paint Shop personnel are very aware of the need to minimize hazardous wastes and are continuously looking for opportunities to do so.« less
Astrophysics in the Era of Massive Time-Domain Surveys
NASA Astrophysics Data System (ADS)
Djorgovski, G.
Synoptic sky surveys are now the largest data producers in astronomy, entering the Petascale regime, opening the time domain for a systematic exploration. A great variety of interesting phenomena, spanning essentially all subfields of astronomy, can only be studied in the time domain, and these new surveys are producing large statistical samples of the known types of objects and events for further studies (e.g., SNe, AGN, variable stars of many kinds), and have already uncovered previously unknown subtypes of these (e.g., rare or peculiar types of SNe). These surveys are generating a new science, and paving the way for even larger surveys to come, e.g., the LSST; our ability to fully exploit such forthcoming facilities depends critically on the science, methodology, and experience that are being accumulated now. Among the outstanding challenges, the foremost is our ability to conduct an effective follow-up of the interesting events discovered by the surveys in any wavelength regime. The follow-up resources, especially spectroscopy, are already and, for the predictable future, will be severely limited, thus requiring an intelligent down-selection of the most astrophysically interesting events to follow. The first step in that process is an automated, real-time, iterative classification of events, that incorporates heterogeneous data from the surveys themselves, archival and contextual information (spatial, temporal, and multiwavelength), and the incoming follow-up observations. The second step is an optimal automated event prioritization and allocation of the available follow-up resources that also change in time. Both of these challenges are highly non-trivial, and require a strong cyber-infrastructure based on the Virtual Observatory data grid, and the various astroinformatics efforts. Time domain astronomy is inherently an astronomy of telescope-computational systems, and will increasingly depend on novel machine learning and artificial intelligence tools. Another arena with a strong potential for discovery is a purely archival, non-time-critical exploration of the time domain, with the time dimension adding the complexity to an already challenging problem of data mining of highly-dimensional parameter spaces produced by sky surveys.
NASA Astrophysics Data System (ADS)
Tsai, Ming Han; Wu, Chi-Ting; Lee, Wen-His
2014-04-01
In this study, high-current and low-energy (400 eV) ion implantation and low-temperature microwave annealing were employed to achieve ultra shallow junctions. To use the characteristic of microwave annealing more effectively, two-step microwave annealing was also employed. In the first step annealing, a high-power (2400 W; ˜500 °C) microwave was used to achieve solid-state epitaxial regrowth (SPER) and enhance microwave absorption. In the second step of annealing, unlike in conventional thermal annealing, which requires a higher energy to activate the dopant, a 600 W (˜250 °C) microwave was used to achieve low sheet resistance. The device subjected to two-step microwave annealing at 2400 W for 300 s + 600 W for 600 s has the lowest Vth. It also has the lowest subthreshold swing (SS), which means that it has the highest cap ability to control sub threshold current. In these three devices, the largest Ion/Ioff ratio is 2.203 × 106, and the smallest Ion/Ioff ratio is 2.024 × 106.
NASA Astrophysics Data System (ADS)
Horálek, Josef; Čermáková, Hana; Fischer, Tomáš
2016-04-01
Earthquake swarms are sequences of numerous events closely clustered in space and time and do not have a single dominant mainshock. A few of the largest events in a swarm reach similar magnitudes and usually occur throughout the course of the earthquake sequence. These attributes differentiate earthquake swarms from ordinary mainshock-aftershock sequences. Earthquake swarms occur worldwide, in diverse geological units. The swarms typically accompany volcanic activity at margins of the tectonic plate but also occur in intracontinental areas where strain from tectonic-plate movement is small. The origin of earthquake swarms is still unclear. The swarms typically occur at the plate margins but also in intracontinental areas. West Bohemia-Vogtland represents one of the most active intraplate earthquake-swarm areas in Europe. It is characterised by a frequent reoccurrence of ML < 4.0 swarms and by high activity of crustal fluids. West Bohemia-Vogtland is one of the most active intraplate earthquake-swarm areas in Europe which also exhibits high activity of crustal fluids. The Nový Kostel focal zone (NK) dominates the recent seismicity, there were swarms in 1997, 2000, 2008 and 20011, and a striking non-swarm activity (mainshock-aftershock sequences) up to magnitude ML= 4.5 in May to August 2014. The swarms and the 2014 mainshock-aftershock sequences are located close to each other at depths between 6 and 13 km. The frequency-magnitude distributions of all the swarms show bimodal-like character: the most events obey the b-value = 1.0 distribution, but a group of the largest events depart significantly from it. All the ML > 2.8 swarm events are located in a few dense clusters which implies step by step rupturing of one or a few asperities during the individual swarms. The source mechanism patters (moment-tensor description, MT) of the individual swarms indicate several families of the mechanisms, which fit well geometry of respective fault segments. MTs of the most events signify pure shears except for the 1997-swarm events the MTs of which indicates a combine sources including both shear and tensile components. The origin of earthquake swarms is still unclear. Nevertheless, we infer that the individual earthquake swarms in West Bohemia-Vogtland are mixture of the mainshock-aftershock sequences which correspond to step by step rupturing of one or a few asperities. The swarms occur on short fault segments with heterogeneous stress and strength, which may be affected by pressurized crustal fluids reducing normal component of the tectonic stress and lower friction. This way critically loaded faults are brought to failure and the swarm activity is driven by the differential local stress.
Catalytic ignition model in a monolithic reactor with in-depth reaction
NASA Technical Reports Server (NTRS)
Tien, Ta-Ching; Tien, James S.
1990-01-01
Two transient models have been developed to study the catalytic ignition in a monolithic catalytic reactor. The special feature in these models is the inclusion of thermal and species structures in the porous catalytic layer. There are many time scales involved in the catalytic ignition problem, and these two models are developed with different time scales. In the full transient model, the equations are non-dimensionalized by the shortest time scale (mass diffusion across the catalytic layer). It is therefore accurate but is computationally costly. In the energy-integral model, only the slowest process (solid heat-up) is taken as nonsteady. It is approximate but computationally efficient. In the computations performed, the catalyst is platinum and the reactants are rich mixtures of hydrogen and oxygen. One-step global chemical reaction rates are used for both gas-phase homogeneous reaction and catalytic heterogeneous reaction. The computed results reveal the transient ignition processes in detail, including the structure variation with time in the reactive catalytic layer. An ignition map using reactor length and catalyst loading is constructed. The comparison of computed results between the two transient models verifies the applicability of the energy-integral model when the time is greater than the second largest time scale of the system. It also suggests that a proper combined use of the two models can catch all the transient phenomena while minimizing the computational cost.
Effects of a Longer Detection Window in VHF Time-of-Arrival Lightning Detection Systems
NASA Astrophysics Data System (ADS)
Murphy, M.; Holle, R.; Demetriades, N.
2003-12-01
Lightning detection systems that operate by measuring the times of arrival (TOA) of short bursts of radiation at VHF can produce huge volumes of data. The first automated system of this kind, the NASA Kennedy Space Center LDAR network, is capable of producing one detection every 100 usec from each of seven sensors (Lennon and Maier, 1991), where each detection consists of the time and amplitude of the highest-amplitude peak observed within the 100 usec window. More modern systems have been shown to produce very detailed information with one detection every 10 usec (Rison et al., 2001). Operating such systems in real time, however, can become expensive because of the large data communications rates required. One solution to this problem is to use a longer detection window, say 500 usec. In principle, this has little or no effect on the flash detection efficiency because each flash typically produces a very large number of these VHF bursts (known as sources). By simply taking the largest-amplitude peak from every 500-usec interval instead of every 100-usec interval, we should detect the largest 20{%} of the sources that would have been detected using the 100-usec window. However, questions remain about the exact effect of a longer detection window on the source detection efficiency with distance from the network, its effects on how well flashes are represented in space, and how well the reduced information represents the parent thunderstorm. The latter issue is relevant for automated location and tracking of thunderstorm cells using data from VHF TOA lightning detection networks, as well as for understanding relationships between lightning and severe weather. References Lennon, C.L. and L.M. Maier, Lightning mapping system. Proceedings, Intl. Aerospace and Ground Conf. on Lightning and Static Elec., Cocoa Beach, Fla., NASA Conf. Pub. 3106, vol. II, pp. 89-1 - 89-10, 1991. Rison, W., P. Krehbiel, R. Thomas, T. Hamlin, J. Harlin, High time resolution lightning mapping observations of a small thunderstorm during STEPS. Eos Trans. AGU, 82 (47), Fall Meet. Suppl., Abstract AE12A-83, 2001.
K/T age for the popigai impact event
NASA Technical Reports Server (NTRS)
Deino, A. L.; Garvin, J. B.; Montanari, S.
1991-01-01
The multi-ringed POPIGAI structure, with an outer ring diameter of over 100 km, is the largest impact feature currently recognized on Earth with an Phanerozoic age. The target rocks in this relatively unglaciated region consist of upper Proterozoic through Mesozoic platform sediments and igneous rocks overlying Precambrian crystalline basement. The reported absolute age of the Popigai impact event ranges from 30.5 to 39 Ma. With the intent of refining this age estimate, a melt-breccia (suevite) sample from the inner regions of the Popigai structure was prepared for total fusion and step-wise heating Ar-40/Ar-39 analysis. Although the total fusion and step-heating experiments suggest some degree of age heterogeneity, the recurring theme is an age of around 64 to 66 Ma.
Safe harbor: protecting ports with shipboard fuel cells.
Taylor, David A
2006-04-01
With five of the largest harbors in the United States, California is beginning to take steps to manage the large amounts of pollution generated by these bustling centers of transport and commerce. One option for reducing diesel emissions is the use of fuel cells, which run cleaner than diesel and other internal combustion engines. Other technologies being explored by harbor officials are diesel-electric hybrid and gas turbine locomotives for moving freight within port complexes.
NASA Astrophysics Data System (ADS)
Mues, A.; Kuenen, J.; Hendriks, C.; Manders, A.; Segers, A.; Scholz, Y.; Hueglin, C.; Builtjes, P.; Schaap, M.
2013-07-01
In this study the sensitivity of the model performance of the chemistry transport model (CTM) LOTOS-EUROS to the description of the temporal variability of emissions was investigated. Currently the temporal release of anthropogenic emissions is described by European average diurnal, weekly and seasonal time profiles per sector. These default time profiles largely neglect the variation of emission strength with activity patterns, region, species, emission process and meteorology. The three sources dealt with in this study are combustion in energy and transformation industries (SNAP1), non-industrial combustion (SNAP2) and road transport (SNAP7). First the impact of neglecting the temporal emission profiles for these SNAP categories on simulated concentrations was explored. In a~second step, we constructed more detailed emission time profiles for the three categories and quantified their impact on the model performance separately as well as combined. The performance in comparison to observations for Germany was quantified for the pollutants NO2, SO2 and PM10 and compared to a simulation using the default LOTOS-EUROS emission time profiles. In general the largest impact on the model performance was found when neglecting the default time profiles for the three categories. The daily average correlation coefficient for instance decreased by 0.04 (NO2), 0.11 (SO2) and 0.01 (PM10) at German urban background stations compared to the default simulation. A systematic increase of the correlation coefficient is found when using the new time profiles. The size of the increase depends on the source category, the component and station. Using national profiles for road transport showed important improvements of the explained variability over the weekdays as well as the diurnal cycle for NO2. The largest impact of the SNAP1 and 2 profiles were found for SO2. When using all new time profiles simultaneously in one simulation the daily average correlation coefficient increased by 0.05 (NO2), 0.07 (SO2) and 0.03 (PM10) at urban background stations in Germany. This exercise showed that to improve the performance of a CTM a better representation of the distribution of anthropogenic emission in time is recommendable. This can be done by developing a dynamical emission model which takes into account regional specific factors and meteorology.
Phase space reconstruction and estimation of the largest Lyapunov exponent for gait kinematic data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Josiński, Henryk; Świtoński, Adam; Silesian University of Technology, Akademicka 16, 44-100 Gliwice
The authors describe an example of application of nonlinear time series analysis directed at identifying the presence of deterministic chaos in human motion data by means of the largest Lyapunov exponent. The method was previously verified on the basis of a time series constructed from the numerical solutions of both the Lorenz and the Rössler nonlinear dynamical systems.
Isometric muscle strength and mobility capacity in children with cerebral palsy.
Dallmeijer, Annet J; Rameckers, Eugene A; Houdijk, Han; de Groot, Sonja; Scholtes, Vanessa A; Becher, Jules G
2017-01-01
To determine the relationship between isometric leg muscle strength and mobility capacity in children with cerebral palsy (CP) compared to typically developing (TD) peers. Participants were 62 children with CP (6-13 years), able to walk with (n = 10) or without (n = 52) walking aids, and 47 TD children. Isometric muscle strength of five muscle groups of the leg was measured using hand-held dynamometry. Mobility capacity was assessed with the 1-min walk, the 10-m walk, sit-to-stand, lateral-step-up and timed-stair tests. Isometric strength of children with CP was reduced to 36-82% of TD. When adjusted for age and height, the percentage of variance in mobility capacity that was explained by isometric strength of the leg muscles was 21-24% (walking speed), 25% (sit-to-stand), 28% (lateral-step-up) and 35% (timed-stair) in children with CP. Hip abductors and knee flexors had the largest contribution to the explained variance, while knee extensors showed the weakest correlation. Weak or no associations were found between strength and mobility capacity in TD children. Isometric strength, especially hip abductor and knee flexor strength, is moderately related to mobility capacity in children with CP, but not in TD children. To what extent training of these muscle groups will lead to better mobility capacity needs further study. Implications for Rehabilitation Strength training in children with cerebral palsy (CP) may be targeted more specifically at hip abductors and knee flexors. The moderate associations imply that large improvements in mobility capacity may not be expected when strength increases.
Balance and gait performance in an urban and a rural population.
Ringsberg, K A; Gärdsell, P; Johnell, O; Jónsson, B; Obrant, K J; Sernbo, I
1998-01-01
To compare the differences in standing balance and gait performance between two populations, correlated with age and physical activities of daily living. A cross-sectional study. Malmö, the third largest city in Sweden, and Sjöbo, a typical agricultural community 60 km east of Malmö. Participants were 570 men and women from the urban community (urban) and 391 from the rural community (rural), born in 1938, 1928, 1918, and 1908, and women born in 1948. The two cohorts were subdivided into true urbans, who had lived only in the city (n = 269), and true rurals, who had never lived in a city (n = 354). Information about workload, housing, spare time activities, medication, and illness during different decades of life was gathered using two questionnaires. The first questionnaire was sent to the home after agreement to participate, and the second was presented at the test session. The clinical measurements were standing balance, gait speed, and step length. The urban subjects had significantly (P < .001) impaired balance compared with rural subjects. This difference increased with increasing age. The urban subjects walked faster than the rural subjects (P < .001), and the urban subjects used fewer steps than their rural counterparts (P < .001). Spare time activities had a significant influence on the above tests, but, except for gait velocity (P = .011), workload was of minor importance according to analysis of covariance. Background factors such as usual daily activities of living and lifestyle seem to be of importance when evaluating and comparing different populations with respect to their balance and gait performance.
Plagioclase nucleation and growth kinetics in a hydrous basaltic melt by decompression experiments
NASA Astrophysics Data System (ADS)
Arzilli, Fabio; Agostini, C.; Landi, P.; Fortunati, A.; Mancini, L.; Carroll, M. R.
2015-12-01
Isothermal single-step decompression experiments (at temperature of 1075 °C and pressure between 5 and 50 MPa) were used to study the crystallization kinetics of plagioclase in hydrous high-K basaltic melts as a function of pressure, effective undercooling (Δ T eff) and time. Single-step decompression causes water exsolution and a consequent increase in the plagioclase liquidus, thus imposing an effective undercooling (Δ T eff), accompanied by increased melt viscosity. Here, we show that the decompression process acts directly on viscosity and thermodynamic energy barriers (such as interfacial-free energy), controlling the nucleation process and favoring the formation of homogeneous nuclei also at high pressure (low effective undercoolings). In fact, this study shows that similar crystal number densities ( N a) can be obtained both at low and high pressure (between 5 and 50 MPa), whereas crystal growth processes are favored at low pressures (5-10 MPa). The main evidence of this study is that the crystallization of plagioclase in decompressed high-K basalts is more rapid than that in rhyolitic melts on similar timescales. The onset of the crystallization process during experiments was characterized by an initial nucleation event within the first hour of the experiment, which produced the largest amount of plagioclase. This nucleation event, at short experimental duration, can produce a dramatic change in crystal number density ( N a) and crystal fraction ( ϕ), triggering a significant textural evolution in only 1 h. In natural systems, this may affect the magma rheology and eruptive dynamics on very short time scales.
NASA Technical Reports Server (NTRS)
Jovic, Srba; Kutler, Paul F. (Technical Monitor)
1994-01-01
Experimental results for a two-dimensional separated turbulent boundary layer behind a backward facing step for five different Reynolds numbers are reported. Results are presented in the form of tables, graphs and a floppy disk for an easy access of the data. Reynolds number based on the step height was varied by changing the reference velocity upstream of the step, U(sub o), and the step height, h. Hot-wire measurement techniques were used to measure three Reynolds stresses and four triple-velocity correlations. In addition, surface pressure and skin friction coefficients were measured. All hot-wire measurements were acquired in a measuring domain which excluded recirculating flow region due to the directional insensitivity of hot-wires. The downstream extent of the domain from the step was 51 h for the largest and I 14h for the smallest step height. This significant downstream length permitted extensive study of the flow recovery. Prediction of perturbed flows and their recovery is particularly attractive for popular turbulence models since variations of turbulence length and time scales and flow interactions in different regions are generally inadequately predicted. The data indicate that the flow in the free shear layer region behaves like the plane mixing layer up to about 2/3 of the mean reattachment length when the flow interaction with the wall commences the flow recovery to that of an ordinary turbulent boundary layer structure. These changes of the flow do not occur abruptly with the change of boundary conditions. A reattachment region represents a transitional region where the flow undergoes the most dramatic adjustments to the new boundary conditions. Large eddies, created in the upstream free-shear layer region, are being torn, recirculated, reentrained back into the main stream interacting with the incoming flow structure. It is foreseeable that it is quite difficult to describe the physics of this region in a rational and quantitative manner other than statistical. Downstream of the reattachment point the flow recovers at different rates near the wall, in the newly developing internal boundary layer, and in the outer part of the flow. It appears that Reynolds stresses do not fully recover up to the longest recovery length of 114 h.
A Water Resources Management Model to Evaluate Climate Change Impacts in North-Patagonia, Argentina
NASA Astrophysics Data System (ADS)
Bucciarelli, L. F.; Losano, F. T.; Marizza, M.; Cello, P.; Forni, L.; Young, C. A.; Girardin, L. O.; Nadal, G.; Lallana, F.; Godoy, S.; Vallejos, R.
2014-12-01
Most recently developed climate scenarios indicate a potential future increase in water stress in the region of Comahue, located in the North-Patagonia, Argentina. This region covers about 140,000 km2 where the Limay River and the Neuquén River converge into the Negro River, constituting the largest integrated basins in Argentina providing various uses of water resources: a) hydropower generation, contributing 15% of the national electricity market; b) fruit-horticultural products for local markets and export; c) human and industrial water supply; d) mining and oil exploitation, including Vaca Muerta, second world largest reserves of shale gas and fourth world largest reserves of shale-oil. The span of multiple jurisdictions and the convergence of various uses of water resources are a challenge for integrated understanding of economically and politically driven resource use activities on the natural system. The impacts of climate change on the system could lead to water resource conflicts between the different political actors and stakeholders. This paper presents the results of a hydrological simulation of the Limay river and Neuquén river basins using WEAP (Water Evaluation and Planning) considering the operation of artificial reservoirs located downstream at a monthly time step. This study aims to support policy makers via integrated tools for water-energy planning under climate uncertainties, and to facilitate the formulation of water policy-related actions for future water stress adaptation. The value of the integrated resource use model is that it can support local policy makers understand the implications of resource use trade-offs under a changing climate: 1) water availability to meet future growing demand for irrigated areas; 2) water supply for hydropower production; 3) increasing demand of water for mining and extraction of unconventional oil; 4) potential resource use conflicts and impacts on vulnerable populations.
NASA Astrophysics Data System (ADS)
Zhang, Bowen; Tian, Hanqin; Lu, Chaoqun; Chen, Guangsheng; Pan, Shufen; Anderson, Christopher; Poulter, Benjamin
2017-09-01
A wide range of estimates on global wetland methane (CH4) fluxes has been reported during the recent two decades. This gives rise to urgent needs to clarify and identify the uncertainty sources, and conclude a reconciled estimate for global CH4 fluxes from wetlands. Most estimates by using bottom-up approach rely on wetland data sets, but these data sets show largely inconsistent in terms of both wetland extent and spatiotemporal distribution. A quantitative assessment of uncertainties associated with these discrepancies among wetland data sets has not been well investigated yet. By comparing the five widely used global wetland data sets (GISS, GLWD, Kaplan, GIEMS and SWAMPS-GLWD), it this study, we found large differences in the wetland extent, ranging from 5.3 to 10.2 million km2, as well as their spatial and temporal distributions among the five data sets. These discrepancies in wetland data sets resulted in large bias in model-estimated global wetland CH4 emissions as simulated by using the Dynamic Land Ecosystem Model (DLEM). The model simulations indicated that the mean global wetland CH4 emissions during 2000-2007 were 177.2 ± 49.7 Tg CH4 yr-1, based on the five different data sets. The tropical regions contributed the largest portion of estimated CH4 emissions from global wetlands, but also had the largest discrepancy. Among six continents, the largest uncertainty was found in South America. Thus, the improved estimates of wetland extent and CH4 emissions in the tropical regions and South America would be a critical step toward an accurate estimate of global CH4 emissions. This uncertainty analysis also reveals an important need for our scientific community to generate a global scale wetland data set with higher spatial resolution and shorter time interval, by integrating multiple sources of field and satellite data with modeling approaches, for cross-scale extrapolation.
Riemannian geometry of Hamiltonian chaos: hints for a general theory.
Cerruti-Sola, Monica; Ciraolo, Guido; Franzosi, Roberto; Pettini, Marco
2008-10-01
We aim at assessing the validity limits of some simplifying hypotheses that, within a Riemmannian geometric framework, have provided an explanation of the origin of Hamiltonian chaos and have made it possible to develop a method of analytically computing the largest Lyapunov exponent of Hamiltonian systems with many degrees of freedom. Therefore, a numerical hypotheses testing has been performed for the Fermi-Pasta-Ulam beta model and for a chain of coupled rotators. These models, for which analytic computations of the largest Lyapunov exponents have been carried out in the mentioned Riemannian geometric framework, appear as paradigmatic examples to unveil the reason why the main hypothesis of quasi-isotropy of the mechanical manifolds sometimes breaks down. The breakdown is expected whenever the topology of the mechanical manifolds is nontrivial. This is an important step forward in view of developing a geometric theory of Hamiltonian chaos of general validity.
The changing hydrology of a dammed Amazon
Timpe, Kelsie; Kaplan, David
2017-01-01
Developing countries around the world are expanding hydropower to meet growing energy demand. In the Brazilian Amazon, >200 dams are planned over the next 30 years, and questions about the impacts of current and future hydropower in this globally important watershed remain unanswered. In this context, we applied a hydrologic indicator method to quantify how existing Amazon dams have altered the natural flow regime and to identify predictors of alteration. The type and magnitude of hydrologic alteration varied widely by dam, but the largest changes were to critical characteristics of the flood pulse. Impacts were largest for low-elevation, large-reservoir dams; however, small dams had enormous impacts relative to electricity production. Finally, the “cumulative” effect of multiple dams was significant but only for some aspects of the flow regime. This analysis is a first step toward the development of environmental flows plans and policies relevant to the Amazon and other megadiverse river basins. PMID:29109972
Daily physical activity patterns of children living in an American Indian community.
Brusseau, Timothy A; Kulinna, Pamela H; Tudor-Locke, Catrine; Ferry, Matthew
2013-01-01
Embracing a physically active lifestyle is especially important for American Indian (AI) children who are at a greater risk for hypokinetic diseases, particularly Type 2 diabetes. The purpose of this study was to describe AI children's pedometer-determined physical activity (PA) segmented into prominent daily activity patterns. Participants included 5th- and 6th-grade children (N = 77) attending school from 1 Southwestern US AI community. Children wore a pedometer (Yamax Digiwalker SW-200) for 7 consecutive days. Boys accumulated 12,621 (± 5385) steps/weekday and girls accumulated 11,640 (± 3695) steps/weekday of which 38% (4,779 ± 1271) and 35% (4,027 ± 1285) were accumulated at school for boys and girls, respectively. Physical education (PE) provided the single largest source of PA during school for both boys (25% or 3117 steps/day) and girls (23% or 2638 steps/day). Lunchtime recess provided 1612 (13%) and 1241 (11%) steps/day for boys and girls, respectively. Children were significantly less active on weekend days, accumulating 8066 ± 1959 (boys) and 6676 ± 1884 (girls). Although children accumulate a majority of their steps outside of school, this study highlights the important contribution of PE to the overall PA accumulation of children living in AI communities. Further, PA programming during the weekend appears to be important for this population.
Direct analysis of terpenes from biological buffer systems using SESI and IR-MALDESI.
Nazari, Milad; Malico, Alexandra A; Ekelöf, Måns; Lund, Sean; Williams, Gavin J; Muddiman, David C
2018-01-01
Terpenes are the largest class of natural products with a wide range of applications including use as pharmaceuticals, fragrances, flavorings, and agricultural products. Terpenes are biosynthesized by the condensation of a variable number of isoprene units resulting in linear polyisoprene diphosphate units, which can then be cyclized by terpene synthases into a range of complex structures. While these cyclic structures have immense diversity and potential in different applications, their direct analysis in biological buffer systems requires intensive sample preparation steps such as salt cleanup, extraction with organic solvents, and chromatographic separations. Electrospray post-ionization can be used to circumvent many sample cleanup and desalting steps. SESI and IR-MALDESI are two examples of ionization methods that employ electrospray post-ionization at atmospheric pressure and temperature. By coupling the two techniques and doping the electrospray solvent with silver ions, olefinic terpenes of different classes and varying degrees of volatility were directly analyzed from a biological buffer system with no sample workup steps.
Singhal, Kunal; Kim, Jemin; Casebolt, Jeffrey; Lee, Sangwoo; Han, Ki-Hoon; Kwon, Young-Hoo
2015-06-01
Angular momentum of the body is a highly controlled quantity signifying stability, therefore, it is essential to understand its regulation during stair descent. The purpose of this study was to investigate how older adults use gravity and ground reaction force to regulate the angular momentum of the body during stair descent. A total of 28 participants (12 male and 16 female; 68.5 years and 69.0 years of mean age respectively) performed stair descent from a level walk in a step-over-step manner at a self-selected speed over a custom made three-step staircase with embedded force plates. Kinematic and force data were used to calculate angular momentum, gravitational moment, and ground reaction force moment about the stance foot center of pressure. Women show a significantly greater change in normalized angular momentum (0.92Nms/Kgm; p=.004) as compared to men (0.45Nms/Kgm). Women produce higher normalized GRF (p=.031) during the double support phase. The angular momentum changes show largest backward regulation for Step 0 and forward regulation for Step 2. This greater difference in overall change in the angular momentum in women may explain their increased risk of fall over the stairs. Copyright © 2015 Elsevier B.V. All rights reserved.
Experimental study of pancreaticojejunostomy completed using anastomotic chains
NASA Astrophysics Data System (ADS)
Pan, Wei-Dong; Xu, Rui-Yun; Li, Nan; Fang, He-Ping; Pan, Cu-Zhi; Tang, Zhao-Feng
2010-07-01
The most difficult, time-consuming, and complication-prone step in pancreaticoduodenectomy is the pancreaticojejunostomy step. The largest disadvantage of this kind of anastomosis is the high incidence of postoperative anastomotic leakage. Once pancreatic leakage occurs, the patient death rate can be very high. The aim of this study was to design a pancreaticojejunostomy procedure using anastomotic chains, which results in the cut end of the jejunum being attached to the pancreatic stump without suturing, and to evaluate the safety and efficacy of this procedure in domestic pigs. The pancreaticojejunal anastomotic chains had the following structures: the chains consisted of two braceletlike chains made of titanium, named chain A and chain B. The function of chain A was to attach the free jejunal end onto the pancreatic stump, whereas the function of chain B was to tighten the contact between the jejunal wall and the surface of the pancreatic stump to eliminate gaps between the two structures and ensure tightness that is sufficient to guarantee that there is no leakage of jejunal fluid or pancreatic juice. The following procedure was used to assess the safety and efficacy of the procedure: pancreaticojejunostomies were performed on ten domestic pigs using anastomotic chains. The time required to complete the pancreaticojejunal anastomoses, the pressure tolerance of the pancreaticojejunal anastomoses, the pig death rate, and the histopathological examinations of the pancreaticojejunostomy tissues were recorded. The average time required to complete the pancreaticojejunal anastomosis procedure was 13±2 min. The observed tolerance pressure of the pancreaticojejunal anastomoses was more than 90 mm H2O. All ten domestic pigs that underwent operations were still alive four weeks after the operations. Pathological examinations showed that the anastomotic surfaces were completely healed, and the pancreatic cutting surfaces were primarily epithelialized. In conclusion, the use of anastomotic chains in pancreaticojejunostomy procedures results in a decrease in or elimination of pancreatic leakage. In addition, the procedure is simple to perform, is not time-intensive, and appears to be safe in a pig model.
Method of Simulating Flow-Through Area of a Pressure Regulator
NASA Technical Reports Server (NTRS)
Hass, Neal E. (Inventor); Schallhorn, Paul A. (Inventor)
2011-01-01
The flow-through area of a pressure regulator positioned in a branch of a simulated fluid flow network is generated. A target pressure is defined downstream of the pressure regulator. A projected flow-through area is generated as a non-linear function of (i) target pressure, (ii) flow-through area of the pressure regulator for a current time step and a previous time step, and (iii) pressure at the downstream location for the current time step and previous time step. A simulated flow-through area for the next time step is generated as a sum of (i) flow-through area for the current time step, and (ii) a difference between the projected flow-through area and the flow-through area for the current time step multiplied by a user-defined rate control parameter. These steps are repeated for a sequence of time steps until the pressure at the downstream location is approximately equal to the target pressure.
Is the BTS/SIGN guideline confusing? A retrospective database analysis of asthma therapy.
Covvey, Jordan R; Johnston, Blair F; Wood, Fraser; Boyter, Anne C
2013-09-01
The British guideline on the management of asthma produced by the British Thoracic Society (BTS) and the Scottish Intercollegiate Guidelines Network (SIGN) describes five steps for the management of chronic asthma. Combination therapy of a long acting β2-agonist (LABA) and an inhaled corticosteroid (ICS) is recommended as first-line therapy at step 3, although the dose of ICS at which to add a LABA is subject to debate. To classify the inhaled therapy prescribed to patients with asthma in NHS Forth Valley according to two interpretations of the BTS/SIGN guideline and to evaluate the use of combination therapy in this population. A retrospective analysis including patients from 46 general practitioner surgeries was conducted. Patients with physician diagnosed asthma were classified according to the BTS/SIGN guideline based on treatment prescribed during 2008. Patient characteristics were evaluated for the overall step classification, and specifically for therapy in step 3. 12,319 patients were included. Guideline interpretation resulted in a shift of 9.2% of patients (receiving medium-dose ICS alone) between steps 2 and 3. The largest proportion of patients (32.3%) was classified at step 4. Age, sex, smoking status, chronic obstructive pulmonary disease co-morbidity, and utilisation of short-acting β2-agonists and oral corticosteroids all correlated with step; however, no differences in these characteristics were evident between low-dose combination therapy and medium-dose ICS alone at step 3. Further studies are needed to evaluate prescribing decisions in asthma. Guideline recommendations regarding the use of ICS dose escalation versus combination therapy need to be clarified relative to the published evidence.
Automatic initial and final segmentation in cleft palate speech of Mandarin speakers
Liu, Yin; Yin, Heng; Zhang, Junpeng; Zhang, Jing; Zhang, Jiang
2017-01-01
The speech unit segmentation is an important pre-processing step in the analysis of cleft palate speech. In Mandarin, one syllable is composed of two parts: initial and final. In cleft palate speech, the resonance disorders occur at the finals and the voiced initials, while the articulation disorders occur at the unvoiced initials. Thus, the initials and finals are the minimum speech units, which could reflect the characteristics of cleft palate speech disorders. In this work, an automatic initial/final segmentation method is proposed. It is an important preprocessing step in cleft palate speech signal processing. The tested cleft palate speech utterances are collected from the Cleft Palate Speech Treatment Center in the Hospital of Stomatology, Sichuan University, which has the largest cleft palate patients in China. The cleft palate speech data includes 824 speech segments, and the control samples contain 228 speech segments. The syllables are extracted from the speech utterances firstly. The proposed syllable extraction method avoids the training stage, and achieves a good performance for both voiced and unvoiced speech. Then, the syllables are classified into with “quasi-unvoiced” or with “quasi-voiced” initials. Respective initial/final segmentation methods are proposed to these two types of syllables. Moreover, a two-step segmentation method is proposed. The rough locations of syllable and initial/final boundaries are refined in the second segmentation step, in order to improve the robustness of segmentation accuracy. The experiments show that the initial/final segmentation accuracies for syllables with quasi-unvoiced initials are higher than quasi-voiced initials. For the cleft palate speech, the mean time error is 4.4ms for syllables with quasi-unvoiced initials, and 25.7ms for syllables with quasi-voiced initials, and the correct segmentation accuracy P30 for all the syllables is 91.69%. For the control samples, P30 for all the syllables is 91.24%. PMID:28926572
Squires, Janet E; Grimshaw, Jeremy M; Taljaard, Monica; Linklater, Stefanie; Chassé, Michaël; Shemie, Sam D; Knoll, Gregory A
2014-06-20
A shortage of transplantable organs is a global problem. There are two types of organ donation: living and deceased. Deceased organ donation can occur following neurological determination of death (NDD) or cardiocirculatory death. Donation after cardiocirculatory death (DCD) accounts for the largest increments in deceased organ donation worldwide. Variations in the use of DCD exist, however, within Canada and worldwide. Reasons for these discrepancies are largely unknown. The purpose of this study is to develop, implement, and evaluate a theory-based knowledge translation intervention to provide practical guidance about how to increase the numbers of DCD organ donors without reducing the numbers of standard NDD donors. We will use a mixed method three-step approach. In step one, we will conduct semi-structured interviews, informed by the Theoretical Domains Framework, to identify and describe stakeholders' beliefs and attitudes about DCD and their perceptions of the multi-level factors that influence DCD. We will identify: determinants of the evidence-practice gap; specific behavioural changes and/or process changes needed to increase DCD; specific group(s) of clinicians or organizations (e.g., provincial donor organizations) in need of behaviour change; and specific targets for interventions. In step two, using the principles of intervention mapping, we will develop a theory-based knowledge translation intervention that encompasses behavior change techniques to overcome the identified barriers and enhance the enablers to DCD. In step three, we will roll out the intervention in hospitals across the 10 Canadian provinces and evaluate its effectiveness using a multiple interrupted time series design. We will adopt a behavioural approach to define and test novel, theory-based, and ethically-acceptable knowledge translation strategies to increase the numbers of available DCD organ donors in Canada. If successful, this study will ultimately lead to more transplantations, reducing patient morbidity and mortality at a population-level.
Steps to achieve quantitative measurements of microRNA using two step droplet digital PCR.
Stein, Erica V; Duewer, David L; Farkas, Natalia; Romsos, Erica L; Wang, Lili; Cole, Kenneth D
2017-01-01
Droplet digital PCR (ddPCR) is being advocated as a reference method to measure rare genomic targets. It has consistently been proven to be more sensitive and direct at discerning copy numbers of DNA than other quantitative methods. However, one of the largest obstacles to measuring microRNA (miRNA) using ddPCR is that reverse transcription efficiency depends upon the target, meaning small RNA nucleotide composition directly effects primer specificity in a manner that prevents traditional quantitation optimization strategies. Additionally, the use of reagents that are optimized for miRNA measurements using quantitative real-time PCR (qRT-PCR) appear to either cause false positive or false negative detection of certain targets when used with traditional ddPCR quantification methods. False readings are often related to using inadequate enzymes, primers and probes. Given that two-step miRNA quantification using ddPCR relies solely on reverse transcription and uses proprietary reagents previously optimized only for qRT-PCR, these barriers are substantial. Therefore, here we outline essential controls, optimization techniques, and an efficacy model to improve the quality of ddPCR miRNA measurements. We have applied two-step principles used for miRNA qRT-PCR measurements and leveraged the use of synthetic miRNA targets to evaluate ddPCR following cDNA synthesis with four different commercial kits. We have identified inefficiencies and limitations as well as proposed ways to circumvent identified obstacles. Lastly, we show that we can apply these criteria to a model system to confidently quantify miRNA copy number. Our measurement technique is a novel way to quantify specific miRNA copy number in a single sample, without using standard curves for individual experiments. Our methodology can be used for validation and control measurements, as well as a diagnostic technique that allows scientists, technicians, clinicians, and regulators to base miRNA measures on a single unit of measurement rather than a ratio of values.
Steps to achieve quantitative measurements of microRNA using two step droplet digital PCR
Duewer, David L.; Farkas, Natalia; Romsos, Erica L.; Wang, Lili; Cole, Kenneth D.
2017-01-01
Droplet digital PCR (ddPCR) is being advocated as a reference method to measure rare genomic targets. It has consistently been proven to be more sensitive and direct at discerning copy numbers of DNA than other quantitative methods. However, one of the largest obstacles to measuring microRNA (miRNA) using ddPCR is that reverse transcription efficiency depends upon the target, meaning small RNA nucleotide composition directly effects primer specificity in a manner that prevents traditional quantitation optimization strategies. Additionally, the use of reagents that are optimized for miRNA measurements using quantitative real-time PCR (qRT-PCR) appear to either cause false positive or false negative detection of certain targets when used with traditional ddPCR quantification methods. False readings are often related to using inadequate enzymes, primers and probes. Given that two-step miRNA quantification using ddPCR relies solely on reverse transcription and uses proprietary reagents previously optimized only for qRT-PCR, these barriers are substantial. Therefore, here we outline essential controls, optimization techniques, and an efficacy model to improve the quality of ddPCR miRNA measurements. We have applied two-step principles used for miRNA qRT-PCR measurements and leveraged the use of synthetic miRNA targets to evaluate ddPCR following cDNA synthesis with four different commercial kits. We have identified inefficiencies and limitations as well as proposed ways to circumvent identified obstacles. Lastly, we show that we can apply these criteria to a model system to confidently quantify miRNA copy number. Our measurement technique is a novel way to quantify specific miRNA copy number in a single sample, without using standard curves for individual experiments. Our methodology can be used for validation and control measurements, as well as a diagnostic technique that allows scientists, technicians, clinicians, and regulators to base miRNA measures on a single unit of measurement rather than a ratio of values. PMID:29145448
2014-01-01
Background A shortage of transplantable organs is a global problem. There are two types of organ donation: living and deceased. Deceased organ donation can occur following neurological determination of death (NDD) or cardiocirculatory death. Donation after cardiocirculatory death (DCD) accounts for the largest increments in deceased organ donation worldwide. Variations in the use of DCD exist, however, within Canada and worldwide. Reasons for these discrepancies are largely unknown. The purpose of this study is to develop, implement, and evaluate a theory-based knowledge translation intervention to provide practical guidance about how to increase the numbers of DCD organ donors without reducing the numbers of standard NDD donors. Methods We will use a mixed method three-step approach. In step one, we will conduct semi-structured interviews, informed by the Theoretical Domains Framework, to identify and describe stakeholders’ beliefs and attitudes about DCD and their perceptions of the multi-level factors that influence DCD. We will identify: determinants of the evidence-practice gap; specific behavioural changes and/or process changes needed to increase DCD; specific group(s) of clinicians or organizations (e.g., provincial donor organizations) in need of behaviour change; and specific targets for interventions. In step two, using the principles of intervention mapping, we will develop a theory-based knowledge translation intervention that encompasses behavior change techniques to overcome the identified barriers and enhance the enablers to DCD. In step three, we will roll out the intervention in hospitals across the 10 Canadian provinces and evaluate its effectiveness using a multiple interrupted time series design. Discussion We will adopt a behavioural approach to define and test novel, theory-based, and ethically-acceptable knowledge translation strategies to increase the numbers of available DCD organ donors in Canada. If successful, this study will ultimately lead to more transplantations, reducing patient morbidity and mortality at a population-level. PMID:24950719
Automatic initial and final segmentation in cleft palate speech of Mandarin speakers.
He, Ling; Liu, Yin; Yin, Heng; Zhang, Junpeng; Zhang, Jing; Zhang, Jiang
2017-01-01
The speech unit segmentation is an important pre-processing step in the analysis of cleft palate speech. In Mandarin, one syllable is composed of two parts: initial and final. In cleft palate speech, the resonance disorders occur at the finals and the voiced initials, while the articulation disorders occur at the unvoiced initials. Thus, the initials and finals are the minimum speech units, which could reflect the characteristics of cleft palate speech disorders. In this work, an automatic initial/final segmentation method is proposed. It is an important preprocessing step in cleft palate speech signal processing. The tested cleft palate speech utterances are collected from the Cleft Palate Speech Treatment Center in the Hospital of Stomatology, Sichuan University, which has the largest cleft palate patients in China. The cleft palate speech data includes 824 speech segments, and the control samples contain 228 speech segments. The syllables are extracted from the speech utterances firstly. The proposed syllable extraction method avoids the training stage, and achieves a good performance for both voiced and unvoiced speech. Then, the syllables are classified into with "quasi-unvoiced" or with "quasi-voiced" initials. Respective initial/final segmentation methods are proposed to these two types of syllables. Moreover, a two-step segmentation method is proposed. The rough locations of syllable and initial/final boundaries are refined in the second segmentation step, in order to improve the robustness of segmentation accuracy. The experiments show that the initial/final segmentation accuracies for syllables with quasi-unvoiced initials are higher than quasi-voiced initials. For the cleft palate speech, the mean time error is 4.4ms for syllables with quasi-unvoiced initials, and 25.7ms for syllables with quasi-voiced initials, and the correct segmentation accuracy P30 for all the syllables is 91.69%. For the control samples, P30 for all the syllables is 91.24%.
Keil, Holger; Beisemann, Nils; Schnetzke, Marc; Vetter, Sven Yves; Swartman, Benedict; Grützner, Paul Alfred; Franke, Jochen
2018-04-10
In acetabular fractures, the assessment of reduction and implant placement has limitations in conventional 2D intraoperative imaging. 3D imaging offers the opportunity to acquire CT-like images and thus to improve the results. However, clinical experience shows that even 3D imaging has limitations, especially regarding artifacts when implants are placed. The purpose of this study was to assess the difference between intraoperative 3D imaging and postoperative CT regarding reduction and implant placement. Twenty consecutive cases of acetabular fractures were selected with a complete set of intraoperative 3D imaging and postoperative CT data. The largest detectable step and the largest detectable gap were measured in all three standard planes. These values were compared between the 3D data sets and CT data sets. Additionally, possible correlations between the possible confounders age and BMI and the difference between 3D and CT values were tested. The mean difference of largest visible step between the 3D imaging and CT scan was 2.0 ± 1.8 mm (0.0-5.8, p = 0.02) in the axial, 1.3 ± 1.4 mm (0.0-3.7, p = 0.15) in the sagittal and 1.9 ± 2.4 mm (0.0-7.4, p = 0.22) in the coronal views. The mean difference of largest visible gap between the 3D imaging and CT scan was 3.1 ± 3.6 mm (0.0-14.1, p = 0.03) in the axial, 4.6 ± 2.7 mm (1.2-8.7, p = 0.001) in the sagittal and 3.5 ± 4.0 mm (0.0-15.4, p = 0.06) in the coronal views. A positive correlation between the age and the difference in gap measurements in the sagittal view was shown (rho = 0.556, p = 0.011). Intraoperative 3D imaging is a valuable adjunct in assessing reduction and implant placement in acetabular fractures but has limitations due to artifacts caused by implant material. This can lead to missed malreduction and impairment of clinical outcome, so postoperative CT should be considered in these cases.
Methods, systems and devices for detecting and locating ferromagnetic objects
Roybal, Lyle Gene [Idaho Falls, ID; Kotter, Dale Kent [Shelley, ID; Rohrbaugh, David Thomas [Idaho Falls, ID; Spencer, David Frazer [Idaho Falls, ID
2010-01-26
Methods for detecting and locating ferromagnetic objects in a security screening system. One method includes a step of acquiring magnetic data that includes magnetic field gradients detected during a period of time. Another step includes representing the magnetic data as a function of the period of time. Another step includes converting the magnetic data to being represented as a function of frequency. Another method includes a step of sensing a magnetic field for a period of time. Another step includes detecting a gradient within the magnetic field during the period of time. Another step includes identifying a peak value of the gradient detected during the period of time. Another step includes identifying a portion of time within the period of time that represents when the peak value occurs. Another step includes configuring the portion of time over the period of time to represent a ratio.
Effects of Socket Size on Metrics of Socket Fit in Trans-Tibial Prosthesis Users
Sanders, Joan E; Youngblood, Robert T; Hafner, Brian J; Cagle, John C; McLean, Jake B; Redd, Christian B; Dietrich, Colin R; Ciol, Marcia A; Allyn, Katheryn J
2017-01-01
The purpose of this research was to conduct a preliminary effort to identify quantitative metrics to distinguish a good socket from an oversized socket in people with trans-tibial amputation. Results could be used to inform clinical practices related to socket replacement. A cross-over study was conducted on community ambulators (K-level 3 or 4) with good residual limb sensation. Participants were each provided with two sockets, a duplicate of their as-prescribed socket and a modified socket that was enlarged or reduced by 1.8 mm (~6% of the socket volume) based on the fit quality of the as-prescribed socket. The two sockets were termed a larger socket and a smaller socket. Activity was monitored while participants wore each socket for 4wk. Participants’ gait; self-reported satisfaction, quality of fit, and performance; socket comfort; and morning-to-afternoon limb fluid volume changes were assessed. Visual analysis of plots and estimated effect sizes (measure as mean difference divided by standard deviation) showed largest effects for step time asymmetry, step width asymmetry, anterior and anterior-distal morning-to-afternoon fluid volume change, socket comfort scores, and self-reported measures of utility, satisfaction, and residual limb health. These variables may be viable metrics for early detection of deterioration in socket fit, and should be tested in a larger clinical study. PMID:28373013
Effects of socket size on metrics of socket fit in trans-tibial prosthesis users.
Sanders, Joan E; Youngblood, Robert T; Hafner, Brian J; Cagle, John C; McLean, Jake B; Redd, Christian B; Dietrich, Colin R; Ciol, Marcia A; Allyn, Katheryn J
2017-06-01
The purpose of this research was to conduct a preliminary effort to identify quantitative metrics to distinguish a good socket from an oversized socket in people with trans-tibial amputation. Results could be used to inform clinical practices related to socket replacement. A cross-over study was conducted on community ambulators (K-level 3 or 4) with good residual limb sensation. Participants were each provided with two sockets, a duplicate of their as-prescribed socket and a modified socket that was enlarged or reduced by 1.8mm (∼6% of the socket volume) based on the fit quality of the as-prescribed socket. The two sockets were termed a larger socket and a smaller socket. Activity was monitored while participants wore each socket for 4 weeks. Participants' gait; self-reported satisfaction, quality of fit, and performance; socket comfort; and morning-to-afternoon limb fluid volume changes were assessed. Visual analysis of plots and estimated effect sizes (measured as mean difference divided by standard deviation) showed largest effects for step time asymmetry, step width asymmetry, anterior and anterior-distal morning-to-afternoon fluid volume change, socket comfort score, and self-reported utility. These variables may be viable metrics for early detection of deterioration in socket fit, and should be tested in a larger clinical study. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
Prevalence of dry methods in granite countertop fabrication in Oklahoma.
Phillips, Margaret L; Johnson, Andrew C
2012-01-01
Granite countertop fabricators are at risk of exposure to respirable crystalline silica, which may cause silicosis and other lung conditions. The purpose of this study was to estimate the prevalence of exposure control methods, especially wet methods, in granite countertop fabrication in Oklahoma to assess how many workers might be at risk of overexposure to crystalline silica in this industry. Granite fabrication shops in the three largest metropolitan areas in Oklahoma were enumerated, and 47 of the 52 shops participated in a survey on fabrication methods. Countertop shops were small businesses with average work forces of fewer than 10 employees. Ten shops (21%) reported using exclusively wet methods during all fabrication steps. Thirty-five shops (74%) employing a total of about 200 workers reported using dry methods all or most of the time in at least one fabrication step. The tasks most often performed dry were edge profiling (17% of shops), cutting of grooves for reinforcing rods (62% of shops), and cutting of sink openings (45% of shops). All shops reported providing either half-face or full-face respirators for use during fabrication, but none reported doing respirator fit testing. Few shops reported using any kind of dust collection system. These findings suggest that current consumer demand for granite countertops is giving rise to a new wave of workers at risk of silicosis due to potential overexposure to granite dust.
Marketing the health care experience: eight steps to infuse brand essence into your organization.
Lofgren, Diane Gage; Rhodes, Sonia; Miller, Todd; Solomon, Jared
2006-01-01
One of the most elusive challenges in health care marketing is hitting on a strategy to substantially differentiate your organization in the community and drive profitable business. This article describes how Sharp HealthCare, the largest integrated health care delivery system in San Diego, has proven that focusing first on improving the health care experience for patients, physicians, and employees can provide the impetus for a vital marketing strategy that can lead to increased market share and net revenue. Over the last five years, this nonprofit health system has transformed the health care experience into tangible actions that are making a difference in the lives of all those the system serves. That difference has become Sharp's "brand essence"--a promise to the community that has been made through marketing, public relations, and advertising and then delivered through the dedicated work of Sharp's 14,000 team members. They call this performance improvement strategy The Sharp Experience. This article outlines the eight-step journey that led the organization to this brand essence marketing campaign, a campaign whose centerpiece is an award-winning 30-minute television documentary that use real-time patient stories to demonstrate Sharp's focus on service and patient-centered care against a backdrop of clinical quality and state-of-the-art technology, and documentary-style radio and television commercials.
Statistical Analysis of Variation in the Human Plasma Proteome
Corzett, Todd H.; Fodor, Imola K.; Choi, Megan W.; ...
2010-01-01
Quantifying the variation in the human plasma proteome is an essential prerequisite for disease-specific biomarker detection. We report here on the longitudinal and individual variation in human plasma characterized by two-dimensional difference gel electrophoresis (2-D DIGE) using plasma samples from eleven healthy subjects collected three times over a two week period. Fixed-effects modeling was used to remove dye and gel variability. Mixed-effects modeling was then used to quantitate the sources of proteomic variation. The subject-to-subject variation represented the largest variance component, while the time-within-subject variation was comparable to the experimental variation found in a previous technical variability study where onemore » human plasma sample was processed eight times in parallel and each was then analyzed by 2-D DIGE in triplicate. Here, 21 protein spots had larger than 50% CV, suggesting that these proteins may not be appropriate as biomarkers and should be carefully scrutinized in future studies. Seventy-eight protein spots showing differential protein levels between different individuals or individual collections were identified by mass spectrometry and further characterized using hierarchical clustering. The results present a first step toward understanding the complexity of longitudinal and individual variation in the human plasma proteome, and provide a baseline for improved biomarker discovery.« less
Statistical analysis of variation in the human plasma proteome.
Corzett, Todd H; Fodor, Imola K; Choi, Megan W; Walsworth, Vicki L; Turteltaub, Kenneth W; McCutchen-Maloney, Sandra L; Chromy, Brett A
2010-01-01
Quantifying the variation in the human plasma proteome is an essential prerequisite for disease-specific biomarker detection. We report here on the longitudinal and individual variation in human plasma characterized by two-dimensional difference gel electrophoresis (2-D DIGE) using plasma samples from eleven healthy subjects collected three times over a two week period. Fixed-effects modeling was used to remove dye and gel variability. Mixed-effects modeling was then used to quantitate the sources of proteomic variation. The subject-to-subject variation represented the largest variance component, while the time-within-subject variation was comparable to the experimental variation found in a previous technical variability study where one human plasma sample was processed eight times in parallel and each was then analyzed by 2-D DIGE in triplicate. Here, 21 protein spots had larger than 50% CV, suggesting that these proteins may not be appropriate as biomarkers and should be carefully scrutinized in future studies. Seventy-eight protein spots showing differential protein levels between different individuals or individual collections were identified by mass spectrometry and further characterized using hierarchical clustering. The results present a first step toward understanding the complexity of longitudinal and individual variation in the human plasma proteome, and provide a baseline for improved biomarker discovery.
Locating active-site hydrogen atoms in d-xylose isomerase: Time-of-flight neutron diffraction
Katz, Amy K.; Li, Xinmin; Carrell, H. L.; Hanson, B. Leif; Langan, Paul; Coates, Leighton; Schoenborn, Benno P.; Glusker, Jenny P.; Bunick, Gerard J.
2006-01-01
Time-of-flight neutron diffraction has been used to locate hydrogen atoms that define the ionization states of amino acids in crystals of d-xylose isomerase. This enzyme, from Streptomyces rubiginosus, is one of the largest enzymes studied to date at high resolution (1.8 Å) by this method. We have determined the position and orientation of a metal ion-bound water molecule that is located in the active site of the enzyme; this water has been thought to be involved in the isomerization step in which d-xylose is converted to d-xylulose or d-glucose to d-fructose. It is shown to be water (rather than a hydroxyl group) under the conditions of measurement (pH 8.0). Our analyses also reveal that one lysine probably has an −NH2-terminal group (rather than NH3+). The ionization state of each histidine residue also was determined. High-resolution x-ray studies (at 0.94 Å) indicate disorder in some side chains when a truncated substrate is bound and suggest how some side chains might move during catalysis. This combination of time-of-flight neutron diffraction and x-ray diffraction can contribute greatly to the elucidation of enzyme mechanisms. PMID:16707576
An Initial Evaluation of the Impact of Pokémon GO on Physical Activity.
Xian, Ying; Xu, Hanzhang; Xu, Haolin; Liang, Li; Hernandez, Adrian F; Wang, Tracy Y; Peterson, Eric D
2017-05-16
Pokémon GO is a location-based augmented reality game. Using GPS and the camera on a smartphone, the game requires players to travel in real world to capture animated creatures, called Pokémon. We examined the impact of Pokémon GO on physical activity (PA). A pre-post observational study of 167 Pokémon GO players who were self-enrolled through recruitment flyers or online social media was performed. Participants were instructed to provide screenshots of their step counts recorded by the iPhone Health app between June 15 and July 31, 2016, which was 3 weeks before and 3 weeks after the Pokémon GO release date. Of 167 participants, the median age was 25 years (interquartile range, 21-29 years). The daily average steps of participants at baseline was 5678 (SD, 2833; median, 5718 [interquartile range, 3675-7279]). After initiation of Pokémon GO, daily activity rose to 7654 steps (SD, 3616; median, 7232 [interquartile range, 5041-9744], pre-post change: 1976; 95% CI, 1494-2458, or a 34.8% relative increase [ P <0.001]). On average, 10 000 "XP" points (a measure of game progression) was associated with 2134 additional steps per day (95% CI, 1673-2595), suggesting a potential dose-response relationship. The number of participants achieving a goal of 10 000+ steps per day increased from 15.3% before to 27.5% after (odds ratio, 2.06; 95% CI, 1.70-2.50). Increased PA was also observed in subgroups, with the largest increases seen in participants who spent more time playing Pokémon GO, those who were overweight/obese, or those with a lower baseline PA level. Pokémon GO participation was associated with a significant increase in PA among young adults. Incorporating PA into gameplay may provide an alternative way to promote PA in persons who are attracted to the game. URL: http://www.clinicaltrials.gov. Unique identifier: NCT02888314. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
Barriers and Prospects of Carbon Sequestration in India.
Gupta, Anjali; Nema, Arvind K
2014-04-01
Carbon sequestration is considered a leading technology for reducing carbon dioxide (CO2) emissions from fossil-fuel based electricity generating power plants and could permit the continued use of coal and gas whilst meeting greenhouse gas targets. India will become the world's third largest emitter of CO2 by 2015. Considering the dependence of health of the Indian global economy, there is an imperative need to develop a global approach which could address the capturing and securely storing carbon dioxide emitted from an array of energy. Therefore technology such as carbon sequestration will deliver significant CO2 reductions in a timely fashion. Considerable energy is required for the capture, compression, transport and storage steps. With the availability of potential technical storage methods for carbon sequestration like forest, mineral and geological storage options with India, it would facilitate achieving stabilization goal in the near future. This paper examines the potential carbon sequestration options available in India and evaluates them with respect to their strengths, weakness, threats and future prospects.
Examining the development of attention and executive functions in children with a novel paradigm.
Klimkeit, Ester I; Mattingley, Jason B; Sheppard, Dianne M; Farrow, Maree; Bradshaw, John L
2004-09-01
The development of attention and executive functions in normal children (7-12 years) was investigated using a novel selective reaching task, which involved reaching as rapidly as possible towards a target, while at times having to ignore a distractor. The information processing paradigm allowed the measurement of various distinct dimensions of behaviour within a single task. The largest improvements in vigilance, set-shifting, response inhibition, selective attention, and impulsive responding were observed to occur between the ages of 8 and 10, with a plateau in performance between 10 and 12 years of age. These findings, consistent with a step-wise model of development, coincide with the observed developmental spurt in frontal brain functions between 7 and 10 years of age, and indicate that attention and executive functions develop in parallel. This task appears to be a useful research tool in the assessment of attention and executive functions, within a single task. Thus it may have a role in determining which cognitive functions are most affected in different childhood disorders.
Rapid, automated mosaicking of the human corneal subbasal nerve plexus.
Vaishnav, Yash J; Rucker, Stuart A; Saharia, Keshav; McNamara, Nancy A
2017-11-27
Corneal confocal microscopy (CCM) is an in vivo technique used to study corneal nerve morphology. The largest proportion of nerves innervating the cornea lie within the subbasal nerve plexus, where their morphology is altered by refractive surgery, diabetes and dry eye. The main limitations to clinical use of CCM as a diagnostic tool are the small field of view of CCM images and the lengthy time needed to quantify nerves in collected images. Here, we present a novel, rapid, fully automated technique to mosaic individual CCM images into wide-field maps of corneal nerves. We implemented an OpenCV image stitcher that accounts for corneal deformation and uses feature detection to stitch CCM images into a montage. The method takes 3-5 min to process and stitch 40-100 frames on an Amazon EC2 Micro instance. The speed, automation and ease of use conferred by this technique is the first step toward point of care evaluation of wide-field subbasal plexus (SBP) maps in a clinical setting.
Vibration assessment and structural monitoring of the Basilica of Maxentius in Rome
NASA Astrophysics Data System (ADS)
Pau, Annamaria; Vestroni, Fabrizio
2013-12-01
The present paper addresses the analysis of the ambient vibrations of the Basilica of Maxentius in Rome. This monument, in the city centre and close to busy roads, was the largest vaulted structure in the Roman Empire. Today, only one aisle of the structure remains, suffering from a complex crack scenario. The ambient vibration response is used to investigate traffic induced vibration and compare this to values that could be a potential cause of structural damage according to international standards. Using output-only methods, natural frequencies and mode shapes are obtained from the response, allowing comparison with predictions made with a finite element model. Notwithstanding simplifications regarding material behavior and crack pattern in the finite element model, an agreement between numerical and experimental results is reached once selected mechanical parameters are adjusted. A knowledge of modal characteristics and the availability of an updated model may be a first step of a structural monitoring program that could reveal any decay over time in the structural integrity of the monument.
Zhang, Hong; Zapol, Peter; Dixon, David A.; ...
2015-11-17
The Shift-and-invert parallel spectral transformations (SIPs), a computational approach to solve sparse eigenvalue problems, is developed for massively parallel architectures with exceptional parallel scalability and robustness. The capabilities of SIPs are demonstrated by diagonalization of density-functional based tight-binding (DFTB) Hamiltonian and overlap matrices for single-wall metallic carbon nanotubes, diamond nanowires, and bulk diamond crystals. The largest (smallest) example studied is a 128,000 (2000) atom nanotube for which ~330,000 (~5600) eigenvalues and eigenfunctions are obtained in ~190 (~5) seconds when parallelized over 266,144 (16,384) Blue Gene/Q cores. Weak scaling and strong scaling of SIPs are analyzed and the performance of SIPsmore » is compared with other novel methods. Different matrix ordering methods are investigated to reduce the cost of the factorization step, which dominates the time-to-solution at the strong scaling limit. As a result, a parallel implementation of assembling the density matrix from the distributed eigenvectors is demonstrated.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hong; Zapol, Peter; Dixon, David A.
The Shift-and-invert parallel spectral transformations (SIPs), a computational approach to solve sparse eigenvalue problems, is developed for massively parallel architectures with exceptional parallel scalability and robustness. The capabilities of SIPs are demonstrated by diagonalization of density-functional based tight-binding (DFTB) Hamiltonian and overlap matrices for single-wall metallic carbon nanotubes, diamond nanowires, and bulk diamond crystals. The largest (smallest) example studied is a 128,000 (2000) atom nanotube for which ~330,000 (~5600) eigenvalues and eigenfunctions are obtained in ~190 (~5) seconds when parallelized over 266,144 (16,384) Blue Gene/Q cores. Weak scaling and strong scaling of SIPs are analyzed and the performance of SIPsmore » is compared with other novel methods. Different matrix ordering methods are investigated to reduce the cost of the factorization step, which dominates the time-to-solution at the strong scaling limit. As a result, a parallel implementation of assembling the density matrix from the distributed eigenvectors is demonstrated.« less
Freudenberg, Nicholas; Manzo, Luis; Mongiello, Lorraine; Jones, Hollie; Boeri, Natascia; Lamberson, Patricia
2013-01-01
Changing demographics of college students and new insights into the developmental trajectory of chronic diseases present universities with opportunities to improve population health and reduce health inequalities. The reciprocal relationships between better health and improved educational achievement also offer university health programs a chance to improve retention and graduation rates, a key objective for higher education. In 2007, City University of New York (CUNY), the nation's largest urban public university, launched Healthy CUNY, an initiative designed to offer life-time protection against chronic diseases and reduce health-related barriers to educational achievement. In its first 5 years, Healthy CUNY has shown that universities can mobilize students, faculty, and other constituencies to modify environments and policies that influence health. New policies on tobacco and campus food, enrollment of needy students in public food and housing assistance programs, and a dialogue on the role of health in academic achievement are first steps towards healthier universities.
Prospective Optimization with Limited Resources
Snider, Joseph; Lee, Dongpyo; Poizner, Howard; Gepshtein, Sergei
2015-01-01
The future is uncertain because some forthcoming events are unpredictable and also because our ability to foresee the myriad consequences of our own actions is limited. Here we studied how humans select actions under such extrinsic and intrinsic uncertainty, in view of an exponentially expanding number of prospects on a branching multivalued visual stimulus. A triangular grid of disks of different sizes scrolled down a touchscreen at a variable speed. The larger disks represented larger rewards. The task was to maximize the cumulative reward by touching one disk at a time in a rapid sequence, forming an upward path across the grid, while every step along the path constrained the part of the grid accessible in the future. This task captured some of the complexity of natural behavior in the risky and dynamic world, where ongoing decisions alter the landscape of future rewards. By comparing human behavior with behavior of ideal actors, we identified the strategies used by humans in terms of how far into the future they looked (their “depth of computation”) and how often they attempted to incorporate new information about the future rewards (their “recalculation period”). We found that, for a given task difficulty, humans traded off their depth of computation for the recalculation period. The form of this tradeoff was consistent with a complete, brute-force exploration of all possible paths up to a resource-limited finite depth. A step-by-step analysis of the human behavior revealed that participants took into account very fine distinctions between the future rewards and that they abstained from some simple heuristics in assessment of the alternative paths, such as seeking only the largest disks or avoiding the smaller disks. The participants preferred to reduce their depth of computation or increase the recalculation period rather than sacrifice the precision of computation. PMID:26367309
NASA Astrophysics Data System (ADS)
Huang, Xinyan; Rein, Guillermo
2013-04-01
Smouldering combustion of soil organic matter (SOM) such as peatlands leads to the largest fires on Earth and posses a possible positive feedback mechanism to climate change. In this work, a kinetic model, including 3-step chemical reactions and 1-step water evaporation is proposed to describe drying, pyrolysis and oxidation behaviour of peat. Peat is chosen as the most important type of SOM susceptible to smoudering, and a Chinese boreal peat sample is selected from the literature. A lumped model of mass loss based on four Arrhenius-type reactions is developed to predict its thermal and oxidative degradation under a range of heating rates. A genetic algorithm is used to solve the inverse problem, and find a group of kinetic and stoichiometric parameters for this peat that provides the best match to the thermogravimetric (TG) data from literature. A multi-objective fitness function is defined using the measurements of both mass loss and mass-loss rate in inert and normal atmospheres under a range of heating rates. Piece-wise optimization is conducted to separate the low temperature drying (<450 K) from the higher temperature pyrolysis and oxidation reaction (>450 K). Modelling results shows the proposed 3-step chemistry is the unique simplest scheme to satisfy all given TG data of this particular peat type. Afterward, this kinetic model and its kinetic parameters are incorporated into a simple one-dimensional species model to study the relative position of each reaction inside a smoulder front. Computational results show that the species model agrees with experimental observations. This is the first time that the smouldering kinetics of SOM is explained and predicted, thus helping to understanding this important natural and widespread phenomenon.
Prospective Optimization with Limited Resources.
Snider, Joseph; Lee, Dongpyo; Poizner, Howard; Gepshtein, Sergei
2015-09-01
The future is uncertain because some forthcoming events are unpredictable and also because our ability to foresee the myriad consequences of our own actions is limited. Here we studied how humans select actions under such extrinsic and intrinsic uncertainty, in view of an exponentially expanding number of prospects on a branching multivalued visual stimulus. A triangular grid of disks of different sizes scrolled down a touchscreen at a variable speed. The larger disks represented larger rewards. The task was to maximize the cumulative reward by touching one disk at a time in a rapid sequence, forming an upward path across the grid, while every step along the path constrained the part of the grid accessible in the future. This task captured some of the complexity of natural behavior in the risky and dynamic world, where ongoing decisions alter the landscape of future rewards. By comparing human behavior with behavior of ideal actors, we identified the strategies used by humans in terms of how far into the future they looked (their "depth of computation") and how often they attempted to incorporate new information about the future rewards (their "recalculation period"). We found that, for a given task difficulty, humans traded off their depth of computation for the recalculation period. The form of this tradeoff was consistent with a complete, brute-force exploration of all possible paths up to a resource-limited finite depth. A step-by-step analysis of the human behavior revealed that participants took into account very fine distinctions between the future rewards and that they abstained from some simple heuristics in assessment of the alternative paths, such as seeking only the largest disks or avoiding the smaller disks. The participants preferred to reduce their depth of computation or increase the recalculation period rather than sacrifice the precision of computation.
Bolink, S A A N; Grimm, B; Heyligers, I C
2015-12-01
Outcome assessment of total knee arthroplasty (TKA) by subjective patient reported outcome measures (PROMs) may not fully capture the functional (dis-)abilities of relevance. Objective performance-based outcome measures could provide distinct information. An ambulant inertial measurement unit (IMU) allows kinematic assessment of physical performance and could potentially be used for routine follow-up. To investigate the responsiveness of IMU measures in patients following TKA and compare outcomes with conventional PROMs. Patients with end stage knee OA (n=20, m/f=7/13; age=67.4 standard deviation 7.7 years) were measured preoperatively and one year postoperatively. IMU measures were derived during gait, sit-stand transfers and block step-up transfers. PROMs were assessed by using the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and Knee Society Score (KSS). Responsiveness was calculated by the effect size, correlations were calculated with Spearman's rho correlation coefficient. One year after TKA, patients performed significantly better at gait, sit-to-stand transfers and block step-up transfers. Measures of time and kinematic IMU measures demonstrated significant improvements postoperatively for each performance-based test. The largest improvement was found in block step-up transfers (effect size=0.56-1.20). WOMAC function score and KSS function score demonstrated moderate correlations (Spearman's rho=0.45-0.74) with some of the physical performance-based measures pre- and postoperatively. To characterize the changes in physical function after TKA, PROMs could be supplemented by performance-based measures, assessing function during different activities and allowing kinematic characterization with an ambulant IMU. Copyright © 2015 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Collaborative for Academic, Social, and Emotional Learning, 2017
2017-01-01
Six years ago the Collaborative for Academic, Social, and Emotional Learning (CASEL) took the unprecedented step of launching an effort to study and scale high-quality, evidence-based academic, social, and emotional learning in eight of the largest and most complex school systems in the country: Anchorage, Austin, Chicago, Cleveland, Nashville,…
Fusion energy: Status and prospects
NASA Astrophysics Data System (ADS)
Salomaa, Rainer
A review of the present state of the international fusion research is given. In the largest tokamak devices (JET, TFTR, JT-60) fusion relevant temperatures are routinely obtained and the scientific feasibility of plasma confinement has been demonstrated. Plans concerning the next step are described. A critical view is presented on questions as to what extent the generic advantages of fusion (availability, sufficiency, safety, environmental acceptability, etc.) can be exploited in a practical power reactor where the formidable technological problems call for compromises.
Ng, David; Vail, Gord; Thomas, Sophia; Schmidt, Nicki
2010-01-01
In recognition of patient wait times, and deteriorating patient and staff satisfaction, we set out to improve these measures in our emergency department (ED) without adding any new funding or beds. In 2005 all staff in the ED at Hôtel-Dieu Grace Hospital began a transformation, employing Toyota Lean manufacturing principles to improve ED wait times and quality of care. Lean techniques such as value-stream mapping, just-in-time delivery techniques, workplace organization, reduction of systemic wastes, use of the worker as the source of quality improvement and ongoing refinement of our process steps formed the basis of our project. Our ED has achieved major improvements in departmental flow without adding any additional ED or inpatient beds. The mean registration to physician time has decreased from 111 minutes to 78 minutes. The number of patients who left without being seen has decreased from 7.1% to 4.3%. The length of stay (LOS) for discharged patients has decreased from a mean of 3.6 to 2.8 hours, with the largest decrease seen in our patients triaged at levels 4 or 5 using the Canadian Emergency Department Triage and Acuity Scale. We noted an improvement in ED patient satisfaction scores following the implementation of Lean principles. Lean manufacturing principles can improve the flow of patients through the ED, resulting in greater patient satisfaction along with reduced time spent by the patient in the ED.
First LHCb measurement with data from the LHC Run 2
NASA Astrophysics Data System (ADS)
Anderlini, L.; Amerio, S.
2017-01-01
LHCb has recently introduced a novel real-time detector alignment and calibration strategy for the Run 2. Data collected at the start of each LHC fill are processed in few minutes and used to update the alignment. On the other hand, the calibration constants will be evaluated for each run of data taking. An increase in the CPU and disk capacity of the event filter farm, combined with improvements to the reconstruction software, allow for efficient, exclusive selections already in the first stage of the High Level Trigger (HLT1), while the second stage, HLT2, performs complete, offline-quality, event reconstruction. In Run 2, LHCb will collect the largest data sample of charm mesons ever recorded. Novel data processing and analysis techniques are required to maximise the physics potential of this data sample with the available computing resources, taking into account data preservation constraints. In this write-up, we describe the full analysis chain used to obtain important results analysing the data collected in proton-proton collisions in 2015, such as the J/ψ and open charm production cross-sections, and consider the further steps required to obtain real-time results after the LHCb upgrade.
When did Carcharocles megalodon become extinct? A new analysis of the fossil record.
Pimiento, Catalina; Clements, Christopher F
2014-01-01
Carcharocles megalodon ("Megalodon") is the largest shark that ever lived. Based on its distribution, dental morphology, and associated fauna, it has been suggested that this species was a cosmopolitan apex predator that fed on marine mammals from the middle Miocene to the Pliocene (15.9-2.6 Ma). Prevailing theory suggests that the extinction of apex predators affects ecosystem dynamics. Accordingly, knowing the time of extinction of C. megalodon is a fundamental step towards understanding the effects of such an event in ancient communities. However, the time of extinction of this important species has never been quantitatively assessed. Here, we synthesize the most recent records of C. megalodon from the literature and scientific collections and infer the date of its extinction by making a novel use of the Optimal Linear Estimation (OLE) model. Our results suggest that C. megalodon went extinct around 2.6 Ma. Furthermore, when contrasting our results with known ecological and macroevolutionary trends in marine mammals, it became evident that the modern composition and function of modern gigantic filter-feeding whales was established after the extinction of C. megalodon. Consequently, the study of the time of extinction of C. megalodon provides the basis to improve our understanding of the responses of marine species to the removal of apex predators, presenting a deep-time perspective for the conservation of modern ecosystems.
When Did Carcharocles megalodon Become Extinct? A New Analysis of the Fossil Record
Pimiento, Catalina; Clements, Christopher F.
2014-01-01
Carcharocles megalodon (“Megalodon”) is the largest shark that ever lived. Based on its distribution, dental morphology, and associated fauna, it has been suggested that this species was a cosmopolitan apex predator that fed on marine mammals from the middle Miocene to the Pliocene (15.9–2.6 Ma). Prevailing theory suggests that the extinction of apex predators affects ecosystem dynamics. Accordingly, knowing the time of extinction of C. megalodon is a fundamental step towards understanding the effects of such an event in ancient communities. However, the time of extinction of this important species has never been quantitatively assessed. Here, we synthesize the most recent records of C. megalodon from the literature and scientific collections and infer the date of its extinction by making a novel use of the Optimal Linear Estimation (OLE) model. Our results suggest that C. megalodon went extinct around 2.6 Ma. Furthermore, when contrasting our results with known ecological and macroevolutionary trends in marine mammals, it became evident that the modern composition and function of modern gigantic filter-feeding whales was established after the extinction of C. megalodon. Consequently, the study of the time of extinction of C. megalodon provides the basis to improve our understanding of the responses of marine species to the removal of apex predators, presenting a deep-time perspective for the conservation of modern ecosystems. PMID:25338197
Okubo, Yoshiro; Menant, Jasmine; Udyavar, Manasa; Brodie, Matthew A; Barry, Benjamin K; Lord, Stephen R; L Sturnieks, Daina
2017-05-01
Although step training improves the ability of quick stepping, some home-based step training systems train limited stepping directions and may cause harm by reducing stepping performance in untrained directions. This study examines the possible transfer effects of step training on stepping performance in untrained directions in older people. Fifty four older adults were randomized into: forward step training (FT); lateral plus forward step training (FLT); or no training (NT) groups. FT and FLT participants undertook a 15-min training session involving 200 step repetitions. Prior to and post training, choice stepping reaction time and stepping kinematics in untrained, diagonal and lateral directions were assessed. Significant interactions of group and time (pre/post-assessment) were evident for the first step after training indicating negative (delayed response time) and positive (faster peak stepping speed) transfer effects in the diagonal direction in the FT group. However, when the second to the fifth steps after training were included in the analysis, there were no significant interactions of group and time for measures in the diagonal stepping direction. Step training only in the forward direction improved stepping speed but may acutely slow response times in the untrained diagonal direction. However, this acute effect appears to dissipate after a few repeated step trials. Step training in both forward and lateral directions appears to induce no negative transfer effects in diagonal stepping. These findings suggest home-based step training systems present low risk of harm through negative transfer effects in untrained stepping directions. ANZCTR 369066. Copyright © 2017 Elsevier B.V. All rights reserved.
Differences in visible and near-infrared light reflectance between orange fruit and leaves
NASA Technical Reports Server (NTRS)
Gausman, H. W.; Escobar, D. E.; Berumen, A.
1975-01-01
The objective was to find the best time during the season (April 26, 1972 to January 8, 1973) to distinguish orange fruit from leaves by spectrophotometrically determining at 10-day intervals when the difference in visible (550- and 650-nm wavelengths) and near-infrared (850-nm wavelength) light reflectance between fruit and nearby leaves was largest. December 5 to January 8 was the best time to distinguish fruit from leaves. During this period the fruit's color was rapidly changing from green to yellow, and the difference in visible light reflectance between fruit and leaves was largest. The difference in near-infrared reflectance between leaves and fruit remained essentially constant during ripening when the difference in visible light reflectance between leaves and fruit was largest.
New clinical grading scales and objective measurement for conjunctival injection.
Park, In Ki; Chun, Yeoun Sook; Kim, Kwang Gi; Yang, Hee Kyung; Hwang, Jeong-Min
2013-08-05
To establish a new clinical grading scale and objective measurement method to evaluate conjunctival injection. Photographs of conjunctival injection with variable ocular diseases in 429 eyes were reviewed. Seventy-three images with concordance by three ophthalmologists were classified into a 4-step and 10-step subjective grading scale, and used as standard photographs. Each image was quantified in four ways: the relative magnitude of the redness component of each red-green-blue (RGB) pixel; two different algorithms based on the occupied area by blood vessels (K-means clustering with LAB color model and contrast-limited adaptive histogram equalization [CLAHE] algorithm); and the presence of blood vessel edges, based on the Canny edge-detection algorithm. Area under the receiver operating characteristic curves (AUCs) were calculated to summarize diagnostic accuracies of the four algorithms. The RGB color model, K-means clustering with LAB color model, and CLAHE algorithm showed good correlation with the clinical 10-step grading scale (R = 0.741, 0.784, 0.919, respectively) and with the clinical 4-step grading scale (R = 0.645, 0.702, 0.838, respectively). The CLAHE method showed the largest AUC, best distinction power (P < 0.001, ANOVA, Bonferroni multiple comparison test), and high reproducibility (R = 0.996). CLAHE algorithm showed the best correlation with the 10-step and 4-step subjective clinical grading scales together with high distinction power and reproducibility. CLAHE algorithm can be a useful for method for assessment of conjunctival injection.
Buffering PV output during cloud transients with energy storage
NASA Astrophysics Data System (ADS)
Moumouni, Yacouba
Consideration of the use of the major types of energy storage is attempted in this thesis in order to mitigate the effects of power output transients associated with grid-tied CPV systems due to fast-moving cloud coverage. The approach presented here is to buffer intermittency of CPV output power with an energy storage device (used batteries) purchased cheaply from EV owners or battery leasers. When the CPV is connected to the grid with the proper energy storage, the main goal is to smooth out the intermittent solar power and fluctuant load of the grid with a convenient control strategy. This thesis provides a detailed analysis with appropriate Matlab codes to put onto the grid during the day time a constant amount of power on one hand and on the other, shift the less valuable off-peak electricity to the on-peak time, i.e. between 1pm to 7pm, where the electricity price is much better. In this study, a range of base constant power levels were assumed including 15kW, 20kW, 21kW, 22kW, 23kW, 24kW and 25kW. The hypothesis based on an iterative solution was that the capacity of the battery was increased by steps of 5 while the base supply was decreased by the same step size until satisfactorily results were achieved. Hence, it turned out with the chosen battery capacity of 54kWh coupled to the data from the Amonix CPV 7700 unit for Las Vegas for a 3-month period, it was found that 20kW was the largest constant load the system can supply uninterruptedly to the utility company. Simulated results are presented to show the feasibility of the proposed scheme.
A novel peak detection approach with chemical noise removal using short-time FFT for prOTOF MS data.
Zhang, Shuqin; Wang, Honghui; Zhou, Xiaobo; Hoehn, Gerard T; DeGraba, Thomas J; Gonzales, Denise A; Suffredini, Anthony F; Ching, Wai-Ki; Ng, Michael K; Wong, Stephen T C
2009-08-01
Peak detection is a pivotal first step in biomarker discovery from MS data and can significantly influence the results of downstream data analysis steps. We developed a novel automatic peak detection method for prOTOF MS data, which does not require a priori knowledge of protein masses. Random noise is removed by an undecimated wavelet transform and chemical noise is attenuated by an adaptive short-time discrete Fourier transform. Isotopic peaks corresponding to a single protein are combined by extracting an envelope over them. Depending on the S/N, the desired peaks in each individual spectrum are detected and those with the highest intensity among their peak clusters are recorded. The common peaks among all the spectra are identified by choosing an appropriate cut-off threshold in the complete linkage hierarchical clustering. To remove the 1 Da shifting of the peaks, the peak corresponding to the same protein is determined as the detected peak with the largest number among its neighborhood. We validated this method using a data set of serial peptide and protein calibration standards. Compared with MoverZ program, our new method detects more peaks and significantly enhances S/N of the peak after the chemical noise removal. We then successfully applied this method to a data set from prOTOF MS spectra of albumin and albumin-bound proteins from serum samples of 59 patients with carotid artery disease compared to vascular disease-free patients to detect peaks with S/N> or =2. Our method is easily implemented and is highly effective to define peaks that will be used for disease classification or to highlight potential biomarkers.
Three Short Stories about Hexaarylbenzene-Porphyrin Scaffolds.
Lungerich, Dominik; Hitzenberger, Jakob F; Donaubauer, Wolfgang; Drewello, Thomas; Jux, Norbert
2016-11-14
A feasible two-step synthesis and characterization of a full series of hexaarylbenzene (HAB) substituted porphyrins and tetrabenzoporphyrins is presented. Key steps represent the microwave-assisted porphyrin condensation and the statistical Diels-Alder reaction to the desired HAB-porphyrins. Regarding their applications, they proved to be easily accessible and effective high molecular mass calibrants for (MA)LDI mass spectrometry. The free-base and zinc(II) porphyrin systems, as well as the respective tetrabenzoporphyrins, demonstrate in solid state experiments strong red- and near-infrared-light emission and are potentially interesting for the application in "truly organic" light-emitting devices. Lastly, they represent facile precursors to large polycyclic aromatic hydrocarbon (PAH) substituted porphyrins. We prepared the first tetra-hexa-peri-hexabenzocoronene substituted porphyrin, which represents the largest prepared PAH-porphyrin conjugate to date. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
No Time for Complacency: Teen Births in California.
ERIC Educational Resources Information Center
Constantine, Norman A.; Nevarez, Carmen R.
California's recent investment in teen pregnancy prevention has contributed to the largest decline in teen birth rates and the second largest percentage reduction of all 50 states. California's annual teen birth rate is now similar to the national rate. This occurred while the highest teen birth rate group, Latinas, increased as a proportion of…
Mount St. Helens 30 years later: a landscape reconfigured.
Rhonda Mazza
2010-01-01
On May 18, 1980, after two months of tremors, Mount St. Helens erupted spectacularly and profoundly changed a vast area surrounding the volcano. The north slope of the mountain catastrophically failed, forming the largest landslide witnessed in modern times. The largest lobe of this debris avalanche raced 14 miles down the Toutle River...
NASA Astrophysics Data System (ADS)
Mues, A.; Kuenen, J.; Hendriks, C.; Manders, A.; Segers, A.; Scholz, Y.; Hueglin, C.; Builtjes, P.; Schaap, M.
2014-01-01
In this study the sensitivity of the model performance of the chemistry transport model (CTM) LOTOS-EUROS to the description of the temporal variability of emissions was investigated. Currently the temporal release of anthropogenic emissions is described by European average diurnal, weekly and seasonal time profiles per sector. These default time profiles largely neglect the variation of emission strength with activity patterns, region, species, emission process and meteorology. The three sources dealt with in this study are combustion in energy and transformation industries (SNAP1), nonindustrial combustion (SNAP2) and road transport (SNAP7). First of all, the impact of neglecting the temporal emission profiles for these SNAP categories on simulated concentrations was explored. In a second step, we constructed more detailed emission time profiles for the three categories and quantified their impact on the model performance both separately as well as combined. The performance in comparison to observations for Germany was quantified for the pollutants NO2, SO2 and PM10 and compared to a simulation using the default LOTOS-EUROS emission time profiles. The LOTOS-EUROS simulations were performed for the year 2006 with a temporal resolution of 1 h and a horizontal resolution of approximately 25 × 25km2. In general the largest impact on the model performance was found when neglecting the default time profiles for the three categories. The daily average correlation coefficient for instance decreased by 0.04 (NO2), 0.11 (SO2) and 0.01 (PM10) at German urban background stations compared to the default simulation. A systematic increase in the correlation coefficient is found when using the new time profiles. The size of the increase depends on the source category, component and station. Using national profiles for road transport showed important improvements in the explained variability over the weekdays as well as the diurnal cycle for NO2. The largest impact of the SNAP1 and 2 profiles were found for SO2. When using all new time profiles simultaneously in one simulation, the daily average correlation coefficient increased by 0.05 (NO2), 0.07 (SO2) and 0.03 (PM10) at urban background stations in Germany. This exercise showed that to improve the performance of a CTM, a better representation of the distribution of anthropogenic emission in time is recommendable. This can be done by developing a dynamical emission model that takes into account regional specific factors and meteorology.
NASA Astrophysics Data System (ADS)
Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken
2016-07-01
Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.
NASA Astrophysics Data System (ADS)
Catling, David C.; Glein, Christopher R.; Zahnle, Kevin J.; McKay, Christopher P.
2005-06-01
Life is constructed from a limited toolkit: the Periodic Table. The reduction of oxygen provides the largest free energy release per electron transfer, except for the reduction of fluorine and chlorine. However, the bonding of O2 ensures that it is sufficiently stable to accumulate in a planetary atmosphere, whereas the more weakly bonded halogen gases are far too reactive ever to achieve significant abundance. Consequently, an atmosphere rich in O2 provides the largest feasible energy source. This universal uniqueness suggests that abundant O2 is necessary for the high-energy demands of complex life anywhere, i.e., for actively mobile organisms of ~10-1-100 m size scale with specialized, differentiated anatomy comparable to advanced metazoans. On Earth, aerobic metabolism provides about an order of magnitude more energy for a given intake of food than anaerobic metabolism. As a result, anaerobes do not grow beyond the complexity of uniseriate filaments of cells because of prohibitively low growth efficiencies in a food chain. The biomass cumulative number density, n, at a particular mass, m, scales as n (>m)~m-1 for aquatic aerobes, and we show that for anaerobes the predicted scaling is n~m -1.5, close to a growth-limited threshold. Even with aerobic metabolism, the partial pressure of atmospheric O2 (PO2) must exceed ~103 Pa to allow organisms that rely on O2 diffusion to evolve to a size ~10-3 m. PO2 in the range ~103-104 Pa is needed to exceed the threshold of ~10-2 m size for complex life with circulatory physiology. In terrestrial life, O2 also facilitates hundreds of metabolic pathways, including those that make specialized structural molecules found only in animals. The time scale to reach PO2 ~104 Pa, or "oxygenation time," was long on the Earth (~3.9 billion years), within almost a factor of 2 of the Sun's main sequence lifetime. Consequently, we argue that the oxygenation time is likely to be a key rate-limiting step in the evolution of complex life on other habitable planets. The oxygenation time could preclude complex life on Earth-like planets orbiting short-lived stars that end their main sequence lives before planetary oxygenation takes place. Conversely, Earth-like planets orbiting long-lived stars are potentially favorable habitats for complex life.
NASA Astrophysics Data System (ADS)
Demaria, E. M.; Valdes, J. B.; Wi, S.; Serrat-Capdevila, A.; Valdés-Pineda, R.; Durcik, M.
2016-12-01
In under-instrumented basins around the world, accurate and timely forecasts of river streamflows have the potential of assisting water and natural resource managers in their management decisions. The Upper Zambezi river basin is the largest basin in southern Africa and its water resources are critical to sustainable economic growth and poverty reduction in eight riparian countries. We present a real-time streamflow forecast for the basin using a multi-model-multi-satellite approach that allows accounting for model and input uncertainties. Three distributed hydrologic models with different levels of complexity: VIC, HYMOD_DS, and HBV_DS are setup at a daily time step and a 0.25 degree spatial resolution for the basin. The hydrologic models are calibrated against daily observed streamflows at the Katima-Mulilo station using a Genetic Algorithm. Three real-time satellite products: Climate Prediction Center's morphing technique (CMORPH), Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN), and Tropical Rainfall Measuring Mission (TRMM-3B42RT) are bias-corrected with daily CHIRPS estimates. Uncertainty bounds for predicted flows are estimated with the Inverse Variance Weighting method. Because concentration times in the basin range from a few days to more than a week, we include the use of precipitation forecasts from the Global Forecasting System (GFS) to predict daily streamflows in the basin with a 10-days lead time. The skill of GFS-predicted streamflows is evaluated and the usefulness of the forecasts for short term water allocations is presented.
The Relaxation of Vicinal (001) with ZigZag [110] Steps
NASA Astrophysics Data System (ADS)
Hawkins, Micah; Hamouda, Ajmi Bh; González-Cabrera, Diego Luis; Einstein, Theodore L.
2012-02-01
This talk presents a kinetic Monte Carlo study of the relaxation dynamics of [110] steps on a vicinal (001) simple cubic surface. This system is interesting because [110] steps have different elementary excitation energetics and favor step diffusion more than close-packed [100] steps. In this talk we show how this leads to relaxation dynamics showing greater fluctuations on a shorter time scale for [110] steps as well as 2-bond breaking processes being rate determining in contrast to 3-bond breaking processes for [100] steps. The existence of a steady state is shown via the convergence of terrace width distributions at times much longer than the relaxation time. In this time regime excellent fits to the modified generalized Wigner distribution (as well as to the Berry-Robnik model when steps can overlap) were obtained. Also, step-position correlation function data show diffusion-limited increase for small distances along the step as well as greater average step displacement for zigzag steps compared to straight steps for somewhat longer distances along the step. Work supported by NSF-MRSEC Grant DMR 05-20471 as well as a DOE-CMCSN Grant.
The horizontal and vertical cervico-ocular reflexes of the rabbit.
Barmack, N H; Nastos, M A; Pettorossi, V E
1981-11-16
Horizontal and vertical cervico-ocular reflexes of the rabbit (HCOR, VCOR) were evoked by sinusoidal oscillation of the body about the vertical and longitudinal axes while the head was fixed. These reflexes were studied over a frequency range of 0.005-0.800 Hz and at stimulus amplitudes of +/- 10 degrees. When the body of the rabbit was rotated horizontally clockwise around the fixed head, clockwise conjugate eye movements were evoked. When the body was rotated about the longitudinal axis onto the right side, the right eye rotated down and the left eye rotated up. The mean gain of the HCOR (eye velocity/body velocity) rose from 0.21 and 0.005 Hz to 0.27 at 0.020 Hz and then declined to 0.06 at 0.3Hz. The gain of the VCOR was less than the gain of the HCOR by a factor of 2-3. The HCOR was measured separately and in combination with the horizontal vestibulo-ocular reflex (HVOR). These reflexes combine linearly. The relative movements of the first 3 cervical vertebrae during stimulation of the HCOR and VCOR were measured. For the HCOR, the largest angular displacement (74%) occurs between C1 and C2. For the VCOR, the largest relative angular displacement (45%) occurs between C2 and C3. Step horizontal clockwise rotation of the head and body (HVOR) evoked low velocity counterclockwise eye movements followed by fast clockwise (resetting) eye movements. Step horizontal clockwise rotation of the body about the fixed head (HCOR) evoked low velocity clockwise eye movements which were followed by fast clockwise eye movements. Step horizontal clockwise rotation of the head about the fixed body (HCOR + HVOR) evoked low velocity counterclockwise eye movements which were not interrupted by fast clockwise eye movements. These data provide further evidence for a linear combination of independent HCOR and HVOR signals.
Tibiofemoral contact forces during walking, running and sidestepping.
Saxby, David J; Modenese, Luca; Bryant, Adam L; Gerus, Pauline; Killen, Bryce; Fortin, Karine; Wrigley, Tim V; Bennell, Kim L; Cicuttini, Flavia M; Lloyd, David G
2016-09-01
We explored the tibiofemoral contact forces and the relative contributions of muscles and external loads to those contact forces during various gait tasks. Second, we assessed the relationships between external gait measures and contact forces. A calibrated electromyography-driven neuromusculoskeletal model estimated the tibiofemoral contact forces during walking (1.44±0.22ms(-1)), running (4.38±0.42ms(-1)) and sidestepping (3.58±0.50ms(-1)) in healthy adults (n=60, 27.3±5.4years, 1.75±0.11m, and 69.8±14.0kg). Contact forces increased from walking (∼1-2.8 BW) to running (∼3-8 BW), sidestepping had largest maximum total (8.47±1.57 BW) and lateral contact forces (4.3±1.05 BW), while running had largest maximum medial contact forces (5.1±0.95 BW). Relative muscle contributions increased across gait tasks (up to 80-90% of medial contact forces), and peaked during running for lateral contact forces (∼90%). Knee adduction moment (KAM) had weak relationships with tibiofemoral contact forces (all R(2)<0.36) and the relationships were gait task-specific. Step-wise regression of multiple external gait measures strengthened relationships (0.20
NASA Astrophysics Data System (ADS)
Uchide, Takahiko; Song, Seok Goo
2018-03-01
The 2016 Gyeongju earthquake (ML 5.8) was the largest instrumentally recorded inland event in South Korea. It occurred in the southeast of the Korean Peninsula and was preceded by a large ML 5.1 foreshock. The aftershock seismicity data indicate that these earthquakes occurred on two closely collocated parallel faults that are oblique to the surface trace of the Yangsan fault. We investigate the rupture properties of these earthquakes using finite-fault slip inversion analyses. The obtained models indicate that the ruptures propagated NNE-ward and SSW-ward for the main shock and the large foreshock, respectively. This indicates that these earthquakes occurred on right-step faults and were initiated around a fault jog. The stress drops were up to 62 and 43 MPa for the main shock and the largest foreshock, respectively. These high stress drops imply high strength excess, which may be overcome by the stress concentration around the fault jog.
NASA Astrophysics Data System (ADS)
de Graaf, Inge
2015-04-01
The world's largest assessable source of freshwater is hidden underground, but we do not know what is happening to it yet. In many places of the world groundwater is abstracted at unsustainable rates: more water is used than being recharged, leading to decreasing river discharges and declining groundwater levels. It is predicted that for many regions of the world unsustainable water use will increase, due to increasing human water use under changing climate. It would not be long before shortage causes widespread droughts and the first water war begins. Improving our knowledge about our hidden water is the first step to stop this. The world largest aquifers are mapped, but these maps do not mention how much water they contain or how fast water levels decline. If we can add a third dimension to the aquifer maps, so a thickness, and add geohydrological information we can estimate how much water is stored. Also data on groundwater age and how fast it is refilled is needed to predict the impact of human water use and climate change on the groundwater resource.
Consistency of internal fluxes in a hydrological model running at multiple time steps
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2016-04-01
Improving hydrological models remains a difficult task and many ways can be explored, among which one can find the improvement of spatial representation, the search for more robust parametrization, the better formulation of some processes or the modification of model structures by trial-and-error procedure. Several past works indicate that model parameters and structure can be dependent on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps, to find solutions for improvements. Here we analyse the impact of data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, by using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input of a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to be run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance to the identification of the model components that should be improved. Our analysis indicates that the baseline model structure is to be modified at sub-daily time steps to warrant the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception model component, whose output flux showed the strongest sensitivity to modelling time step. The dependency of the optimal model complexity on time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7
NASA Astrophysics Data System (ADS)
Lee, Ji-Seok; Song, Ki-Won
2015-11-01
The objective of the present study is to systematically elucidate the time-dependent rheological behavior of concentrated xanthan gum systems in complicated step-shear flow fields. Using a strain-controlled rheometer (ARES), step-shear flow behaviors of a concentrated xanthan gum model solution have been experimentally investigated in interrupted shear flow fields with a various combination of different shear rates, shearing times and rest times, and step-incremental and step-reductional shear flow fields with various shearing times. The main findings obtained from this study are summarized as follows. (i) In interrupted shear flow fields, the shear stress is sharply increased until reaching the maximum stress at an initial stage of shearing times, and then a stress decay towards a steady state is observed as the shearing time is increased in both start-up shear flow fields. The shear stress is suddenly decreased immediately after the imposed shear rate is stopped, and then slowly decayed during the period of a rest time. (ii) As an increase in rest time, the difference in the maximum stress values between the two start-up shear flow fields is decreased whereas the shearing time exerts a slight influence on this behavior. (iii) In step-incremental shear flow fields, after passing through the maximum stress, structural destruction causes a stress decay behavior towards a steady state as an increase in shearing time in each step shear flow region. The time needed to reach the maximum stress value is shortened as an increase in step-increased shear rate. (iv) In step-reductional shear flow fields, after passing through the minimum stress, structural recovery induces a stress growth behavior towards an equilibrium state as an increase in shearing time in each step shear flow region. The time needed to reach the minimum stress value is lengthened as a decrease in step-decreased shear rate.
NASA Astrophysics Data System (ADS)
Hurricane, O. A.; Callahan, D. A.; Edwards, M. J.; Casey, D.; Doeppner, T.; Hohenberger, M.; Hinkel, D.; Berzak Hopkins, L.; Le Pape, S.; MacLaren, S.; Masse, L.; Thomas, C.; Zylstra, A.
2017-10-01
Post NIC (2012), more stable and lower convergence implosions were developed and used as part of a `basecamp' strategy to identify obstacles to further performance. From 2013-2015 by probing away from a conservative working implosion in-steps towards conditions of higher velocity and compression, `Fuel Gain' and alpha-heating were obtained. In the process, performance cliffs unrelated to `mix' were identified the most impactful of which were symmetry control of the implosion and hydro seeded by engineering features. From 2015-2017 we focused on mitigating poor symmetry control and engineering improvements on fill-tubes and capsule mounting techniques. The results were more efficient implosions that can obtain the same performance levels as the earlier implosions, but with less laser energy. Presently, the best of these implosions is poised to step into a burning plasma state. Here, we describe the next step in our strategy that involves using the data we've acquired across parameter space to make a step to the largest symmetric implosions that can be fielded on NIF with the energy available. We describe the key principles that form the foundation of this approach. Performed under the auspices of U.S. Dept. of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
45. FINISHING STANDS, 98INCH CONTINUOUS HOT STRIP MILL, WORLD'S LARGEST ...
45. FINISHING STANDS, 98-INCH CONTINUOUS HOT STRIP MILL, WORLD'S LARGEST AT THE TIME OF INSTALLATION IN 1937. THE MILL WAS REPLACED BY A NEW 84-INCH MILL IN 1971 AND IS SEEN HERE PARTIALLY DISMANTLED IN PREPARATION FOR DEMOLITION. - Corrigan, McKinney Steel Company, 3100 East Forty-fifth Street, Cleveland, Cuyahoga County, OH
Rebich, Richard A; Houston, Natalie A; Mize, Scott V; Pearson, Daniel K; Ging, Patricia B; Evan Hornig, C
2011-01-01
Abstract SPAtially Referenced Regressions On Watershed attributes (SPARROW) models were developed to estimate nutrient inputs [total nitrogen (TN) and total phosphorus (TP)] to the northwestern part of the Gulf of Mexico from streams in the South-Central United States (U.S.). This area included drainages of the Lower Mississippi, Arkansas-White-Red, and Texas-Gulf hydrologic regions. The models were standardized to reflect nutrient sources and stream conditions during 2002. Model predictions of nutrient loads (mass per time) and yields (mass per area per time) generally were greatest in streams in the eastern part of the region and along reaches near the Texas and Louisiana shoreline. The Mississippi River and Atchafalaya River watersheds, which drain nearly two-thirds of the conterminous U.S., delivered the largest nutrient loads to the Gulf of Mexico, as expected. However, the three largest delivered TN yields were from the Trinity River/Galveston Bay, Calcasieu River, and Aransas River watersheds, while the three largest delivered TP yields were from the Calcasieu River, Mermentau River, and Trinity River/Galveston Bay watersheds. Model output indicated that the three largest sources of nitrogen from the region were atmospheric deposition (42%), commercial fertilizer (20%), and livestock manure (unconfined, 17%). The three largest sources of phosphorus were commercial fertilizer (28%), urban runoff (23%), and livestock manure (confined and unconfined, 23%). PMID:22457582
Economic Impact of Blood Transfusions: Balancing Cost and Benefits
Oge, Tufan; Kilic, Cemil Hakan; Kilic, Gokhan Sami
2014-01-01
Blood transfusions may be lifesaving, but they inherit their own risks. Risk of transfusion to benefit is a delicate balance. In addition, blood product transfusions purchases are one of the largest line items among the hospital and laboratory charges. In this review, we aimed to discuss the transfusion strategies and share our transfusion protocol as well as the steps for hospitals to build-up a blood management program while all these factors weight in. Moreover, we evaluate the financial burden to the health care system. PMID:25610294
An adaptive time-stepping strategy for solving the phase field crystal model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zhengru, E-mail: zrzhang@bnu.edu.cn; Ma, Yuan, E-mail: yuner1022@gmail.com; Qiao, Zhonghua, E-mail: zqiao@polyu.edu.hk
2013-09-15
In this work, we will propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs long time to reach steady state, and then large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the use of the proposed time step adaptivity cannot only resolve the steady state solution, but also the dynamical development of the solution efficiently and accurately. Themore » numerical experiments demonstrate that the CPU time is significantly saved for long time simulations.« less
Three-dimensional measurement of femur based on structured light scanning
NASA Astrophysics Data System (ADS)
Li, Jie; Ouyang, Jianfei; Qu, Xinghua
2009-12-01
Osteometry is fundamental to study the human skeleton. It has been widely used in palaeoanthropology, bionics, and criminal investigation for more than 200 years. The traditional osteometry is a simple 1-dimensional measurement that can only get 1D size of the bones in manual step-by-step way, even though there are more than 400 parameters to be measured. For today's research and application it is significant and necessary to develop an advanced 3-dimensional osteometry technique. In this paper a new 3D osteometry is presented, which focuses on measurement of the femur, the largest tubular bone in human body. 3D measurement based on the structured light scanning is developed to create fast and precise measurement of the entire body of the femur. The cloud data and geometry model of the sample femur is established in mathematic, accurate and fast way. More than 30 parameters are measured and compared with each other. The experiment shows that the proposed method can meet traditional osteometry and obtain all 1D geometric parameters of the bone at the same time by the mathematics model, such as trochanter-lateral condyle length, superior breadth of shaft, and collo-diaphyseal angle, etc. In the best way, many important geometric parameters that are very difficult to measure by existing osteometry, such as volume, surface area, and curvature of the bone, can be obtained very easily. The overall measuring error is less than 0.1mm.
Three-dimensional measurement of femur based on structured light scanning
NASA Astrophysics Data System (ADS)
Li, Jie; Ouyang, Jianfei; Qu, Xinghua
2010-03-01
Osteometry is fundamental to study the human skeleton. It has been widely used in palaeoanthropology, bionics, and criminal investigation for more than 200 years. The traditional osteometry is a simple 1-dimensional measurement that can only get 1D size of the bones in manual step-by-step way, even though there are more than 400 parameters to be measured. For today's research and application it is significant and necessary to develop an advanced 3-dimensional osteometry technique. In this paper a new 3D osteometry is presented, which focuses on measurement of the femur, the largest tubular bone in human body. 3D measurement based on the structured light scanning is developed to create fast and precise measurement of the entire body of the femur. The cloud data and geometry model of the sample femur is established in mathematic, accurate and fast way. More than 30 parameters are measured and compared with each other. The experiment shows that the proposed method can meet traditional osteometry and obtain all 1D geometric parameters of the bone at the same time by the mathematics model, such as trochanter-lateral condyle length, superior breadth of shaft, and collo-diaphyseal angle, etc. In the best way, many important geometric parameters that are very difficult to measure by existing osteometry, such as volume, surface area, and curvature of the bone, can be obtained very easily. The overall measuring error is less than 0.1mm.
Hawai‘i Physician Workforce Assessment 2010
Dall, Tim; Sakamoto, David
2012-01-01
Background National policy experts have estimated that the United States will be 15–20% short of physicians by the year 2020. In 2008, the Big Island of Hawai‘i was found to be 15% short of physicians. The current article describes research to determine the physician supply and demand across the State of Hawai‘i. Methods The researchers utilized licensure lists, all available sources of physician practice location information, and contacted provider offices to develop a database of practicing physicians in Hawai‘i. A statistical model based on national utilization of physician services by age, ethnicity, gender, insurance, and obesity rates was used to estimate demand for services. Using number of new state licenses per year, the researchers estimated the number of physicians who enter the Hawai‘i workforce annually. Physician age data were used to estimate retirements. Results Researchers found 2,860 full time equivalents of practicing, non-military, patient-care physicians in Hawai‘i (excluding those still in residency or fellowship programs). The calculated demand for physician services by specialty indicates a current shortage of physicians of over 600. This shortage may grow by 50 to 100 physicians per year if steps are not taken to reverse this trend. Physician retirement is the single largest element in the loss of physicians, with population growth and aging playing a significant role in increasing demand. Discussion Study findings indicate that Hawai‘i is 20% short of physicians and the situation is likely to worsen if mitigating steps are not taken immediately. PMID:22737636
Acoustic investigation of wall jet over a backward-facing step using a microphone phased array
NASA Astrophysics Data System (ADS)
Perschke, Raimund F.; Ramachandran, Rakesh C.; Raman, Ganesh
2015-02-01
The acoustic properties of a wall jet over a hard-walled backward-facing step of aspect ratios 6, 3, 2, and 1.5 are studied using a 24-channel microphone phased array at Mach numbers up to M=0.6. The Reynolds number based on inflow velocity and step height assumes values from Reh = 3.0 ×104 to 7.2 ×105. Flow without and with side walls is considered. The experimental setup is open in the wall-normal direction and the expansion ratio is effectively 1. In case of flow through a duct, symmetry of the flow in the spanwise direction is lost downstream of separation at all but the largest aspect ratio as revealed by oil paint flow visualization. Hydrodynamic scattering of turbulence from the trailing edge of the step contributes significantly to the radiated sound. Reflection of acoustic waves from the bottom plate results in a modulation of power spectral densities. Acoustic source localization has been conducted using a 24-channel microphone phased array. Convective mean-flow effects on the apparent source origin have been assessed by placing a loudspeaker underneath a perforated flat plate and evaluating the displacement of the beamforming peak with inflow Mach number. Two source mechanisms are found near the step. One is due to interaction of the turbulent wall jet with the convex edge of the step. Free-stream turbulence sound is found to be peaked downstream of the step. Presence of the side walls increases free-stream sound. Results of the flow visualization are correlated with acoustic source maps. Trailing-edge sound and free-stream turbulence sound can be discriminated using source localization.
NASA Astrophysics Data System (ADS)
Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.
2009-09-01
The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.
Nutt, John G.; Horak, Fay B.
2011-01-01
Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431
Molecular dynamics based enhanced sampling of collective variables with very large time steps.
Chen, Pei-Yang; Tuckerman, Mark E
2018-01-14
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
Molecular dynamics based enhanced sampling of collective variables with very large time steps
NASA Astrophysics Data System (ADS)
Chen, Pei-Yang; Tuckerman, Mark E.
2018-01-01
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
Melzer, Itshak; Goldring, Melissa; Melzer, Yehudit; Green, Elad; Tzedek, Irit
2010-12-01
If balance is lost, quick step execution can prevent falls. Research has shown that speed of voluntary stepping was able to predict future falls in old adults. The aim of the study was to investigate voluntary stepping behavior, as well as to compare timing and leg push-off force-time relation parameters of involved and uninvolved legs in stroke survivors during single- and dual-task conditions. We also aimed to compare timing and leg push-off force-time relation parameters between stroke survivors and healthy individuals in both task conditions. Ten stroke survivors performed a voluntary step execution test with their involved and uninvolved legs under two conditions: while focusing only on the stepping task and while a separate attention-demanding task was performed simultaneously. Temporal parameters related to the step time were measured including the duration of the step initiation phase, the preparatory phase, the swing phase, and the total step time. In addition, force-time parameters representing the push-off power during stepping were calculated from ground reaction data and compared with 10 healthy controls. The involved legs of stroke survivors had a significantly slower stepping time than uninvolved legs due to increased swing phase duration during both single- and dual-task conditions. For dual compared to single task, the stepping time increased significantly due to a significant increase in the duration of step initiation. In general, the force time parameters were significantly different in both legs of stroke survivors as compared to healthy controls, with no significant effect of dual compared with single-task conditions in both groups. The inability of stroke survivors to swing the involved leg quickly may be the most significant factor contributing to the large number of falls to the paretic side. The results suggest that stroke survivors were unable to rapidly produce muscle force in fast actions. This may be the mechanism of delayed execution of a fast step when balance is lost, thus increasing the likelihood of falls in stroke survivors. Copyright © 2010 Elsevier Ltd. All rights reserved.
Surface electric fields for North America during historical geomagnetic storms
Wei, Lisa H.; Homeier, Nichole; Gannon, Jennifer L.
2013-01-01
To better understand the impact of geomagnetic disturbances on the electric grid, we recreate surface electric fields from two historical geomagnetic storms—the 1989 “Quebec” storm and the 2003 “Halloween” storms. Using the Spherical Elementary Current Systems method, we interpolate sparsely distributed magnetometer data across North America. We find good agreement between the measured and interpolated data, with larger RMS deviations at higher latitudes corresponding to larger magnetic field variations. The interpolated magnetic field data are combined with surface impedances for 25 unique physiographic regions from the United States Geological Survey and literature to estimate the horizontal, orthogonal surface electric fields in 1 min time steps. The induced horizontal electric field strongly depends on the local surface impedance, resulting in surprisingly strong electric field amplitudes along the Atlantic and Gulf Coast. The relative peak electric field amplitude of each physiographic region, normalized to the value in the Interior Plains region, varies by a factor of 2 for different input magnetic field time series. The order of peak electric field amplitudes (largest to smallest), however, does not depend much on the input. These results suggest that regions at lower magnetic latitudes with high ground resistivities are also at risk from the effect of geomagnetically induced currents. The historical electric field time series are useful for estimating the flow of the induced currents through long transmission lines to study power flow and grid stability during geomagnetic disturbances.
Residue-level resolution of alphavirus envelope protein interactions in pH-dependent fusion.
Zeng, Xiancheng; Mukhopadhyay, Suchetana; Brooks, Charles L
2015-02-17
Alphavirus envelope proteins, organized as trimers of E2-E1 heterodimers on the surface of the pathogenic alphavirus, mediate the low pH-triggered fusion of viral and endosomal membranes in human cells. The lack of specific treatment for alphaviral infections motivates our exploration of potential antiviral approaches by inhibiting one or more fusion steps in the common endocytic viral entry pathway. In this work, we performed constant pH molecular dynamics based on an atomic model of the alphavirus envelope with icosahedral symmetry. We have identified pH-sensitive residues that cause the largest shifts in thermodynamic driving forces under neutral and acidic pH conditions for various fusion steps. A series of conserved interdomain His residues is identified to be responsible for the pH-dependent conformational changes in the fusion process, and ligand binding sites in their vicinity are anticipated to be potential drug targets aimed at inhibiting viral infections.
New Antennas and Methods for the Low Frequency Stellar and Planetary Radio Astronomy
NASA Astrophysics Data System (ADS)
Konovalenko, A. A.; Falkovich, I. S.; Rucker, H. O.; Lecacheux, A.; Zarka, Ph.; Koliadin, V. L.; Zakharenko, V. V.; Stanislavsky, A. A.; Melnik, V. N.; Litvinenko, G. V.; Gridin, A. A.; Bubnov, I. N.; Kalinichenko, N. N.; Reznik, A. P.; Sidorchuk, M. A.; Stepkin, S. V.; Mukha, D. V.; Nikolajenko, V. S.; Karlsson, R.; Thide, B.
According to the special Program of the National Academy of Sciences of Ukraine, creation of the new giant Ukrainian radio telescope (GURT) was started a few years ago on the UTR-2 radio telescope observatory. The main goal is to reach maximum band at the lowest frequencies (10-70 MHz), effective area (step-by-step up to 100,000 sq.m), and high interference immunity for resolving many astrophysical tasks when the sensitivity is less limited by the confusion effects. These tasks include stellar radio astronomy (the Sun, solar wind, flare stars, pulsars, transients) and planetary one (Jupiter, planetary lightnings, Earth ionosphere, the Moon, exoplanets). This array should be complementary to the LOFAR, E-LOFAR systems. The first stages of the GURT (6 x 25 cross dipole active elements) and broad-band digital registration of the impulsive and sporadic events were tested in comparison with the existing largest decameter array UTR-2.
Regional Classification of Traditional Japanese Folk Songs
NASA Astrophysics Data System (ADS)
Kawase, Akihiro; Tokosumi, Akifumi
In this study, we focus on the melodies of Japanese folk songs, and examine the basic structures of Japanese folk songs that represent the characteristics of different regions. We sample the five largest song genres within the music corpora of the Nihon Min-yo Taikan (Anthology of Japanese Folk Songs), consisting of 202,246 tones from 1,794 song pieces from 45 prefectures in Japan. Then, we calculate the probabilities of 24 transition patterns that fill the interval of the perfect fourth pitch, which is the interval that maintains most of the frequency for one-step and two-step pitch transitions within 11 regions, in order to determine the parameters for cluster analysis. As a result, we successively classify the regions into two basic groups, eastern Japan and western Japan, which corresponds to geographical factors and cultural backgrounds, and also match accent distributions in the Japanese language.
Scaling behavior for random walks with memory of the largest distance from the origin
NASA Astrophysics Data System (ADS)
Serva, Maurizio
2013-11-01
We study a one-dimensional random walk with memory. The behavior of the walker is modified with respect to the simple symmetric random walk only when he or she is at the maximum distance ever reached from his or her starting point (home). In this case, having the choice to move farther or to move closer, the walker decides with different probabilities. If the probability of a forward step is higher then the probability of a backward step, the walker is bold, otherwise he or she is timorous. We investigate the asymptotic properties of this bold-timorous random walk, showing that the scaling behavior varies continuously from subdiffusive (timorous) to superdiffusive (bold). The scaling exponents are fully determined with a new mathematical approach based on a decomposition of the dynamics in active journeys (the walker is at the maximum distance) and lazy journeys (the walker is not at the maximum distance).
CD4+ Cell Count and HIV Load as Predictors of Size of Anal Warts Over Time in HIV-Infected Women
Luu, Hung N.; Amirian, E. Susan; Chan, Wenyaw; Beasley, R. Palmer; Piller, Linda B.
2012-01-01
Background. Little is known about the associations between CD4+ cell counts, human immunodeficiency virus (HIV) load, and human papillomavirus “low-risk” types in noncancerous clinical outcomes. This study examined whether CD4+ count and HIV load predict the size of the largest anal warts in 976 HIV-infected women in an ongoing cohort. Methods. A linear mixed model was used to determine the association between size of anal wart and CD4+ count and HIV load. Results. The incidence of anal warts was 4.15 cases per 100 person-years (95% confidence interval [CI], 3.83–4.77) and 1.30 cases per 100 person-years (95% CI, 1.00–1.58) in HIV-infected and HIV-uninfected women, respectively. There appeared to be an inverse association between size of the largest anal warts and CD4+ count at baseline; however, this was not statistically significant. There was no association between size of the largest anal warts and CD4+ count or HIV load over time. Conclusions. There was no evidence for an association between size of the largest anal warts and CD4+ count or HIV load over time. Further exploration on the role of immune response on the development of anal warts is warranted in a larger study. PMID:22246682
Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik
Criticality condition of the reactors is one of the important factors for evaluating reactor operation and nuclear fuel breeding ratio is another factor to show nuclear fuel sustainability. This study analyzes the effect of burnup steps and cycle operation step for evaluating the criticality condition of the reactor as well as the performance of nuclear fuel breeding or breeding ratio (BR). Burnup step is performed based on a day step analysis which is varied from 10 days up to 800 days and for cycle operation from 1 cycle up to 8 cycles reactor operations. In addition, calculation efficiency based onmore » the variation of computer processors to run the analysis in term of time (time efficiency in the calculation) have been also investigated. Optimization method for reactor design analysis which is used a large fast breeder reactor type as a reference case was performed by adopting an established reactor design code of JOINT-FR. The results show a criticality condition becomes higher for smaller burnup step (day) and for breeding ratio becomes less for smaller burnup step (day). Some nuclides contribute to make better criticality when smaller burnup step due to individul nuclide half-live. Calculation time for different burnup step shows a correlation with the time consuming requirement for more details step calculation, although the consuming time is not directly equivalent with the how many time the burnup time step is divided.« less
Webb, Thomas J; Vanden Berghe, Edward; O'Dor, Ron
2010-08-02
Understanding the distribution of marine biodiversity is a crucial first step towards the effective and sustainable management of marine ecosystems. Recent efforts to collate location records from marine surveys enable us to assemble a global picture of recorded marine biodiversity. They also effectively highlight gaps in our knowledge of particular marine regions. In particular, the deep pelagic ocean--the largest biome on Earth--is chronically under-represented in global databases of marine biodiversity. We use data from the Ocean Biogeographic Information System to plot the position in the water column of ca 7 million records of marine species occurrences. Records from relatively shallow waters dominate this global picture of recorded marine biodiversity. In addition, standardising the number of records from regions of the ocean differing in depth reveals that regardless of ocean depth, most records come either from surface waters or the sea bed. Midwater biodiversity is drastically under-represented. The deep pelagic ocean is the largest habitat by volume on Earth, yet it remains biodiversity's big wet secret, as it is hugely under-represented in global databases of marine biological records. Given both its value in the provision of a range of ecosystem services, and its vulnerability to threats including overfishing and climate change, there is a pressing need to increase our knowledge of Earth's largest ecosystem.
Knechtle, B; Nikolaidis, P T
2018-01-01
In road runners, the age-related performance decline has been well investigated for marathoners, but little is known for half-marathoners. We analysed data from 138,616 runners (48,148 women and 90,469 men) competing between 2014 and 2016 in GöteborgsVarvet, the world's largest half-marathon. The men-to-women ratio in participants increased with age, the fastest race times were observed in age groups ˂35 and 35-39 years in women and in age group 35-39 years in men, the main effect of sex and the sex × age group interaction on race time were trivial, and the competitiveness was denser in men and in the younger age groups. In summary, in half-marathon running in the largest half-marathon in the world, the GöteborgsVarvet, women achieved the fastest race time at an earlier age compared to men where the fastest race times were observed in women in age groups ˂35 and 35-39 years and in men in age group 35-39 years.
USDA-ARS?s Scientific Manuscript database
Soil organic matter (SOM) is a very important compartment of the biosphere: it represents the largest dynamic carbon (C) pool where the C is stored for the longest time period. Root inputs, as exudates and root slush, represent a major, where not the largest, annual contribution to soil C input. Roo...
The general alcoholics anonymous tools of recovery: the adoption of 12-step practices and beliefs.
Greenfield, Brenna L; Tonigan, J Scott
2013-09-01
Working the 12 steps is widely prescribed for Alcoholics Anonymous (AA) members although the relative merits of different methods for measuring step work have received minimal attention and even less is known about how step work predicts later substance use. The current study (1) compared endorsements of step work on an face-valid or direct measure, the Alcoholics Anonymous Inventory (AAI), with an indirect measure of step work, the General Alcoholics Anonymous Tools of Recovery (GAATOR); (2) evaluated the underlying factor structure of the GAATOR and changes in step work over time; (3) examined changes in the endorsement of step work over time; and (4) investigated how, if at all, 12-step work predicted later substance use. New AA affiliates (N = 130) completed assessments at intake, 3, 6, and 9 months. Significantly more participants endorsed step work on the GAATOR than on the AAI for nine of the 12 steps. An exploratory factor analysis revealed a two-factor structure for the GAATOR comprising behavioral step work and spiritual step work. Behavioral step work did not change over time, but was predicted by having a sponsor, while Spiritual step work decreased over time and increases were predicted by attending 12-step meetings or treatment. Behavioral step work did not prospectively predict substance use. In contrast, spiritual step work predicted percent days abstinent. Behavioral step work and spiritual step work appear to be conceptually distinct components of step work that have distinct predictors and unique impacts on outcomes. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Adjei, Nicholas Kofi; Brand, Tilman; Zeeb, Hajo
2017-01-01
Background Paradoxically, despite their longer life expectancy, women report poorer health than men. Time devoted to differing social roles could be an explanation for the observed gender differences in health among the elderly. The objective of this study was to explain gender differences in self-reported health among the elderly by taking time use activities, socio-economic positions, family characteristics and cross-national differences into account. Methods Data from the Multinational Time Use Study (MTUS) on 13,223 men and 18,192 women from Germany, Italy, Spain, UK and the US were analyzed. Multiple binary logistic regression models were used to examine the association between social factors and health for men and women separately. We further identified the relative contribution of different factors to total gender inequality in health using the Blinder-Oaxaca decomposition method. Results Whereas time allocated to paid work, housework and active leisure activities were positively associated with health, time devoted to passive leisure and personal activities were negatively associated with health among both men and women, but the magnitude of the association varied by gender and country. We found significant gender differences in health in Germany, Italy and Spain, but not in the other countries. The decomposition showed that differences in the time allocated to active leisure and level of educational attainment accounted for the largest health gap. Conclusions Our study represents a first step in understanding cross-national differences in the association between health status and time devoted to role-related activities among elderly men and women. The results, therefore, demonstrate the need of using an integrated framework of social factors in analyzing and explaining the gender and cross-national differences in the health of the elderly population. PMID:28949984
Adjei, Nicholas Kofi; Brand, Tilman; Zeeb, Hajo
2017-01-01
Paradoxically, despite their longer life expectancy, women report poorer health than men. Time devoted to differing social roles could be an explanation for the observed gender differences in health among the elderly. The objective of this study was to explain gender differences in self-reported health among the elderly by taking time use activities, socio-economic positions, family characteristics and cross-national differences into account. Data from the Multinational Time Use Study (MTUS) on 13,223 men and 18,192 women from Germany, Italy, Spain, UK and the US were analyzed. Multiple binary logistic regression models were used to examine the association between social factors and health for men and women separately. We further identified the relative contribution of different factors to total gender inequality in health using the Blinder-Oaxaca decomposition method. Whereas time allocated to paid work, housework and active leisure activities were positively associated with health, time devoted to passive leisure and personal activities were negatively associated with health among both men and women, but the magnitude of the association varied by gender and country. We found significant gender differences in health in Germany, Italy and Spain, but not in the other countries. The decomposition showed that differences in the time allocated to active leisure and level of educational attainment accounted for the largest health gap. Our study represents a first step in understanding cross-national differences in the association between health status and time devoted to role-related activities among elderly men and women. The results, therefore, demonstrate the need of using an integrated framework of social factors in analyzing and explaining the gender and cross-national differences in the health of the elderly population.
Two-step chlorination: A new approach to disinfection of a primary sewage effluent.
Li, Yu; Yang, Mengting; Zhang, Xiangru; Jiang, Jingyi; Liu, Jiaqi; Yau, Cie Fu; Graham, Nigel J D; Li, Xiaoyan
2017-01-01
Sewage disinfection aims at inactivating pathogenic microorganisms and preventing the transmission of waterborne diseases. Chlorination is extensively applied for disinfecting sewage effluents. The objective of achieving a disinfection goal and reducing disinfectant consumption and operational costs remains a challenge in sewage treatment. In this study, we have demonstrated that, for the same chlorine dosage, a two-step addition of chlorine (two-step chlorination) was significantly more efficient in disinfecting a primary sewage effluent than a one-step addition of chlorine (one-step chlorination), and shown how the two-step chlorination was optimized with respect to time interval and dosage ratio. Two-step chlorination of the sewage effluent attained its highest disinfection efficiency at a time interval of 19 s and a dosage ratio of 5:1. Compared to one-step chlorination, two-step chlorination enhanced the disinfection efficiency by up to 0.81- or even 1.02-log for two different chlorine doses and contact times. An empirical relationship involving disinfection efficiency, time interval and dosage ratio was obtained by best fitting. Mechanisms (including a higher overall Ct value, an intensive synergistic effect, and a shorter recovery time) were proposed for the higher disinfection efficiency of two-step chlorination in the sewage effluent disinfection. Annual chlorine consumption costs in one-step and two-step chlorination of the primary sewage effluent were estimated. Compared to one-step chlorination, two-step chlorination reduced the cost by up to 16.7%. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Multivariate Largest Lyapunov Exponent as an Age-Related Metric of Quiet Standing Balance
Liu, Kun; Wang, Hongrui; Xiao, Jinzhuang
2015-01-01
The largest Lyapunov exponent has been researched as a metric of the balance ability during human quiet standing. However, the sensitivity and accuracy of this measurement method are not good enough for clinical use. The present research proposes a metric of the human body's standing balance ability based on the multivariate largest Lyapunov exponent which can quantify the human standing balance. The dynamic multivariate time series of ankle, knee, and hip were measured by multiple electrical goniometers. Thirty-six normal people of different ages participated in the test. With acquired data, the multivariate largest Lyapunov exponent was calculated. Finally, the results of the proposed approach were analysed and compared with the traditional method, for which the largest Lyapunov exponent and power spectral density from the centre of pressure were also calculated. The following conclusions can be obtained. The multivariate largest Lyapunov exponent has a higher degree of differentiation in differentiating balance in eyes-closed conditions. The MLLE value reflects the overall coordination between multisegment movements. Individuals of different ages can be distinguished by their MLLE values. The standing stability of human is reduced with the increment of age. PMID:26064182
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
Lee, X J; Lee, L Y; Foo, L P Y; Tan, K W; Hassell, D G
2012-01-01
The present work covers the preparation of carbon-based nanosorbents by ethylene decomposition on stainless steel mesh without the use of external catalyst for the treatment of water containing nickel ions (Ni2+). The reaction temperature was varied from 650 to 850 degrees C, while reaction time and ethylene to nitrogen flow ratio were maintained at 30 min and 1:1 cm3/min, respectively. Results show that nanosorbents synthesised at a reaction temperature of 650 degrees C had the smallest average diameter (75 nm), largest BET surface area (68.95 m2/g) and least amount of impurity (0.98 wt.% Fe). A series of batch-sorption tests were performed to evaluate the effects of initial pH, initial metal concentration and contact time on Ni2+ removal by the nanosorbents. The equilibrium data fitted well to Freundlich isotherm. The kinetic data were best correlated to a pseudo second-order model indicating that the process was of chemisorption type. Further analysis by the Boyd kinetic model revealed that boundary layer diffusion was the controlling step. This primary study suggests that the prepared material with Freundlich constants compared well with those in the literature, is a promising sorbent for the sequestration of Ni2+ in aqueous solutions.
The challenges associated with developing science-based landscape scale management plans
Szaro, Robert C.; Boyce, D.A.; Puchlerz, T.
2005-01-01
Planning activities over large landscapes poses a complex of challenges when trying to balance the implementation of a conservation strategy while still allowing for a variety of consumptive and nonconsumptive uses. We examine a case in southeast Alaska to illustrate the breadth of these challenges and an approach to developing a science-based resource plan. Not only was the planning area, the Tongass National Forest, USA, exceptionally large (approximately 17 million acres or 6.9 million ha), but it also is primarily an island archipelago environment. The water system surrounding and going through much of the forest provides access to facilitate the movement of people, animals, and plants but at the same time functions as a barrier to others. This largest temperate rainforest in the world is an exceptional example of the complexity of managing at such a scale but also illustrates the role of science in the planning process. As we enter the 21st century, the list of questions needing scientific investigation has not only changed dramatically, but the character of the questions also has changed. Questions are contentious, cover broad scales in space and time, and are highly complex and interdependent. The provision of unbiased and objective information to all stakeholders is an important step in informed decision-making.
Generic Schemes for Single-Molecule Kinetics. 2: Information Content of the Poisson Indicator.
Avila, Thomas R; Piephoff, D Evan; Cao, Jianshu
2017-08-24
Recently, we described a pathway analysis technique (paper 1) for analyzing generic schemes for single-molecule kinetics based upon the first-passage time distribution. Here, we employ this method to derive expressions for the Poisson indicator, a normalized measure of stochastic variation (essentially equivalent to the Fano factor and Mandel's Q parameter), for various renewal (i.e., memoryless) enzymatic reactions. We examine its dependence on substrate concentration, without assuming all steps follow Poissonian kinetics. Based upon fitting to the functional forms of the first two waiting time moments, we show that, to second order, the non-Poissonian kinetics are generally underdetermined but can be specified in certain scenarios. For an enzymatic reaction with an arbitrary intermediate topology, we identify a generic minimum of the Poisson indicator as a function of substrate concentration, which can be used to tune substrate concentration to the stochastic fluctuations and to estimate the largest number of underlying consecutive links in a turnover cycle. We identify a local maximum of the Poisson indicator (with respect to substrate concentration) for a renewal process as a signature of competitive binding, either between a substrate and an inhibitor or between multiple substrates. Our analysis explores the rich connections between Poisson indicator measurements and microscopic kinetic mechanisms.
GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling
NASA Astrophysics Data System (ADS)
Miki, Yohei; Umemura, Masayuki
2017-04-01
The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics well suited for GPU(s). Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distribution performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generation of GPUs, show that the hierarchical time step achieves a speedup by a factor of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 224 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.
Liu, Wei-hui; Wang, Tao; Yan, Hong-tao; Chen, Tao; Xu, Chuan; Ye, Ping; Zhang, Ning; Liu, Zheng-cai; Tang, Li-jun
2015-01-01
Aims Although we previously demonstrated abdominal paracentesis drainage (APD) preceding percutaneous catheter drainage (PCD) as the central step for treating patients with moderately severe (MSAP) or severe acute pancreatitis (SAP), the predictors leading to PCD after APD have not been studied. Methods Consecutive patients with MSAP or SAP were recruited between June 2011 and June 2013. As a step-up approach, all patients initially received medical management, later underwent ultrasound-guided APD before PCD, if necessary, followed by endoscopic necrosectomy through the path formed by PCD. APD primarily targeted fluid in the abdominal or pelvic cavities, whereas PCD aimed at (peri)pancreatic fluid. Results Of the 92 enrolled patients, 40 were managed with APD alone and 52 received PCD after APD (14 required necrosectomy after initial PCD). The overall mortality was 6.5%. Univariate analysis showed that among the 20 selected parameters, 13 factors significantly affected PCD intervention after APD. Multivariate analysis revealed that infected (peri)pancreatic collections (P = -0.001), maximum extent of necrosis of more than 30% of the pancreas (P = -0.024), size of the largest necrotic peri(pancreatic) collection (P = -0.007), and reduction of (peri)pancreatic fluid collections by <50% after APD (P = -0.008) were all independent predictors of PCD. Conclusions Infected (peri)pancreatic collections, a largest necrotic peri(pancreatic) collection of more than 100 ml, and reduction of (peri)pancreatic fluid collections by <50% after APD could effectively predict the need for PCD in the early course of the disease. PMID:25659143
NASA Astrophysics Data System (ADS)
Pineda, Gustavo; Atehortúa, Angélica; Iregui, Marcela; García-Arteaga, Juan D.; Romero, Eduardo
2017-11-01
External auditory cues stimulate motor related areas of the brain, activating motor ways parallel to the basal ganglia circuits and providing a temporary pattern for gait. In effect, patients may re-learn motor skills mediated by compensatory neuroplasticity mechanisms. However, long term functional gains are dependent on the nature of the pathology, follow-up is usually limited and reinforcement by healthcare professionals is crucial. Aiming to cope with these challenges, several researches and device implementations provide auditory or visual stimulation to improve Parkinsonian gait pattern, inside and outside clinical scenarios. The current work presents a semiautomated strategy for spatio-temporal feature extraction to study the relations between auditory temporal stimulation and spatiotemporal gait response. A protocol for auditory stimulation was built to evaluate the integrability of the strategy in the clinic practice. The method was evaluated in transversal measurement with an exploratory group of people with Parkinson's (n = 12 in stage 1, 2 and 3) and control subjects (n =6). The result showed a strong linear relation between auditory stimulation and cadence response in control subjects (R=0.98 +/-0.008) and PD subject in stage 2 (R=0.95 +/-0.03) and stage 3 (R=0.89 +/-0.05). Normalized step length showed a variable response between low and high gait velocity (0.2> R >0.97). The correlation between normalized mean velocity and stimulus was strong in all PD stage 2 (R>0.96) PD stage 3 (R>0.84) and controls (R>0.91) for all experimental conditions. Among participants, the largest variation from baseline was found in PD subject in stage 3 (53.61 +/-39.2 step/min, 0.12 +/- 0.06 in step length and 0.33 +/- 0.16 in mean velocity). In this group these values were higher than the own baseline. These variations are related with direct effect of metronome frequency on cadence and velocity. The variation of step length involves different regulation strategies and could need others specific external cues. In conclusion the current protocol (and their selected parameters, kind of sound time for training, step of variation, range of variation) provide a suitable gait facilitation method specially for patients with the highest gait disturbance (stage 2 and 3). The method should be adjusted for initial stages and evaluated in a rehabilitation program.
McCrorie, P Rw; Duncan, E; Granat, M H; Stansfield, B W
2012-11-01
Evidence suggests that behaviours such as standing are beneficial for our health. Unfortunately, little is known of the prevalence of this state, its importance in relation to time spent stepping or variation across seasons. The aim of this study was to quantify, in young adolescents, the prevalence and seasonal changes in time spent upright and not stepping (UNSt(time)) as well as time spent upright and stepping (USt(time)), and their contribution to overall upright time (U(time)). Thirty-three adolescents (12.2 ± 0.3 y) wore the activPAL activity monitor during four school days on two occasions: November/December (winter) and May/June (summer). UNSt(time) contributed 60% of daily U(time) at winter (Mean = 196 min) and 53% at summer (Mean = 171 min); a significant seasonal effect, p < 0.001. USt(time) was significantly greater in summer compared to winter (153 min versus 131 min, p < 0.001). The effects in UNSt(time) could be explained through significant seasonal differences during the school hours (09:00-16:00), whereas the effects in USt(time) could be explained through significant seasonal differences in the evening period (16:00-22:00). Adolescents spent a greater amount of time upright and not stepping than they did stepping, in both winter and summer. The observed seasonal effects for both UNSt(time) and USt(time) provide important information for behaviour change intervention programs.
NASA Astrophysics Data System (ADS)
Sohail, Maha
2017-12-01
A large proportion of the world's population resides in developing countries where there is a lack of rigorous studies in designing energy efficient buildings. This study is a step in designing a naturally ventilated high rise residential building in a tropical climatic context of the developing country, Pakistan. Karachi, the largest city of Pakistan, lies in the subtropical hot desert region with constant high temperature of average 32 °C throughout the summer and no particular winter season. The Design Builder software package is used to design a 25 storey high rise residential building relying primarily on natural ventilation. A final conceptual design is proposed after optimization of massing, geometry, orientation, and improved building envelope design including extensive shading devices in the form of trees. It has been observed that a reduction of 8 °C in indoor ambient temperature is possible to achieve with passive measures and use of night time ventilation. A fully naturally ventilated building can reduce the energy consumption for cooling and heating by 96 % compared to a building using air conditioning systems.
Mesoporous g-C₃N₄ Nanosheets: Synthesis, Superior Adsorption Capacity and Photocatalytic Activity.
Li, Dong-Feng; Huang, Wei-Qing; Zou, Lan-Rong; Pan, Anlian; Huang, Gui-Fang
2018-08-01
Elimination of pollutants from water is one of the greatest challenges in resolving global environmental issues. Herein, we report a high-surface-area mesoporous g-C3N4 nanosheet with remarkable high adsorption capacity and photocatalytic performance, which is prepared through directly polycondensation of urea followed by a consecutive one-step thermal exfoliation strategy. This one-pot method to prepare mesoporous g-C3N4 nanosheet is facile and rapid in comparison with others. The superior adsorption capacity of the fabricated mesoporous g-C3N4 nanostructures is demonstrated by a model organic pollutant-methylene blue (MB), which is up to 72.2 mg/g, about 6 times as that of the largest value of various g-C3N4 adsorbents reported so far. Moreover, this kind of porous g-C3N4 nanosheet exhibits high photocatalytic activity to MB and phenol degradation. Particularly, the regenerated samples show excellent performance of pollutant removal after consecutive adsorption/degradation cycles. Therefore, this mesoporous g-C3N4 nanosheet may be an attractive robust metal-free material with great promise for organic pollutant elimination.
Modeling Nucleation and Grain Growth in the Solar Nebula: Initial Progress Report
NASA Technical Reports Server (NTRS)
Nuth, Joseph A.; Paquette, J. A.; Ferguson, F. T.
2010-01-01
The primitive solar nebula was a violent and chaotic environment where high energy collisions, lightning, shocks and magnetic re-connection events rapidly vaporized some fraction of nebular dust, melted larger particles while leaving the largest grains virtually undisturbed. At the same time, some tiny grains containing very easily disturbed noble gas signatures (e.g., small, pre-solar graphite or SiC particles) never experienced this violence, yet can be found directly adjacent to much larger meteoritic components (chondrules or CAIs) that did. Additional components in the matrix of the most primitive carbonaceous chondrites and in some chondritic porous interplanetary dust particles include tiny nebular condensates, aggregates of condensates and partially annealed aggregates. Grains formed in violent transient events in the solar nebula did not come to equilibrium with their surroundings. To understand the formation and textures of these materials as well as their nebular abundances we must rely on Nucleation Theory and kinetic models of grain growth, coagulation and annealing. Such models have been very uncertain in the past: we will discuss the steps we are taking to increase their reliability.
Mass and momentum turbulent transport experiments with confined swirling coaxial jets
NASA Technical Reports Server (NTRS)
Roback, R.; Johnson, B. V.
1983-01-01
Swirling coaxial jets mixing downstream, discharging into an expanded duct was conducted to obtain data for the evaluation and improvement of turbulent transport models currently used in a variety of computational procedures throughout the combustion community. A combination of laser velocimeter (LV) and laser induced fluorescence (LIF) techniques was employed to obtain mean and fluctuating velocity and concentration distributions which were used to derive mass and momentum turbulent transport parameters currently incorporated into various combustor flow models. Flow visualization techniques were also employed to determine qualitatively the time dependent characteristics of the flow and the scale of turbulence. The results of these measurements indicated that the largest momentum turbulent transport was in the r-z plane. Peak momentum turbulent transport rates were approximately the same as those for the nonswirling flow condition. The mass turbulent transport process for swirling flow was complicated. Mixing occurred in several steps of axial and radial mass transport and was coupled with a large radial mean convective flux. Mixing for swirling flow was completed in one-third the length required for nonswirling flow.
Magnusson, Roger; Reeve, Belinda
2015-01-01
Strategies to reduce excess salt consumption play an important role in preventing cardiovascular disease, which is the largest contributor to global mortality from non-communicable diseases. In many countries, voluntary food reformulation programs seek to reduce salt levels across selected product categories, guided by aspirational targets to be achieved progressively over time. This paper evaluates the industry-led salt reduction programs that operate in the United Kingdom and Australia. Drawing on theoretical concepts from the field of regulatory studies, we propose a step-wise or “responsive” approach that introduces regulatory “scaffolds” to progressively increase levels of government oversight and control in response to industry inaction or under-performance. Our model makes full use of the food industry’s willingness to reduce salt levels in products to meet reformulation targets, but recognizes that governments remain accountable for addressing major diet-related health risks. Creative regulatory strategies can assist governments to fulfill their public health obligations, including in circumstances where there are political barriers to direct, statutory regulation of the food industry. PMID:26133973
The use of SESK as a trend parameter for localized bearing fault diagnosis in induction machines.
Saidi, Lotfi; Ben Ali, Jaouher; Benbouzid, Mohamed; Bechhoefer, Eric
2016-07-01
A critical work of bearing fault diagnosis is locating the optimum frequency band that contains faulty bearing signal, which is usually buried in the noise background. Now, envelope analysis is commonly used to obtain the bearing defect harmonics from the envelope signal spectrum analysis and has shown fine results in identifying incipient failures occurring in the different parts of a bearing. However, the main step in implementing envelope analysis is to determine a frequency band that contains faulty bearing signal component with the highest signal noise level. Conventionally, the choice of the band is made by manual spectrum comparison via identifying the resonance frequency where the largest change occurred. In this paper, we present a squared envelope based spectral kurtosis method to determine optimum envelope analysis parameters including the filtering band and center frequency through a short time Fourier transform. We have verified the potential of the spectral kurtosis diagnostic strategy in performance improvements for single-defect diagnosis using real laboratory-collected vibration data sets. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Mass-gathering Events: The Public Health Challenge of the Kumbh Mela 2013.
Dwivedi, Suresh; Cariappa, Mudera P
2015-12-01
Mass-gathering (MG) events pose challenges to the most adept of public health practitioners in ensuring the health safety of the population. These MGs can be for sporting events, musical festivals, or more commonly, have religious undertones. The Kumbh Mela 2013 at Allahabad, India may have been the largest gathering of humanity in history with nearly 120 million pilgrims having thronged the venue. The scale of the event posed a challenge to the maintenance of public health security and safety. A snapshot of the experience of managing the hygiene and sanitation aspects of this mega event is presented herein, highlighting the importance of proactive public health planning and preparedness. There having been no outbreaks of disease is vindication of the steps undertaken in planning and preparedness, notwithstanding obvious limitations of unsanitary behaviors and traditional beliefs of those attending the festival. The evident flaw on post-event analyses was the failure to cater adequately for environmental mopping-up operations after the festival. Besides, a system of real-time monitoring of disease and morbidity patterns, harnessing low cost technology alternatives, should be planned for at all such future events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohanpurkar, Manish; Luo, Yusheng; Hovsapian, Rob
Hydropower plant (HPP) generation comprises a considerable portion of bulk electricity generation and is delivered with a low-carbon footprint. In fact, HPP electricity generation provides the largest share from renewable energy resources, which include wind and solar. Increasing penetration levels of wind and solar lead to a lower inertia on the electric grid, which poses stability challenges. In recent years, breakthroughs in energy storage technologies have demonstrated the economic and technical feasibility of extensive deployments of renewable energy resources on electric grids. If integrated with scalable, multi-time-step energy storage so that the total output can be controlled, multiple run-of-the-river (ROR)more » HPPs can be deployed. Although the size of a single energy storage system is much smaller than that of a typical reservoir, the ratings of storages and multiple ROR HPPs approximately equal the rating of a large, conventional HPP. This paper proposes cohesively managing multiple sets of energy storage systems distributed in different locations. This paper also describes the challenges associated with ROR HPP system architecture and operation.« less
Defending a moving target: H1N1 preparedness training for the transit industry.
Faass, Josephine; Greenberg, Michael; Lowrie, Karen W
2013-01-01
To stem the spread of the novel H1N1 virus, U.S. public health officials put forth a variety of recommendations, ranging from practicing social distancing and frequent hand washing at the individual level, to furloughs and continual cleaning of commonly touched surfaces at the level of the organization. Although these steps are amenable to implementation in an office, school or hospital setting, they are nearly impossible to apply in the public transit environment, where large numbers of people remain in close quarters, with no running water and limited opportunities for disinfection. Recognizing the need to offer adequate protection from infection to employees and customers alike, transit officials expressed the need for H1N1-specific training, tailored to industry needs and limitations, to Rutgers University's Center for Transportation Safety, Security and Risk. The resulting course, which was informed through a combination of literature-based and primary research, combined the most current public health data with best practices gleaned from some of the nation's largest transit agencies, in a just-in-time format.
Using biological data to test climate change refugia
NASA Astrophysics Data System (ADS)
Morelli, T. L.; Maher, S. P.
2015-12-01
The concept of refugia has been discussed from theoretical and paleontological perspectives to address how populations persisted during periods of unfavorable climate. Recently, several studies have applied the idea to contemporary landscapes to identify locations that are buffered from climate change effects so as to favor greater persistence of valued resources relative to other areas. Refugia are now being discussed among natural resource agencies as a potential adaptation option in the face of anthropogenic climate change. Using downscaled climate data, we identified hypothetical refugial meadows in the Sierra Nevada and then tested them using survey and genetic data from Belding's ground squirrel (Urocitellus beldingi) populations. We predicted that refugial meadows would show higher genetic diversity, higher rates of occupancy and lower rates of extirpation over time. At each step of the research, we worked with managers to ensure the largest impact. Although no panacea, identifying climate change refugia could be an important strategy for prioritizing habitats for management intervention in order to conserve populations. This research was supported by the California LCC, the Northeast Climate Science Center, and NSF.
2017-12-08
NASA's Fermi Closes on Source of Cosmic Rays New images from NASA's Fermi Gamma-ray Space Telescope show where supernova remnants emit radiation a billion times more energetic than visible light. The images bring astronomers a step closer to understanding the source of some of the universe's most energetic particles -- cosmic rays. This composite shows the Cassiopeia A supernova remnant across the spectrum: Gamma rays (magenta) from NASA's Fermi Gamma-ray Space Telescope; X-rays (blue, green) from NASA's Chandra X-ray Observatory; visible light (yellow) from the Hubble Space Telescope; infrared (red) from NASA's Spitzer Space Telescope; and radio (orange) from the Very Large Array near Socorro, N.M. Credit: NASA/DOE/Fermi LAT Collaboration, CXC/SAO/JPL-Caltech/Steward/O. Krause et al., and NRAO/AUI For more information: www.nasa.gov/mission_pages/GLAST/news/cosmic-rays-source.... NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe. Follow us on Twitter Join us on Facebook
Yang, Wen-Chieh; Hsu, Wei-Li; Wu, Ruey-Meei; Lin, Kwan-Hwa
2016-10-01
Turning difficulty is common in people with Parkinson disease (PD). The clock-turn strategy is a cognitive movement strategy to improve turning performance in people with PD despite its effects are unverified. Therefore, this study aimed to investigate the effects of the clock-turn strategy on the pattern of turning steps, turning performance, and freezing of gait during a narrow turning, and how these effects were influenced by concurrent performance of a cognitive task (dual task). Twenty-five people with PD were randomly assigned to the clock-turn or usual-turn group. Participants performed the Timed Up and Go test with and without concurrent cognitive task during the medication OFF period. The clock-turn group performed the Timed Up and Go test using the clock-turn strategy, whereas participants in the usual-turn group performed in their usual manner. Measurements were taken during the 180° turn of the Timed Up and Go test. The pattern of turning steps was evaluated by step time variability and step time asymmetry. Turning performance was evaluated by turning time and number of turning steps. The number and duration of freezing of gait were calculated by video review. The clock-turn group had lower step time variability and step time asymmetry than the usual-turn group. Furthermore, the clock-turn group turned faster with fewer freezing of gait episodes than the usual-turn group. Dual task increased the step time variability and step time asymmetry in both groups but did not affect turning performance and freezing severity. The clock-turn strategy reduces turning time and freezing of gait during turning, probably by lowering step time variability and asymmetry. Dual task compromises the effects of the clock-turn strategy, suggesting a competition for attentional resources.Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A141).
Rebich, R.A.; Houston, N.A.; Mize, S.V.; Pearson, D.K.; Ging, P.B.; Evan, Hornig C.
2011-01-01
SPAtially Referenced Regressions On Watershed attributes (SPARROW) models were developed to estimate nutrient inputs [total nitrogen (TN) and total phosphorus (TP)] to the northwestern part of the Gulf of Mexico from streams in the South-Central United States (U.S.). This area included drainages of the Lower Mississippi, Arkansas-White-Red, and Texas-Gulf hydrologic regions. The models were standardized to reflect nutrient sources and stream conditions during 2002. Model predictions of nutrient loads (mass per time) and yields (mass per area per time) generally were greatest in streams in the eastern part of the region and along reaches near the Texas and Louisiana shoreline. The Mississippi River and Atchafalaya River watersheds, which drain nearly two-thirds of the conterminous U.S., delivered the largest nutrient loads to the Gulf of Mexico, as expected. However, the three largest delivered TN yields were from the Trinity River/Galveston Bay, Calcasieu River, and Aransas River watersheds, while the three largest delivered TP yields were from the Calcasieu River, Mermentau River, and Trinity River/Galveston Bay watersheds. Model output indicated that the three largest sources of nitrogen from the region were atmospheric deposition (42%), commercial fertilizer (20%), and livestock manure (unconfined, 17%). The three largest sources of phosphorus were commercial fertilizer (28%), urban runoff (23%), and livestock manure (confined and unconfined, 23%). ?? 2011 American Water Resources Association. This article is a U.S. Government work and is in the public domain in the USA.
Wolves Recolonizing Islands: Genetic Consequences and Implications for Conservation and Management.
Plumer, Liivi; Keis, Marju; Remm, Jaanus; Hindrikson, Maris; Jõgisalu, Inga; Männil, Peep; Kübarsepp, Marko; Saarma, Urmas
2016-01-01
After a long and deliberate persecution, the grey wolf (Canis lupus) is slowly recolonizing its former areas in Europe, and the genetic consequences of this process are of particular interest. Wolves, though present in mainland Estonia for a long time, have only recently started to recolonize the country's two largest islands, Saaremaa and Hiiumaa. The main objective of this study was to analyse wolf population structure and processes in Estonia, with particular attention to the recolonization of islands. Fifteen microsatellite loci were genotyped for 185 individuals across Estonia. As a methodological novelty, all putative wolf-dog hybrids were identified and removed (n = 17) from the dataset beforehand to avoid interference of dog alleles in wolf population analysis. After the preliminary filtering, our final dataset comprised of 168 "pure" wolves. We recommend using hybrid-removal step as a standard precautionary procedure not only for wolf population studies, but also for other taxa prone to hybridization. STRUCTURE indicated four genetic groups in Estonia. Spatially explicit DResD analysis identified two areas, one of them on Saaremaa island and the other in southwestern Estonia, where neighbouring individuals were genetically more similar than expected from an isolation-by-distance null model. Three blending areas and two contrasting transition zones were identified in central Estonia, where the sampled individuals exhibited strong local differentiation over relatively short distance. Wolves on the largest Estonian islands are part of human-wildlife conflict due to livestock depredation. Negative public attitude, especially on Saaremaa where sheep herding is widespread, poses a significant threat for island wolves. To maintain the long-term viability of the wolf population on Estonian islands, not only wolf hunting quota should be targeted with extreme care, but effective measures should be applied to avoid inbreeding and minimize conflicts with local communities and stakeholders.
The South Sandwich "Forgotten" Subduction Zone and Tsunami Hazard in the South Atlantic
NASA Astrophysics Data System (ADS)
Okal, E. A.; Hartnady, C. J. H.; Synolakis, C. E.
2009-04-01
While no large interplate thrust earthquakes are know at the "forgotten" South Sandwich subduction zone, historical catalogues include a number of events with reported magnitudes 7 or more. A detailed seismological study of the largest event (27 June 1929; M (G&R) = 8.3) is presented. The earthquake relocates 80 km North of the Northwestern corner of the arc and its mechanism, inverted using the PDFM method, features normal faulting on a steeply dipping fault plane (phi, delta, lambda = 71, 70, 272 deg. respectively). The seismic moment of 1.7*10**28 dyn*cm supports Gutenberg and Richter's estimate, and is 28 times the largest shallow CMT in the region. This event is interpreted as representing a lateral tear in the South Atlantic plate, comparable to similar earthquakes in Samoa and Loyalty, deemed "STEP faults" by Gover and Wortel [2005]. Hydrodynamic simulations were performed using the MOST method [Titov and Synolakis, 1997]. Computed deep-water tsunami amplitudes of 30cm and 20cm were found off the coast of Brazil and along the Gulf of Guinea (Ivory Coast, Ghana) respectively. The 1929 moment was assigned to the geometries of other know earthquakes in the region, namely outer-rise normal faulting events at the center of the arc and its southern extremity, and an interplate thrust fault at the Southern corner, where the youngest lithosphere is subducted. Tsunami hydrodynamic simulation of these scenarios revealed strong focusing of tsunami wave energy by the SAR, the SWIOR and the Agulhas Rise, in Ghana, Southern Mozambique and certain parts of the coast of South Africa. This study documents the potential tsunami hazard to South Atlantic shorelines from earthquakes in this region, principally normal faulting events.
South Sandwich: The Forgotten Subduction Zone and Tsunami Hazard in the South Atlantic
NASA Astrophysics Data System (ADS)
Okal, E. A.; Hartnady, C. J.
2008-12-01
While no large interplate thrust earthquakes are known at the South Sandwich subduction zone, historical catalogues include a number of earthquakes with reported magnitudes of 7 or more. We present a detailed seismological study of the largest one (27 June 1929; M (G&R) = 8.3). The earthquake relocates 80 km North of the Northwestern corner of the arc. Its mechanism, inverted using the PDFM method, features normal faulting on a steeply dipping fault plane (phi, delta, lambda = 71, 70, 272 deg.). The seismic moment, 1.7 10**28 dyn*cm, supports Gutenberg and Richter's estimate, and is 28 times the largest shallow CMT in the region. The 1929 event is interpreted as representing a lateral tear in the South Atlantic plate, comparable to similar earthquakes in Samoa and Loyalty, deemed "STEP faults" by Gover and Wortel [2005]. Hydrodynamic simulations using the MOST method [Titov and Synolakis, 1997] suggest deep-water tsunami amplitudes reaching 30 cm off the coast of Brazil, where it should have had observable run-up, and 20 cm along the Gulf of Guinea (Ivory Coast, Ghana). We also simulate a number of potential sources obtained by assigning the 1929 moment to the geometries of other known earthquakes in the region, namely outer-rise normal faulting events at the center of the arc and its southern extremity, and an interplate thrust fault at the Southern corner, where the youngest lithosphere is subducted. A common feature of these models is the strong focusing of tsunami waves by the SAR, the SWIOR, and the Agulhas Rise, resulting in amplitudes always enhanced in Ghana, Southern Mozambique and certain parts of the coast of South Africa. This study documents the potential tsunami hazard to South Atlantic shorelines from earthquakes in this region, principally normal faulting events.
Comparison of step-by-step kinematics in repeated 30m sprints in female soccer players.
van den Tillaar, Roland
2018-01-04
The aim of this study was to compare kinematics in repeated 30m sprints in female soccer players. Seventeen subjects performed seven 30m sprints every 30s in one session. Kinematics were measured with an infrared contact mat and laser gun, and running times with an electronic timing device. The main findings were that sprint times increased in the repeated sprint ability test. The main changes in kinematics during the repeated sprint ability test were increased contact time and decreased step frequency, while no change in step length was observed. The step velocity increased in almost each step until the 14, which occurred around 22m. After this, the velocity was stable until the last step, when it decreased. This increase in step velocity was mainly caused by the increased step length and decreased contact times. It was concluded that the fatigue induced in repeated 30m sprints in female soccer players resulted in decreased step frequency and increased contact time. Employing this approach in combination with a laser gun and infrared mat for 30m makes it very easy to analyse running kinematics in repeated sprints in training. This extra information gives the athlete, coach and sports scientist the opportunity to give more detailed feedback and help to target these changes in kinematics better to enhance repeated sprint performance.
Time Lapse of World’s Largest 3-D Printed Object
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2016-08-29
Researchers at the MDF have 3D-printed a large-scale trim tool for a Boeing 777X, the world’s largest twin-engine jet airliner. The additively manufactured tool was printed on the Big Area Additive Manufacturing, or BAAM machine over a 30-hour period. The team used a thermoplastic pellet comprised of 80% ABS plastic and 20% carbon fiber from local material supplier. The tool has proven to decrease time, labor, cost and errors associated with traditional manufacturing techniques and increased energy savings in preliminary testing and will undergo further, long term testing.
Infrared Time Lapse of World’s Largest 3D-Printed Object
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Researchers at Oak Ridge National Laboratory have 3D-printed a large-scale trim tool for a Boeing 777X, the world’s largest twin-engine jet airliner. The additively manufactured tool was printed on the Big Area Additive Manufacturing, or BAAM machine over a 30-hour period. The team used a thermoplastic pellet comprised of 80% ABS plastic and 20% carbon fiber from local material supplier. The tool has proven to decrease time, labor, cost and errors associated with traditional manufacturing techniques and increased energy savings in preliminary testing and will undergo further, long term testing.
Implicit time accurate simulation of unsteady flow
NASA Astrophysics Data System (ADS)
van Buuren, René; Kuerten, Hans; Geurts, Bernard J.
2001-03-01
Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy as well as stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur. Copyright
NASA Technical Reports Server (NTRS)
Desideri, J. A.; Steger, J. L.; Tannehill, J. C.
1978-01-01
The iterative convergence properties of an approximate-factorization implicit finite-difference algorithm are analyzed both theoretically and numerically. Modifications to the base algorithm were made to remove the inconsistency in the original implementation of artificial dissipation. In this way, the steady-state solution became independent of the time-step, and much larger time-steps can be used stably. To accelerate the iterative convergence, large time-steps and a cyclic sequence of time-steps were used. For a model transonic flow problem governed by the Euler equations, convergence was achieved with 10 times fewer time-steps using the modified differencing scheme. A particular form of instability due to variable coefficients is also analyzed.
Melatonin: a universal time messenger.
Erren, Thomas C; Reiter, Russel J
2015-01-01
Temporal organization plays a key role in humans, and presumably all species on Earth. A core building block of the chronobiological architecture is the master clock, located in the suprachi asmatic nuclei [SCN], which organizes "when" things happen in sub-cellular biochemistry, cells, organs and organisms, including humans. Conceptually, time messenging should follow a 5 step-cascade. While abundant evidence suggests how steps 1 through 4 work, step 5 of "how is central time information transmitted througout the body?" awaits elucidation. Step 1: Light provides information on environmental (external) time; Step 2: Ocular interfaces between light and biological (internal) time are intrinsically photosensitive retinal ganglion cells [ipRGS] and rods and cones; Step 3: Via the retinohypothalamic tract external time information reaches the light-dependent master clock in the brain, viz the SCN; Step 4: The SCN translate environmental time information into biological time and distribute this information to numerous brain structures via a melanopsin-based network. Step 5: Melatonin, we propose, transmits, or is a messenger of, internal time information to all parts of the body to allow temporal organization which is orchestrated by the SCN. Key reasons why we expect melatonin to have such role include: First, melatonin, as the chemical expression of darkness, is centrally involved in time- and timing-related processes such as encoding clock and calendar information in the brain; Second, melatonin travels throughout the body without limits and is thus a ubiquitous molecule. The chemial conservation of melatonin in all tested species could make this molecule a candidate for a universal time messenger, possibly constituting a legacy of an all-embracing evolutionary history.
Comparison of step-by-step kinematics of resisted, assisted and unloaded 20-m sprint runs.
van den Tillaar, Roland; Gamble, Paul
2018-03-26
This investigation examined step-by-step kinematics of sprint running acceleration. Using a randomised counterbalanced approach, 37 female team handball players (age 17.8 ± 1.6 years, body mass 69.6 ± 9.1 kg, height 1.74 ± 0.06 m) performed resisted, assisted and unloaded 20-m sprints within a single session. 20-m sprint times and step velocity, as well as step length, step frequency, contact and flight times of each step were evaluated for each condition with a laser gun and an infrared mat. Almost all measured parameters were altered for each step under the resisted and assisted sprint conditions (η 2 ≥ 0.28). The exception was step frequency, which did not differ between assisted and normal sprints. Contact time, flight time and step frequency at almost each step were different between 'fast' vs. 'slow' sub-groups (η 2 ≥ 0.22). Nevertheless overall both groups responded similarly to the respective sprint conditions. No significant differences in step length were observed between groups for the respective condition. It is possible that continued exposure to assisted sprinting might allow the female team-sports players studied to adapt their coordination to the 'over-speed' condition and increase step frequency. It is notable that step-by-step kinematics in these sprints were easy to obtain using relatively inexpensive equipment with possibilities of direct feedback.
Biomechanical influences on balance recovery by stepping.
Hsiao, E T; Robinovitch, S N
1999-10-01
Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.
The General Alcoholics Anonymous Tools of Recovery: The Adoption of 12-Step Practices and Beliefs
Greenfield, Brenna L.; Tonigan, J. Scott
2013-01-01
Working the 12 steps is widely prescribed for Alcoholics Anonymous (AA) members although the relative merits of different methods for measuring step-work have received minimal attention and even less is known about how step-work predicts later substance use. The current study (1) compared endorsements of step-work on an face-valid or direct measure, the Alcoholics Anonymous Inventory (AAI), with an indirect measure of step-work, the General Alcoholics Anonymous Tools of Recovery (GAATOR), (2) evaluated the underlying factor structure of the GAATOR and changes in step-work over time, (3) examined changes in the endorsement of step-work over time, and (4) investigated how, if at all, 12-step-work predicted later substance use. New AA affiliates (N = 130) completed assessments at intake, 3, 6, and 9 months. Significantly more participants endorsed step-work on the GAATOR than on the AAI for nine of the 12 steps. An exploratory factor analysis revealed a two-factor structure for the GAATOR comprising Behavioral Step-Work and Spiritual Step-Work. Behavioral Step-Work did not change over time, but was predicted by having a sponsor, while Spiritual Step-Work decreased over time and increases were predicted by attending 12-step meetings or treatment. Behavioral Step-Work did not prospectively predict substance use. In contrast, Spiritual Step-Work predicted percent days abstinent, an effect that is consistent with recent work on the mediating effects of spiritual growth, AA, and increased abstinence. Behavioral and Spiritual Step-Work appear to be conceptually distinct components of step-work that have distinct predictors and unique impacts on outcomes. PMID:22867293
DOT National Transportation Integrated Search
2012-06-01
The purpose of these step-by-step guidelines is to assist in planning, designing, and deploying a system that uses radio frequency identification (RFID) technology to measure the time needed for commercial vehicles to complete the northbound border c...
Mass imbalances in EPANET water-quality simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Michael J.; Janke, Robert; Taxon, Thomas N.
EPANET is widely employed to simulate water quality in water distribution systems. However, the time-driven simulation approach used to determine concentrations of water-quality constituents provides accurate results, in general, only for small water-quality time steps; use of an adequately short time step may not be feasible. Overly long time steps can yield errors in concentrations and result in situations in which constituent mass is not conserved. Mass may not be conserved even when EPANET gives no errors or warnings. This paper explains how such imbalances can occur and provides examples of such cases; it also presents a preliminary event-driven approachmore » that conserves mass with a water-quality time step that is as long as the hydraulic time step. Results obtained using the current approach converge, or tend to converge, to those obtained using the new approach as the water-quality time step decreases. Improving the water-quality routing algorithm used in EPANET could eliminate mass imbalances and related errors in estimated concentrations.« less
Dhir, Sunny; Walia, Yashika; Zaidi, A A; Hallan, Vipin
2015-03-01
A simple method to amplify infective, complete genomes of single stranded RNA viruses by long distance PCR (LD PCR) from woody plant tissues is described in detail. The present protocol eliminates partial purification of viral particles and the amplification is achieved in three steps: (i) easy preparation of template RNA by incorporating a pre processing step before loading onto the column (ii) reverse transcription by AMV or Superscript reverse transcriptase and (iii) amplification of cDNA by LD PCR using LA or Protoscript Taq DNA polymerase. Incorporation of a preprocessing step helped to isolate consistent quality RNA from recalcitrant woody tissues such as apple, which was critical for efficient amplification of the complete genomes of Apple stem pitting virus (ASPV), Apple stem grooving virus (ASGV) and Apple chlorotic leaf spot virus (ACLSV). Complete genome of ASGV was cloned under T7 RNA polymerase promoter and was confirmed to be infectious through transcript inoculation producing symptoms similar to the wild type virus. This is the first report for the largest RNA virus genome amplified by PCR from total nucleic acid extracts of woody plant tissues. Copyright © 2014 Elsevier B.V. All rights reserved.
The TIPTEQ seismological network in Southern Chile - Studying the Seismogenic Coupling Zone
NASA Astrophysics Data System (ADS)
Haberland, C.; Rietbrock, A.; Lange, D.; Bataille, K.; Hofmann, S.; Dahm, T.; Scherbaum, F.; Tilman, F.; Hermosilla, G.; Group, T. S.
2005-12-01
Subduction zones generate the world's largest and most destructive earthquakes. Understanding the factors leading to these earthquakes in the coupling zone of convergent margins and their interrelation with surface deformation are the main aims of the international and interdisciplinary research initiative TIPTEQ (From The Incoming Plate To megaThrust EarthQuake Processes) which is financed by the German Ministry for Education and Research (BMBF). These aims shall be achieved by obtaining high resolution images of the seismogenic zone and the forearc structure, which will form the base for identifying the processes involved. Our studies focus spatially on the nucleation zone of the Mw=9.5 1960 Chile earthquake, the worldwide largest instrumentally ever recorded earthquake. Within this project a large temporary seismological network is installed in southern Chile since Nov. 2004, covering the forearc between 37° and 39°S. It consists of 120 digitally recording and continuously running seismic stations equipped with short period sensors. The network covers the forearc between 37° and 39°S. The onshore network is complemented by 10 ocean bottom seismometers/hydrophones (OBS/OBH), and the stations (except for 20 stations which will operate until October 2005) were in operation until July 2005. The network is characterized by very short station spacings in the centre which will assure an increased quantity of P and S phase onset times and which will achieve the observation of the whole wavefield (coherent waveforms). A second network of 20 onshore and 20 offshore stations is installed at and around Chiloe Island for a one year period. Until now we collected about 1.2 TByte of data. First steps of the data processing are the event detection, the onset time picking, and the localisation of the (local) earthquakes (catalog). Later steps include the determination of the velocity and attenuation structure (tomography), the analysis of the stress field by moment tensor inversion, the analysis of later phases such as guided waves and scattered/converted/reflected arrivals, the analysis of teleseismic recordings (receiver functions, anisotropy), and many more. The results will be jointly interpreted with the findings of controlled source seismic studies, magnetotelluric measurements, and surface geological studies. Each day 2 to 3 local earthquakes and several teleseismic events were recorded. We present first results including data examples, seismicity distribution, and first 1D velocity models of this ongoing research project. Most of the crustal seismicity in the northern network is concentrated in two clusters close to the coast line; Benioff seismicity can be found down to 100 km depth. Between 41.5° and 43.5° S (at and around Chiloe Island) we also found several earthquakes in the continental (forearc) crust and in the downgoing slab.
Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems
NASA Technical Reports Server (NTRS)
Majumdar, Alok K.; Ravindran, S. S.
2017-01-01
Fluid and thermal transients found in rocket propulsion systems such as propellant feedline system is a complex process involving fast phases followed by slow phases. Therefore their time accurate computation requires use of short time step initially followed by the use of much larger time step. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback control based adaptive time stepping algorithm, and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. In order to demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chill down in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
The stepping behavior analysis of pedestrians from different age groups via a single-file experiment
NASA Astrophysics Data System (ADS)
Cao, Shuchao; Zhang, Jun; Song, Weiguo; Shi, Chang'an; Zhang, Ruifang
2018-03-01
The stepping behavior of pedestrians with different age compositions in single-file experiment is investigated in this paper. The relation between step length, step width and stepping time are analyzed by using the step measurement method based on the calculation of curvature of the trajectory. The relations of velocity-step width, velocity-step length and velocity-stepping time for different age groups are discussed and compared with previous studies. Finally effects of pedestrian gender and height on stepping laws and fundamental diagrams are analyzed. The study is helpful for understanding pedestrian dynamics of movement. Meanwhile, it offers experimental data to develop a microscopic model of pedestrian movement by considering stepping behavior.
NASA Astrophysics Data System (ADS)
Goldberg, D.; Bock, Y.; Melgar, D.
2017-12-01
Rapid seismic magnitude assessment is a top priority for earthquake and tsunami early warning systems. For the largest earthquakes, seismic instrumentation tends to underestimate the magnitude, leading to an insufficient early warning, particularly in the case of tsunami evacuation orders. GPS instrumentation provides more accurate magnitude estimations using near-field stations, but isn't sensitive enough to detect the first seismic wave arrivals, thereby limiting solution speed. By optimally combining collocated seismic and GPS instruments, we demonstrate improved solution speed of earthquake magnitude for the largest seismic events. We present a real-time implementation of magnitude-scaling relations that adapts to consider the length of the recording, reflecting the observed evolution of ground motion with time.
Pierson, T.C.
2007-01-01
Dating of dynamic, young (<500 years) geomorphic landforms, particularly volcanofluvial features, requires higher precision than is possible with radiocarbon dating. Minimum ages of recently created landforms have long been obtained from tree-ring ages of the oldest trees growing on new surfaces. But to estimate the year of landform creation requires that two time corrections be added to tree ages obtained from increment cores: (1) the time interval between stabilization of the new landform surface and germination of the sampled trees (germination lag time or GLT); and (2) the interval between seedling germination and growth to sampling height, if the trees are not cored at ground level. The sum of these two time intervals is the colonization time gap (CTG). Such time corrections have been needed for more precise dating of terraces and floodplains in lowland river valleys in the Cascade Range, where significant eruption-induced lateral shifting and vertical aggradation of channels can occur over years to decades, and where timing of such geomorphic changes can be critical to emergency planning. Earliest colonizing Douglas fir (Pseudotsuga menziesii) were sampled for tree-ring dating at eight sites on lowland (<750 m a.s.l.), recently formed surfaces of known age near three Cascade volcanoes - Mount Rainier, Mount St. Helens and Mount Hood - in southwestern Washington and northwestern Oregon. Increment cores or stem sections were taken at breast height and, where possible, at ground level from the largest, oldest-looking trees at each study site. At least ten trees were sampled at each site unless the total of early colonizers was less. Results indicate that a correction of four years should be used for GLT and 10 years for CTG if the single largest (and presumed oldest) Douglas fir growing on a surface of unknown age is sampled. This approach would have a potential error of up to 20 years. Error can be reduced by sampling the five largest Douglas fir instead of the single largest. A GLT correction of 5 years should be added to the mean ring-count age of the five largest trees growing on the surface being dated, if the trees are cored at ground level. This correction would have an approximate error of ??5 years. If the trees are cored at about 1.4 m above the round surface (breast height), a CTG correction of 11 years should be added to the mean age of the five sampled trees (with an error of about ??7 years).
Self-Checking Cell-Based Assays for GPCR Desensitization and Resensitization.
Fisher, Gregory W; Fuhrman, Margaret H; Adler, Sally A; Szent-Gyorgyi, Christopher; Waggoner, Alan S; Jarvik, Jonathan W
2014-09-01
G protein-coupled receptors (GPCRs) play stimulatory or modulatory roles in numerous physiological states and processes, including growth and development, vision, taste and olfaction, behavior and learning, emotion and mood, inflammation, and autonomic functions such as blood pressure, heart rate, and digestion. GPCRs constitute the largest protein superfamily in the human and are the largest target class for prescription drugs, yet most are poorly characterized, and of the more than 350 nonolfactory human GPCRs, over 100 are orphans for which no endogenous ligand has yet been convincingly identified. We here describe new live-cell assays that use recombinant GPCRs to quantify two general features of GPCR cell biology-receptor desensitization and resensitization. The assays employ a fluorogen-activating protein (FAP) reporter that reversibly complexes with either of two soluble organic molecules (fluorogens) whose fluorescence is strongly enhanced when complexed with the FAP. Both assays require no wash or cleanup steps and are readily performed in microwell plates, making them adaptable to high-throughput drug discovery applications. © 2014 Society for Laboratory Automation and Screening.
Masaki, Mitsuhiro; Ikezoe, Tome; Kamiya, Midori; Araki, Kojiro; Isono, Ryo; Kato, Takehiro; Kusano, Ken; Tanaka, Masayo; Sato, Syunsuke; Hirono, Tetsuya; Kita, Kiyoshi; Tsuboyama, Tadao; Ichihashi, Noriaki
2018-04-19
This study aimed to examine the association of independence in ADL with the loads during step ascent motion and other motor functions in 32 nursing home-residing elderly individuals. Independence in ADL was assessed by using the functional independence measure (FIM). The loads at the upper (i.e., pulling up) and lower (i.e., pushing up) levels during step ascent task was measured on a step ascent platform. Hip extensor, knee extensor, plantar flexor muscle, and quadriceps setting strengths; lower extremity agility using the stepping test; and hip and knee joint pain severities were measured. One-legged stance and functional reach distance for balance, and maximal walking speed, timed up-and-go (TUG) time, five-chair-stand time, and step ascent time were also measured to assess mobility. Stepwise regression analysis revealed that the load at pushing up during step ascent motion and TUG time were significant and independent determinants of FIM score. FIM score decreased with decreased the load at pushing up and increased TUG time. The study results suggest that depending on task specificity, both one step up task's push up peak load during step ascent motion and TUG, can partially explain ADL's FIM score in the nursing home-residing elderly individuals. Lower extremity muscle strength, agility, pain or balance measures did not add to the prediction.
Karev, Georgy P; Wolf, Yuri I; Berezovskaya, Faina S; Koonin, Eugene V
2004-09-09
The size distribution of gene families in a broad range of genomes is well approximated by a generalized Pareto function. Evolution of ensembles of gene families can be described with Birth, Death, and Innovation Models (BDIMs). Analysis of the properties of different versions of BDIMs has the potential of revealing important features of genome evolution. In this work, we extend our previous analysis of stochastic BDIMs. In addition to the previously examined rational BDIMs, we introduce potentially more realistic logistic BDIMs, in which birth/death rates are limited for the largest families, and show that their properties are similar to those of models that include no such limitation. We show that the mean time required for the formation of the largest gene families detected in eukaryotic genomes is limited by the mean number of duplications per gene and does not increase indefinitely with the model degree. Instead, this time reaches a minimum value, which corresponds to a non-linear rational BDIM with the degree of approximately 2.7. Even for this BDIM, the mean time of the largest family formation is orders of magnitude greater than any realistic estimates based on the timescale of life's evolution. We employed the embedding chains technique to estimate the expected number of elementary evolutionary events (gene duplications and deletions) preceding the formation of gene families of the observed size and found that the mean number of events exceeds the family size by orders of magnitude, suggesting a highly dynamic process of genome evolution. The variance of the time required for the formation of the largest families was found to be extremely large, with the coefficient of variation > 1. This indicates that some gene families might grow much faster than the mean rate such that the minimal time required for family formation is more relevant for a realistic representation of genome evolution than the mean time. We determined this minimal time using Monte Carlo simulations of family growth from an ensemble of simultaneously evolving singletons. In these simulations, the time elapsed before the formation of the largest family was much shorter than the estimated mean time and was compatible with the timescale of evolution of eukaryotes. The analysis of stochastic BDIMs presented here shows that non-linear versions of such models can well approximate not only the size distribution of gene families but also the dynamics of their formation during genome evolution. The fact that only higher degree BDIMs are compatible with the observed characteristics of genome evolution suggests that the growth of gene families is self-accelerating, which might reflect differential selective pressure acting on different genes.
Algorithm for Training a Recurrent Multilayer Perceptron
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Rais, Omar T.; Menon, Sunil K.; Atiya, Amir F.
2004-01-01
An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for optimal performance in predicting the behavior of a complex, dynamic, and noisy system multiple time steps into the future. [An RMLP is a computational neural network with self-feedback and cross-talk (both delayed by one time step) among neurons in hidden layers]. Like other neural-network-training algorithms, this algorithm adjusts network biases and synaptic-connection weights according to a gradient-descent rule. The distinguishing feature of this algorithm is a combination of global feedback (the use of predictions as well as the current output value in computing the gradient at each time step) and recursiveness. The recursive aspect of the algorithm lies in the inclusion of the gradient of predictions at each time step with respect to the predictions at the preceding time step; this recursion enables the RMLP to learn the dynamics. It has been conjectured that carrying the recursion to even earlier time steps would enable the RMLP to represent a noisier, more complex system.
Newmark local time stepping on high-performance computing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strongmore » element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.« less
NASA Astrophysics Data System (ADS)
Keshavamurthy, Krishna N.; Leary, Owen P.; Merck, Lisa H.; Kimia, Benjamin; Collins, Scott; Wright, David W.; Allen, Jason W.; Brock, Jeffrey F.; Merck, Derek
2017-03-01
Traumatic brain injury (TBI) is a major cause of death and disability in the United States. Time to treatment is often related to patient outcome. Access to cerebral imaging data in a timely manner is a vital component of patient care. Current methods of detecting and quantifying intracranial pathology can be time-consuming and require careful review of 2D/3D patient images by a radiologist. Additional time is needed for image protocoling, acquisition, and processing. These steps often occur in series, adding more time to the process and potentially delaying time-dependent management decisions for patients with traumatic brain injury. Our team adapted machine learning and computer vision methods to develop a technique that rapidly and automatically detects CT-identifiable lesions. Specifically, we use scale invariant feature transform (SIFT)1 and deep convolutional neural networks (CNN)2 to identify important image features that can distinguish TBI lesions from background data. Our learning algorithm is a linear support vector machine (SVM)3. Further, we also employ tools from topological data analysis (TDA) for gleaning insights into the correlation patterns between healthy and pathological data. The technique was validated using 409 CT scans of the brain, acquired via the Progesterone for the Treatment of Traumatic Brain Injury phase III clinical trial (ProTECT_III) which studied patients with moderate to severe TBI4. CT data were annotated by a central radiologist and included patients with positive and negative scans. Additionally, the largest lesion on each positive scan was manually segmented. We reserved 80% of the data for training the SVM and used the remaining 20% for testing. Preliminary results are promising with 92.55% prediction accuracy (sensitivity = 91.15%, specificity = 93.45%), indicating the potential usefulness of this technique in clinical scenarios.
An urban metabolism and ecological footprint assessment of Metro Vancouver.
Moore, Jennie; Kissinger, Meidad; Rees, William E
2013-07-30
As the world urbanizes, the role of cities in determining sustainability outcomes grows in importance. Cities are the dominant form of human habitat, and most of the world's resources are either directly or indirectly consumed in cities. Sustainable city analysis and management requires understanding the demands a city places on a wider geographical area and its ecological resource base. We present a detailed, integrated urban metabolism of residential consumption and ecological footprint analysis of the Vancouver metropolitan region for the year 2006. Our overall goal is to demonstrate the application of a bottom-up ecological footprint analysis using an urban metabolism framework at a metropolitan, regional scale. Our specific objectives are: a) to quantify energy and material consumption using locally generated data and b) to relate these data to global ecological carrying capacity. Although water is the largest material flow through Metro Vancouver (424,860,000 m(3)), it has the smallest ecological footprint (23,100 gha). Food (2,636,850 tonnes) contributes the largest component to the ecological footprint (4,514,400 gha) which includes crop and grazing land as well as carbon sinks required to sequester emissions from food production and distribution. Transportation fuels (3,339,000 m(3)) associated with motor vehicle operation and passenger air travel comprises the second largest material flow through the region and the largest source of carbon dioxide emissions (7,577,000 tonnes). Transportation also accounts for the second largest component of the EF (2,323,200 gha). Buildings account for the largest electricity flow (17,515,150 MWh) and constitute the third largest component of the EF (1,779,240 gha). Consumables (2,400,000 tonnes) comprise the fourth largest component of the EF (1,414,440 gha). Metro Vancouver's total Ecological Footprint in 2006 was 10,071,670 gha, an area approximately 36 times larger than the region itself. The EFA reveals that cropland and carbon sinks (forested land required to sequester carbon dioxide emissions) account for 90% of Metro Vancouver's overall demand for biocapacity. The per capita ecological footprint is 4.76 gha, nearly three times the per capita global supply of biocapacity. Note that this value excludes national government services that operate outside the region and could account for up to an additional 2 gha/ca. Copyright © 2013 Elsevier Ltd. All rights reserved.
van Dierendonk, Roland C.H.; van Egmond, Maria A.N.E.; ten Hagen, Sjang L.; Kreuning, Jippe
2017-01-01
The dodo (Raphus cucullatus) might be the most enigmatic bird of all times. It is, therefore, highly remarkable that no consensus has yet been reached on its body mass; previous scientific estimates of its mass vary by more than 100%. Until now, the vast amount of bones stored at the Natural History Museum in Mauritius has not yet been studied morphometrically nor in relation to body mass. Here, a new estimate of the dodo’s mass is presented based on the largest sample of dodo femora ever measured (n = 174). In order to do this, we have used the regression method and chosen our variables based on biological, mathematical and physical arguments. The results indicate that the mean mass of the dodo was circa 12 kg, which is approximately five times as heavy as the largest living Columbidae (pigeons and doves), the clade to which the dodo belongs. PMID:29230358
Catling, David C; Glein, Christopher R; Zahnle, Kevin J; McKay, Christopher P
2005-06-01
Life is constructed from a limited toolkit: the Periodic Table. The reduction of oxygen provides the largest free energy release per electron transfer, except for the reduction of fluorine and chlorine. However, the bonding of O2 ensures that it is sufficiently stable to accumulate in a planetary atmosphere, whereas the more weakly bonded halogen gases are far too reactive ever to achieve significant abundance. Consequently, an atmosphere rich in O2 provides the largest feasible energy source. This universal uniqueness suggests that abundant O2 is necessary for the high-energy demands of complex life anywhere, i.e., for actively mobile organisms of approximately 10(-1)-10(0) m size scale with specialized, differentiated anatomy comparable to advanced metazoans. On Earth, aerobic metabolism provides about an order of magnitude more energy for a given intake of food than anaerobic metabolism. As a result, anaerobes do not grow beyond the complexity of uniseriate filaments of cells because of prohibitively low growth efficiencies in a food chain. The biomass cumulative number density, n, at a particular mass, m, scales as n (> m) proportional to m(-1) for aquatic aerobes, and we show that for anaerobes the predicted scaling is n proportional to m (-1.5), close to a growth-limited threshold. Even with aerobic metabolism, the partial pressure of atmospheric O2 (P(O2)) must exceed approximately 10(3) Pa to allow organisms that rely on O2 diffusion to evolve to a size approximately 10(3) m x P(O2) in the range approximately 10(3)-10(4) Pa is needed to exceed the threshold of approximately 10(2) m size for complex life with circulatory physiology. In terrestrial life, O(2) also facilitates hundreds of metabolic pathways, including those that make specialized structural molecules found only in animals. The time scale to reach P(O(2)) approximately 10(4) Pa, or "oxygenation time," was long on the Earth (approximately 3.9 billion years), within almost a factor of 2 of the Sun's main sequence lifetime. Consequently, we argue that the oxygenation time is likely to be a key rate-limiting step in the evolution of complex life on other habitable planets. The oxygenation time could preclude complex life on Earth-like planets orbiting short-lived stars that end their main sequence lives before planetary oxygenation takes place. Conversely, Earth-like planets orbiting long-lived stars are potentially favorable habitats for complex life.
Webb, Thomas J.; Vanden Berghe, Edward; O'Dor, Ron
2010-01-01
Background Understanding the distribution of marine biodiversity is a crucial first step towards the effective and sustainable management of marine ecosystems. Recent efforts to collate location records from marine surveys enable us to assemble a global picture of recorded marine biodiversity. They also effectively highlight gaps in our knowledge of particular marine regions. In particular, the deep pelagic ocean – the largest biome on Earth – is chronically under-represented in global databases of marine biodiversity. Methodology/Principal Findings We use data from the Ocean Biogeographic Information System to plot the position in the water column of ca 7 million records of marine species occurrences. Records from relatively shallow waters dominate this global picture of recorded marine biodiversity. In addition, standardising the number of records from regions of the ocean differing in depth reveals that regardless of ocean depth, most records come either from surface waters or the sea bed. Midwater biodiversity is drastically under-represented. Conclusions/Significance The deep pelagic ocean is the largest habitat by volume on Earth, yet it remains biodiversity's big wet secret, as it is hugely under-represented in global databases of marine biological records. Given both its value in the provision of a range of ecosystem services, and its vulnerability to threats including overfishing and climate change, there is a pressing need to increase our knowledge of Earth's largest ecosystem. PMID:20689845
A two step Bayesian approach for genomic prediction of breeding values.
Shariati, Mohammad M; Sørensen, Peter; Janss, Luc
2012-05-21
In genomic models that assign an individual variance to each marker, the contribution of one marker to the posterior distribution of the marker variance is only one degree of freedom (df), which introduces many variance parameters with only little information per variance parameter. A better alternative could be to form clusters of markers with similar effects where markers in a cluster have a common variance. Therefore, the influence of each marker group of size p on the posterior distribution of the marker variances will be p df. The simulated data from the 15th QTL-MAS workshop were analyzed such that SNP markers were ranked based on their effects and markers with similar estimated effects were grouped together. In step 1, all markers with minor allele frequency more than 0.01 were included in a SNP-BLUP prediction model. In step 2, markers were ranked based on their estimated variance on the trait in step 1 and each 150 markers were assigned to one group with a common variance. In further analyses, subsets of 1500 and 450 markers with largest effects in step 2 were kept in the prediction model. Grouping markers outperformed SNP-BLUP model in terms of accuracy of predicted breeding values. However, the accuracies of predicted breeding values were lower than Bayesian methods with marker specific variances. Grouping markers is less flexible than allowing each marker to have a specific marker variance but, by grouping, the power to estimate marker variances increases. A prior knowledge of the genetic architecture of the trait is necessary for clustering markers and appropriate prior parameterization.
Botschko, Yehudit; Yarkoni, Merav; Joshua, Mati
2018-01-01
When animal behavior is studied in a laboratory environment, the animals are often extensively trained to shape their behavior. A crucial question is whether the behavior observed after training is part of the natural repertoire of the animal or represents an outlier in the animal's natural capabilities. This can be investigated by assessing the extent to which the target behavior is manifested during the initial stages of training and the time course of learning. We explored this issue by examining smooth pursuit eye movements in monkeys naïve to smooth pursuit tasks. We recorded the eye movements of monkeys from the 1st days of training on a step-ramp paradigm. We used bright spots, monkey pictures and scrambled versions of the pictures as moving targets. We found that during the initial stages of training, the pursuit initiation was largest for the monkey pictures and in some direction conditions close to target velocity. When the pursuit initiation was large, the monkeys mostly continued to track the target with smooth pursuit movements while correcting for displacement errors with small saccades. Two weeks of training increased the pursuit eye velocity in all stimulus conditions, whereas further extensive training enhanced pursuit slightly more. The training decreased the coefficient of variation of the eye velocity. Anisotropies that grade pursuit across directions were observed from the 1st day of training and mostly persisted across training. Thus, smooth pursuit in the step-ramp paradigm appears to be part of the natural repertoire of monkeys' behavior and training adjusts monkeys' natural predisposed behavior.
Automatic Registration of Terrestrial Laser Scanner Point Clouds Using Natural Planar Surfaces
NASA Astrophysics Data System (ADS)
Theiler, P. W.; Schindler, K.
2012-07-01
Terrestrial laser scanners have become a standard piece of surveying equipment, used in diverse fields like geomatics, manufacturing and medicine. However, the processing of today's large point clouds is time-consuming, cumbersome and not automated enough. A basic step of post-processing is the registration of scans from different viewpoints. At present this is still done using artificial targets or tie points, mostly by manual clicking. The aim of this registration step is a coarse alignment, which can then be improved with the existing algorithm for fine registration. The focus of this paper is to provide such a coarse registration in a fully automatic fashion, and without placing any target objects in the scene. The basic idea is to use virtual tie points generated by intersecting planar surfaces in the scene. Such planes are detected in the data with RANSAC and optimally fitted using least squares estimation. Due to the huge amount of recorded points, planes can be determined very accurately, resulting in well-defined tie points. Given two sets of potential tie points recovered in two different scans, registration is performed by searching for the assignment which preserves the geometric configuration of the largest possible subset of all tie points. Since exhaustive search over all possible assignments is intractable even for moderate numbers of points, the search is guided by matching individual pairs of tie points with the help of a novel descriptor based on the properties of a point's parent planes. Experiments show that the proposed method is able to successfully coarse register TLS point clouds without the need for artificial targets.
Evolutionary History of the Plant Pathogenic Bacterium Xanthomonas axonopodis
Mhedbi-Hajri, Nadia; Hajri, Ahmed; Boureau, Tristan; Darrasse, Armelle; Durand, Karine; Brin, Chrystelle; Saux, Marion Fischer-Le; Manceau, Charles; Poussier, Stéphane; Pruvost, Olivier
2013-01-01
Deciphering mechanisms shaping bacterial diversity should help to build tools to predict the emergence of infectious diseases. Xanthomonads are plant pathogenic bacteria found worldwide. Xanthomonas axonopodis is a genetically heterogeneous species clustering, into six groups, strains that are collectively pathogenic on a large number of plants. However, each strain displays a narrow host range. We address the question of the nature of the evolutionary processes – geographical and ecological speciation – that shaped this diversity. We assembled a large collection of X. axonopodis strains that were isolated over a long period, over continents, and from various hosts. Based on the sequence analysis of seven housekeeping genes, we found that recombination occurred as frequently as point mutation in the evolutionary history of X. axonopodis. However, the impact of recombination was about three times greater than the impact of mutation on the diversity observed in the whole dataset. We then reconstructed the clonal genealogy of the strains using coalescent and genealogy approaches and we studied the diversification of the pathogen using a model of divergence with migration. The suggested scenario involves a first step of generalist diversification that spanned over the last 25 000 years. A second step of ecology-driven specialization occurred during the past two centuries. Eventually, secondary contacts between host-specialized strains probably occurred as a result of agricultural development and intensification, allowing genetic exchanges of virulence-associated genes. These transfers may have favored the emergence of novel pathotypes. Finally, we argue that the largest ecological entity within X. axonopodis is the pathovar. PMID:23505513
Muon Physics at Run-I and its upgrade plan
NASA Astrophysics Data System (ADS)
Benekos, Nektarios Chr.
2015-05-01
The Large Hadron Collider (LHC) and its multi-purpose Detector, ATLAS, has been operated successfully at record centre-of-mass energies of 7 and TeV. After this successful LHC Run-1, plans are actively advancing for a series of upgrades, culminating roughly 10 years from now in the high luminosity LHC (HL-LHC) project, delivering of order five times the LHC nominal instantaneous luminosity along with luminosity leveling. The final goal is to extend the data set from about few hundred fb-1 expected for LHC running to 3000 fb-1 by around 2030. To cope with the corresponding rate increase, the ATLAS detector needs to be upgraded. The upgrade will proceed in two steps: Phase I in the LHC shutdown 2018/19 and Phase II in 2023-25. The largest of the ATLAS Phase-1 upgrades concerns the replacement of the first muon station of the highrapidity region, the so called New Small Wheel. This configuration copes with the highest rates expected in Phase II and considerably enhances the performance of the forward muon system by adding triggering functionality to the first muon station. Prospects for the ongoing and future data taking are presented. This article presents the main muon physics results from LHC Run-1 based on a total luminosity of 30 fb^-1. Prospects for the ongoing and future data taking are also presented. We will conclude with an update of the status of the project and the steps towards a complete operational system, ready to be installed in ATLAS in 2018/19.
Impacts of changes in climate and landscape pattern on ecosystem services.
Hao, Ruifang; Yu, Deyong; Liu, Yupeng; Liu, Yang; Qiao, Jianmin; Wang, Xue; Du, Jinshen
2017-02-01
The restoration of degraded vegetation can effectively improve ecosystem services, increase human well-being, and promote regional sustainable development. Understanding the changing trends in ecosystem services and their drivers is an important step in informing decision makers for the development of reasonable landscape management measures. From 2001 to 2014, we analyzed the changing trends in five critical ecosystem services in the Xilingol Grassland, which is typical of grasslands in North China, including net primary productivity (NPP), soil conservation (SC), soil loss due to wind (SL), water yield (WY) and water retention (WR). Additionally, we quantified how climatic factors and landscape patterns affect the five ecosystem services on both annual and seasonal time scales. Overall, the results indicated that vegetation restoration can effectively improve the five grassland ecosystem services, and precipitation (PPT) is the most critical climatic factor. The impact of changes in the normalized difference vegetation index (NDVI) was most readily detectable on the annual time scale, whereas the impact of changes in landscape pattern was most readily detectable on the seasonal time scale. A win-win situation in terms of grassland ecosystem services (e.g., vegetation productivity, SC, WR and reduced SL) can be achieved by increasing grassland aggregation, partitioning the largest grasslands, dividing larger areas of farmland into smaller patches, and increasing the area of appropriate forest stands. Our work may aid policymakers in developing regional landscape management schemes. Copyright © 2016 Elsevier B.V. All rights reserved.
Liu, L; Luo, Y; Accensi, F; Ganges, L; Rodríguez, F; Shan, H; Ståhl, K; Qiu, H-J; Belák, S
2017-10-01
African swine fever (ASF) and classical swine fever (CSF) are two highly infectious transboundary animal diseases (TADs) that are serious threats to the pig industry worldwide, including in China, the world's largest pork producer. In this study, a duplex real-time PCR assay was developed for the rapid detection and differentiation of African swine fever virus (ASFV) and classical swine fever virus (CSFV). The assay was performed on a portable, battery-powered PCR thermocycler with a low sample throughput (termed as 'T-COR4 assay'). The feasibility and reliability of the T-COR4 assay as a possible field method was investigated by testing clinical samples collected in China. When evaluated with reference materials or samples from experimental infections, the assay performed in a reliable manner, producing results comparable to those obtained from stationary PCR platforms. Of 59 clinical samples, 41 had results identical to a two-step CSFV real-time PCR assay. No ASFV was detected in these samples. The T-COR4 assay was technically easy to perform and produced results within 3 h, including sample preparation. In combination with a simple sample preparation method, the T-COR4 assay provides a new tool for the field diagnosis and differentiation of ASF and CSF, which could be of particular value in remote areas. © 2016 Blackwell Verlag GmbH.
Green, Christopher F; Crawford, Victoria; Bresnen, Gaynor; Rowe, Philip H
2015-02-01
This study used a 'Lean' technique, the 'waste walk' to evaluate the activities of clinical pharmacists with reference to the seven wastes described in 'Lean' including 'defects', 'unnecessary motion', 'overproduction', 'transport of products or material', 'unnecessary waiting', 'unnecessary inventory' and 'inappropriate processing'. The objectives of the study were to categorise the activities of ward-based clinical pharmacists into waste and non-waste, provide detail around what constitutes waste activity and quantify the proportion of time attributed to each category. This study was carried out in a district general hospital in the North West of England. Staff were observed using work-sampling techniques, to categorise activity into waste and non-waste, with waste activities being allocated to each of the seven wastes described earlier and subdivided into recurrent themes. Twenty different pharmacists were observed for 1 h on two separate occasions. Of 1440 observations, 342 (23.8%) were categorised as waste with 'defects' and 'unnecessary motion' accounting for the largest proportions of waste activity. Observation of clinical pharmacists' activities has identified that a significant proportion of their time could be categorised as 'waste'. There are practical steps that could be implemented in order to ensure their time is used as productively as possible. Given the challenges facing the UK National Health Service, the adoption of 'Lean' techniques provides an opportunity to improve quality and productivity while reducing costs. © 2014 Royal Pharmaceutical Society.
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
Comparing an annual and daily time-step model for predicting field-scale phosphorus loss
USDA-ARS?s Scientific Manuscript database
Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...
2017-01-01
The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebrae (L3) and the seventh cervical vertebrae (C7). The normalized average magnitude of acceleration, the coefficient of determination ($R^2$) of the return map, and the step time variabilities, were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year had higher amplitude and a lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls. PMID:28700633
Chidori, Kazuhiro; Yamamoto, Yuji
2017-01-01
The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebrae (L3) and the seventh cervical vertebrae (C7). The normalized average magnitude of acceleration, the coefficient of determination ($R^2$) of the return map, and the step time variabilities, were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year had higher amplitude and a lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls.
NASA Astrophysics Data System (ADS)
Gonzalez-Hidalgo, J. C.; Batalla, R.; Cerda, A.; de Luis, M.
2009-04-01
When Thornes and Brunsden wrote in 1977 "How often one hears the researcher (and no less the undergraduate) complain that after weeks of observation "nothing happened" only to learn that, the day after his departure, a flood caused unprecedent erosion and channel changes!" (Thornes and Brunsden, 1977, p. 57), they focussed on two different problems in geomorphological research: the effects of extreme events and the temporal compression of geomorphological processes. The time compression is one of the main characteristic of erosion processes. It means that an important amount of the total soil eroded is produced in very short temporal intervals, i.e. few events mostly related to extreme events. From magnitude-frequency analysis we know that few events, not necessarily extreme by magnitude, produce high amount of geomorphological work. Last but not least, extreme isolated events are a classical issue in geomorphology by their specific effects, and they are receiving permanent attention, increased at present because of scenarios of global change. Notwithstanding, the time compression of geomorphological processes could be focused not only on the analysis of extreme events and the traditional magnitude-frequency approach, but on new complementary approach based on the effects of largest events. The classical approach define extreme event as a rare event (identified by its magnitude and quantified by some deviation from central value), while we define largest events by the rank, whatever their magnitude. In a previous research on time compression of soil erosion, using USLE soil erosion database (Gonzalez-Hidalgo et al., EGU 2007), we described a relationship between the total amount of daily erosive events recorded by plot and the percentage contribution to total soil erosion of n-largest aggregated daily events. Now we offer a further refined analysis comparing different agricultural regions in USA. To do that we have analyzed data from 594 erosion plots from USLE database with different record periods, and located in different climatic regions. Results indicate that there are no significant differences in the mean contribution of aggregated 5-largest daily erosion events between different agricultural divisions (i.e. different regional climate), and the differences detected can be attributed to specific site and plots conditions. Expected contribution of 5-largest daily event for 100 total daily events recorded is estimated around 40% of total soil erosion. We discuss the possible causes of such results and the applicability of them to the design of field research on soil erosion plots.
NASA Astrophysics Data System (ADS)
Lafitte, Pauline; Melis, Ward; Samaey, Giovanni
2017-07-01
We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In a first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with stiff BGK source term. Then, this kinetic equation is integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the time step restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is also independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation and the shallow water and Euler equations) in one and two spatial dimensions.
NASA Astrophysics Data System (ADS)
Xavier, Prince K.; Petch, Jon C.; Klingaman, Nicholas P.; Woolnough, Steve J.; Jiang, Xianan; Waliser, Duane E.; Caian, Mihaela; Cole, Jason; Hagos, Samson M.; Hannay, Cecile; Kim, Daehyun; Miyakawa, Tomoki; Pritchard, Michael S.; Roehrig, Romain; Shindo, Eiki; Vitart, Frederic; Wang, Hailan
2015-05-01
An analysis of diabatic heating and moistening processes from 12 to 36 h lead time forecasts from 12 Global Circulation Models are presented as part of the "Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)" project. A lead time of 12-36 h is chosen to constrain the large-scale dynamics and thermodynamics to be close to observations while avoiding being too close to the initial spin-up of the models as they adjust to being driven from the Years of Tropical Convection (YOTC) analysis. A comparison of the vertical velocity and rainfall with the observations and YOTC analysis suggests that the phases of convection associated with the MJO are constrained in most models at this lead time although the rainfall in the suppressed phase is typically overestimated. Although the large-scale dynamics is reasonably constrained, moistening and heating profiles have large intermodel spread. In particular, there are large spreads in convective heating and moistening at midlevels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of time step behavior shows that some models show strong intermittency in rainfall and differences in the precipitation and dynamics relationship between models. The wealth of model outputs archived during this project is a very valuable resource for model developers beyond the study of the MJO. In addition, the findings of this study can inform the design of process model experiments, and inform the priorities for field experiments and future observing systems.
Modeling and clustering water demand patterns from real-world smart meter data
NASA Astrophysics Data System (ADS)
Cheifetz, Nicolas; Noumir, Zineb; Samé, Allou; Sandraz, Anne-Claire; Féliers, Cédric; Heim, Véronique
2017-08-01
Nowadays, drinking water utilities need an acute comprehension of the water demand on their distribution network, in order to efficiently operate the optimization of resources, manage billing and propose new customer services. With the emergence of smart grids, based on automated meter reading (AMR), a better understanding of the consumption modes is now accessible for smart cities with more granularities. In this context, this paper evaluates a novel methodology for identifying relevant usage profiles from the water consumption data produced by smart meters. The methodology is fully data-driven using the consumption time series which are seen as functions or curves observed with an hourly time step. First, a Fourier-based additive time series decomposition model is introduced to extract seasonal patterns from time series. These patterns are intended to represent the customer habits in terms of water consumption. Two functional clustering approaches are then used to classify the extracted seasonal patterns: the functional version of K-means, and the Fourier REgression Mixture (FReMix) model. The K-means approach produces a hard segmentation and K representative prototypes. On the other hand, the FReMix is a generative model and also produces K profiles as well as a soft segmentation based on the posterior probabilities. The proposed approach is applied to a smart grid deployed on the largest water distribution network (WDN) in France. The two clustering strategies are evaluated and compared. Finally, a realistic interpretation of the consumption habits is given for each cluster. The extensive experiments and the qualitative interpretation of the resulting clusters allow one to highlight the effectiveness of the proposed methodology.
New Cloud Science from the New ARM Cloud Radar Systems (Invited)
NASA Astrophysics Data System (ADS)
Wiscombe, W. J.
2010-12-01
The DOE ARM Program is deploying over $30M worth of scanning polarimetric Doppler radars at its four fixed and two mobile sites, with the object of advancing cloud lifecycle science, and cloud-aerosol-precipitation interaction science, by a quantum leap. As of 2011, there will be 13 scanning radar systems to complement its existing array of profiling cloud radars: C-band for precipitation, X-band for drizzle and precipitation, and two-frequency radars for cloud droplets and drizzle. This will make ARM the world’s largest science user of, and largest provider of data from, ground-based cloud radars. The philosophy behind this leap is actually quite simple, to wit: dimensionality really does matter. Just as 2D turbulence is fundamentally different from 3D turbulence, so observing clouds only at zenith provides a dimensionally starved, and sometimes misleading, picture of real clouds. In particular, the zenith view can say little or nothing about cloud lifecycle and the second indirect effect, nor about aerosol-precipitation interactions. It is not even particularly good at retrieving the cloud fraction (no matter how that slippery quantity is defined). This talk will review the history that led to this development and then discuss the aspirations for how this will propel cloud-aerosol-precipitation science forward. The step by step plan for translating raw radar data into information that is useful to cloud and aerosol scientists and climate modelers will be laid out, with examples from ARM’s recent scanning cloud radar deployments in the Azores and Oklahoma . In the end, the new systems should allow cloud systems to be understood as 4D coherent entities rather than dimensionally crippled 2D or 3D entities such as observed by satellites and zenith-pointing radars.
Craters of the Pluto-Charon system
NASA Astrophysics Data System (ADS)
Robbins, Stuart J.; Singer, Kelsi N.; Bray, Veronica J.; Schenk, Paul; Lauer, Tod R.; Weaver, Harold A.; Runyon, Kirby; McKinnon, William B.; Beyer, Ross A.; Porter, Simon; White, Oliver L.; Hofgartner, Jason D.; Zangari, Amanda M.; Moore, Jeffrey M.; Young, Leslie A.; Spencer, John R.; Binzel, Richard P.; Buie, Marc W.; Buratti, Bonnie J.; Cheng, Andrew F.; Grundy, William M.; Linscott, Ivan R.; Reitsema, Harold J.; Reuter, Dennis C.; Showalter, Mark R.; Tyler, G. Len; Olkin, Catherine B.; Ennico, Kimberly S.; Stern, S. Alan; New Horizons Lorri, Mvic Instrument Teams
2017-05-01
NASA's New Horizons flyby mission of the Pluto-Charon binary system and its four moons provided humanity with its first spacecraft-based look at a large Kuiper Belt Object beyond Triton. Excluding this system, multiple Kuiper Belt Objects (KBOs) have been observed for only 20 years from Earth, and the KBO size distribution is unconstrained except among the largest objects. Because small KBOs will remain beyond the capabilities of ground-based observatories for the foreseeable future, one of the best ways to constrain the small KBO population is to examine the craters they have made on the Pluto-Charon system. The first step to understanding the crater population is to map it. In this work, we describe the steps undertaken to produce a robust crater database of impact features on Pluto, Charon, and their two largest moons, Nix and Hydra. These include an examination of different types of images and image processing, and we present an analysis of variability among the crater mapping team, where crater diameters were found to average ± 10% uncertainty across all sizes measured (∼0.5-300 km). We also present a few basic analyses of the crater databases, finding that Pluto's craters' differential size-frequency distribution across the encounter hemisphere has a power-law slope of approximately -3.1 ± 0.1 over diameters D ≈ 15-200 km, and Charon's has a slope of -3.0 ± 0.2 over diameters D ≈ 10-120 km; it is significantly shallower on both bodies at smaller diameters. We also better quantify evidence of resurfacing evidenced by Pluto's craters in contrast with Charon's. With this work, we are also releasing our database of potential and probable impact craters: 5287 on Pluto, 2287 on Charon, 35 on Nix, and 6 on Hydra.
Craters of the Pluto-Charon System
NASA Technical Reports Server (NTRS)
Robbins, Stuart J.; Singer, Kelsi N.; Bray, Veronica J.; Schenk, Paul; Lauer, Todd R.; Weaver, Harold A.; Runyon, Kirby; Mckinnon, William B.; Beyer, Ross A.; Porter, Simon;
2016-01-01
NASA's New Horizons flyby mission of the Pluto-Charon binary system and its four moons provided humanity with its first spacecraft-based look at a large Kuiper Belt Object beyond Triton. Excluding this system, multiple Kuiper Belt Objects (KBOs) have been observed for only 20 years from Earth, and the KBO size distribution is unconstrained except among the largest objects. Because small KBOs will remain beyond the capabilities of ground-based observatories for the foreseeable future, one of the best ways to constrain the small KBO population is to examine the craters they have made on the Pluto-Charon system. The first step to understanding the crater population is to map it. In this work, we describe the steps undertaken to produce a robust crater database of impact features on Pluto, Charon, and their two largest moons, Nix and Hydra. These include an examination of different types of images and image processing, and we present an analysis of variability among the crater mapping team, where crater diameters were found to average +/-10% uncertainty across all sizes measured (approx.0.5-300 km). We also present a few basic analyses of the crater databases, finding that Pluto's craters' differential size-frequency distribution across the encounter hemisphere has a power-law slope of approximately -3.1 +/- 0.1 over diameters D approx. = 15-200 km, and Charon's has a slope of -3.0 +/- 0.2 over diameters D approx. = 10-120 km; it is significantly shallower on both bodies at smaller diameters. We also better quantify evidence of resurfacing evidenced by Pluto's craters in contrast with Charon's. With this work, we are also releasing our database of potential and probable impact craters: 5287 on Pluto, 2287 on Charon, 35 on Nix, and 6 on Hydra.
Smith, Brian T; Coiro, Daniel J; Finson, Richard; Betz, Randal R; McCarthy, James
2002-03-01
Force-sensing resistors (FSRs) were used to detect the transitions between five main phases of gait for the control of electrical stimulation (ES) while walking with seven children with spastic diplegia, cerebral palsy. The FSR positions within each child's insoles were customized based on plantar pressure profiles determined using a pressure-sensitive membrane array (Tekscan Inc., Boston, MA). The FSRs were placed in the insoles so that pressure transitions coincided with an ipsilateral or contralateral gait event. The transitions between the following gait phases were determined: loading response, mid- and terminal stance, and pre- and initial swing. Following several months of walking on a regular basis with FSR-triggered intramuscular ES to the hip and knee extensors, hip abductors, and ankle dorsi and plantar flexors, the accuracy and reliability of the FSRs to detect gait phase transitions were evaluated. Accuracy was evaluated with four of the subjects by synchronizing the output of the FSR detection scheme with a VICON (Oxford Metrics, U.K.) motion analysis system, which was used as the gait event reference. While mean differences between each FSR-detected gait event and that of the standard (VICON) ranged from +35 ms (indicating that the FSR detection scheme recognized the event before it actually happened) to -55 ms (indicating that the FSR scheme recognized the event after it occurred), the difference data was widely distributed, which appeared to be due in part to both intrasubject (step-to-step) and intersubject variability. Terminal stance exhibited the largest mean difference and standard deviation, while initial swing exhibited the smallest deviation and preswing the smallest mean difference. To determine step-to-step reliability, all seven children walked on a level walkway for at least 50 steps. Of 642 steps, there were no detection errors in 94.5% of the steps. Of the steps that contained a detection error, 80% were due to the failure of the FSR signal to reach the programmed threshold level during the transition to loading response. Recovery from an error always occurred one to three steps later.
Impact of climate change on river discharge in the Teteriv River basin (Ukraine)
NASA Astrophysics Data System (ADS)
Didovets, Iulii; Lobanova, Anastasia; Krysanova, Valentina; Snizhko, Sergiy; Bronstert, Axel
2016-04-01
The problem of water resources availability in the climate change context arises now in many countries. Ukraine is characterized by a relatively low availability of water resources compared to other countries. It is the 111th among 152 countries by the amount of domestic water resources available per capita. To ensure socio-economic development of the region and to adapt to climate change, a comprehensive assessment of potential changes in qualitative and quantitative characteristics of water resources in the region is needed. The focus of our study is the Teteriv River basin located in northern Ukraine within three administrative districts covering the area of 15,300 km2. The Teteriv is the right largest tributary of the Dnipro River, which is the fourth longest river in Europe. The water resources in the region are intensively used in industry, communal infrastructure, and agriculture. This is evidenced by a large number of dams and industrial objects which have been constructed from the early 20th century. For success of the study, it was necessary to apply a comprehensive hydrological model, tested in similar natural conditions. Therefore, an eco-hydrological model SWIM with the daily time step was applied, as this model was used previously for climate impact assessment in many similar river basins on the European territory. The model was set up, calibrated and validated for the gauge Ivankiv located close to the outlet of the Teteriv River. The Nash-Sutcliffe efficiency coefficient for the calibration period is 0.79 (0.86), and percent bias is 4,9% (-3.6%) with the daily (monthly) time step. The future climate scenarios were selected from the IMPRESSIONS (Impacts and Risks from High-End Scenarios: Strategies for Innovative Solutions, www.impressions-project.eu) project, which developed 7 climate scenarios under RCP4.5 and RCP8.5 based on GCMs and downscaled using RCMs. The results of climate impact assessment for the Teteriv River basin will be presented.
Evolution of Canada’s Boreal Forest Spatial Patterns as Seen from Space
Pickell, Paul D.; Coops, Nicholas C.; Gergel, Sarah E.; Andison, David W.; Marshall, Peter L.
2016-01-01
Understanding the development of landscape patterns over broad spatial and temporal scales is a major contribution to ecological sciences and is a critical area of research for forested land management. Boreal forests represent an excellent case study for such research because these forests have undergone significant changes over recent decades. We analyzed the temporal trends of four widely-used landscape pattern indices for boreal forests of Canada: forest cover, largest forest patch index, forest edge density, and core (interior) forest cover. The indices were computed over landscape extents ranging from 5,000 ha (n = 18,185) to 50,000 ha (n = 1,662) and across nine major ecozones of Canada. We used 26 years of Landsat satellite imagery to derive annualized trends of the landscape pattern indices. The largest declines in forest cover, largest forest patch index, and core forest cover were observed in the Boreal Shield, Boreal Plain, and Boreal Cordillera ecozones. Forest edge density increased at all landscape extents for all ecozones. Rapidly changing landscapes, defined as the 90th percentile of forest cover change, were among the most forested initially and were characterized by four times greater decrease in largest forest patch index, three times greater increase in forest edge density, and four times greater decrease in core forest cover compared with all 50,000 ha landscapes. Moreover, approximately 18% of all 50,000 ha landscapes did not change due to a lack of disturbance. The pattern database results provide important context for forest management agencies committed to implementing ecosystem-based management strategies. PMID:27383055
Evolution of Canada's Boreal Forest Spatial Patterns as Seen from Space.
Pickell, Paul D; Coops, Nicholas C; Gergel, Sarah E; Andison, David W; Marshall, Peter L
2016-01-01
Understanding the development of landscape patterns over broad spatial and temporal scales is a major contribution to ecological sciences and is a critical area of research for forested land management. Boreal forests represent an excellent case study for such research because these forests have undergone significant changes over recent decades. We analyzed the temporal trends of four widely-used landscape pattern indices for boreal forests of Canada: forest cover, largest forest patch index, forest edge density, and core (interior) forest cover. The indices were computed over landscape extents ranging from 5,000 ha (n = 18,185) to 50,000 ha (n = 1,662) and across nine major ecozones of Canada. We used 26 years of Landsat satellite imagery to derive annualized trends of the landscape pattern indices. The largest declines in forest cover, largest forest patch index, and core forest cover were observed in the Boreal Shield, Boreal Plain, and Boreal Cordillera ecozones. Forest edge density increased at all landscape extents for all ecozones. Rapidly changing landscapes, defined as the 90th percentile of forest cover change, were among the most forested initially and were characterized by four times greater decrease in largest forest patch index, three times greater increase in forest edge density, and four times greater decrease in core forest cover compared with all 50,000 ha landscapes. Moreover, approximately 18% of all 50,000 ha landscapes did not change due to a lack of disturbance. The pattern database results provide important context for forest management agencies committed to implementing ecosystem-based management strategies.
Pagels, Peter; Boldemann, Cecilia; Raustorp, Anders
2011-01-01
To compare pedometer steps with accelerometer counts and to analyse minutes of engagement in light, moderate and vigorous physical activity in 3- to 5-year-old children during preschool time. Physical activity was recorded during preschool time for five consecutive days in 55 three- to five-year-old children. The children wore a Yamax SW200 pedometer and an Actigraph GTIM Monitor. The average time spent at preschool was 7.22 h/day with an average step of 7313 (±3042). Steps during preschool time increased with increasing age. The overall correlation between mean step counts and mean accelerometer counts (r = 0.67, p < 0.001), as well as time in light to vigorous activity (r = 0.76, p < 0.001), were moderately high. Step counts and moderate to vigorous physical activity minutes were poorly correlated in 3 years old (r = 0.19, p < 0.191) and moderately correlated (r = 0.50, p < 0.001) for children 4 to 5 years old. Correlation between the preschool children's pedometer-determined step counts and total engagement in physical activity during preschool time was moderately high. Children's step counts at preschool were low, and the time spent in moderate and vigorous physical activity at preschool was very short. © 2010 The Author(s)/Journal Compilation © 2010 Foundation Acta Paediatrica.
Vieira, J; Cunha, M C
2011-01-01
This article describes a solution method of solving large nonlinear problems in two steps. The two steps solution approach takes advantage of handling smaller and simpler models and having better starting points to improve solution efficiency. The set of nonlinear constraints (named as complicating constraints) which makes the solution of the model rather complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined with the direct solve of the complete model in one single step. In all examples the two steps solution approach allowed a significant reduction of the computation time. This potential gain of efficiency of the two steps solution approach can be extremely important for work in progress and it can be particularly useful for cases where the computation time would be a critical factor for having an optimized solution in due time.
Capture and quality control mechanisms for ATP binding
Li, Li; Martinis, Susan A.
2013-01-01
The catalytic events in members of the nucleotidylyl transferase superfamily are initiated by a millisecond binding of ATP in the active site. Through metadynamics simulations on a class I aminoacyl-tRNA synthetase (aaRSs), the largest group in the superfamily, we calculate the free energy landscape of ATP selection and binding. Mutagenesis studies and fluorescence spectroscopy validated the identification of the most populated intermediate states. The rapid first binding step involves formation of encounter complexes captured through a fly-casting mechanism that acts up on the triphosphate moiety of ATP. In the slower nucleoside binding step, a conserved histidine in the HxxH motif orients the incoming ATP through base-stacking interactions resulting in a deep minimum in the free energy surface. Mutation of this histidine significantly decreases the binding affinity measured experimentally and computationally. The metadynamics simulations further reveal an intermediate quality control state that the synthetases and most likely other members of the superfamily use to select ATP over other nucleoside triphosphates. PMID:23276298
Dang, Jing-Shuang; Wang, Wei-Wei; Zheng, Jia-Jia; Nagase, Shigeru; Zhao, Xiang
2017-10-05
Although the existence of Stone-Wales (5-7) defect at graphene edge has been clarified experimentally, theoretical study on the formation mechanism is still imperfect. In particular, the regioselectivity of multistep reactions at edge (self-reconstruction and growth with foreign carbon feedstock) is essential to understand the kinetic behavior of reactive boundaries but investigations are still lacking. Herein, by using finite-sized models, multistep reconstructions and carbon dimer additions of a bared zigzag edge are introduced using density functional theory calculations. The zigzag to 5-7 transformation is proved as a site-selective process to generate alternating 5-7 pairs sequentially and the first step with largest barrier is suggested as the rate-determining step. Conversely, successive C 2 insertions on the active edge are calculated to elucidate the formation of 5-7 edge during graphene growth. A metastable intermediate with a triple sequentially fused pentagon fragment is proved as the key structure for 5-7 edge formation. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
Tailed giant Tupanvirus possesses the most complete translational apparatus of the known virosphere.
Abrahão, Jônatas; Silva, Lorena; Silva, Ludmila Santos; Khalil, Jacques Yaacoub Bou; Rodrigues, Rodrigo; Arantes, Thalita; Assis, Felipe; Boratto, Paulo; Andrade, Miguel; Kroon, Erna Geessien; Ribeiro, Bergmann; Bergier, Ivan; Seligmann, Herve; Ghigo, Eric; Colson, Philippe; Levasseur, Anthony; Kroemer, Guido; Raoult, Didier; La Scola, Bernard
2018-02-27
Here we report the discovery of two Tupanvirus strains, the longest tailed Mimiviridae members isolated in amoebae. Their genomes are 1.44-1.51 Mb linear double-strand DNA coding for 1276-1425 predicted proteins. Tupanviruses share the same ancestors with mimivirus lineages and these giant viruses present the largest translational apparatus within the known virosphere, with up to 70 tRNA, 20 aaRS, 11 factors for all translation steps, and factors related to tRNA/mRNA maturation and ribosome protein modification. Moreover, two sequences with significant similarity to intronic regions of 18 S rRNA genes are encoded by the tupanviruses and highly expressed. In this translation-associated gene set, only the ribosome is lacking. At high multiplicity of infections, tupanvirus is also cytotoxic and causes a severe shutdown of ribosomal RNA and a progressive degradation of the nucleus in host and non-host cells. The analysis of tupanviruses constitutes a new step toward understanding the evolution of giant viruses.
Analysis of progressive damage in thin circular laminates due to static-equivalent impact loads
NASA Technical Reports Server (NTRS)
Shivakumar, K. N.; Elber, W.; Illg, W.
1983-01-01
Clamped circular graphite/epoxy plates (25.4, 38.1, and 50.8 mm radii) with an 8-ply quasi-isotropic layup were analyzed for static-equivalent impact loads using the minimum-total-potential-energy method and the von Karman strain-displacement equations. A step-by-step incremental transverse displacement procedure was used to calculate plate load and ply stresses. The ply failure region was calculated using the Tsai-Wu criterion. The corresponding failure modes (splitting and fiber failure) were determined using the maximum stress criteria. The first-failure mode was splitting and initiated first in the bottom ply. The splitting-failure thresholds were relatively low and tended to be lower for larger plates than for small plates. The splitting-damage region in each ply was elongated in its fiber direction; the bottom ply had the largest damage region. The calculated damage region for the 25.4-mm-radius plate agreed with limited static test results from the literature.
Capture and quality control mechanisms for adenosine-5'-triphosphate binding.
Li, Li; Martinis, Susan A; Luthey-Schulten, Zaida
2013-04-24
The catalytic events in members of the nucleotidylyl transferase superfamily are initiated by a millisecond binding of ATP in the active site. Through metadynamics simulations on a class I aminoacyl-tRNA synthetase (aaRSs), the largest group in the superfamily, we calculate the free energy landscape of ATP selection and binding. Mutagenesis studies and fluorescence spectroscopy validated the identification of the most populated intermediate states. The rapid first binding step involves formation of encounter complexes captured through a fly casting mechanism that acts upon the triphosphate moiety of ATP. In the slower nucleoside binding step, a conserved histidine in the HxxH motif orients the incoming ATP through base-stacking interactions resulting in a deep minimum in the free energy surface. Mutation of this histidine significantly decreases the binding affinity measured experimentally and computationally. The metadynamics simulations further reveal an intermediate quality control state that the synthetases and most likely other members of the superfamily use to select ATP over other nucleoside triphosphates.
Finley, James M.; Long, Andrew; Bastian, Amy J.; Torres-Oviedo, Gelsy
2014-01-01
Background Step length asymmetry (SLA) is a common hallmark of gait post-stroke. Though conventionally viewed as a spatial deficit, SLA can result from differences in where the feet are placed relative to the body (spatial strategy), the timing between foot-strikes (step time strategy), or the velocity of the body relative to the feet (step velocity strategy). Objective The goal of this study was to characterize the relative contributions of each of these strategies to SLA. Methods We developed an analytical model that parses SLA into independent step position, step time, and step velocity contributions. This model was validated by reproducing SLA values for twenty-five healthy participants when their natural symmetric gait was perturbed on a split-belt treadmill moving at either a 2:1 or 3:1 belt-speed ratio. We then applied the validated model to quantify step position, step time, and step velocity contributions to SLA in fifteen stroke survivors while walking at their self-selected speed. Results SLA was predicted precisely by summing the derived contributions, regardless of the belt-speed ratio. Although the contributions to SLA varied considerably across our sample of stroke survivors, the step position contribution tended to oppose the other two – possibly as an attempt to minimize the overall SLA. Conclusions Our results suggest that changes in where the feet are placed or changes in interlimb timing could be used as compensatory strategies to reduce overall SLA in stroke survivors. These results may allow clinicians and researchers to identify patient-specific gait abnormalities and personalize their therapeutic approaches accordingly. PMID:25589580
Time scales involved in emergent market coherence
NASA Astrophysics Data System (ADS)
Kwapień, J.; Drożdż, S.; Speth, J.
2004-06-01
In addressing the question of the time scales characteristic for the market formation, we analyze high-frequency tick-by-tick data from the NYSE and from the German market. By using returns on various time scales ranging from seconds or minutes up to 2 days, we compare magnitude of the largest eigenvalue of the correlation matrix for the same set of securities but for different time scales. For various sets of stocks of different capitalization (and the average trading frequency), we observe a significant elevation of the largest eigenvalue with increasing time scale. Our results from the correlation matrix study can be considered as a manifestation of the so-called Epps effect. There is no unique explanation of this effect and it seems that many different factors play a role here. One of such factors is randomness in transaction moments for different stocks. Another interesting conclusion to be drawn from our results is that in the contemporary markets the emergence of significant correlations occurs on time scales much smaller than in the more distant history.
Diverging Destinies: Maternal Education and the Developmental Gradient in Time with Children*
Kalil, Ariel; Ryan, Rebecca; Corey, Michael
2016-01-01
Using data from the 2003–2007 American Time Use Surveys (ATUS), we compare mothers’ (N = 6,640) time spent in four parenting activities across maternal education and child age subgroups. We test the hypothesis that highly educated mothers not only spend more time in active child care than less educated mothers, but that they alter the composition of that time to suit children’s developmental needs more than less educated mothers. Results support this hypothesis: highly educated mothers not only invest more time in basic care and play when youngest children are infants or toddlers than when children are older, but differences across education groups in basic care and play time are largest among mothers with infants or toddlers; by contrast, highly educated mothers invest more time in management activities when children are six to 13 years old than when children are younger, and differences across education groups in management are largest among mothers with school-aged children. These patterns indicate that the education gradient in mothers’ time with children is characterized by a ‘developmental gradient.’ PMID:22886758
Diverging destinies: maternal education and the developmental gradient in time with children.
Kalil, Ariel; Ryan, Rebecca; Corey, Michael
2012-11-01
Using data from the 2003-2007 American Time Use Surveys (ATUS), we compare mothers' (N = 6,640) time spent in four parenting activities across maternal education and child age subgroups. We test the hypothesis that highly educated mothers not only spend more time in active child care than less-educated mothers but also alter the composition of that time to suit children's developmental needs more than less-educated mothers. Results support this hypothesis: not only do highly educated mothers invest more time in basic care and play when youngest children are infants or toddlers than when children are older, but differences across education groups in basic care and play time are largest among mothers with infants or toddlers; by contrast, highly educated mothers invest more time in management activities when children are 6 to 13 years old than when children are younger, and differences across education groups in management are largest among mothers with school-aged children. These patterns indicate that the education gradient in mothers' time with children is characterized by a "developmental gradient."
Gordia, Alex Pinheiro; Quadros, Teresa Maria Bianchini de; Silva, Luciana Rodrigues; Mota, Jorge
2016-09-01
The use of step count and TV viewing time to discriminate youngsters with hyperglycaemia is still a matter of debate. To establish cut-off values for step count and TV viewing time in children and adolescents using glycaemia as the reference criterion. A cross-sectional study was conducted on 1044 schoolchildren aged 6-18 years from Northeastern Brazil. Daily step counts were assessed with a pedometer over 1 week and TV viewing time by self-report. The area under the curve (AUC) ranged from 0.52-0.61 for step count and from 0.49-0.65 for TV viewing time. The daily step count with the highest discriminatory power for hyperglycaemia was 13 884 (sensitivity = 77.8; specificity = 51.8) for male children and 12 371 (sensitivity = 55.6; specificity = 55.5) and 11 292 (sensitivity = 57.7; specificity = 48.6) for female children and adolescents respectively. The cut-off for TV viewing time with the highest discriminatory capacity for hyperglycaemia was 3 hours/day (sensitivity = 57.7-77.8; specificity = 48.6-53.2). This study represents the first step for the development of criteria based on cardiometabolic risk factors for step count and TV viewing time in youngsters. However, the present cut-off values have limited practical application because of their poor accuracy and low sensitivity and specificity.
Measurement needs guided by synthetic radar scans in high-resolution model output
NASA Astrophysics Data System (ADS)
Varble, A.; Nesbitt, S. W.; Borque, P.
2017-12-01
Microphysical and dynamical process interactions within deep convective clouds are not well understood, partly because measurement strategies often focus on statistics of cloud state rather than cloud processes. While processes cannot be directly measured, they can be inferred with sufficiently frequent and detailed scanning radar measurements focused on the life cycleof individual cloud regions. This is a primary goal of the 2018-19 DOE ARM Cloud, Aerosol, and Complex Terrain Interactions (CACTI) and NSF Remote sensing of Electrification, Lightning, And Mesoscale/microscale Processes with Adaptive Ground Observations (RELAMPAGO) field campaigns in central Argentina, where orographic deep convective initiation is frequent with some high-impact systems growing into the tallest and largest in the world. An array of fixed and mobile scanning multi-wavelength dual-polarization radars will be coupled with surface observations, sounding systems, multi-wavelength vertical profilers, and aircraft in situ measurements to characterize convective cloud life cycles and their relationship with environmental conditions. While detailed cloud processes are an observational target, the radar scan patterns that are most ideal for observing them are unclear. They depend on the locations and scales of key microphysical and dynamical processes operating within the cloud. High-resolution simulations of clouds, while imperfect, can provide information on these locations and scales that guide radar measurement needs. Radar locations are set in the model domain based on planned experiment locations, and simulatedorographic deep convective initiation and upscale growth are sampled using a number of different scans involving RHIs or PPIs with predefined elevation and azimuthal angles that approximately conform with radar range and beam width specifications. Each full scan pattern is applied to output atsingle model time steps with time step intervals that depend on the length of time required to complete each scan in the real world. The ability of different scans to detect key processes within the convective cloud life cycle are examined in connection with previous and subsequent dynamical and microphysical transitions. This work will guide strategic scan patterns that will be used during CACTI and RELAMPAGO.
Evaluation of Himawari-8 surface downwelling solar radiation by ground-based measurements
NASA Astrophysics Data System (ADS)
Damiani, Alessandro; Irie, Hitoshi; Horio, Takashi; Takamura, Tamio; Khatri, Pradeep; Takenaka, Hideaki; Nagao, Takashi; Nakajima, Takashi Y.; Cordero, Raul R.
2018-04-01
Observations from the new Japanese geostationary satellite Himawari-8 permit quasi-real-time estimation of global shortwave radiation at an unprecedented temporal resolution. However, accurate comparisons with ground-truthing observations are essential to assess their uncertainty. In this study, we evaluated the Himawari-8 global radiation product AMATERASS using observations recorded at four SKYNET stations in Japan and, for certain analyses, from the surface network of the Japanese Meteorological Agency in 2016. We found that the spatiotemporal variability of the satellite estimates was smaller than that of the ground observations; variability decreased with increases in the time step and spatial domain. Cloud variability was the main source of uncertainty in the satellite radiation estimates, followed by direct effects caused by aerosols and bright albedo. Under all-sky conditions, good agreement was found between satellite and ground-based data, with a mean bias in the range of 20-30 W m-2 (i.e., AMATERASS overestimated ground observations) and a root mean square error (RMSE) of approximately 70-80 W m-2. However, results depended on the time step used in the validation exercise, on the spatial domain, and on the different climatological regions. In particular, the validation performed at 2.5 min showed largest deviations and RMSE values ranging from about 110 W m-2 for the mainland to a maximum of 150 W m-2 in the subtropical region. We also detected a limited overestimation in the number of clear-sky episodes, particularly at the pixel level. Overall, satellite-based estimates were higher under overcast conditions, whereas frequent episodes of cloud-induced enhanced surface radiation (i.e., measured radiation was greater than expected clear-sky radiation) tended to reduce this difference. Finally, the total mean bias was approximately 10-15 W m-2 under clear-sky conditions, mainly because of overall instantaneous direct aerosol forcing efficiency in the range of 120-150 W m-2 per unit of aerosol optical depth (AOD). A seasonal anticorrelation between AOD and global radiation differences was evident at all stations and was also observed within the diurnal cycle.
NASA Astrophysics Data System (ADS)
Liu, Licheng; Zhuang, Qianlai; Zhu, Qing; Liu, Shaoqing; van Asperen, Hella; Pihlatie, Mari
2018-06-01
Carbon monoxide (CO) plays an important role in controlling the oxidizing capacity of the atmosphere by reacting with OH radicals that affect atmospheric methane (CH4) dynamics. We develop a process-based biogeochemistry model to quantify the CO exchange between soils and the atmosphere with a 5 min internal time step at the global scale. The model is parameterized using the CO flux data from the field and laboratory experiments for 11 representative ecosystem types. The model is then extrapolated to global terrestrial ecosystems using monthly climate forcing data. Global soil gross consumption, gross production, and net flux of the atmospheric CO are estimated to be from -197 to -180, 34 to 36, and -163 to -145 Tg CO yr-1 (1 Tg = 1012 g), respectively, when the model is driven with satellite-based atmospheric CO concentration data during 2000-2013. Tropical evergreen forest, savanna and deciduous forest areas are the largest sinks at 123 Tg CO yr-1. The soil CO gross consumption is sensitive to air temperature and atmospheric CO concentration, while the gross production is sensitive to soil organic carbon (SOC) stock and air temperature. By assuming that the spatially distributed atmospheric CO concentrations ( ˜ 128 ppbv) are not changing over time, the global mean CO net deposition velocity is estimated to be 0.16-0.19 mm s-1 during the 20th century. Under the future climate scenarios, the CO deposition velocity will increase at a rate of 0.0002-0.0013 mm s-1 yr-1 during 2014-2100, reaching 0.20-0.30 mm s-1 by the end of the 21st century, primarily due to the increasing temperature. Areas near the Equator, the eastern US, Europe and eastern Asia will be the largest sinks due to optimum soil moisture and high temperature. The annual global soil net flux of atmospheric CO is primarily controlled by air temperature, soil temperature, SOC and atmospheric CO concentrations, while its monthly variation is mainly determined by air temperature, precipitation, soil temperature and soil moisture.
Exploring Your Universe at UCLA: Steps to Developing and Sustaining a Large STEM Event
NASA Astrophysics Data System (ADS)
Curren, I. S.; Vican, L.; Sitarski, B.; Jewitt, D. C.
2015-12-01
Public STEM events are an excellent method to implement informal education and for scientists and educators to interact with their community. The benefits of such events are twofold. First and foremost, science enthusiasts and students both young and old, in particular, are exposed to STEM in a way that is accessible, fun, and not as stringent as may be presented in classrooms where testing is an underlying goal. Second, scientists and educators are given the opportunity to engage with the public and share their science to an audience who may not have a scientific background, thereby encouraging scientists to develop good communication practices and skills. In 2009 graduate student members of Astronomy Live!, an outreach organization in the UCLA Department of Physics and Astronomy, started a free and public event on the campus that featured a dozen hands-on outreach activities. The event, though small at the time, was a success and it was decided to make it an annual occurrence. Thus, Exploring Your Universe (EYU) was born. Primarily through word of mouth, the event has grown every year, both in number of attendees and number of volunteers. In 2009, approximately 1000 people attended and 20 students volunteered over the course of an eight-hour day. In 2014, participation was at an all-time high with close to 6000 attendees and over 400 volunteers from all departments in the Division of Physical Sciences (plus many non-divisional departments and institutes, as well as non-UCLA organizations). The event, which is the largest STEM event at UCLA and one of the largest in Los Angeles, now features near 100 hands-on activities that span many STEM fields. EYU has been featured by the UCLA news outlets, Daily Bruin and UCLA Today, and is often lauded as their favorite event of the year by attendees and volunteers alike. The event is entirely student-run, though volunteers include faculty, staff, researchers and students alike. As the event has grown, new systems for managing its many aspects have been adopted. Here, we will present the details of how the event was created and has remained successful, and sustainable.
Attack tolerance of correlated time-varying social networks with well-defined communities
NASA Astrophysics Data System (ADS)
Sur, Souvik; Ganguly, Niloy; Mukherjee, Animesh
2015-02-01
In this paper, we investigate the efficiency and the robustness of information transmission for real-world social networks, modeled as time-varying instances, under targeted attack in shorter time spans. We observe that these quantities are markedly higher than that of the randomized versions of the considered networks. An important factor that drives this efficiency or robustness is the presence of short-time correlations across the network instances which we quantify by a novel metric the-edge emergence factor, denoted as ξ. We find that standard targeted attacks are not effective in collapsing this network structure. Remarkably, if the hourly community structures of the temporal network instances are attacked with the largest size community attacked first, the second largest next and so on, the network soon collapses. This behavior, we show is an outcome of the fact that the edge emergence factor bears a strong positive correlation with the size ordered community structures.
The shape and size distribution of H II regions near the percolation transition
NASA Astrophysics Data System (ADS)
Bag, Satadru; Mondal, Rajesh; Sarkar, Prakash; Bharadwaj, Somnath; Sahni, Varun
2018-06-01
Using Shapefinders, which are ratios of Minkowski functionals, we study the morphology of neutral hydrogen (H I) density fields, simulated using seminumerical technique (inside-out), at various stages of reionization. Accompanying the Shapefinders, we also employ the `largest cluster statistic' (LCS), originally proposed in Klypin & Shandarin, to study the percolation in both neutral and ionized hydrogen. We find that the largest ionized region is percolating below the neutral fraction x_{H I}≲ 0.728 (or equivalently z ≲ 9). The study of Shapefinders reveals that the largest ionized region starts to become highly filamentary with non-trivial topology near the percolation transition. During the percolation transition, the first two Shapefinders - `thickness' (T) and `breadth' (B) - of the largest ionized region do not vary much, while the third Shapefinder - `length' (L) - abruptly increases. Consequently, the largest ionized region tends to be highly filamentary and topologically quite complex. The product of the first two Shapefinders, T × B, provides a measure of the `cross-section' of a filament-like ionized region. We find that, near percolation, the value of T × B for the largest ionized region remains stable at ˜7 Mpc2 (in comoving scale) while its length increases with time. Interestingly, all large ionized regions have similar cross-sections. However, their length shows a power-law dependence on their volume, L ∝ V0.72, at the onset of percolation.
Two-step web-mining approach to study geology/geophysics-related open-source software projects
NASA Astrophysics Data System (ADS)
Behrends, Knut; Conze, Ronald
2013-04-01
Geology/geophysics is a highly interdisciplinary science, overlapping with, for instance, physics, biology and chemistry. In today's software-intensive work environments, geoscientists often encounter new open-source software from scientific fields that are only remotely related to the own field of expertise. We show how web-mining techniques can help to carry out systematic discovery and evaluation of such software. In a first step, we downloaded ~500 abstracts (each consisting of ~1 kb UTF-8 text) from agu-fm12.abstractcentral.com. This web site hosts the abstracts of all publications presented at AGU Fall Meeting 2012, the world's largest annual geology/geophysics conference. All abstracts belonged to the category "Earth and Space Science Informatics", an interdisciplinary label cross-cutting many disciplines such as "deep biosphere", "atmospheric research", and "mineral physics". Each publication was represented by a highly structured record with ~20 short data attributes, the largest authorship-record being the unstructured "abstract" field. We processed texts of the abstracts with the statistics software "R" to calculate a corpus and a term-document matrix. Using R package "tm", we applied text-mining techniques to filter data and develop hypotheses about software-development activities happening in various geology/geophysics fields. Analyzing the term-document matrix with basic techniques (e.g., word frequencies, co-occurences, weighting) as well as more complex methods (clustering, classification) several key pieces of information were extracted. For example, text-mining can be used to identify scientists who are also developers of open-source scientific software, and the names of their programming projects and codes can also be identified. In a second step, based on the intermediate results found by processing the conference-abstracts, any new hypotheses can be tested in another webmining subproject: by merging the dataset with open data from github.com and stackoverflow.com. These popular, developer-centric websites have powerful application-programmer interfaces, and follow an open-data policy. In this regard, these sites offer a web-accessible reservoir of information that can be tapped to study questions such as: which open source software projects are eminent in the various geoscience fields? What are the most popular programming languages? How are they trending? Are there any interesting temporal patterns in committer activities? How large are programming teams and how do they change over time? What free software packages exist in the vast realms of related fields? Does the software from these fields have capabilities that might still be useful to me as a researcher, or can help me perform my work better? Are there any open-source projects that might be commercially interesting? This evaluation strategy reveals programming projects that tend to be new. As many important legacy codes are not hosted on open-source code-repositories, the presented search method might overlook some older projects.
Using convolutional neural networks to estimate time-of-flight from PET detector waveforms
NASA Astrophysics Data System (ADS)
Berg, Eric; Cherry, Simon R.
2018-01-01
Although there have been impressive strides in detector development for time-of-flight positron emission tomography, most detectors still make use of simple signal processing methods to extract the time-of-flight information from the detector signals. In most cases, the timing pick-off for each waveform is computed using leading edge discrimination or constant fraction discrimination, as these were historically easily implemented with analog pulse processing electronics. However, now with the availability of fast waveform digitizers, there is opportunity to make use of more of the timing information contained in the coincident detector waveforms with advanced signal processing techniques. Here we describe the application of deep convolutional neural networks (CNNs), a type of machine learning, to estimate time-of-flight directly from the pair of digitized detector waveforms for a coincident event. One of the key features of this approach is the simplicity in obtaining ground-truth-labeled data needed to train the CNN: the true time-of-flight is determined from the difference in path length between the positron emission and each of the coincident detectors, which can be easily controlled experimentally. The experimental setup used here made use of two photomultiplier tube-based scintillation detectors, and a point source, stepped in 5 mm increments over a 15 cm range between the two detectors. The detector waveforms were digitized at 10 GS s-1 using a bench-top oscilloscope. The results shown here demonstrate that CNN-based time-of-flight estimation improves timing resolution by 20% compared to leading edge discrimination (231 ps versus 185 ps), and 23% compared to constant fraction discrimination (242 ps versus 185 ps). By comparing several different CNN architectures, we also showed that CNN depth (number of convolutional and fully connected layers) had the largest impact on timing resolution, while the exact network parameters, such as convolutional filter size and number of feature maps, had only a minor influence.
Knight, Vickie; Guy, Rebecca J; Handan, Wand; Lu, Heng; McNulty, Anna
2014-06-01
In 2010, we introduced an express sexually transmitted infection/HIV testing service at a large metropolitan sexual health clinic, which significantly increased clinical service capacity. However, it also increased reception staff workload and caused backlogs of patients waiting to register or check in for appointments. We therefore implemented a new electronic self-registration and appointment self-arrival system in March 2012 to increase administrative efficiency and reduce waiting time for patients. We compared the median processing time overall and for each step of the registration and arrival process as well as the completeness of patient contact information recorded, in a 1-week period before and after the redesign of the registration system. χ2 Test and rank sum tests were used. Before the redesign, the median processing time was 8.33 minutes (interquartile range [IQR], 6.82-15.43), decreasing by 30% to 5.83 minutes (IQR, 4.75-7.42) when the new electronic self-registration and appointment self-arrival system was introduced (P < 0.001). The largest gain in efficiency was in the time taken to prepare the medical record for the clinician, reducing from a median of 5.31 minutes (IQR, 4.02-8.29) to 0.57 minutes (IQR, 0.38-1) in the 2 periods. Before implementation, 20% of patients provided a postal address and 31% an e-mail address, increasing to 60% and 70% post redesign, respectively (P < 0.001). Our evaluation shows that an electronic patient self-registration and appointment self-arrival system can improve clinic efficiency and save patient time. Systems like this one could be used by any outpatient service with large patient volumes as an integrated part of the electronic patient management system or as a standalone feature.
Glossary Precipitation Frequency Data Server GIS Grids Maps Time Series Temporals Documents Probable provides a measure of the average time between years (and not events) in which a particular value is RECCURENCE INTERVAL). ANNUAL MAXIMUM SERIES (AMS) - Time series of the largest precipitation amounts in a
ERIC Educational Resources Information Center
Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.
2009-01-01
Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…
Petascale turbulence simulation using a highly parallel fast multipole method on GPUs
NASA Astrophysics Data System (ADS)
Yokota, Rio; Barba, L. A.; Narumi, Tetsu; Yasuoka, Kenji
2013-03-01
This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop/s on GPU hardware using single precision. The simulations use a vortex particle method to solve the Navier-Stokes equations, with a highly parallel fast multipole method (FMM) as numerical engine, and match the current record in mesh size for this application, a cube of 40963 computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the FFT algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the FMM-based vortex method achieving 74% parallel efficiency on 4096 processes (one GPU per MPI process, 3 GPUs per node of the TSUBAME-2.0 system). The FFT-based spectral method is able to achieve just 14% parallel efficiency on the same number of MPI processes (using only CPU cores), due to the all-to-all communication pattern of the FFT algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.
NASA Astrophysics Data System (ADS)
Sobolev, Stephan V.; Muldashev, Iskander A.
2017-12-01
Subduction is substantially multiscale process where the stresses are built by long-term tectonic motions, modified by sudden jerky deformations during earthquakes, and then restored by following multiple relaxation processes. Here we develop a cross-scale thermomechanical model aimed to simulate the subduction process from 1 min to million years' time scale. The model employs elasticity, nonlinear transient viscous rheology, and rate-and-state friction. It generates spontaneous earthquake sequences and by using an adaptive time step algorithm, recreates the deformation process as observed naturally during the seismic cycle and multiple seismic cycles. The model predicts that viscosity in the mantle wedge drops by more than three orders of magnitude during the great earthquake with a magnitude above 9. As a result, the surface velocities just an hour or day after the earthquake are controlled by viscoelastic relaxation in the several hundred km of mantle landward of the trench and not by the afterslip localized at the fault as is currently believed. Our model replicates centuries-long seismic cycles exhibited by the greatest earthquakes and is consistent with the postseismic surface displacements recorded after the Great Tohoku Earthquake. We demonstrate that there is no contradiction between extremely low mechanical coupling at the subduction megathrust in South Chile inferred from long-term geodynamic models and appearance of the largest earthquakes, like the Great Chile 1960 Earthquake.
Lim, Jongil; Palmer, Christopher J; Busa, Michael A; Amado, Avelino; Rosado, Luis D; Ducharme, Scott W; Simon, Darnell; Van Emmerik, Richard E A
2017-06-01
The pickup of visual information is critical for controlling movement and maintaining situational awareness in dangerous situations. Altered coordination while wearing protective equipment may impact the likelihood of injury or death. This investigation examined the consequences of load magnitude and distribution on situational awareness, segmental coordination and head gaze in several protective equipment ensembles. Twelve soldiers stepped down onto force plates and were instructed to quickly and accurately identify visual information while establishing marksmanship posture in protective equipment. Time to discriminate visual information was extended when additional pack and helmet loads were added, with the small increase in helmet load having the largest effect. Greater head-leading and in-phase trunk-head coordination were found with lighter pack loads, while trunk-leading coordination increased and head gaze dynamics were more disrupted in heavier pack loads. Additional armour load in the vest had no consequences for Time to discriminate, coordination or head dynamics. This suggests that the addition of head borne load be carefully considered when integrating new technology and that up-armouring does not necessarily have negative consequences for marksmanship performance. Practitioner Summary: Understanding the trade-space between protection and reductions in task performance continue to challenge those developing personal protective equipment. These methods provide an approach that can help optimise equipment design and loading techniques by quantifying changes in task performance and the emergent coordination dynamics that underlie that performance.
Yashima, Kenta; Sasaki, Akira
2016-01-01
How can we identify the epidemiologically high-risk communities in a metapopulation network? The network centrality measure, which quantifies the relative importance of each location, is commonly utilized for this purpose. As the disease invasion condition is given from the basic reproductive ratio R0, we have introduced a novel centrality measure based on the sensitivity analysis of this R0 and shown its capability of revealing the characteristics that has been overlooked by the conventional centrality measures. The epidemic dynamics over the commute network of the Tokyo metropolitan area is theoretically analyzed by using this centrality measure. We found that, the impact of countermeasures at the largest station is more than 1,000 times stronger compare to that at the second largest station, even though the population sizes are only around 1.5 times larger. Furthermore, the effect of countermeasures at every station is strongly dependent on the existence and the number of commuters to this largest station. It is well known that the hubs are the most influential nodes, however, our analysis shows that only the largest among the network plays an extraordinary role. Lastly, we also found that, the location that is important for the prevention of disease invasion does not necessarily match the location that is important for reducing the number of infected. PMID:27607239
Wang, Yanfeng; Chen, Wei; Chen, Xiao; Feng, Huajun; Shen, Dongsheng; Huang, Bin; Jia, Yufeng; Zhou, Yuyang; Liang, Yuxiang
2018-03-01
CdS/MoS 2 , an extremely efficient photocatalyst, has been extensively used in hydrogen photoproduction and pollutant degradation. CdS/MoS 2 can be synthesized by a facile one-step hydrothermal process. However, the effect of the sulfur source on the synthesis of CdS/MoS 2 via one-step hydrothermal methods has seldom been investigated. We report herein a series of one-step hydrothermal preparations of CdS/MoS 2 using three different sulfur sources: thioacetamide, l-cysteine, and thiourea. The results revealed that the sulfur source strongly affected the crystallization, morphology, elemental composition and ultraviolet (UV)-visible-light-absorption ability of the CdS/MoS 2 . Among the investigated sulfur sources, thioacetamide provided the highest visible-light absorption ability for CdS/MoS 2 , with the smallest average particle size and largest surface area, resulting in the highest efficiency in Methylene Blue (MB) degradation. The photocatalytic activity of CdS/MoS 2 synthesized from the three sulfur sources can be arranged in the following order: thioacetamide>l-cysteine>thiourea. The reaction rate constants (k) for thioacetamide, l-cysteine, and thiourea were estimated to be 0.0197, 0.0140, and 0.0084min -1 , respectively. However, thioacetamide may be limited in practical application in terms of its price and toxicity, while l-cysteine is relatively economical, less toxic and exhibited good photocatalytic degradation performance toward MB. Copyright © 2017. Published by Elsevier B.V.
Community-Oriented Primary Care in Action: A Dallas Story
Pickens, Sue; Boumbulian, Paul; Anderson, Ron J.; Ross, Samuel; Phillips, Sharon
2002-01-01
Dallas County, Texas, is the site of the largest urban application of the community-oriented primary care (COPC) model in the United States. We summarize the development and implementation of Dallas’s Parkland Health & Hospital System COPC program. The complexities of implementing and managing this comprehensive community-based program are delineated in terms of Dallas County’s political environment and the components of COPC (assessment, prioritization, community collaboration, health care system, evaluation, and financing). Steps to be taken to ensure the future growth and development of the Dallas program are also considered. The COPC model, as implemented by Parkland, is replicable in other urban areas. PMID:12406794
High-rise housing in the city of Samara: the first steps on the path to sustainable development
NASA Astrophysics Data System (ADS)
Vavilova, Tatiana Ya.; Makeeva, Elena Yu.
2018-03-01
This paper outlines theoretical background of high-rise housing and discusses its design experience. It particularly focuses on environmental, social and economic aspects which are among crucial sustainable development issues. The authors dwell upon the implementation of innovative solutions that meet principles and goals of sustainable development and take construction objects built in Samara (which is one of the largest metropolises in Russia) as an example. The research also investigates the quality of project designs and reveals techniques corresponding to the "green standards". It considers the issues of practicing high-rise building construction in specific urban conditions and identifies unresolved architectural problems.
The 2002 Denali fault earthquake, Alaska: A large magnitude, slip-partitioned event
Eberhart-Phillips, D.; Haeussler, Peter J.; Freymueller, J.T.; Frankel, A.D.; Rubin, C.M.; Craw, P.; Ratchkovski, N.A.; Anderson, G.; Carver, G.A.; Crone, A.J.; Dawson, T.E.; Fletcher, H.; Hansen, R.; Harp, E.L.; Harris, R.A.; Hill, D.P.; Hreinsdottir, S.; Jibson, R.W.; Jones, L.M.; Kayen, R.; Keefer, D.K.; Larsen, C.F.; Moran, S.C.; Personius, S.F.; Plafker, G.; Sherrod, B.; Sieh, K.; Sitar, N.; Wallace, W.K.
2003-01-01
The MW (moment magnitude) 7.9 Denali fault earthquake on 3 November 2002 was associated with 340 kilometers of surface rupture and was the largest strike-slip earthquake in North America in almost 150 years. It illuminates earthquake mechanics and hazards of large strike-slip faults. It began with thrusting on the previously unrecognized Susitna Glacier fault, continued with right-slip on the Denali fault, then took a right step and continued with right-slip on the Totschunda fault. There is good correlation between geologically observed and geophysically inferred moment release. The earthquake produced unusually strong distal effects in the rupture propagation direction, including triggered seismicity.
Exploring the Health Needs of Aging LGBT Adults in the Cape Fear Region of North Carolina.
Rowan, Noell L; Beyer, Kelsey
2017-01-01
This study explored issues of culturally sensitive healthcare practice and needs among lesbian, gay, bisexual and transgender aging adults in coastal North Carolina. Survey data results indicated the largest problem was a history of verbally harassment and need for culturally sensitive healthcare. In conclusion, culturally sensitive interventions are needed to address the health disparities and unique needs of LGBT aging adults. Cultural sensitivity training for service providers is suggested as a vital step in addressing health disparities of aging LGBT adults. Implications for research include further exploration of health related needs of these often hidden and underserved population groups.
The constant displacement scheme for tracking particles in heterogeneous aquifers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wen, X.H.; Gomez-Hernandez, J.J.
1996-01-01
Simulation of mass transport by particle tracking or random walk in highly heterogeneous media may be inefficient from a computational point of view if the traditional constant time step scheme is used. A new scheme which adjusts automatically the time step for each particle according to the local pore velocity, so that each particle always travels a constant distance, is shown to be computationally faster for the same degree of accuracy than the constant time step method. Using the constant displacement scheme, transport calculations in a 2-D aquifer model, with nature log-transmissivity variance of 4, can be 8.6 times fastermore » than using the constant time step scheme.« less
Role of step size and max dwell time in anatomy based inverse optimization for prostate implants
Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha
2013-01-01
In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323
Warner, Cody; Conley, Timothy; Murphy, Riley
2018-04-01
Many prisoners rationalise criminal behaviour, and this type of thinking has been linked to recidivism. Correctional programmes for modifying criminal thinking can reshape how offenders view themselves and their circumstances. Our aim was to test whether participation in a cognitive-based curriculum called Steps to Economic and Personal Success (STEPS) was associated with changes in criminal thinking. The STEPS curriculum is delivered in 15 video-based facilitated classes. A pre-intervention/post-intervention survey design was applied to 128 adult male prisoners who completed the programme. Criminal thinking was measured by the Texas Christian University Criminal Thinking Scale, a self-report instrument with the six domains: entitlement, justification, power orientation, cold heartedness, criminal rationalisation and personal irresponsibility. Participants had lower scores in most of the criminal thinking domains after the intervention than before, with largest reductions in justification and power orientation. Findings provide evidence that attitudes to crime can be changed in a correctional setting, and the programme under study shows promise as an effective intervention for changing these attitudes among prisoners. Future research should build on these findings to examine whether and how such changes are related to desistance from offending behaviours. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
Slauson, Stephen R; Pemberton, Ryan; Ghosh, Partha; Tantillo, Dean J; Aubé, Jeffrey
2015-05-15
The development of the domino reaction between an aminoethyl-substituted diene and maleic anhydride to afford an N-substituted octahydroisoquinolin-1-one is described. A typical procedure involves the treatment of a 1-aminoethyl-substituted butadiene with maleic anhydride at 0 °C to room temperature for 20 min under low-solvent conditions, which affords a series of isoquinolinone carboxylic acids in moderate to excellent yields. NMR monitoring suggested that the reaction proceeded via an initial acylation step followed by an intramolecular Diels-Alder reaction. For the latter step, a significant rate difference was observed depending on whether the amino group was substituted by a phenyl or an alkyl (usually benzyl) substituent, with the former noted by NMR to be substantially slower. The Diels-Alder step was studied by density functional theory (DFT) methods, leading to the conclusion that the degree of preorganization in the starting acylated intermediate had the largest effect on the reaction barriers. In addition, the effect of electronics on the aromatic ring in N-phenyl substrates was studied computationally and experimentally. Overall, this protocol proved considerably more amenable to scale up compared to earlier methods by eliminating the requirement of microwave batch chemistry for this reaction as well as significantly reducing the quantity of solvent.
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2005-01-01
A simplified kinetic scheme for Jet-A, and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes. The two time step method is either an initial time averaged value (step one) or an instantaneous value (step two). The switch is based on the water concentration in moles/cc of 1x10(exp -20). The results presented here results in a correlation that gives the chemical kinetic time as two separate functions. This two time step method is used as opposed to a one step time averaged method previously developed to determine the chemical kinetic time with increased accuracy. The first time averaged step is used at the initial times for smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, initial water to fuel mass ratio, temperature, and pressure. The second instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentration, pressure and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx are obtained for Jet-A fuel and methane with and without water injection to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water to fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering the turbine (T4) was also correlated as a function of the initial combustor temperature (T3), equivalence ratio, water to fuel mass ratio, and pressure.
Various problems in lunar habitat construction scenarios
NASA Astrophysics Data System (ADS)
Nitta, Keiji; Ohtsubo, Koji; Oguchi, Mitsuo; Ohya, Haruhiko; Kanbe, Seiichiro; Ashida, Akira; Sano, Kenichi
1991-10-01
Many papers describing the lunar base construction have been published previously. Lunar base has been considered to be a useful facility to conduct future scientific programs and to get new nuclear energy resource, namely 3He, for defending the environmental collapse on Earth and also to develop lunar resources such as oxygen and nitrogen for extending human activities in space more economically. The scale of the lunar base and the construction methods adopted are determined by the scenario of a lunar utilization program but constrained by the availability of the established space transportation technologies. As indicated in the scenarios described in papers regarding lunar base construction, the first steps of lunar missions are the investigation of lunar itself for conducting scientific research and for surveying the lunar base construction sites, the second steps are the outpost construction for conducting man-tended missions, for more precise scientific research and studying the lunar base construction methods, and third steps are the construction of a permanent base and the expansion of this lunar base for exploiting lunar resources. The missions within the first and second steps are all possible using the ferry (OTV) similar to the service and command modules of Apollo Spacecraft because all necessary weights to be landed on the lunar surface for these missions seem to be under the equivalent weight of the Apollo Lunar Lander. On the other hand, the permanent facilities constructed on the lunar surface in the third step requires larger quantities of construction materials to be transported from Earth, and a new ferry (advanced OTV) having higher transportation ability, at least above 6 times, compared with Apollo Service and Command Modules, are to be developed. The largest problems in the permament lunar base construction are related to the food production facilities, 30-40 m 2 plant cultivation area per person are required for providing the nutrition requirement and the necessary electric power per person for producing high energy foods, such as wheat, rice and potato, are now estimated ranging from 30 to 40 kW. The extension program of crew numbers under the limitation of usable transportation capability anticipated at present and the construction scenarios, including the numbers of facilities to be constructed every year, are to be determined based upon the requirements of plant cultivation area and of electric power for producing necessary and sufficient foods in order to accelerate the feasibility studies of each subsystem to be installed in the permanent lunar base in future.
Time trends in recurrence of juvenile nasopharyngeal angiofibroma: Experience of the past 4 decades.
Mishra, Anupam; Mishra, Subhash Chandra
2016-01-01
An analysis of time distribution of juvenile nasopharyngeal angiofibroma (JNA) from the last 4 decades is presented. Sixty recurrences were analyzed as per actuarial survival. SPSS software was used to generate Kaplan-Meier (KM) curves and time distributions were compared by Log-rank, Breslow and Tarone-Ware test. The overall recurrence rate was 17.59%. Majority underwent open transpalatal approach(es) without embolization. The probability of detecting a recurrence was 95% in first 24months and comparison of KM curves of 4 different time periods was not significant. This is the first and largest series to address the time-distribution. The required follow up period is 2years. Our recurrence is just half of the largest series (reported so far) suggesting the superiority of transpalatal techniques. The similarity of curves suggests less likelihood for recent technical advances to influence the recurrence that as per our hypothesis is more likely to reflect tumor biology per se. Copyright © 2016 Elsevier Inc. All rights reserved.
Jets or vortices - what flows are generated by an inverse turbulent cascade?
NASA Astrophysics Data System (ADS)
Frishman, Anna; Laurie, Jason; Falkovich, Gregory
An inverse cascade-energy transfer to progressively larger scales - is a salient feature of two-dimensional turbulence. If the cascade reaches the system scale, it creates a coherent flow expected to have the largest available scale and conform with the symmetries of the domain. In a doubly periodic rectangle, the mean flow with zero total momentum was therefore believed to be unidirectional, with two jets along the short side; while for an aspect ratio close to unity, a vortex dipole was expected. Using direct numerical simulations, we show that in fact neither the box symmetry is respected nor the largest scale is realized: the flow is never purely unidirectional since the inverse cascade produces coherent vortices, whose number and relative motion are determined by the aspect ratio. This spontaneous symmetry breaking is closely related to the hierarchy of averaging times. Long-time averaging restores translational invariance due to vortex wandering along one direction, and gives jets whose profile, however, can be deduced neither from the largest-available-scale argument, nor from the often employed maximum-entropy principle or quasi-linear approximation.
Jets or vortices—What flows are generated by an inverse turbulent cascade?
NASA Astrophysics Data System (ADS)
Frishman, Anna; Laurie, Jason; Falkovich, Gregory
2017-03-01
An inverse cascade, energy transfer to progressively larger scales, is a salient feature of two-dimensional turbulence. If the cascade reaches the system scale, it creates a coherent flow expected to have the largest available scale and conform with the symmetries of the domain. In a doubly periodic rectangle, the mean flow with zero total momentum was therefore believed to be unidirectional, with two jets along the short side; while for an aspect ratio close to unity, a vortex dipole is expected. Using direct numerical simulations, we show that in fact neither is the box symmetry respected nor the largest scale realized: the flow is never purely unidirectional since the inverse cascade produces coherent vortices, whose number and relative motion are determined by the aspect ratio. This spontaneous symmetry breaking is closely related to the hierarchy of averaging times. Long-time averaging restores translational invariance due to vortex wandering along one direction, and gives jets whose profile, however, can neither be deduced from the largest-available-scale argument, nor from the often employed maximum-entropy principle or quasilinear approximation.
Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul
2013-07-21
Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format.
Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul
2013-01-01
Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format. PMID:23685876
Rapee, Ronald M; Lyneham, Heidi J; Wuthrich, Viviana; Chatterton, Mary Lou; Hudson, Jennifer L; Kangas, Maria; Mihalopoulos, Cathrine
2017-10-01
Stepped care is embraced as an ideal model of service delivery but is minimally evaluated. The aim of this study was to evaluate the efficacy of cognitive-behavioral therapy (CBT) for child anxiety delivered via a stepped-care framework compared against a single, empirically validated program. A total of 281 youth with anxiety disorders (6-17 years of age) were randomly allocated to receive either empirically validated treatment or stepped care involving the following: (1) low intensity; (2) standard CBT; and (3) individually tailored treatment. Therapist qualifications increased at each step. Interventions did not differ significantly on any outcome measures. Total therapist time per child was significantly shorter to deliver stepped care (774 minutes) compared with best practice (897 minutes). Within stepped care, the first 2 steps returned the strongest treatment gains. Stepped care and a single empirically validated program for youth with anxiety produced similar efficacy, but stepped care required slightly less therapist time. Restricting stepped care to only steps 1 and 2 would have led to considerable time saving with modest loss in efficacy. Clinical trial registration information-A Randomised Controlled Trial of Standard Care Versus Stepped Care for Children and Adolescents With Anxiety Disorders; http://anzctr.org.au/; ACTRN12612000351819. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Hurd, William J.; Estabrook, Polly; Racho, Caroline S.; Satorius, Edgar H.
2002-01-01
For planetary lander missions, the most challenging phase of the spacecraft to ground communications is during the entry, descent, and landing (EDL). As each 2003 Mars Exploration Rover (MER) enters the Martian atmosphere, it slows dramatically. The extreme acceleration and jerk cause extreme Doppler dynamics on the X-band signal received on Earth. When the vehicle slows sufficiently, the parachute is deployed, causing almost a step in deceleration. After parachute deployment, the lander is lowered beneath the parachute on a bridle. The swinging motion of the lander imparts high Doppler dynamics on the signal and causes the received signal strength to vary widely, due to changing antenna pointing angles. All this time, the vehicle transmits important health and status information that is especially critical if the landing is not successful. Even using the largest Deep Space Network antennas, the weak signal and high dynamics render it impossible to conduct reliable phase coherent communications. Therefore, a specialized form of frequency-shift-keying will be used. This paper describes the EDL scenario, the signal conditions, the methods used to detect and frequency-track the carrier and to detect the data modulation, and the resulting performance estimates.
NASA Astrophysics Data System (ADS)
Spurzem, R.; Berczik, P.; Zhong, S.; Nitadori, K.; Hamada, T.; Berentzen, I.; Veles, A.
2012-07-01
Astrophysical Computer Simulations of Dense Star Clusters in Galactic Nuclei with Supermassive Black Holes are presented using new cost-efficient supercomputers in China accelerated by graphical processing cards (GPU). We use large high-accuracy direct N-body simulations with Hermite scheme and block-time steps, parallelised across a large number of nodes on the large scale and across many GPU thread processors on each node on the small scale. A sustained performance of more than 350 Tflop/s for a science run on using simultaneously 1600 Fermi C2050 GPUs is reached; a detailed performance model is presented and studies for the largest GPU clusters in China with up to Petaflop/s performance and 7000 Fermi GPU cards. In our case study we look at two supermassive black holes with equal and unequal masses embedded in a dense stellar cluster in a galactic nucleus. The hardening processes due to interactions between black holes and stars, effects of rotation in the stellar system and relativistic forces between the black holes are simultaneously taken into account. The simulation stops at the complete relativistic merger of the black holes.
NASA Astrophysics Data System (ADS)
Yang, Xiaoqing; Li, Chengfei; Fu, Ruowen
2016-07-01
As one of the most potential electrode materials for supercapacitors, nitrogen-enriched nanocarbons are still facing challenge of constructing developed mesoporosity for rapid mass transportation and tailoring their pore size for performance optimization and expanding their application scopes. Herein we develop a series of nitrogen-enriched mesoporous carbon (NMC) with extremely high mesoporosity and tunable mesopore size by a two-step method using silica gel as template. In our approach, mesopore size can be easily tailored from 4.7 to 35 nm by increasing the HF/TEOS volume ratio from 1/100 to 1/4. The NMC with mesopores of 6.2 nm presents the largest mesopore volume, surface area and mesopore ratio of 2.56 cm3 g-1, 1003 m2 g-1 and 97.7%, respectively. As a result, the highest specific capacitance of 325 F g-1 can be obtained at the current density of 0.1 A g-1, which can stay over 88% (286 F g-1) as the current density increases by 100 times (10 A g-1). This approach may open the doors for preparation of nitrogen-enriched nanocarbons with desired nanostructure for numerous applications.
Astrometry and early astrophysics at Kuffner Observatory in the late 19th century
NASA Astrophysics Data System (ADS)
Habison, Peter
The astronomer and mathematician Norbert Herz encouraged Moriz von Kuffner, owner of the beer brewery in Ottakring, to finance a private scientific observatory in the western parts of Vienna. In the years 1884-87 the Kuffner Observatory was built at the Gallitzinberg in Wien-Ottakring. It was an example of enlighted patronage and noted at the time for its rapid acquisition of new instruments and by increasing international recognition. It contained the largest heliometer in the world and the largest meridian circle in the Austrian-Hungarian Empire. Of the many scientists who worked here we mention Leo de Ball, Gustav Eberhard, Johannes Hartmann and we should not forget Karl Schwarzschild. Here in Vienna he published papers on celestial mechanics, measuring techniques, optics and his fundamental papers concerning photographic photometry, in particular the quantitative determination of the departure of the reciprocity law. The telescope and the associated camera with which he carried out his measurements are still in existence at the observatory. The observatory houses important astronomical instruments from the 19th century. All telescopes were made by Repsold und Söhne in Hamburg, and Steinheil in Munich. These two German companies were best renowned for quality and precision in high standard astronomical instruments. The Great Refractor (270/3500 mm) is still the third largest refractor in Austria. It was installed at the observatory in 1886 and was used together with the Schwarzschild Refractor for early astrophysical work including photography. It is this double refractor, where Schwarzschild carried out his measurements on photographic photometry. The Meridian Circle (132/1500 mm) was the largest meridian passage instrument of the Austro-Hungarian Empire. Today it is the largest meridian circle in Austria and still one of the largest in Europe. The telescope is equipped with one of the first impersonal micrometers of that time. First observations were carried out in 1886, followed by an international program called the ``Zonenunternehmen der Astronomischen Gesellschaft''. During this program 8468 stars were measured at the meridian circle. The Vertical Circle (81/1200 mm) was used as an auxiliary instrument for the meridian circle and for measuring polar motion. It is a rare instrument and only very few are still in existence at European observatories. Originally the Heliometer (217/3000 mm) was an instrument for measuring very small distances at the celestial sphere. Of this type of instrument, the Vienna heliometer was the largest in the world. It was installed at the observatory in 1896 and was mainly used for measuring the trigonometric parallaxes of the stars. Of 108 known parallaxes in 1910, 16 stars were measured at Kuffner Observatory at that time.
Kahleova, Hana; Lloren, Jan Irene; Mashchak, Andrew; Hill, Martin; Fraser, Gary E
2017-09-01
Background: Scientific evidence for the optimal number, timing, and size of meals is lacking. Objective: We investigated the relation between meal frequency and timing and changes in body mass index (BMI) in the Adventist Health Study 2 (AHS-2), a relatively healthy North American cohort. Methods: The analysis used data from 50,660 adult members aged ≥30 y of Seventh-day Adventist churches in the United States and Canada (mean ± SD follow-up: 7.42 ± 1.23 y). The number of meals per day, length of overnight fast, consumption of breakfast, and timing of the largest meal were exposure variables. The primary outcome was change in BMI per year. Linear regression analyses (stratified on baseline BMI) were adjusted for important demographic and lifestyle factors. Results: Subjects who ate 1 or 2 meals/d had a reduction in BMI per year (in kg · m -2 · y -1 ) (-0.035; 95% CI: -0.065, -0.004 and -0.029; 95% CI: -0.041, -0.017, respectively) compared with those who ate 3 meals/d. On the other hand, eating >3 meals/d (snacking) was associated with a relative increase in BMI ( P < 0.001). Correspondingly, the BMI of subjects who had a long overnight fast (≥18 h) decreased compared with those who had a medium overnight fast (12-17 h) ( P < 0.001). Breakfast eaters (-0.029; 95% CI: -0.047, -0.012; P < 0.001) experienced a decreased BMI compared with breakfast skippers. Relative to subjects who ate their largest meal at dinner, those who consumed breakfast as the largest meal experienced a significant decrease in BMI (-0.038; 95% CI: -0.048, -0.028), and those who consumed a big lunch experienced a smaller but still significant decrease in BMI than did those who ate their largest meal at dinner. Conclusions: Our results suggest that in relatively healthy adults, eating less frequently, no snacking, consuming breakfast, and eating the largest meal in the morning may be effective methods for preventing long-term weight gain. Eating breakfast and lunch 5-6 h apart and making the overnight fast last 18-19 h may be a useful practical strategy. © 2017 American Society for Nutrition.
2007-05-01
course, lessons of one situation do not necessarily apply to another and the chapter will begin with a discussion why this is so. The paper will conclude...be offered why the lessons of the former period apply to the later or not. From both periods of time, several observations regarding the use of...Kennedy authorized the largest tax cut in history. Combined with one of the largest increases in military spending to counter threats from
Stewart, Gregory B; Shields, Brenda J; Fields, Sarah; Comstock, R Dawn; Smith, Gary A
2009-08-01
Describe the association of consumer products and activities with dental injuries among children 0-17 years of age treated in United States emergency departments. A retrospective analysis of data from the National Electronic Injury Surveillance System, 1990-2003. There was an average of 22 000 dental injuries annually among children <18 years of age during the study period, representing an average annual rate of 31.6 dental injuries per 100 000 population. Children with primary dentition (<7 years) sustained over half of the dental injuries recorded, and products/activities associated with home structures/furniture were the leading contributors. Floors, steps, tables, and beds were the consumer products within the home most associated with dental injuries. Outdoor recreational products/activities were associated with the largest number of dental injuries among children with mixed dentition (7-12 years); almost half of these were associated with the bicycle, which was the consumer product associated with the largest number of dental injuries. Among children with permanent teeth (13- to 17-year olds), sports-related products/activities were associated with the highest number of dental injuries. Of all sports, baseball and basketball were associated with the largest number of dental injuries. To our knowledge, this is the first study to evaluate dental injuries among children using a national sample. We identified the leading consumer products/activities associated with dental injuries to children with primary, mixed, and permanent dentition. Knowledge of these consumer products/activities allows for more focused and effective prevention strategies.
Evaluation of Industrial Compensation to Cardiologists in 2015.
Khan, Muhammad Shahzeb; Siddiqi, Tariq Jamal; Fatima, Kaneez; Riaz, Haris; Khosa, Faisal; Manning, Warren J; Krasuski, Richard
2017-12-15
The categorization and characterization of pharmaceutical and device manufacturers or group purchasing organization payments to clinicians is an important step toward assessing conflicts of interest and the potential impact of these payments on practice patterns. Payments have not previously been compared among the subspecialties of cardiology. This is a retrospective analysis of the Open Payments database, including all installments and payments made to doctors in the calendar year 2015 by pharmaceutical and device manufacturers or group purchasing organization. Total payments to individual physicians were then aggregated based on specialty, geographic region, and payment type. The Gini Index was further employed to calculate within each specialty to measure income disparity. In 2015, a total of $166,089,335 was paid in 943,744 payments (average $175.00 per payment) to cardiologists, including 23,372 general cardiologists, 7,530 interventional cardiologists, and 2,293 cardiac electro-physiologists. Payments were mal-distributed across the 3 subspecialties of cardiology (p <0.01), with general cardiology receiving the largest number (73.5%) and total payments (62.6%) and cardiac electrophysiologists receiving significantly higher median payments ($1,662 vs $361 for all cardiologists; p <0.01). The Medtronic Company was the largest single payer for all 3 subspecialties. In conclusion, pharmaceutical and device manufacturers or group purchasing organizations continue to make substantial payments to cardiac practitioners with a significant variation in payments made to different cardiology subspecialists. The largest number and total payments are to general cardiologists, whereas the highest median payments are made to cardiac electrophysiologists. The impact of these payments on practice patterns remains to be examined. Copyright © 2017 Elsevier Inc. All rights reserved.
Chi, Felicia W; Sterling, Stacy; Campbell, Cynthia I; Weisner, Constance
2013-01-01
This study examines the associations between 12-step participation and outcomes over 7 years among 419 adolescent substance use patients with and without psychiatric comorbidities. Although level of participation decreased over time for both groups, comorbid adolescents participated in 12-step groups at comparable or higher levels across time points. Results from mixed-effects logistic regression models indicated that for both groups, 12-step participation was associated with both alcohol and drug abstinence at follow-ups, increasing the likelihood of either by at least 3 times. Findings highlight the potential benefits of 12-step participation in maintaining long-term recovery for adolescents with and without psychiatric disorders.
Software engineering principles applied to large healthcare information systems--a case report.
Nardon, Fabiane Bizinella; de A Moura, Lincoln
2007-01-01
São Paulo is the largest city in Brazil and one of the largest cities in the world. In 2004, São Paulo City Department of Health decided to implement a Healthcare Information System to support managing healthcare services and provide an ambulatory health record. The resulting information system is one of the largest public healthcare information systems ever built, with more than 2 million lines of code. Although statistics shows that most software projects fail, and the risks for the São Paulo initiative were enormous, the information system was completed on-time and on-budget. In this paper, we discuss the software engineering principles adopted that allowed to accomplish that project's goals, hoping that sharing the experience of this project will help other healthcare information systems initiatives to succeed.
Melzer, I; Krasovsky, T; Oddsson, L I E; Liebermann, D G
2010-12-01
This study investigated the force-time relationship during the push-off stage of a rapid voluntary step in young and older healthy adults, to study the assumption that when balance is lost a quick step may preserve stability. The ability to achieve peak propulsive force within a short time is critical for the performance of such a quick powerful step. We hypothesized that older adults would achieve peak force and power in significantly longer times compared to young people, particularly during the push-off preparatory phase. Fifteen young and 15 older volunteers performed rapid forward steps while standing on a force platform. Absolute anteroposterior and body weight normalized vertical forces during the push-off in the preparation and swing phases were used to determine time to peak and peak force, and step power. Two-way analyses of variance ('Group' [young-older] by 'Phase' [preparation-swing]) were used to assess our hypothesis (P ≤ 0.05). Older people exerted lower peak forces (anteroposterior and vertical) than young adults, but not necessarily lower peak power. More significantly, they showed a longer time to peak force, particularly in the vertical direction during the preparation phase. Older adults generate propulsive forces slowly and reach lower magnitudes, mainly during step preparation. The time to achieve a peak force and power, rather than its actual magnitude, may account for failures in quickly performing a preventive action. Such delay may be associated with the inability to react and recruit muscles quickly. Thus, training elderly to step fast in response to relevant cues may be beneficial in the prevention of falls. Copyright © 2010 Elsevier Ltd. All rights reserved.
Spike-frequency adaptation in the inferior colliculus.
Ingham, Neil J; McAlpine, David
2004-02-01
We investigated spike-frequency adaptation of neurons sensitive to interaural phase disparities (IPDs) in the inferior colliculus (IC) of urethane-anesthetized guinea pigs using a stimulus paradigm designed to exclude the influence of adaptation below the level of binaural integration. The IPD-step stimulus consists of a binaural 3,000-ms tone, in which the first 1,000 ms is held at a neuron's least favorable ("worst") IPD, adapting out monaural components, before being stepped rapidly to a neuron's most favorable ("best") IPD for 300 ms. After some variable interval (1-1,000 ms), IPD is again stepped to the best IPD for 300 ms, before being returned to a neuron's worst IPD for the remainder of the stimulus. Exponential decay functions fitted to the response to best-IPD steps revealed an average adaptation time constant of 52.9 +/- 26.4 ms. Recovery from adaptation to best IPD steps showed an average time constant of 225.5 +/- 210.2 ms. Recovery time constants were not correlated with adaptation time constants. During the recovery period, adaptation to a 2nd best-IPD step followed similar kinetics to adaptation during the 1st best-IPD step. The mean adaptation time constant at stimulus onset (at worst IPD) was 34.8 +/- 19.7 ms, similar to the 38.4 +/- 22.1 ms recorded to contralateral stimulation alone. Individual time constants after stimulus onset were correlated with each other but not with time constants during the best-IPD step. We conclude that such binaurally derived measures of adaptation reflect processes that occur above the level of exclusively monaural pathways, and subsequent to the site of primary binaural interaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Li-Fang; Ou, Chin-Ching; Striebel, Kathryn A.
The goal of this research was to measure Mn dissolution from a thin porous spinel LiMn{sub 2}O{sub 4} electrode by rotating ring-disk collection experiments. The amount of Mn dissolution from the spinel LiMn{sub 2}O{sub 4} electrode under various conditions was detected by potential step chronoamperometry. The concentration of dissolved Mn was found to increase with increasing cycle numbers and elevated temperature. The dissolved Mn was not dependent on disk rotation speed, which indicated that the Mn dissolution from the disk was under reaction control. The in situ monitoring of Mn dissolution from the spinel was carried out under various conditions.more » The ring currents exhibited maxima corresponding to the end-of-charge (EOC) and end-of-discharge (EOD), with the largest peak at EOC. The results suggest that the dissolution of Mn from spinel LiMn{sub 2}O{sub 4} occurs during charge/discharge cycling, especially in a charged state (at >4.1 V) and in a discharged state (at <3.1 V). The largest peak at EOC demonstrated that Mn dissolution took place mainly at the top of charge. At elevated temperatures, the ring cathodic currents were larger due to the increase of Mn dissolution rate.« less
NASA Technical Reports Server (NTRS)
Garcia, Sammy; Homan, Jonathan; Montz, Michael
2016-01-01
NASA is the mission lead for the James Webb Space Telescope (JWST), the next of the “Great Observatories”, scheduled for launch in 2018. It is directly responsible for the integration and test (I&T) program that will culminate in an end-to-end cryo vacuum optical test of the flight telescope and instrument module in Chamber A at NASA Johnson Space Center. Historic Chamber A is the largest thermal vacuum chamber at Johnson Space Center and one of the largest space simulation chambers in the world. Chamber A has undergone a major modernization effort to support the deep cryogenic, vacuum and cleanliness requirements for testing the JWST. This paper describes the steps performed in efforts to convert the existing the 60’s era Liquid Nitrogen System from a forced flow (pumped) process to a natural circulation (thermo-siphon) process. In addition, the paper will describe the dramatic conservation of liquid nitrogen to support the long duration thermal vacuum testing. Lastly, describe the simplistic and effective control system which results in zero to minimal human inputs during steady state conditions.
Life Cycle Assessment for Proton Conducting Ceramics Synthesized by the Sol-Gel Process.
Lee, Soo-Sun; Hong, Tae-Whan
2014-09-16
In this report, the environmental aspects of producing proton conducting ceramics are investigated by means of the environmental Life Cycle Assessment (LCA) method. The proton conducting ceramics BaZr 0.8 Y 0.2 O 3-δ (BZY), BaCe 0.9 Y 0.1 O 2.95 (BCY10), and Sr(Ce 0.9 Zr 0.1 ) 0.95 Yb 0.05 O 3-δ (SCZY) were prepared by the sol-gel process. Their material requirements and environmental emissions were inventoried, and their energy requirements were determined, based on actual production data. This latter point makes the present LCA especially worthy of attention as a preliminary indication of future environmental impact. The analysis was performed according to the recommendations of ISO norms 14040 and obtained using the Gabi 6 software. The performance of the analyzed samples was also compared with each other. The LCA results for these proton conducting ceramics production processes indicated that the marine aquatic ecotoxicity potential (MAETP) made up the largest part, followed by fresh-water aquatic ecotoxicity potential (FAETP) and Human Toxicity Potential (HTP). The largest contribution was from energy consumption during annealing and calcinations steps.
El-Gohary, Mahmoud; Peterson, Daniel; Gera, Geetanjali; Horak, Fay B; Huisinga, Jessie M
2017-07-01
To test the validity of wearable inertial sensors to provide objective measures of postural stepping responses to the push and release clinical test in people with multiple sclerosis. Cross-sectional study. University medical center balance disorder laboratory. Total sample N=73; persons with multiple sclerosis (PwMS) n=52; healthy controls n=21. Stepping latency, time and number of steps required to reach stability, and initial step length were calculated using 3 inertial measurement units placed on participants' lumbar spine and feet. Correlations between inertial sensor measures and measures obtained from the laboratory-based systems were moderate to strong and statistically significant for all variables: time to release (r=.992), latency (r=.655), time to stability (r=.847), time of first heel strike (r=.665), number of steps (r=.825), and first step length (r=.592). Compared with healthy controls, PwMS demonstrated a longer time to stability and required a larger number of steps to reach stability. The instrumented push and release test is a valid measure of postural responses in PwMS and could be used as a clinical outcome measures for patient care decisions or for clinical trials aimed at improving postural control in PwMS. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Resuscitator’s perceptions and time for corrective ventilation steps during neonatal resuscitation☆
Sharma, Vinay; Lakshminrusimha, Satyan; Carrion, Vivien; Mathew, Bobby
2016-01-01
Background The 2010 neonatal resuscitation program (NRP) guidelines incorporate ventilation corrective steps (using the mnemonic – MRSOPA) into the resuscitation algorithm. The perception of neonatal providers, time taken to perform these maneuvers or the effectiveness of these additional steps has not been evaluated. Methods Using two simulated clinical scenarios of varying degrees of cardiovascular compromise –perinatal asphyxia with (i) bradycardia (heart rate – 40 min−1) and (ii) cardiac arrest, 35 NRP certified providers were evaluated for preference to performing these corrective measures, the time taken for performing these steps and time to onset of chest compressions. Results The average time taken to perform ventilation corrective steps (MRSOPA) was 48.9 ± 21.4 s. Providers were less likely to perform corrective steps and proceed directly to endotracheal intubation in the scenario of cardiac arrest as compared to a state of bradycardia. Cardiac compressions were initiated significantly sooner in the scenario of cardiac arrest 89 ± 24 s as compared to severe bradycardia 122 ± 23 s, p < 0.0001. There were no differences in the time taken to initiation of chest compressions between physicians or mid-level care providers or with the level of experience of the provider. Conclusions Effective ventilation of the lungs with corrective steps using a mask is important in most cases of neonatal resuscitation. Neonatal resuscitators prefer early endotracheal intubation and initiation of chest compressions in the presence of asystolic cardiac arrest. Corrective ventilation steps can potentially postpone initiation of chest compressions and may delay return of spontaneous circulation in the presence of severe cardiovascular compromise. PMID:25796996
NASA Astrophysics Data System (ADS)
Han, Rui-Qi; Xie, Wen-Jie; Xiong, Xiong; Zhang, Wei; Zhou, Wei-Xing
The correlation structure of a stock market contains important financial contents, which may change remarkably due to the occurrence of financial crisis. We perform a comparative analysis of the Chinese stock market around the occurrence of the 2008 crisis based on the random matrix analysis of high-frequency stock returns of 1228 Chinese stocks. Both raw correlation matrix and partial correlation matrix with respect to the market index in two time periods of one year are investigated. We find that the Chinese stocks have stronger average correlation and partial correlation in 2008 than in 2007 and the average partial correlation is significantly weaker than the average correlation in each period. Accordingly, the largest eigenvalue of the correlation matrix is remarkably greater than that of the partial correlation matrix in each period. Moreover, each largest eigenvalue and its eigenvector reflect an evident market effect, while other deviating eigenvalues do not. We find no evidence that deviating eigenvalues contain industrial sectorial information. Surprisingly, the eigenvectors of the second largest eigenvalues in 2007 and of the third largest eigenvalues in 2008 are able to distinguish the stocks from the two exchanges. We also find that the component magnitudes of the some largest eigenvectors are proportional to the stocks’ capitalizations.
Lin, Haijiang; Keriel, Anne; Morales, Carlos R.; Bedard, Nathalie; Zhao, Qing; Hingamp, Pascal; Lefrançois, Stephane; Combaret, Lydie; Wing, Simon S.
2000-01-01
Ubiquitin-specific processing proteases (UBPs) presently form the largest enzyme family in the ubiquitin system, characterized by a core region containing conserved motifs surrounded by divergent sequences, most commonly at the N-terminal end. The functions of these divergent sequences remain unclear. We identified two isoforms of a novel testis-specific UBP, UBP-t1 and UBP-t2, which contain identical core regions but distinct N termini, thereby permitting dissection of the functions of these two regions. Both isoforms were germ cell specific and developmentally regulated. Immunocytochemistry revealed that UBP-t1 was induced in step 16 to 19 spermatids while UBP-t2 was expressed in step 18 to 19 spermatids. Immunoelectron microscopy showed that UBP-t1 was found in the nucleus while UBP-t2 was extranuclear and was found in residual bodies. For the first time, we show that the differential subcellular localization was due to the distinct N-terminal sequences. When transfected into COS-7 cells, the core region was expressed throughout the cell but the UBP-t1 and UBP-t2 isoforms were concentrated in the nucleus and the perinuclear region, respectively. Fusions of each N-terminal end with green fluorescent protein yielded the same subcellular localization as the native proteins, indicating that the N-terminal ends were sufficient for determining differential localization. Interestingly, UBP-t2 colocalized with anti-γ-tubulin immunoreactivity, indicating that like several other components of the ubiquitin system, a deubiquitinating enzyme is associated with the centrosome. Regulated expression and alternative N termini can confer specificity of UBP function by restricting its temporal and spatial loci of action. PMID:10938131
High-resolution conodont oxygen isotope record of Ordovician climate change
NASA Astrophysics Data System (ADS)
Chen, J.; Chen, Z.; Algeo, T. J.
2013-12-01
The Ordovician Period was characterized by several major events, including a prolonged 'super greenhouse' during the Early Ordovician, the 'Great Ordovician Biodiversification Event (GOBE)' of the Middle and early Late Ordovician, and the Hirnantian ice age and mass extinction of the latest Ordovician (Webby et al., 2004, The Great Ordovician Biodiversification Event, Columbia University Press). The cause of the rapid diversification of marine invertebrates during the GOBE is not clear, however, and several scenarios have been proposed including widespread development of shallow cratonic seas, strong magmatic and tectonic activity, and climate moderation. In order to investigate relationships between climate change and marine ecosystem evolution during the Ordovician, we measured the oxygen isotopic composition of single coniform conodonts using a Cameca secondary ion mass spectrometer. Our δ18O profile shows a shift at the Early/Middle Ordovician transition that is indicative of a rapid 6 to 8 °C cooling. This cooling event marks the termination of the Early Ordovician 'super greenhouse' and may have established cooler tropical seawater temperatures that were more favorable for invertebrate animals, setting the stage for the GOBE. Additional cooling episodes occurred during the early Sandbian, early Katian, and Hirnantian, the last culminating in a short-lived (<1-Myr) end-Ordovician ice age. The much cooler conditions that prevailed at that time may have been an important factor in the end-Ordovician mass extinction. Our results differ from those of Trotter et al. (2008, 'Did cooling oceans trigger Ordovician biodiversification? Evidence from conodont thermometry,' Science 321:550-554). Instead of a slow, protracted cooling through the Early and Middle Ordovician, our high-resolution record shows that cooling occurred in several discrete steps, with the largest step being at the Early/Middle Ordovician transition.
Estimating physical activity in children: impact of pedometer wear time and metric.
Laurson, Kelly R; Welk, Gregory J; Eisenmann, Joey C
2015-01-01
The purpose of this study was to provide a practical demonstration of the impact of monitoring frame and metric when assessing pedometer-determined physical activity (PA) in youth. Children (N = 1111) were asked to wear pedometers over a 7-day period during which time worn and steps were recorded each day. Varying data-exclusion criteria were used to demonstrate changes in estimates of PA. Steps were expressed using several metrics and criteria, and construct validity was demonstrated via correlations with adiposity. Meaningful fluctuations in average steps per day and percentage meeting PA recommendations were apparent when different criteria were used. Children who wore the pedometer longer appeared more active, with each minute the pedometer was worn each day accounting for an approximate increase of 11 and 8 steps for boys and girls, respectively (P < .05). Using more restrictive exclusion criteria led to stronger correlations between indices of steps per day, steps per minute, steps per leg length, steps per minute per leg length, and obesity. Wear time has a meaningful impact on estimates of PA. This should be considered when determining exclusion criteria and making comparisons between studies. Results also suggest that incorporating wear time per day and leg length into the metric may increase validity of PA estimates.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-26
... large certificated air carriers to file ``On-Time Flight Performance Reports'' and ``Mishandled-Baggage... On-Time Flight Performance Reports to identify problem areas within the air traffic control system... concerning their chances of on-time flights and the rate of mishandled baggage by the 18 largest scheduled...
The Prediction of Teacher Turnover Employing Time Series Analysis.
ERIC Educational Resources Information Center
Costa, Crist H.
The purpose of this study was to combine knowledge of teacher demographic data with time-series forecasting methods to predict teacher turnover. Moving averages and exponential smoothing were used to forecast discrete time series. The study used data collected from the 22 largest school districts in Iowa, designated as FACT schools. Predictions…
Chopra, Sameer; de Castro Abreu, Andre Luis; Berger, Andre K; Sehgal, Shuchi; Gill, Inderbir; Aron, Monish; Desai, Mihir M
2017-01-01
To describe our, step-by-step, technique for robotic intracorporeal neobladder formation. The main surgical steps to forming the intracorporeal orthotopic ileal neobladder are: isolation of 65 cm of small bowel; small bowel anastomosis; bowel detubularisation; suture of the posterior wall of the neobladder; neobladder-urethral anastomosis and cross folding of the pouch; and uretero-enteral anastomosis. Improvements have been made to these steps to enhance time efficiency without compromising neobladder configuration. Our technical improvements have resulted in an improvement in operative time from 450 to 360 min. We describe an updated step-by-step technique of robot-assisted intracorporeal orthotopic ileal neobladder formation. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.
Advanced in Visualization of 3D Time-Dependent CFD Solutions
NASA Technical Reports Server (NTRS)
Lane, David A.; Lasinski, T. A. (Technical Monitor)
1995-01-01
Numerical simulations of complex 3D time-dependent (unsteady) flows are becoming increasingly feasible because of the progress in computing systems. Unfortunately, many existing flow visualization systems were developed for time-independent (steady) solutions and do not adequately depict solutions from unsteady flow simulations. Furthermore, most systems only handle one time step of the solutions individually and do not consider the time-dependent nature of the solutions. For example, instantaneous streamlines are computed by tracking the particles using one time step of the solution. However, for streaklines and timelines, particles need to be tracked through all time steps. Streaklines can reveal quite different information about the flow than those revealed by instantaneous streamlines. Comparisons of instantaneous streamlines with dynamic streaklines are shown. For a complex 3D flow simulation, it is common to generate a grid system with several millions of grid points and to have tens of thousands of time steps. The disk requirement for storing the flow data can easily be tens of gigabytes. Visualizing solutions of this magnitude is a challenging problem with today's computer hardware technology. Even interactive visualization of one time step of the flow data can be a problem for some existing flow visualization systems because of the size of the grid. Current approaches for visualizing complex 3D time-dependent CFD solutions are described. The flow visualization system developed at NASA Ames Research Center to compute time-dependent particle traces from unsteady CFD solutions is described. The system computes particle traces (streaklines) by integrating through the time steps. This system has been used by several NASA scientists to visualize their CFD time-dependent solutions. The flow visualization capabilities of this system are described, and visualization results are shown.
Short-term Time Step Convergence in a Climate Model
Wan, Hui; Rasch, Philip J.; Taylor, Mark; ...
2015-02-11
A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4 in contrast to themore » expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides a clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.« less
Enrichment of spinal cord cell cultures with motoneurons
1978-01-01
Spinal cord cell cultures contain several types of neurons. Two methods are described for enriching such cultures with motoneurons (defined here simply as cholinergic cells that are capable of innervating muscle). In the first method, 7-day embryonic chick spinal cord neurons were separated according to size by 1 g velocity sedimentation. It is assumed that cholinergic motoneurons are among the largest cells present at this stage. The spinal cords were dissociated vigorously so that 95-98% of the cells in the initial suspension were isolated from one another. Cells in leading fractions (large cell fractions: LCFs) contain about seven times as much choline acetyltransferase (CAT) activity per unit cytoplasm as do cells in trailing fractions (small cell fractions: SCFs). Muscle cultures seeded with LCFs develop 10-70 times as much CAT as cultures seeded with SCFs and six times as much CAT as cultures seeded with control (unfractionated) spinal cord cells. More than 20% of the large neurons in LCF-muscle cultures innervate nearby myotubes. In the second method, neurons were gently dissociated from 4-day embryonic spinal cords and maintained in vitro. This approach is based on earlier observations that cholinergic neurons are among the first cells to withdraw form the mitotic cycle in the developing chick embryo (Hamburger, V. 1948. J. Comp. Neurol. 88:221- 283; and Levi-Montalcini, R. 1950. J. Morphol. 86:253-283). 4-Day spinal cord-muscle cultures develop three times as much CAT as do 7-day spinal cord-muscle plates, prepared in the same (gentle) manner. More than 50% of the relatively large 4-day neurons innervate nearby myotubes. Thus, both methods are useful first steps toward the complete isolation of motoneurons. Both methods should facilitate study of the development of cholinergic neurons and of nerve-muscle synapse formation. PMID:566275
Xavier, Prince K.; Petch, Jon C.; Klingaman, Nicholas P.; ...
2015-05-26
We present an analysis of diabatic heating and moistening processes from 12 to 36 h lead time forecasts from 12 Global Circulation Models as part of the “Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)” project. A lead time of 12–36 h is chosen to constrain the large-scale dynamics and thermodynamics to be close to observations while avoiding being too close to the initial spin-up of the models as they adjust to being driven from the Years of Tropical Convection (YOTC) analysis. A comparison of the vertical velocity and rainfall with the observations and YOTC analysis suggests thatmore » the phases of convection associated with the MJO are constrained in most models at this lead time although the rainfall in the suppressed phase is typically overestimated. Although the large-scale dynamics is reasonably constrained, moistening and heating profiles have large intermodel spread. In particular, there are large spreads in convective heating and moistening at midlevels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of time step behavior shows that some models show strong intermittency in rainfall and differences in the precipitation and dynamics relationship between models. In conclusion, the wealth of model outputs archived during this project is a very valuable resource for model developers beyond the study of the MJO. Additionally, the findings of this study can inform the design of process model experiments, and inform the priorities for field experiments and future observing systems.« less
Extreme scale multi-physics simulations of the tsunamigenic 2004 Sumatra megathrust earthquake
NASA Astrophysics Data System (ADS)
Ulrich, T.; Gabriel, A. A.; Madden, E. H.; Wollherr, S.; Uphoff, C.; Rettenberger, S.; Bader, M.
2017-12-01
SeisSol (www.seissol.org) is an open-source software package based on an arbitrary high-order derivative Discontinuous Galerkin method (ADER-DG). It solves spontaneous dynamic rupture propagation on pre-existing fault interfaces according to non-linear friction laws, coupled to seismic wave propagation with high-order accuracy in space and time (minimal dispersion errors). SeisSol exploits unstructured meshes to account for complex geometries, e.g. high resolution topography and bathymetry, 3D subsurface structure, and fault networks. We present the up-to-date largest (1500 km of faults) and longest (500 s) dynamic rupture simulation modeling the 2004 Sumatra-Andaman earthquake. We demonstrate the need for end-to-end-optimization and petascale performance of scientific software to realize realistic simulations on the extreme scales of subduction zone earthquakes: Considering the full complexity of subduction zone geometries leads inevitably to huge differences in element sizes. The main code improvements include a cache-aware wave propagation scheme and optimizations of the dynamic rupture kernels using code generation. In addition, a novel clustered local-time-stepping scheme for dynamic rupture has been established. Finally, asynchronous output has been implemented to overlap I/O and compute time. We resolve the frictional sliding process on the curved mega-thrust and a system of splay faults, as well as the seismic wave field and seafloor displacement with frequency content up to 2.2 Hz. We validate the scenario by geodetic, seismological and tsunami observations. The resulting rupture dynamics shed new light on the activation and importance of splay faults.
Brinkman, Arinda C M; Romijn, Johannes W A; van Barneveld, Lerau J M; Greuters, Sjoerd; Veerhoek, Dennis; Vonk, Alexander B A; Boer, Christa
2010-06-01
Dilutional coagulopathy as a consequence of cardiopulmonary bypass (CPB) system priming may also be affected by the composition of the priming solution. The direct effects of distinct priming solutions on fibrinogen, one of the foremost limiting factors during dilutional coagulopathy, have been minimally evaluated. Therefore, the authors investigated whether hemodilution with different priming solutions distinctly affects the fibrinogen-mediated step in whole blood clot formation. Prospective observational laboratory study. University hospital laboratory. Eight male healthy volunteers. Blood samples diluted with gelatin-, albumin-, or hydroxyethyl starch (HES)-based priming solutions were ex-vivo evaluated for clot formation by rotational thromboelastometry. The intrinsic pathway (INTEM) coagulation time increased from 186 +/- 19 seconds to 205 +/- 16, 220 +/- 17, and 223 +/- 18 seconds after dilution with gelatin-, albumin-, or HES-containing prime solutions (all p < 0.05 v baseline). The extrinsic pathway (EXTEM) coagulation time was only minimally affected by hemodilution. Moreover, all 3 priming solutions significantly reduced the INTEM and EXTEM maximum clot firmness. The HES-containing priming solution induced the largest decrease in the maximum clot firmness attributed to fibrinogen, from 13 +/- 1 mm (baseline) to 6 +/- 1 mm (p < 0.01 v baseline). All studied priming solutions prolonged coagulation time and decreased clot formation, but the fibrinogen-limiting effect was the most profound for the HES-containing priming solution. These results suggest that the composition of priming solutions may distinctly affect blood clot formation, in particular with respect to the fibrinogen component in hemostasis. Copyright 2010 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Mohammadi, Vahid
2017-08-01
In this research, we investigate the numerical solution of nonlinear Schrödinger equations in two and three dimensions. The numerical meshless method which will be used here is RBF-FD technique. The main advantage of this method is the approximation of the required derivatives based on finite difference technique at each local-support domain as Ωi. At each Ωi, we require to solve a small linear system of algebraic equations with a conditionally positive definite matrix of order 1 (interpolation matrix). This scheme is efficient and its computational cost is same as the moving least squares (MLS) approximation. A challengeable issue is choosing suitable shape parameter for interpolation matrix in this way. In order to overcome this matter, an algorithm which was established by Sarra (2012), will be applied. This algorithm computes the condition number of the local interpolation matrix using the singular value decomposition (SVD) for obtaining the smallest and largest singular values of that matrix. Moreover, an explicit method based on Runge-Kutta formula of fourth-order accuracy will be applied for approximating the time variable. It also decreases the computational costs at each time step since we will not solve a nonlinear system. On the other hand, to compare RBF-FD method with another meshless technique, the moving kriging least squares (MKLS) approximation is considered for the studied model. Our results demonstrate the ability of the present approach for solving the applicable model which is investigated in the current research work.
Leonard, D; Swap, W
2000-01-01
Before the days of the Internet, it was primarily venture capitalists who coached young entrepreneurs in Silicon Valley. Today, because of the phenomenal number of new companies, venture capitalists are just too busy. The largest firms still take on a few carefully selected, highly promising zero-stage start-ups, but they simply can't spend the time on ones that aren't going to grow huge quickly. To fill the void, a new breed of adviser has stepped in to coach entrepreneurs. Called mentor capitalists, they help entrepreneurs with everything from recruiting top talent to attracting their first million in seed money. The mentor capitalists in Silicon Valley are cashed-out, highly successful business architects who no longer want to start businesses but who love the thrill of the entrepreneurial game. They spend hours and hours with first-time entrepreneurs, guiding them as they create and refine a business model, test their ideas in the marketplace, build business processes, raise money, and find talent. The authors of this article found through dozens of extensive interviews with entrepreneurs and their coaches that mentor capitalists play many roles: sculptor, psychologist, diplomat, kingmaker, talent magnet, process engineer, and rainmaker. In exchange for small equity stakes, the mentor capitalists wear these different hats, doling out expertise just in time, as situations arise, and in doses appropriate to the situation. Mentor capitalists seed Silicon Valley with expertise and knowledge, augmenting or even substituting for classes in entrepreneurship at local universities. But, as the authors note, the role of the mentor capitalist is essential to any start-up, anywhere.
7 CFR 1463.106 - Base quota levels for eligible tobacco producers.
Code of Federal Regulations, 2012 CFR
2012-01-01
...)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm's average... (35-36)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 6 Multiply the sum from Step 5 times the farm... (35-36)—.94264 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm...
7 CFR 1463.106 - Base quota levels for eligible tobacco producers.
Code of Federal Regulations, 2013 CFR
2013-01-01
...)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm's average... (35-36)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 6 Multiply the sum from Step 5 times the farm... (35-36)—.94264 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm...
7 CFR 1463.106 - Base quota levels for eligible tobacco producers.
Code of Federal Regulations, 2014 CFR
2014-01-01
...)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm's average... (35-36)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 6 Multiply the sum from Step 5 times the farm... (35-36)—.94264 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm...
7 CFR 1463.106 - Base quota levels for eligible tobacco producers.
Code of Federal Regulations, 2011 CFR
2011-01-01
...)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm's average... (35-36)—.952381 (iv) Virginia Sun-cured (type 37) 1.0000 6 Multiply the sum from Step 5 times the farm... (35-36)—.94264 (iv) Virginia Sun-cured (type 37) 1.0000 3 Multiply the sum from Step 2 times the farm...
Training Rapid Stepping Responses in an Individual With Stroke
Inness, Elizabeth L.; Komar, Janice; Biasin, Louis; Brunton, Karen; Lakhani, Bimal; McIlroy, William E.
2011-01-01
Background and Purpose Compensatory stepping reactions are important responses to prevent a fall following a postural perturbation. People with hemiparesis following a stroke show delayed initiation and execution of stepping reactions and often are found to be unable to initiate these steps with the more-affected limb. This case report describes a targeted training program involving repeated postural perturbations to improve control of compensatory stepping in an individual with stroke. Case Description Compensatory stepping reactions of a 68-year-old man were examined 52 days after left hemorrhagic stroke. He required assistance to prevent a fall in all trials administered during his initial examination because he showed weight-bearing asymmetry (with more weight borne on the more-affected right side), was unable to initiate stepping with the right leg (despite blocking of the left leg in some trials), and demonstrated delayed response times. The patient completed 6 perturbation training sessions (30–60 minutes per session) that aimed to improve preperturbation weight-bearing symmetry, to encourage stepping with the right limb, and to reduce step initiation and completion times. Outcomes Improved efficacy of compensatory stepping reactions with training and reduced reliance on assistance to prevent falling were observed. Improvements were noted in preperturbation asymmetry and step timing. Blocking the left foot was effective in encouraging stepping with the more-affected right foot. Discussion This case report demonstrates potential short-term adaptations in compensatory stepping reactions following perturbation training in an individual with stroke. Future work should investigate the links between improved compensatory step characteristics and fall risk in this vulnerable population. PMID:21511992
Next Steps in Network Time Synchronization For Navy Shipboard Applications
2008-12-01
40th Annual Precise Time and Time Interval (PTTI) Meeting NEXT STEPS IN NETWORK TIME SYNCHRONIZATION FOR NAVY SHIPBOARD APPLICATIONS...dynamic manner than in previous designs. This new paradigm creates significant network time synchronization challenges. The Navy has been...deploying the Network Time Protocol (NTP) in shipboard computing infrastructures to meet the current network time synchronization requirements
A Lyapunov and Sacker–Sell spectral stability theory for one-step methods
Steyer, Andrew J.; Van Vleck, Erik S.
2018-04-13
Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly,more » exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.« less
A Lyapunov and Sacker–Sell spectral stability theory for one-step methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steyer, Andrew J.; Van Vleck, Erik S.
Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly,more » exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.« less
Control Circuit For Two Stepping Motors
NASA Technical Reports Server (NTRS)
Ratliff, Roger; Rehmann, Kenneth; Backus, Charles
1990-01-01
Control circuit operates two independent stepping motors, one at a time. Provides following operating features: After selected motor stepped to chosen position, power turned off to reduce dissipation; Includes two up/down counters that remember at which one of eight steps each motor set. For selected motor, step indicated by illumination of one of eight light-emitting diodes (LED's) in ring; Selected motor advanced one step at time or repeatedly at rate controlled; Motor current - 30 mA at 90 degree positions, 60 mA at 45 degree positions - indicated by high or low intensity of LED that serves as motor-current monitor; Power-on reset feature provides trouble-free starts; To maintain synchronism between control circuit and motors, stepping of counters inhibited when motor power turned off.
Food and nutrition in Canadian "prime time" television commercials.
Ostbye, T; Pomerleau, J; White, M; Coolich, M; McWhinney, J
1993-01-01
Television is, arguably, the most influential mass medium and "prime time" viewing attracts the largest audiences. To assess the type, number and nutritional content of foods advertised on TV, commercial breaks during "prime time" (7:00 to 11:00 p.m.) on five Canadian channels (CBC-English, CBC-French, CTV, CFPL, Much Music) were recorded and analyzed. A similar analysis of Saturday morning children's TV commercials was also performed. Commercials for foods and food products constituted between 24-35% of all commercials, the largest advertising output for any group of products. The combination of food presented in commercials reflected average current consumption patterns. Of special concern was the emphasis on low nutrition beverages, especially beer, as well as snacks and candy on Much Music. While further government intervention to restrict advertising practices may be an impractical option, there is scope for increasing the alternative promotion of healthy dietary choices.
Finite Memory Walk and Its Application to Small-World Network
NASA Astrophysics Data System (ADS)
Oshima, Hiraku; Odagaki, Takashi
2012-07-01
In order to investigate the effects of cycles on the dynamical process on both regular lattices and complex networks, we introduce a finite memory walk (FMW) as an extension of the simple random walk (SRW), in which a walker is prohibited from moving to sites visited during m steps just before the current position. This walk interpolates the simple random walk (SRW), which has no memory (m = 0), and the self-avoiding walk (SAW), which has an infinite memory (m = ∞). We investigate the FMW on regular lattices and clarify the fundamental characteristics of the walk. We find that (1) the mean-square displacement (MSD) of the FMW shows a crossover from the SAW at a short time step to the SRW at a long time step, and the crossover time is approximately equivalent to the number of steps remembered, and that the MSD can be rescaled in terms of the time step and the size of memory; (2) the mean first-return time (MFRT) of the FMW changes significantly at the number of remembered steps that corresponds to the size of the smallest cycle in the regular lattice, where ``smallest'' indicates that the size of the cycle is the smallest in the network; (3) the relaxation time of the first-return time distribution (FRTD) decreases as the number of cycles increases. We also investigate the FMW on the Watts--Strogatz networks that can generate small-world networks, and show that the clustering coefficient of the Watts--Strogatz network is strongly related to the MFRT of the FMW that can remember two steps.
Mass imbalances in EPANET water-quality simulations
NASA Astrophysics Data System (ADS)
Davis, Michael J.; Janke, Robert; Taxon, Thomas N.
2018-04-01
EPANET is widely employed to simulate water quality in water distribution systems. However, in general, the time-driven simulation approach used to determine concentrations of water-quality constituents provides accurate results only for short water-quality time steps. Overly long time steps can yield errors in concentration estimates and can result in situations in which constituent mass is not conserved. The use of a time step that is sufficiently short to avoid these problems may not always be feasible. The absence of EPANET errors or warnings does not ensure conservation of mass. This paper provides examples illustrating mass imbalances and explains how such imbalances can occur because of fundamental limitations in the water-quality routing algorithm used in EPANET. In general, these limitations cannot be overcome by the use of improved water-quality modeling practices. This paper also presents a preliminary event-driven approach that conserves mass with a water-quality time step that is as long as the hydraulic time step. Results obtained using the current approach converge, or tend to converge, toward those obtained using the preliminary event-driven approach as the water-quality time step decreases. Improving the water-quality routing algorithm used in EPANET could eliminate mass imbalances and related errors in estimated concentrations. The results presented in this paper should be of value to those who perform water-quality simulations using EPANET or use the results of such simulations, including utility managers and engineers.
Quadratic adaptive algorithm for solving cardiac action potential models.
Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing
2016-10-01
An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near to the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed new method chooses very fine time steps in the peak region, but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid blows up if the allowable time step is set to be greater than 0.1ms. In contrast, our method can adjust the time step size automatically, and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and it needs further investigation to determine whether the method is helpful during propagation of the action potential. Copyright © 2016 Elsevier Ltd. All rights reserved.
Wolff, Sebastian; Bucher, Christian
2013-01-01
This article presents asynchronous collision integrators and a simple asynchronous method treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration being expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response. Therein, velocities are modified avoiding penetrations. Although decomposition contact response is solving a large system of linear equations (being critical for the numerical efficiency of explicit time stepping schemes) and is needing special treatment regarding overconstraint and linear dependency of the contact constraints (for example from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robust. Only a single constraint involving a very small number of degrees of freedom is considered at once leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. Special care needs the contact of nodes that are subject to restraints. Together with the aforementioned projection for restraints, a novel efficient solution scheme can be presented. The collision integrator does not influence the critical time step. Hence, the time step can be chosen independently from the underlying time-stepping scheme. The time step may be fixed or time-adaptive. New demands on global collision detection are discussed exemplified by position codes and node-to-segment integration. Numerical examples illustrate convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:23970806
Martin, Anne; Adams, Jacob M; Bunn, Christopher; Gill, Jason M R; Gray, Cindy M; Hunt, Kate; Maxwell, Douglas J; van der Ploeg, Hidde P; Wyke, Sally
2017-01-01
Objectives Time spent inactive and sedentary are both associated with poor health. Self-monitoring of walking, using pedometers for real-time feedback, is effective at increasing physical activity. This study evaluated the feasibility of a new pocket-worn sedentary time and physical activity real-time self-monitoring device (SitFIT). Methods Forty sedentary men were equally randomised into two intervention groups. For 4 weeks, one group received a SitFIT providing feedback on steps and time spent sedentary (lying/sitting); the other group received a SitFIT providing feedback on steps and time spent upright (standing/stepping). Change in sedentary time, standing time, stepping time and step count was assessed using activPAL monitors at baseline, 4-week follow-up (T1) and 12-week (T2) follow-up. Semistructured interviews were conducted after 4 and 12 weeks. Results The SitFIT was reported as acceptable and usable and seen as a motivating tool to reduce sedentary time by both groups. On average, participants reduced their sedentary time by 7.8 minutes/day (95% CI −55.4 to 39.7) (T1) and by 8.2 minutes/day (95% CI −60.1 to 44.3) (T2). They increased standing time by 23.2 minutes/day (95% CI 4.0 to 42.5) (T1) and 16.2 minutes/day (95% CI −13.9 to 46.2) (T2). Stepping time was increased by 8.5 minutes/day (95% CI 0.9 to 16.0) (T1) and 9.0 minutes/day (95% CI 0.5 to 17.5) (T2). There were no between-group differences at either follow-up time points. Conclusion The SitFIT was perceived as a useful tool for self-monitoring of sedentary time. It has potential as a real-time self-monitoring device to reduce sedentary and increase upright time. PMID:29081985
Fonseca-Azevedo, Karina; Herculano-Houzel, Suzana
2012-01-01
Despite a general trend for larger mammals to have larger brains, humans are the primates with the largest brain and number of neurons, but not the largest body mass. Why are great apes, the largest primates, not also those endowed with the largest brains? Recently, we showed that the energetic cost of the brain is a linear function of its numbers of neurons. Here we show that metabolic limitations that result from the number of hours available for feeding and the low caloric yield of raw foods impose a tradeoff between body size and number of brain neurons, which explains the small brain size of great apes compared with their large body size. This limitation was probably overcome in Homo erectus with the shift to a cooked diet. Absent the requirement to spend most available hours of the day feeding, the combination of newly freed time and a large number of brain neurons affordable on a cooked diet may thus have been a major positive driving force to the rapid increased in brain size in human evolution. PMID:23090991
Fonseca-Azevedo, Karina; Herculano-Houzel, Suzana
2012-11-06
Despite a general trend for larger mammals to have larger brains, humans are the primates with the largest brain and number of neurons, but not the largest body mass. Why are great apes, the largest primates, not also those endowed with the largest brains? Recently, we showed that the energetic cost of the brain is a linear function of its numbers of neurons. Here we show that metabolic limitations that result from the number of hours available for feeding and the low caloric yield of raw foods impose a tradeoff between body size and number of brain neurons, which explains the small brain size of great apes compared with their large body size. This limitation was probably overcome in Homo erectus with the shift to a cooked diet. Absent the requirement to spend most available hours of the day feeding, the combination of newly freed time and a large number of brain neurons affordable on a cooked diet may thus have been a major positive driving force to the rapid increased in brain size in human evolution.
Impaired Response Selection During Stepping Predicts Falls in Older People-A Cohort Study.
Schoene, Daniel; Delbaere, Kim; Lord, Stephen R
2017-08-01
Response inhibition, an important executive function, has been identified as a risk factor for falls in older people. This study investigated whether step tests that include different levels of response inhibition differ in their ability to predict falls and whether such associations are mediated by measures of attention, speed, and/or balance. A cohort study with a 12-month follow-up was conducted in community-dwelling older people without major cognitive and mobility impairments. Participants underwent 3 step tests: (1) choice stepping reaction time (CSRT) requiring rapid decision making and step initiation; (2) inhibitory choice stepping reaction time (iCSRT) requiring additional response inhibition and response-selection (go/no-go); and (3) a Stroop Stepping Test (SST) under congruent and incongruent conditions requiring conflict resolution. Participants also completed tests of processing speed, balance, and attention as potential mediators. Ninety-three of the 212 participants (44%) fell in the follow-up period. Of the step tests, only components of the iCSRT task predicted falls in this time with the relative risk per standard deviation for the reaction time (iCSRT-RT) = 1.23 (95%CI = 1.10-1.37). Multiple mediation analysis indicated that the iCSRT-RT was independently associated with falls and not mediated through slow processing speed, poor balance, or inattention. Combined stepping and response inhibition as measured in a go/no-go test stepping paradigm predicted falls in older people. This suggests that integrity of the response-selection component of a voluntary stepping response is crucial for minimizing fall risk. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
Effect of transmitter turn-off time on transient soundings
Fitterman, D.V.; Anderson, W.L.
1987-01-01
A general procedure for computing the effect of non-zero turn-off time on the transient electromagnetic response is presented which can be applied to forward and inverse calculation methods for any transmitter-receiver configuration. We consider in detail the case of a large transmitter loop which has a receiver coil located at the center of the loop (central induction or in-loop array). For a linear turn-off ramp of width t0, the voltage response is shown to be the voltage due to an ideal step turn-off averaged over windows of width t0. Thus the effect is similar to that obtained by using averaging windows in the receiver. In general when time zero is taken to be the end of the ramp, the apparent resistivity increases for a homogeneous half-space over a limited time range. For time zero taken to be the start of the ramp the apparent resistivity is affected in the opposite direction. The effect of the ramp increases with increasing t0 and first-layer resistivity, is largest during the intermediate stage, and decreases with increasing time. It is shown that for a ramp turn-off, there is no effect in the early and late stages. For two-layered models with a resistive first layer (??1>??2), the apparent resistivity is increased in the intermediate stage. When the first layer is more conductive than the second layer (??1?2) and the layer thickness is comparable or greater than the loop radius, similar results are obtained; however, when the layer is thin compared to the loop radius the apparent resistivity is initially decreased and then increases as time increases. Examples are presented which illustrate the strong influence of the geoelectrical section on the turn-off effect. Neglecting the turn-off ramp will affect data interpretation as shown by field examples; the influence is the greatest on near-surface layer parameters. ?? 1987.
NASA Astrophysics Data System (ADS)
Ponte, Jean Pierre; Robin, Cecile; Guillocheau, Francois; Baby, Guillaume; Dall'Asta, Massimo; Popescu, Speranta; Suc, Jean Pierre; Droz, Laurence; Rabineau, Marina; Moulin, Maryline
2016-04-01
The Mozambique margin is an oblique to transform margin which feeds one of the largest African turbiditic system, the Zambezi deep-sea fan (1800 km length and 400 km wide; Droz and Mougenot., AAPG Bull., 1987). The Zambezi sedimentary system is characterized by (1) a changing catchment area through time with evidences of river captures (Thomas and Shaw, J. Afr. Earth. Sci, 1988) and (2) a delta, storing more than 12 km of sediment, with no gravitary tectonics. The aim of this study is to carry out a source to sink study along the Zambezi sedimentary system and to analyse the margin evolution (vertical movements, climate change) since Early Cretaceous times. The used data are seismic lines (industrial and academic) and petroleum wells (with access to the cuttings). Our first objective was to perform a new biochronostratigraphic framework based on nannofossils, foraminifers, pollen and spores on the cuttings of three industrial wells. The second target was to recognize the different steps of the growth of the Zambezi sedimentary systems. Four main phases were identified: • Late Jurassic (?) - early Late Cretaceous: from Neocomian to Aptian times, the high of the clinoforms is getting higher, with the first occurrence of contouritic ridges during Aptian times. • Late Cretaceous - Early Paleocene: a major drop of relative sea-level occurred as a consequence of the South African Plateau uplift. The occurrence of two depocenters suggests siliciclastic supplies from the Bushveld and from the North Mozambique domain. • Early Paleocene - Eocene: growth of carbonate platforms and large contouritic ridges. • Oligocene - Present-day: birth of the modern Zambezi Delta, with quite low siliciclastic supply during Oligocene times, increasing during Miocene times. As previously expected (Droz and Mougenot) some sediments of the so-called Zambezi fans are coming from a feeder located east of the Davie Ridge. This study was founded by TOTAL and IFREMER in the frame of the research project PAMELA (Passive Margin Exploration Laboratories).
Grabiner, Mark D; Marone, Jane R; Wyatt, Marilynn; Sessoms, Pinata; Kaufman, Kenton R
2018-06-01
The fractal scaling evident in the step-to-step fluctuations of stepping-related time series reflects, to some degree, neuromotor noise. The primary purpose of this study was to determine the extent to which the fractal scaling of step width, step width and step width variability are affected by performance of an attention-demanding task. We hypothesized that the attention-demanding task would shift the structure of the step width time series toward white, uncorrelated noise. Subjects performed two 10-min treadmill walking trials, a control trial of undisturbed walking and a trial during which they performed a mental arithmetic/texting task. Motion capture data was converted to step width time series, the fractal scaling of which were determined from their power spectra. Fractal scaling decreased by 22% during the texting condition (p < 0.001) supporting the hypothesized shift toward white uncorrelated noise. Step width and step width variability increased 19% and five percent, respectively (p < 0.001). However, a stepwise discriminant analysis to which all three variables were input revealed that the control and dual task conditions were discriminated only by step width fractal scaling. The change of the fractal scaling of step width is consistent with increased cognitive demand and suggests a transition in the characteristics of the signal noise. This may reflect an important advance toward the understanding of the manner in which neuromotor noise contributes to some types of falls. However, further investigation of the repeatability of the results, the sensitivity of the results to progressive increases in cognitive load imposed by attention-demanding tasks, and the extent to which the results can be generalized to the gait of older adults seems warranted. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2004-01-01
A simplified kinetic scheme for Jet-A, and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes that are being developed at Glenn. The two time step method is either an initial time averaged value (step one) or an instantaneous value (step two). The switch is based on the water concentration in moles/cc of 1x10(exp -20). The results presented here results in a correlation that gives the chemical kinetic time as two separate functions. This two step method is used as opposed to a one step time averaged method previously developed to determine the chemical kinetic time with increased accuracy. The first time averaged step is used at the initial times for smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, initial water to fuel mass ratio, temperature, and pressure. The second instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentration, pressure and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates were then used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx were obtained for Jet-A fuel and methane with and without water injection to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water to fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering the turbine (T4) was also correlated as a function of the initial combustor temperature (T3), equivalence ratio, water to fuel mass ratio, and pressure.
Adaptive time stepping for fluid-structure interaction solvers
Mayr, M.; Wall, W. A.; Gee, M. W.
2017-12-22
In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additionalmore » cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.« less
Adaptive time stepping for fluid-structure interaction solvers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayr, M.; Wall, W. A.; Gee, M. W.
In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additionalmore » cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.« less
On the correct use of stepped-sine excitations for the measurement of time-varying bioimpedance.
Louarroudi, E; Sanchez, B
2017-02-01
When a linear time-varying (LTV) bioimpedance is measured using stepped-sine excitations, a compromise must be made: the temporal distortions affecting the data depend on the experimental time, which in turn sets the data accuracy and limits the temporal bandwidth of the system that needs to be measured. Here, the experimental time required to measure linear time-invariant bioimpedance with a specified accuracy is analyzed for different stepped-sine excitation setups. We provide simple equations that allow the reader to know whether LTV bioimpedance can be measured through repeated time- invariant stepped-sine experiments. Bioimpedance technology is on the rise thanks to a plethora of healthcare monitoring applications. The results presented can help to avoid distortions in the data while measuring accurately non-stationary physiological phenomena. The impact of the work presented is broad, including the potential of enhancing bioimpedance studies and healthcare devices using bioimpedance technology.
Wolves Recolonizing Islands: Genetic Consequences and Implications for Conservation and Management
Remm, Jaanus; Hindrikson, Maris; Jõgisalu, Inga; Männil, Peep; Kübarsepp, Marko; Saarma, Urmas
2016-01-01
After a long and deliberate persecution, the grey wolf (Canis lupus) is slowly recolonizing its former areas in Europe, and the genetic consequences of this process are of particular interest. Wolves, though present in mainland Estonia for a long time, have only recently started to recolonize the country’s two largest islands, Saaremaa and Hiiumaa. The main objective of this study was to analyse wolf population structure and processes in Estonia, with particular attention to the recolonization of islands. Fifteen microsatellite loci were genotyped for 185 individuals across Estonia. As a methodological novelty, all putative wolf-dog hybrids were identified and removed (n = 17) from the dataset beforehand to avoid interference of dog alleles in wolf population analysis. After the preliminary filtering, our final dataset comprised of 168 “pure” wolves. We recommend using hybrid-removal step as a standard precautionary procedure not only for wolf population studies, but also for other taxa prone to hybridization. STRUCTURE indicated four genetic groups in Estonia. Spatially explicit DResD analysis identified two areas, one of them on Saaremaa island and the other in southwestern Estonia, where neighbouring individuals were genetically more similar than expected from an isolation-by-distance null model. Three blending areas and two contrasting transition zones were identified in central Estonia, where the sampled individuals exhibited strong local differentiation over relatively short distance. Wolves on the largest Estonian islands are part of human-wildlife conflict due to livestock depredation. Negative public attitude, especially on Saaremaa where sheep herding is widespread, poses a significant threat for island wolves. To maintain the long-term viability of the wolf population on Estonian islands, not only wolf hunting quota should be targeted with extreme care, but effective measures should be applied to avoid inbreeding and minimize conflicts with local communities and stakeholders. PMID:27384049
SHEPPARD, C. R. C.; ATEWEBERHAN, M.; BOWEN, B. W.; CARR, P.; CHEN, C. A.; CLUBBE, C.; CRAIG, M. T.; EBINGHAUS, R.; EBLE, J.; FITZSIMMONS, N.; GAITHER, M. R.; GAN, C-H.; GOLLOCK, M.; GUZMAN, N.; GRAHAM, N. A. J.; HARRIS, A.; JONES, R.; KESHAVMURTHY, S.; KOLDEWEY, H.; LUNDIN, C. G.; MORTIMER, J. A.; OBURA, D.; PFEIFFER, M.; PRICE, A. R. G.; PURKIS, S.; RAINES, P.; READMAN, J. W.; RIEGL, B.; ROGERS, A.; SCHLEYER, M.; SEAWARD, M. R. D; SHEPPARD, A. L. S.; TAMELANDER, J.; TURNER, J. R.; VISRAM, S.; VOGLER, C.; VOGT, S.; WOLSCHKE, H.; YANG, J. M-C.; YANG, S-Y.; YESSON, C.
2014-01-01
The Chagos Archipelago was designated a no-take marine protected area (MPA) in 2010; it covers 550 000 km2, with more than 60 000 km2 shallow limestone platform and reefs. This has doubled the global cover of such MPAs.It contains 25–50% of the Indian Ocean reef area remaining in excellent condition, as well as the world’s largest contiguous undamaged reef area. It has suffered from warming episodes, but after the most severe mortality event of 1998, coral cover was restored after 10 years.Coral reef fishes are orders of magnitude more abundant than in other Indian Ocean locations, regardless of whether the latter are fished or protected.Coral diseases are extremely low, and no invasive marine species are known.Genetically, Chagos marine species are part of the Western Indian Ocean, and Chagos serves as a ‘stepping-stone’ in the ocean.The no-take MPA extends to the 200 nm boundary, and. includes 86 unfished seamounts and 243 deep knolls as well as encompassing important pelagic species.On the larger islands, native plants, coconut crabs, bird and turtle colonies were largely destroyed in plantation times, but several smaller islands are in relatively undamaged state.There are now 10 ‘important bird areas’, coconut crab density is high and numbers of green and hawksbill turtles are recovering.Diego Garcia atoll contains a military facility; this atoll contains one Ramsar site and several ‘strict nature reserves’. Pollutant monitoring shows it to be the least polluted inhabited atoll in the world. Today, strict environmental regulations are enforced.Shoreline erosion is significant in many places. Its economic cost in the inhabited part of Diego Garcia is very high, but all islands are vulnerable.Chagos is ideally situated for several monitoring programmes, and use is increasingly being made of the archipelago for this purpose. PMID:25505830
LaFontaine, Jacob H.; Jones, L. Elliott; Painter, Jaime A.
2017-12-29
A suite of hydrologic models has been developed for the Apalachicola-Chattahoochee-Flint River Basin (ACFB) as part of the National Water Census, a U.S. Geological Survey research program that focuses on developing new water accounting tools and assessing water availability and use at the regional and national scales. Seven hydrologic models were developed using the Precipitation-Runoff Modeling System (PRMS), a deterministic, distributed-parameter, process-based system that simulates the effects of precipitation, temperature, land cover, and water use on basin hydrology. A coarse-resolution PRMS model was developed for the entire ACFB, and six fine-resolution PRMS models were developed for six subbasins of the ACFB. The coarse-resolution model was loosely coupled with a groundwater model to better assess the effects of water use on streamflow in the lower ACFB, a complex geologic setting with karst features. The PRMS coarse-resolution model was used to provide inputs of recharge to the groundwater model, which in turn provide simulations of groundwater flow that were aggregated with PRMS-based simulations of surface runoff and shallow-subsurface flow. Simulations without the effects of water use were developed for each model for at least the calendar years 1982–2012 with longer periods for the Potato Creek subbasin (1942–2012) and the Spring Creek subbasin (1952–2012). Water-use-affected flows were simulated for 2008–12. Water budget simulations showed heterogeneous distributions of precipitation, actual evapotranspiration, recharge, runoff, and storage change across the ACFB. Streamflow volume differences between no-water-use and water-use simulations were largest along the main stem of the Apalachicola and Chattahoochee River Basins, with streamflow percentage differences largest in the upper Chattahoochee and Flint River Basins and Spring Creek in the lower Flint River Basin. Water-use information at a shorter time step and a fully coupled simulation in the lower ACFB may further improve water availability estimates and hydrologic simulations in the basin.
Sheppard, C R C; Ateweberhan, M; Bowen, B W; Carr, P; Chen, C A; Clubbe, C; Craig, M T; Ebinghaus, R; Eble, J; Fitzsimmons, N; Gaither, M R; Gan, C-H; Gollock, M; Guzman, N; Graham, N A J; Harris, A; Jones, R; Keshavmurthy, S; Koldewey, H; Lundin, C G; Mortimer, J A; Obura, D; Pfeiffer, M; Price, A R G; Purkis, S; Raines, P; Readman, J W; Riegl, B; Rogers, A; Schleyer, M; Seaward, M R D; Sheppard, A L S; Tamelander, J; Turner, J R; Visram, S; Vogler, C; Vogt, S; Wolschke, H; Yang, J M-C; Yang, S-Y; Yesson, C
2012-03-01
The Chagos Archipelago was designated a no-take marine protected area (MPA) in 2010; it covers 550 000 km 2 , with more than 60 000 km 2 shallow limestone platform and reefs. This has doubled the global cover of such MPAs.It contains 25-50% of the Indian Ocean reef area remaining in excellent condition, as well as the world's largest contiguous undamaged reef area. It has suffered from warming episodes, but after the most severe mortality event of 1998, coral cover was restored after 10 years.Coral reef fishes are orders of magnitude more abundant than in other Indian Ocean locations, regardless of whether the latter are fished or protected.Coral diseases are extremely low, and no invasive marine species are known.Genetically, Chagos marine species are part of the Western Indian Ocean, and Chagos serves as a 'stepping-stone' in the ocean.The no-take MPA extends to the 200 nm boundary, and. includes 86 unfished seamounts and 243 deep knolls as well as encompassing important pelagic species.On the larger islands, native plants, coconut crabs, bird and turtle colonies were largely destroyed in plantation times, but several smaller islands are in relatively undamaged state.There are now 10 'important bird areas', coconut crab density is high and numbers of green and hawksbill turtles are recovering.Diego Garcia atoll contains a military facility; this atoll contains one Ramsar site and several 'strict nature reserves'. Pollutant monitoring shows it to be the least polluted inhabited atoll in the world. Today, strict environmental regulations are enforced.Shoreline erosion is significant in many places. Its economic cost in the inhabited part of Diego Garcia is very high, but all islands are vulnerable.Chagos is ideally situated for several monitoring programmes, and use is increasingly being made of the archipelago for this purpose.
Nagahara, Ryu; Mizutani, Mirai; Matsuo, Akifumi; Kanehisa, Hiroaki; Fukunaga, Tetsuo
2018-06-01
We aimed to investigate the step-to-step spatiotemporal variables and ground reaction forces during the acceleration phase for characterising intra-individual fastest sprinting within a single session. Step-to-step spatiotemporal variables and ground reaction forces produced by 15 male athletes were measured over a 50-m distance during repeated (three to five) 60-m sprints using a long force platform system. Differences in measured variables between the fastest and slowest trials were examined at each step until the 22nd step using a magnitude-based inferences approach. There were possibly-most likely higher running speed and step frequency (2nd to 22nd steps) and shorter support time (all steps) in the fastest trial than in the slowest trial. Moreover, for the fastest trial there were likely-very likely greater mean propulsive force during the initial four steps and possibly-very likely larger mean net anterior-posterior force until the 17th step. The current results demonstrate that better sprinting performance within a single session is probably achieved by 1) a high step frequency (except the initial step) with short support time at all steps, 2) exerting a greater mean propulsive force during initial acceleration, and 3) producing a greater mean net anterior-posterior force during initial and middle acceleration.
Enabling fast, stable and accurate peridynamic computations using multi-time-step integration
Lindsay, P.; Parks, M. L.; Prakash, A.
2016-04-13
Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, andmore » computational cost is examined and several numerical examples are presented to corroborate the findings.« less
Biometric recognition using 3D ear shape.
Yan, Ping; Bowyer, Kevin W
2007-08-01
Previous works have shown that the ear is a promising candidate for biometric identification. However, in prior work, the preprocessing of ear images has had manual steps and algorithms have not necessarily handled problems caused by hair and earrings. We present a complete system for ear biometrics, including automated segmentation of the ear in a profile view image and 3D shape matching for recognition. We evaluated this system with the largest experimental study to date in ear biometrics, achieving a rank-one recognition rate of 97.8 percent for an identification scenario and an equal error rate of 1.2 percent for a verification scenario on a database of 415 subjects and 1,386 total probes.
Yentes, Jennifer M; Rennard, Stephen I; Schmid, Kendra K; Blanke, Daniel; Stergiou, Nicholas
2017-06-01
Compared with control subjects, patients with chronic obstructive pulmonary disease (COPD) have an increased incidence of falls and demonstrate balance deficits and alterations in mediolateral trunk acceleration while walking. Measures of gait variability have been implicated as indicators of fall risk, fear of falling, and future falls. To investigate whether alterations in gait variability are found in patients with COPD as compared with healthy control subjects. Twenty patients with COPD (16 males; mean age, 63.6 ± 9.7 yr; FEV 1 /FVC, 0.52 ± 0.12) and 20 control subjects (9 males; mean age, 62.5 ± 8.2 yr) walked for 3 minutes on a treadmill while their gait was recorded. The amount (SD and coefficient of variation) and structure of variability (sample entropy, a measure of regularity) were quantified for step length, time, and width at three walking speeds (self-selected and ±20% of self-selected speed). Generalized linear mixed models were used to compare dependent variables. Patients with COPD demonstrated increased mean and SD step time across all speed conditions as compared with control subjects. They also walked with a narrower step width that increased with increasing speed, whereas the healthy control subjects walked with a wider step width that decreased as speed increased. Further, patients with COPD demonstrated less variability in step width, with decreased SD, compared with control subjects at all three speed conditions. No differences in regularity of gait patterns were found between groups. Patients with COPD walk with increased duration of time between steps, and this timing is more variable than that of control subjects. They also walk with a narrower step width in which the variability of the step widths from step to step is decreased. Changes in these parameters have been related to increased risk of falling in aging research. This provides a mechanism that could explain the increased prevalence of falls in patients with COPD.
Bertoldo Menezes, D; Reyer, A; Musso, M
2018-02-05
The Brill transition is a phase transition process in polyamides related with structural changes between the hydrogen bonds of the lateral functional groups (CO) and (NH). In this study, we have used the potential of Raman spectroscopy for exploring this phase transition in polyamide 6,6 (nylon 6,6), due to the sensitivity of this spectroscopic technique to small intermolecular changes affecting vibrational properties of relevant functional groups. During a step by step heating and cooling process of the sample we collected Raman spectra allowing us from two-dimensional Raman correlation spectroscopy to identify which spectral regions suffered the largest influence during the Brill transition, and from Terahertz Stokes and anti-Stokes Raman spectroscopy to obtain complementary information, e.g. on the temperature of the sample. This allowed us to grasp signatures of the Brill transition from peak parameters of vibrational modes associated with (CC) skeletal stretches and (CNH) bending, and to verify the Brill transition temperature at around 160°C, as well as the reversibility of this phase transition. Copyright © 2017 Elsevier B.V. All rights reserved.
Gadolinium sulfate modified by formate to obtain optimized magneto-caloric effect.
Xu, Long-Yang; Zhao, Jiong-Peng; Liu, Ting; Liu, Fu-Chen
2015-06-01
Three new Gd(III) based coordination polymers [Gd2(C2H6SO)(SO4)3(H2O)2]n (1), {[Gd4(HCOO)2(SO4)5(H2O)6]·H2O}n (2), and [Gd(HCOO)(SO4)(H2O)]n (3) were obtained by modifying gadolinium sulfate. With the gradual increase of the volume ratio of HCOOH and DMSO in synthesis, the formate anions begin to coordinate with metal centers; this results in the coordination numbers of sulfate anion increasing and the contents of water and DMSO molecules decreasing in target complexes. Accordingly, spin densities both per mass and per volume were enhanced step by step, which are beneficial for the magneto-caloric effect (MCE). Magnetic studies reveal that with the more formate anions present, the larger the negative value of magnetic entropy change (-ΔSm) is. Complex 3 exhibits the largest -ΔSm = 49.91 J kg(-1) K(-1) (189.51 mJ cm(-3) K(-1)) for T = 2 K and ΔH = 7 T among three new complexes.
Very large scale monoclonal antibody purification: the case for conventional unit operations.
Kelley, Brian
2007-01-01
Technology development initiatives targeted for monoclonal antibody purification may be motivated by manufacturing limitations and are often aimed at solving current and future process bottlenecks. A subject under debate in many biotechnology companies is whether conventional unit operations such as chromatography will eventually become limiting for the production of recombinant protein therapeutics. An evaluation of the potential limitations of process chromatography and filtration using today's commercially available resins and membranes was conducted for a conceptual process scaled to produce 10 tons of monoclonal antibody per year from a single manufacturing plant, a scale representing one of the world's largest single-plant capacities for cGMP protein production. The process employs a simple, efficient purification train using only two chromatographic and two ultrafiltration steps, modeled after a platform antibody purification train that has generated 10 kg batches in clinical production. Based on analyses of cost of goods and the production capacity of this very large scale purification process, it is unlikely that non-conventional downstream unit operations would be needed to replace conventional chromatographic and filtration separation steps, at least for recombinant antibodies.
What makes a visualization memorable?
Borkin, Michelle A; Vo, Azalea A; Bylinskii, Zoya; Isola, Phillip; Sunkavalli, Shashank; Oliva, Aude; Pfister, Hanspeter
2013-12-01
An ongoing debate in the Visualization community concerns the role that visualization types play in data understanding. In human cognition, understanding and memorability are intertwined. As a first step towards being able to ask questions about impact and effectiveness, here we ask: 'What makes a visualization memorable?' We ran the largest scale visualization study to date using 2,070 single-panel visualizations, categorized with visualization type (e.g., bar chart, line graph, etc.), collected from news media sites, government reports, scientific journals, and infographic sources. Each visualization was annotated with additional attributes, including ratings for data-ink ratios and visual densities. Using Amazon's Mechanical Turk, we collected memorability scores for hundreds of these visualizations, and discovered that observers are consistent in which visualizations they find memorable and forgettable. We find intuitive results (e.g., attributes like color and the inclusion of a human recognizable object enhance memorability) and less intuitive results (e.g., common graphs are less memorable than unique visualization types). Altogether our findings suggest that quantifying memorability is a general metric of the utility of information, an essential step towards determining how to design effective visualizations.
(I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.
van Rijnsoever, Frank J
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
75 FR 31513 - Prevention of Significant Deterioration and Title V Greenhouse Gas Tailoring Rule
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-03
...EPA is tailoring the applicability criteria that determine which stationary sources and modification projects become subject to permitting requirements for greenhouse gas (GHG) emissions under the Prevention of Significant Deterioration (PSD) and title V programs of the Clean Air Act (CAA or Act). This rulemaking is necessary because without it PSD and title V requirements would apply, as of January 2, 2011, at the 100 or 250 tons per year (tpy) levels provided under the CAA, greatly increasing the number of required permits, imposing undue costs on small sources, overwhelming the resources of permitting authorities, and severely impairing the functioning of the programs. EPA is relieving these resource burdens by phasing in the applicability of these programs to GHG sources, starting with the largest GHG emitters. This rule establishes two initial steps of the phase-in. The rule also commits the agency to take certain actions on future steps addressing smaller sources, but excludes certain smaller sources from PSD and title V permitting for GHG emissions until at least April 30, 2016.
1945-08-18
were interconnected, how? ever,, it was found that one of the oscillators had an intermit - tent, defect-. This trouble was cleared by removing the...switches> i.e., two pairsL are included in.the unit," one of ä pair of selectors {the " fast selector") steps .each time the latch operates, the other (the...34slow seleotor") steps oiice eaoh time the fast seleotor completes 25 steps. Thus, a total of 625 steps, or changes in permutation, is involved be
Effects of Helicity on Lagrangian and Eulerian Time Correlations in Turbulence
NASA Technical Reports Server (NTRS)
Rubinstein, Robert; Zhou, Ye
1998-01-01
Taylor series expansions of turbulent time correlation functions are applied to show that helicity influences Eulerian time correlations more strongly than Lagrangian time correlations: to second order in time, the helicity effect on Lagrangian time correlations vanishes, but the helicity effect on Eulerian time correlations is nonzero. Fourier analysis shows that the helicity effect on Eulerian time correlations is confined to the largest inertial range scales. Some implications for sound radiation by swirling flows are discussed.
Zonneveld, Isaak
2003-03-01
This study includes some aspects of the shift in the Dutch attitude in relation to water during the past millennia from defense to attack to keeping the balance ("co-evolution"). It has a special focus on the freshwater tidal part, which embraces the largest seaport of the world: Rotterdam, as well as the largest national park of The Netherlands. It reports especially about a young mans endeavor in half a century real time monitoring of some land(scape) units with simple means.
Interior Secretary Highlights Key Trends, Including Climate Change and Fiscal Constraint
NASA Astrophysics Data System (ADS)
Showstack, Randy
2014-06-01
Climate change is "the defining issue of our time," Department of the Interior (DOI) Secretary Sally Jewell said during her 18 June keynote addess at the AGU Science Policy Conference in Washington, D. C. The United States has to "lead by example. We can't be the largest economy in the world and the second largest producer of carbon in the world"—after China—"and not take care of our own problems first to demonstrate to the world what needs to be done," she said.
Lindsey, Bruce D.; Rupert, Michael G.
2012-01-01
Decadal-scale changes in groundwater quality were evaluated by the U.S. Geological Survey National Water-Quality Assessment (NAWQA) Program. Samples of groundwater collected from wells during 1988-2000 - a first sampling event representing the decade ending the 20th century - were compared on a pair-wise basis to samples from the same wells collected during 2001-2010 - a second sampling event representing the decade beginning the 21st century. The data set consists of samples from 1,236 wells in 56 well networks, representing major aquifers and urban and agricultural land-use areas, with analytical results for chloride, dissolved solids, and nitrate. Statistical analysis was done on a network basis rather than by individual wells. Although spanning slightly more or less than a 10-year period, the two-sample comparison between the first and second sampling events is referred to as an analysis of decadal-scale change based on a step-trend analysis. The 22 principal aquifers represented by these 56 networks account for nearly 80 percent of the estimated withdrawals of groundwater used for drinking-water supply in the Nation. Well networks where decadal-scale changes in concentrations were statistically significant were identified using the Wilcoxon-Pratt signed-rank test. For the statistical analysis of chloride, dissolved solids, and nitrate concentrations at the network level, more than half revealed no statistically significant change over the decadal period. However, for networks that had statistically significant changes, increased concentrations outnumbered decreased concentrations by a large margin. Statistically significant increases of chloride concentrations were identified for 43 percent of 56 networks. Dissolved solids concentrations increased significantly in 41 percent of the 54 networks with dissolved solids data, and nitrate concentrations increased significantly in 23 percent of 56 networks. At least one of the three - chloride, dissolved solids, or nitrate - had a statistically significant increase in concentration in 66 percent of the networks. Statistically significant decreases in concentrations were identified in 4 percent of the networks for chloride, 2 percent of the networks for dissolved solids, and 9 percent of the networks for nitrate. A larger percentage of urban land-use networks had statistically significant increases in chloride, dissolved solids, and nitrate concentrations than agricultural land-use networks. In order to assess the magnitude of statistically significant changes, the median of the differences between constituent concentrations from the first full-network sampling event and those from the second full-network sampling event was calculated using the Turnbull method. The largest median decadal increases in chloride concentrations were in networks in the Upper Illinois River Basin (67 mg/L) and in the New England Coastal Basins (34 mg/L), whereas the largest median decadal decrease in chloride concentrations was in the Upper Snake River Basin (1 mg/L). The largest median decadal increases in dissolved solids concentrations were in networks in the Rio Grande Valley (260 mg/L) and the Upper Illinois River Basin (160 mg/L). The largest median decadal decrease in dissolved solids concentrations was in the Apalachicola-Chattahoochee-Flint River Basin (6.0 mg/L). The largest median decadal increases in nitrate as nitrogen (N) concentrations were in networks in the South Platte River Basin (2.0 mg/L as N) and the San Joaquin-Tulare Basins (1.0 mg/L as N). The largest median decadal decrease in nitrate concentrations was in the Santee River Basin and Coastal Drainages (0.63 mg/L). The magnitude of change in networks with statistically significant increases typically was much larger than the magnitude of change in networks with statistically significant decreases. The magnitude of change was greatest for chloride in the urban land-use networks and greatest for dissolved solids and nitrate in the agricultural land-use networks. Analysis of data from all networks combined indicated statistically significant increases for chloride, dissolved solids, and nitrate. Although chloride, dissolved solids, and nitrate concentrations were typically less than the drinking-water standards and guidelines, a statistical test was used to determine whether or not the proportion of samples exceeding the drinking-water standard or guideline changed significantly between the first and second full-network sampling events. The proportion of samples exceeding the U.S. Environmental Protection Agency (USEPA) Secondary Maximum Contaminant Level for dissolved solids (500 milligrams per liter) increased significantly between the first and second full-network sampling events when evaluating all networks combined at the national level. Also, for all networks combined, the proportion of samples exceeding the USEPA Maximum Contaminant Level (MCL) of 10 mg/L as N for nitrate increased significantly. One network in the Delmarva Peninsula had a significant increase in the proportion of samples exceeding the MCL for nitrate. A subset of 261 wells was sampled every other year (biennially) to evaluate decadal-scale changes using a time-series analysis. The analysis of the biennial data set showed that changes were generally similar to the findings from the analysis of decadal-scale change that was based on a step-trend analysis. Because of the small number of wells in a network with biennial data (typically 4-5 wells), the time-series analysis is more useful for understanding water-quality responses to changes in site-specific conditions rather than as an indicator of the change for the entire network.
NASA Astrophysics Data System (ADS)
Amalia, E.; Moelyadi, M. A.; Ihsan, M.
2018-04-01
The flow of air passing around a circular cylinder on the Reynolds number of 250,000 is to show Von Karman Vortex Street Phenomenon. This phenomenon was captured well by using a right turbulence model. In this study, some turbulence models available in software ANSYS Fluent 16.0 was tested to simulate Von Karman vortex street phenomenon, namely k- epsilon, SST k-omega and Reynolds Stress, Detached Eddy Simulation (DES), and Large Eddy Simulation (LES). In addition, it was examined the effect of time step size on the accuracy of CFD simulation. The simulations are carried out by using two-dimensional and three- dimensional models and then compared with experimental data. For two-dimensional model, Von Karman Vortex Street phenomenon was captured successfully by using the SST k-omega turbulence model. As for the three-dimensional model, Von Karman Vortex Street phenomenon was captured by using Reynolds Stress Turbulence Model. The time step size value affects the smoothness quality of curves of drag coefficient over time, as well as affecting the running time of the simulation. The smaller time step size, the better inherent drag coefficient curves produced. Smaller time step size also gives faster computation time.
Steps wandering on the lysozyme and KDP crystals during growth in solution
NASA Astrophysics Data System (ADS)
Rashkovich, L. N.; Chernevich, T. G.; Gvozdev, N. V.; Shustin, O. A.; Yaminsky, I. V.
2001-10-01
We have applied atomic force microscopy for the study in solution of time evolution of step roughness on the crystal faces with high (pottasium dihydrophosphate: KDP) and low (lysozyme) density of kinks. It was found that the roughness increases with time revealing the time dependence as t1/4. Step velocity does not depend upon distance between steps, that is why the experimental data were interpreted on the basis of Voronkov theory, which assume, that the attachment and detachment of building units in the kinks is major limitation for crystal growth. In the frame of this theoretical model the calculation of material parameters is performed.
Two Independent Contributions to Step Variability during Over-Ground Human Walking
Collins, Steven H.; Kuo, Arthur D.
2013-01-01
Human walking exhibits small variations in both step length and step width, some of which may be related to active balance control. Lateral balance is thought to require integrative sensorimotor control through adjustment of step width rather than length, contributing to greater variability in step width. Here we propose that step length variations are largely explained by the typical human preference for step length to increase with walking speed, which itself normally exhibits some slow and spontaneous fluctuation. In contrast, step width variations should have little relation to speed if they are produced more for lateral balance. As a test, we examined hundreds of overground walking steps by healthy young adults (N = 14, age < 40 yrs.). We found that slow fluctuations in self-selected walking speed (2.3% coefficient of variation) could explain most of the variance in step length (59%, P < 0.01). The residual variability not explained by speed was small (1.5% coefficient of variation), suggesting that step length is actually quite precise if not for the slow speed fluctuations. Step width varied over faster time scales and was independent of speed fluctuations, with variance 4.3 times greater than that for step length (P < 0.01) after accounting for the speed effect. That difference was further magnified by walking with eyes closed, which appears detrimental to control of lateral balance. Humans appear to modulate fore-aft foot placement in precise accordance with slow fluctuations in walking speed, whereas the variability of lateral foot placement appears more closely related to balance. Step variability is separable in both direction and time scale into balance- and speed-related components. The separation of factors not related to balance may reveal which aspects of walking are most critical for the nervous system to control. PMID:24015308
Jones, Brian A; Hull, Melissa A; Potanos, Kristina M; Zurakowski, David; Fitzgibbons, Shimae C; Ching, Y Avery; Duggan, Christopher; Jaksic, Tom; Kim, Heung Bae
2016-01-01
Background The International Serial Transverse Enteroplasty (STEP) Data Registry is a voluntary online database created in 2004 to collect information on patients undergoing the STEP procedure. The aim of this study was to identify preoperative factors significantly associated with 1) transplantation or death, or 2) attainment of enteral autonomy following STEP. Study Design Data were collected from September 2004 to January 2010. Univariate and multivariate logistic regression analyses were applied to determine predictors of transplantation/death or enteral autonomy post-STEP. Time to reach full enteral nutrition was estimated using a Kaplan-Meier curve. Results Fourteen of the 111 patients in the Registry were excluded due to inadequate follow-up. Of the remaining 97 patients, 11 patients died, and 5 progressed to intestinal transplantation. On multivariate analysis, higher direct bilirubin and shorter pre-STEP bowel length were independently predictive of progression to transplantation or death (p = .05 and p < .001, respectively). Of the 78 patients who were ≥7 days of age and required parenteral nutrition (PN) at the time of STEP, 37 (47%) achieved enteral autonomy after the first STEP. Longer pre-STEP bowel length was also independently associated with enteral autonomy (p = .002). The median time to reach enteral autonomy based on Kaplan-Meier analysis was 21 months (95% CI: 12-30). Conclusions Overall mortality post-STEP was 11%. Pre-STEP risk factors for progressing to transplantation or death were higher direct bilirubin and shorter bowel length. Among patients who underwent STEP for short bowel syndrome, 47% attained full enteral nutrition post-STEP. Patients with longer pre-STEP bowel length were significantly more likely to achieve enteral autonomy. PMID:23357726
NASA Astrophysics Data System (ADS)
Shanmugasundaram, Jothiganesh; Lee, Eungul
2018-03-01
The association of North-East Indian Monsoon rainfall (NEIMR) over the southeastern peninsular India with the oceanic and atmospheric conditions over the adjacent ocean regions at pentad time step (five days period) was investigated during the months of October to December for the period 1985-2014. The non-parametric correlation and composite analyses were carried out for the simultaneous and lagged time steps (up to four lags) of oceanic and atmospheric variables with pentad NEIMR. The results indicated that NEIMR was significantly correlated: 1) positively with both sea surface temperature (SST) led by 1-4 pentads (lag 1-4 time steps) and latent heat flux (LHF) during the simultaneous, lag 1 and 2 time steps over the equatorial western Indian Ocean, 2) positively with SST but negatively with LHF (less heat flux from ocean to atmosphere) during the same and all the lagged time steps over the Bay of Bengal. Consistently, during the wet NEIMR pentads over the southeastern peninsular India, SST significantly increased over the Bay of Bengal during all the time steps and the equatorial western Indian Ocean during the lag 2-4 time steps, while the LHF decreased over the Bay of Bengal (all time steps) and increased over the Indian Ocean (same, lag 1 and 2). The investigation on ocean-atmospheric interaction revealed that the enhanced LHF over the equatorial western Indian Ocean was related to increased atmospheric moisture demand and increased wind speed, whereas the reduced LHF over the Bay of Bengal was associated with decreased atmospheric moisture demand and decreased wind speed. The vertically integrated moisture flux and moisture transport vectors from 1000 to 850 hPa exhibited that the moisture was carried away from the equatorial western Indian Ocean to the strong moisture convergence regions of the Bay of Bengal during the same and lag 1 time steps of wet NEIMR pentads. Further, the moisture over the Bay of Bengal was transported to the southeastern peninsular India through stronger cyclonic circulations, which were confirmed by the moisture transport vectors and positive vorticity. The identified ocean and atmosphere processes, associated with the wet NEIMR conditions, could be a valuable scientific input for enhancing the rainfall predictability, which has a huge socioeconomic value to agriculture and water resource management sectors in the southeastern peninsular India.
NASA Technical Reports Server (NTRS)
Chan, Daniel C.; Darian, Armen; Sindir, Munir
1992-01-01
We have applied and compared the efficiency and accuracy of two commonly used numerical methods for the solution of Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one can have the luxury of taking a large temporal integration step at the expense of higher memory requirement and larger operation counts per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. The memory requirement and operation count per time step are low, however, the restriction on the size of time marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them for the computation of a two-dimensional driven cavity flow with Reynolds number of 100 and 1000, respectively. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161 were used. The computations were considered after the L2-norm of the change of the dependent variables in two consecutive time steps has fallen below 10(exp -5).
Chirality in distorted square planar Pd(O,N)2 compounds.
Brunner, Henri; Bodensteiner, Michael; Tsuno, Takashi
2013-10-01
Salicylidenimine palladium(II) complexes trans-Pd(O,N)2 adopt step and bowl arrangements. A stereochemical analysis subdivides 52 compounds into 41 step and 11 bowl types. Step complexes with chiral N-substituents and all the bowl complexes induce chiral distortions in the square planar system, resulting in Δ/Λ configuration of the Pd(O,N)2 unit. In complexes with enantiomerically pure N-substituents ligand chirality entails a specific square chirality and only one diastereomer assembles in the lattice. Dimeric Pd(O,N)2 complexes with bridging N-substituents in trans-arrangement are inherently chiral. For dimers different chirality patterns for the Pd(O,N)2 square are observed. The crystals contain racemates of enantiomers. In complex two independent molecules form a tight pair. The (RC) configuration of the ligand induces the same Δ chirality in the Pd(O,N)2 units of both molecules with varying square chirality due to the different crystallographic location of the independent molecules. In complexes and atrop isomerism induces specific configurations in the Pd(O,N)2 bowl systems. The square chirality is largest for complex [(Diop)Rh(PPh3 )Cl)], a catalyst for enantioselective hydrogenation. In the lattice of two diastereomers with the same (RC ,RC) configuration in the ligand Diop but opposite Δ and Λ square configurations co-crystallize, a rare phenomenon in stereochemistry. © 2013 Wiley Periodicals, Inc.
A Class of Prediction-Correction Methods for Time-Varying Convex Optimization
NASA Astrophysics Data System (ADS)
Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro
2016-09-01
This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists either of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotical error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
Zhao, Shuzhen; He, Lujia; Feng, Chenchen; He, Xiaoli
2018-06-01
Laboratory errors in blood collection center (BCC) are most common in the preanalytical phase. It is, therefore, of vital importance for administrators to take measures to improve healthcare quality and patient safety.In 2015, a case bundle management strategy was applied in a large outpatient BCC to improve its medical quality and patient safety.Unqualified blood sampling, complications, patient waiting time, largest number of patients waiting during peak hours, patient complaints, and patient satisfaction were compared over the period from 2014 to 2016.The strategy reduced unqualified blood sampling, complications, patient waiting time, largest number of patients waiting during peak hours, and patient complaints, while improving patient satisfaction.This strategy was effective in improving BCC healthcare quality and patient safety.
Optimization of Thermal Preprocessing for Efficient Combustion of Woody Biomass
NASA Astrophysics Data System (ADS)
Kumagai, Seiji; Aranai, Masahiko; Takeda, Koichi; Enda, Yukio
We attempted to optimize both drying time and temperature for stem chips and bark of Japanese cedar in order to obtain the largest release of combustion heat. Moisture release rates of the stem and bark during air-drying in an oven were evaluated. Higher and lower heating values of stem and bark, dried at different temperatures for different lengths of time, were also evaluated. The drying conditions of 180°C and 30min resulted in the largest heat release of the stem (˜ 4%increase compared to conditions of 105°C and 30min). The optimal drying conditions were not obvious for bark. However, for the drying process in actual plants, the conditions of 180°C and 30min were suggested to be acceptable for both stem and bark.
NASA Astrophysics Data System (ADS)
Djaman, Koffi; Irmak, Suat; Sall, Mamadou; Sow, Abdoulaye; Kabenge, Isa
2017-10-01
The objective of this study was to quantify differences associated with using 24-h time step reference evapotranspiration (ETo), as compared with the sum of hourly ETo computations with the standardized ASCE Penman-Monteith (ASCE-PM) model for semi-arid dry conditions at Fanaye and Ndiaye (Senegal) and semiarid humid conditions at Sapu (The Gambia) and Kankan (Guinea). The results showed that there was good agreement between the sum of hourly ETo and daily time step ETo at all four locations. The daily time step overestimated the daily ETo relative to the sum of hourly ETo by 1.3 to 8% for the whole study periods. However, there is location and monthly dependence of the magnitude of ETo values and the ratio of the ETo values estimated by both methods. Sum of hourly ETo tends to give higher ETo during winter time at Fanaye and Sapu, while the daily ETo was higher from March to November at the same weather stations. At Ndiaye and Kankan, daily time step estimates of ETo were high during the year. The simple linear regression slopes between the sum of 24-h ETo and the daily time step ETo at all weather stations varied from 1.02 to 1.08 with high coefficient of determination (R 2 ≥ 0.87). Application of the hourly ETo estimation method might help on accurate ETo estimation to meet irrigation requirement under precision agriculture.
NASA Astrophysics Data System (ADS)
Gatto, Riccardo
2017-12-01
This article considers the random walk over Rp, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
Qualitative Features Extraction from Sensor Data using Short-time Fourier Transform
NASA Technical Reports Server (NTRS)
Amini, Abolfazl M.; Figueroa, Fernando
2004-01-01
The information gathered from sensors is used to determine the health of a sensor. Once a normal mode of operation is established any deviation from the normal behavior indicates a change. This change may be due to a malfunction of the sensor(s) or the system (or process). The step-up and step-down features, as well as sensor disturbances are assumed to be exponential. An RC network is used to model the main process, which is defined by a step-up (charging), drift, and step-down (discharging). The sensor disturbances and spike are added while the system is in drift. The system runs for a period of at least three time-constants of the main process every time a process feature occurs (e.g. step change). The Short-Time Fourier Transform of the Signal is taken using the Hamming window. Three window widths are used. The DC value is removed from the windowed data prior to taking the FFT. The resulting three dimensional spectral plots provide good time frequency resolution. The results indicate distinct shapes corresponding to each process.
NASA Astrophysics Data System (ADS)
Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.
2018-04-01
An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
Okubo, Yoshiro; Schoene, Daniel; Lord, Stephen R
2017-04-01
To examine the effects of stepping interventions on fall risk factors and fall incidence in older people. Electronic databases (PubMed, EMBASE, CINAHL, Cochrane, CENTRAL) and reference lists of included articles from inception to March 2015. Randomised (RCT) or clinical controlled trials (CCT) of volitional and reactive stepping interventions that included older (minimum age 60) people providing data on falls or fall risk factors. Meta-analyses of seven RCTs (n=660) showed that the stepping interventions significantly reduced the rate of falls (rate ratio=0.48, 95% CI 0.36 to 0.65, p<0.0001, I 2 =0%) and the proportion of fallers (risk ratio=0.51, 95% CI 0.38 to 0.68, p<0.0001, I 2 =0%). Subgroup analyses stratified by reactive and volitional stepping interventions revealed a similar efficacy for rate of falls and proportion of fallers. A meta-analysis of two RCTs (n=62) showed that stepping interventions significantly reduced laboratory-induced falls, and meta-analysis findings of up to five RCTs and CCTs (n=36-416) revealed that stepping interventions significantly improved simple and choice stepping reaction time, single leg stance, timed up and go performance (p<0.05), but not measures of strength. The findings indicate that both reactive and volitional stepping interventions reduce falls among older adults by approximately 50%. This clinically significant reduction may be due to improvements in reaction time, gait, balance and balance recovery but not in strength. Further high-quality studies aimed at maximising the effectiveness and feasibility of stepping interventions are required. CRD42015017357. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
A concept analysis of nursing overtime.
Lobo, Vanessa M; Fisher, Anita; Ploeg, Jenny; Peachey, Gladys; Akhtar-Danesh, Noori
2013-11-01
To report a concept analysis of nursing overtime. Economic constraints have resulted in hospital restructuring with the aim of reducing costs. These processes often target nurse staffing (the largest organizational expense) by increasing usage of alternative staffing strategies including overtime hours. Overtime is a multifaceted, poorly defined, and indiscriminately used concept. Analysis of nursing overtime is an important step towards development and propagation of appropriate staffing strategies and rigorous research. Concept analysis. The search of electronic literature included indexes, grey literature, dictionaries, policy statements, contracts, glossaries and ancestry searching. Sources included were published between 1993-2012; dates were chosen in relation to increases in overtime hours used as a result of the healthcare structuring in the early 1990s. Approximately 65 documents met the inclusion criteria. Walker and Avant's methodology guided the analysis. Nursing overtime can be defined by four attributes: perception of choice or control over overtime hours worked; rewards or lack thereof; time off duty counts equally as much as time on duty; and disruption due to a lack of preparation. Antecedents of overtime arise from societal, organizational, and individual levels. The consequences of nursing overtime can be positive and negative, affecting organizations, nurses, and the patients they care for. This concept analysis clarifies the intricacies surrounding nursing overtime with recommendations to advance nursing research, practice, and policies. A nursing-specific middle-range theory was proposed to guide the understanding and study of nursing overtime. © 2013 Blackwell Publishing Ltd.
Influence of viscoelastic nature on the intermittent peel-front dynamics of adhesive tape
NASA Astrophysics Data System (ADS)
Kumar, Jagadish; Ananthakrishna, G.
2010-07-01
We investigate the influence of viscoelastic nature of the adhesive on the intermittent peel front dynamics by extending a recently introduced model for peeling of an adhesive tape. As time and rate-dependent deformation of the adhesives are measured in stationary conditions, a crucial step in incorporating the viscoelastic effects applicable to unstable intermittent peel dynamics is the introduction of a dynamization scheme that eliminates the explicit time dependence in terms of dynamical variables. We find contrasting influences of viscoelastic contribution in different regions of tape mass, roller inertia, and pull velocity. As the model acoustic energy dissipated depends on the nature of the peel front and its dynamical evolution, the combined effect of the roller inertia and pull velocity makes the acoustic energy noisier for small tape mass and low-pull velocity while it is burstlike for low-tape mass, intermediate values of the roller inertia and high-pull velocity. The changes are quantified by calculating the largest Lyapunov exponent and analyzing the statistical distributions of the amplitudes and durations of the model acoustic energy signals. Both single and two stage power-law distributions are observed. Scaling relations between the exponents are derived which show that the exponents corresponding to large values of event sizes and durations are completely determined by those for small values. The scaling relations are found to be satisfied in all cases studied. Interestingly, we find only five types of model acoustic emission signals among multitude of possibilities of the peel front configurations.
Jiang, S C; Zhang, X X
2005-12-01
A two-dimensional model was developed to model the effects of dynamic changes in the physical properties on tissue temperature and damage to simulate laser-induced interstitial thermotherapy (LITT) treatment procedures with temperature monitoring. A modified Monte Carlo method was used to simulate photon transport in the tissue in the non-uniform optical property field with the finite volume method used to solve the Pennes bioheat equation to calculate the temperature distribution and the Arrhenius equation used to predict the thermal damage extent. The laser light transport and the heat transfer as well as the damage accumulation were calculated iteratively at each time step. The influences of different laser sources, different applicator sizes, and different irradiation modes on the final damage volume were analyzed to optimize the LITT treatment. The numerical results showed that damage volume was the smallest for the 1,064-nm laser, with much larger, similar damage volumes for the 980- and 850-nm lasers at normal blood perfusion rates. The damage volume was the largest for the 1,064-nm laser with significantly smaller, similar damage volumes for the 980- and 850-nm lasers with temporally interrupted blood perfusion. The numerical results also showed that the variations in applicator sizes, laser powers, heating durations and temperature monitoring ranges significantly affected the shapes and sizes of the thermal damage zones. The shapes and sizes of the thermal damage zones can be optimized by selecting different applicator sizes, laser powers, heating duration times, temperature monitoring ranges, etc.
The hyperbolic step potential: Anti-bound states, SUSY partners and Wigner time delays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gadella, M.; Kuru, Ş.; Negro, J., E-mail: jnegro@fta.uva.es
We study the scattering produced by a one dimensional hyperbolic step potential, which is exactly solvable and shows an unusual interest because of its asymmetric character. The analytic continuation of the scattering matrix in the momentum representation has a branch cut and an infinite number of simple poles on the negative imaginary axis which are related with the so called anti-bound states. This model does not show resonances. Using the wave functions of the anti-bound states, we obtain supersymmetric (SUSY) partners which are the series of Rosen–Morse II potentials. We have computed the Wigner reflection and transmission time delays formore » the hyperbolic step and such SUSY partners. Our results show that the more bound states a partner Hamiltonian has the smaller is the time delay. We also have evaluated time delays for the hyperbolic step potential in the classical case and have obtained striking similitudes with the quantum case. - Highlights: • The scattering matrix of hyperbolic step potential is studied. • The scattering matrix has a branch cut and an infinite number of poles. • The poles are associated to anti-bound states. • Susy partners using antibound states are computed. • Wigner time delays for the hyperbolic step and partner potentials are compared.« less
Effects of dual task on turning ability in stroke survivors and older adults.
Hollands, K L; Agnihotri, D; Tyson, S F
2014-09-01
Turning is an integral component of independent mobility in which stroke survivors frequently fall. This study sought to measure the effects of competing cognitive demands on the stepping patterns of stroke survivors, compared to healthy age-match adults, during turning as a putative mechanism for falls. Walking and turning (90°) was assessed under single (walking and turning alone) and dual task (subtracting serial 3s while walking and turning) conditions using an electronic, pressure-sensitive walkway. Dependent measures were time to turn, variability in time to turn, step length, step width and single support time during three steps of the turn. Turning ability in single and dual task conditions was compared between stroke survivors (n=17, mean ± SD: 59 ± 113 months post-stroke, 64 ± 10 years of age) and age-matched healthy counterparts (n=15). Both groups took longer, were more variable, tended to widen the second step and, crucially, increased single support time on the inside leg of the turn while turning and distracted. Increased single support time during turning may represent biomechanical mechanism, within stepping patterns of turning under distraction, for increased risk of falls for both stroke survivors and older adults. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
Fouad, Heba M; Abdelhakim, Mohamad A; Awadein, Ahmed; Elhilali, Hala
2016-10-01
To compare the outcomes of medial rectus (MR) muscle pulley fixation and augmented recession in children with convergence excess esotropia and variable-angle infantile esotropia. This was a prospective randomized interventional study in which children with convergence excess esotropia or variable-angle infantile esotropia were randomly allocated to either augmented MR muscle recession (augmented group) or MR muscle pulley posterior fixation (pulley group). In convergence excess, the MR recession was based on the average of distance and near angles of deviation with distance correction in the augmented group, and on the distance angle of deviation in the pulley group. In variable-angle infantile esotropia, the MR recession was based on the average of the largest and smallest angles in the augmented group and on the smallest angle in the pulley group. Pre- and postoperative ductions, versions, pattern strabismus, smallest and largest angles of deviation, and angle disparity were analyzed. Surgery was performed on 60 patients: 30 underwent bilateral augmented MR recession, and 30 underwent bilateral MR recession with pulley fixation. The success rate was statistically significantly higher (P = 0.037) in the pulley group (70%) than in the augmented group (40%). The postoperative smallest and largest angles and the angle disparity were statistically significantly lower in the pulley group than the augmented group (P < 0.01). Medial rectus muscle pulley fixation is a useful surgical step for addressing marked variability of the angle in variable angle esotropia and convergence excess esotropia. Copyright © 2016 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.
Quantum steerability: Characterization, quantification, superactivation, and unbounded amplification
NASA Astrophysics Data System (ADS)
Hsieh, Chung-Yun; Liang, Yeong-Cherng; Lee, Ray-Kuang
2016-12-01
Quantum steering, also called Einstein-Podolsky-Rosen steering, is the intriguing phenomenon associated with the ability of spatially separated observers to steer—by means of local measurements—the set of conditional quantum states accessible by a distant party. In the light of quantum information, all steerable quantum states are known to be resources for quantum information processing tasks. Here, via a quantity dubbed steering fraction, we derive a simple, but general criterion that allows one to identify quantum states that can exhibit quantum steering (without having to optimize over the measurements performed by each party), thus making an important step towards the characterization of steerable quantum states. The criterion, in turn, also provides upper bounds on the largest steering-inequality violation achievable by arbitrary finite-dimensional maximally entangled states. For the quantification of steerability, we prove that a strengthened version of the steering fraction is a convex steering monotone and demonstrate how it is related to two other steering monotones, namely, steerable weight and steering robustness. Using these tools, we further demonstrate the superactivation of steerability for a well-known family of entangled quantum states, i.e., we show how the steerability of certain entangled, but unsteerable quantum states can be recovered by allowing joint measurements on multiple copies of the same state. In particular, our approach allows one to explicitly construct a steering inequality to manifest this phenomenon. Finally, we prove that there exist examples of quantum states (including some which are unsteerable under projective measurements) whose steering-inequality violation can be arbitrarily amplified by allowing joint measurements on as little as three copies of the same state. For completeness, we also demonstrate how the largest steering-inequality violation can be used to bound the largest Bell-inequality violation and derive, analogously, a simple sufficient condition for Bell nonlocality from the latter.
Thies, Sibylle B; Richardson, James K; Demott, Trina; Ashton-Miller, James A
2005-08-01
Patients with peripheral neuropathy (PN) report greater difficulty walking on irregular surfaces with low light (IL) than on flat surfaces with regular lighting (FR). We tested the primary hypothesis that older PN patients would demonstrate greater step width and step width variability under IL conditions than under FR conditions. Forty-two subjects (22 male, 20 female: mean +/- S.D.: 64.7 +/- 9.8 years) with PN underwent history, physical examination, and electrodiagnostic testing. Subjects were asked to walk 10 m at a comfortable speed while kinematic and force data were measured at 100 Hz using optoelectronic markers and foot switches. Ten trials were conducted under both IL and FR conditions. Step width, time, length, and speed were calculated with a MATLAB algorithm, with the standard deviation serving as the measure of variability. The results showed that under IL, as compared to FR, conditions subjects demonstrated greater step width (197.1 +/- 40.8 mm versus 180.5 +/- 32.4 mm; P < 0.001) and step width variability (40.4 +/- 9.0 mm versus 34.5 +/- 8.4 mm; P < 0.001), step time and its variability (P < 0.001 and P = 0.003, respectively), and step length variability (P < 0.001). Average step length and gait speed decreased under IL conditions (P < 0.001 for both). Step width variability and step time variability correlated best under IL conditions with a clinical measure of PN severity and fall history, respectively. We conclude that IL conditions cause PN patients to increase the variability of their step width and other gait parameters.
Caetano, Maria Joana D; Lord, Stephen R; Allen, Natalie E; Brodie, Matthew A; Song, Jooeun; Paul, Serene S; Canning, Colleen G; Menant, Jasmine C
2018-02-01
Decline in the ability to take effective steps and to adapt gait, particularly under challenging conditions, may be important reasons why people with Parkinson's disease (PD) have an increased risk of falling. This study aimed to determine the extent of stepping and gait adaptability impairments in PD individuals as well as their associations with PD symptoms, cognitive function and previous falls. Thirty-three older people with PD and 33 controls were assessed in choice stepping reaction time, Stroop stepping and gait adaptability tests; measurements identified as fall risk factors in older adults. People with PD had similar mean choice stepping reaction times to healthy controls, but had significantly greater intra-individual variability. In the Stroop stepping test, the PD participants were more likely to make an error (48 vs 18%), took 715 ms longer to react (2312 vs 1517 ms) and had significantly greater response variability (536 vs 329 ms) than the healthy controls. People with PD also had more difficulties adapting their gait in response to targets (poorer stepping accuracy) and obstacles (increased number of steps) appearing at short notice on a walkway. Within the PD group, higher disease severity, reduced cognition and previous falls were associated with poorer stepping and gait adaptability performances. People with PD have reduced ability to adapt gait to unexpected targets and obstacles and exhibit poorer stepping responses, particularly in a test condition involving conflict resolution. Such impaired stepping responses in Parkinson's disease are associated with disease severity, cognitive impairment and falls. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Christian, David
1991-01-01
Urges an approach to the teaching of history that takes the largest possible perspective, crossing time as well as space. Discusses the problems and advantages of such an approach. Describes a course on "big" history that begins with time, creation myths, and astronomy, and moves on to paleontology and evolution. (DK)
Step Detection Robust against the Dynamics of Smartphones
Lee, Hwan-hee; Choi, Suji; Lee, Myeong-jin
2015-01-01
A novel algorithm is proposed for robust step detection irrespective of step mode and device pose in smartphone usage environments. The dynamics of smartphones are decoupled into a peak-valley relationship with adaptive magnitude and temporal thresholds. For extracted peaks and valleys in the magnitude of acceleration, a step is defined as consisting of a peak and its adjacent valley. Adaptive magnitude thresholds consisting of step average and step deviation are applied to suppress pseudo peaks or valleys that mostly occur during the transition among step modes or device poses. Adaptive temporal thresholds are applied to time intervals between peaks or valleys to consider the time-varying pace of human walking or running for the correct selection of peaks or valleys. From the experimental results, it can be seen that the proposed step detection algorithm shows more than 98.6% average accuracy for any combination of step mode and device pose and outperforms state-of-the-art algorithms. PMID:26516857
Habituation of self-motion perception following unidirectional angular velocity steps.
Clément, Gilles; Terlevic, Robert
2016-09-07
We investigated whether the perceived angular velocity following velocity steps of 80°/s in the dark decreased with the repetition of the stimulation in the same direction. The perceptual response to velocity steps in the opposite direction was also compared before and after this unidirectional habituation training. Participants indicated their perceived angular velocity by clicking on a wireless mouse every time they felt that they had rotated by 90°. The prehabituation perceptual response decayed exponentially with a time constant of 23.9 s. After 100 velocity steps in the same direction, this time constant was 12.9 s. The time constant after velocity steps in the opposite direction was 13.4 s, indicating that the habituation of the sensation of rotation is not direction specific. The peak velocity of the perceptual response was not affected by the habituation training. The differences between the habituation characteristics of self-motion perception and eye movements confirm that different velocity storage mechanisms mediate ocular and perceptual responses.
Automating the evaluation of flood damages: methodology and potential gains
NASA Astrophysics Data System (ADS)
Eleutério, Julian; Martinez, Edgar Daniel
2010-05-01
The evaluation of flood damage potential consists of three main steps: assessing and processing data, combining data and calculating potential damages. The first step consists of modelling hazard and assessing vulnerability. In general, this step of the evaluation demands more time and investments than the others. The second step of the evaluation consists of combining spatial data on hazard with spatial data on vulnerability. Geographic Information System (GIS) is a fundamental tool in the realization of this step. GIS software allows the simultaneous analysis of spatial and matrix data. The third step of the evaluation consists of calculating potential damages by means of damage-functions or contingent analysis. All steps demand time and expertise. However, the last two steps must be realized several times when comparing different management scenarios. In addition, uncertainty analysis and sensitivity test are made during the second and third steps of the evaluation. The feasibility of these steps could be relevant in the choice of the extent of the evaluation. Low feasibility could lead to choosing not to evaluate uncertainty or to limit the number of scenario comparisons. Several computer models have been developed over time in order to evaluate the flood risk. GIS software is largely used to realise flood risk analysis. The software is used to combine and process different types of data, and to visualise the risk and the evaluation results. The main advantages of using a GIS in these analyses are: the possibility of "easily" realising the analyses several times, in order to compare different scenarios and study uncertainty; the generation of datasets which could be used any time in future to support territorial decision making; the possibility of adding information over time to update the dataset and make other analyses. However, these analyses require personnel specialisation and time. The use of GIS software to evaluate the flood risk requires personnel with a double professional specialisation. The professional should be proficient in GIS software and in flood damage analysis (which is already a multidisciplinary field). Great effort is necessary in order to correctly evaluate flood damages, and the updating and the improvement of the evaluation over time become a difficult task. The automation of this process should bring great advance in flood management studies over time, especially for public utilities. This study has two specific objectives: (1) show the entire process of automation of the second and third steps of flood damage evaluations; and (2) analyse the induced potential gains in terms of time and expertise needed in the analysis. A programming language is used within GIS software in order to automate hazard and vulnerability data combination and potential damages calculation. We discuss the overall process of flood damage evaluation. The main result of this study is a computational tool which allows significant operational gains on flood loss analyses. We quantify these gains by means of a hypothetical example. The tool significantly reduces the time of analysis and the needs for expertise. An indirect gain is that sensitivity and cost-benefit analyses can be more easily realized.
A local time stepping algorithm for GPU-accelerated 2D shallow water models
NASA Astrophysics Data System (ADS)
Dazzi, Susanna; Vacondio, Renato; Dal Palù, Alessandro; Mignosa, Paolo
2018-01-01
In the simulation of flooding events, mesh refinement is often required to capture local bathymetric features and/or to detail areas of interest; however, if an explicit finite volume scheme is adopted, the presence of small cells in the domain can restrict the allowable time step due to the stability condition, thus reducing the computational efficiency. With the aim of overcoming this problem, the paper proposes the application of a Local Time Stepping (LTS) strategy to a GPU-accelerated 2D shallow water numerical model able to handle non-uniform structured meshes. The algorithm is specifically designed to exploit the computational capability of GPUs, minimizing the overheads associated with the LTS implementation. The results of theoretical and field-scale test cases show that the LTS model guarantees appreciable reductions in the execution time compared to the traditional Global Time Stepping strategy, without compromising the solution accuracy.
Schulze, M; Kuster, C; Schäfer, J; Jung, M; Grossfeld, R
2018-03-01
The processing of ejaculates is a fundamental step for the fertilizing capacity of boar spermatozoa. The aim of the present study was to identify factors that affect quality of boar semen doses. The production process during 1 day of semen processing in 26 European boar studs was monitored. In each boar stud, nine to 19 randomly selected ejaculates from 372 Pietrain boars were analyzed for sperm motility, acrosome and plasma membrane integrity, mitochondrial activity and thermo-resistance (TRT). Each ejaculate was monitored for production time and temperature for each step in semen processing using the special programmed software SEQU (version 1.7, Minitüb, Tiefenbach, Germany). The dilution of ejaculates with a short-term extender was completed in one step in 10 AI centers (n = 135 ejaculates), in two steps in 11 AI centers (n = 158 ejaculates) and in three steps in five AI centers (n = 79 ejaculates). Results indicated there was a greater semen quality with one-step isothermal dilution compared with the multi-step dilution of AI semen doses (total motility TRT d7: 71.1 ± 19.2%, 64.6 ± 20.0%, 47.1 ± 27.1%; one-step compared with two-step compared with the three-step dilution; P < .05). There was a marked advantage when using the one-step isothermal dilution regarding time management, preservation suitability, stability and stress resistance. One-step dilution caused significant lower holding times of raw ejaculates and reduced the possible risk of making mistakes due to a lower number of processing steps. These results lead to refined recommendations for boar semen processing. Copyright © 2018 Elsevier B.V. All rights reserved.
Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.
Shelley, M J; Tao, L
2001-01-01
To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time-step of Delta(t) = 0.5 x 10(-3) seconds, whereas to achieve comparable accuracy using a recalibrated second-order or a first-order algorithm requires time-steps of 10(-5) seconds or 10(-9) seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
Yu, Mengmeng; Shen, Lin; Zhang, Aijun; Sheng, Jiping
2011-10-15
It has been known that methyl jasmonate (MeJA) interacts with ethylene to elicit resistance. In green mature tomato fruits (Lycopersicon esculentum cv. Lichun), 0.02mM MeJA increased the activity of 1-aminocyclopropane-1-carboxylate oxidase (ACO), and consequently influenced the last step of ethylene biosynthesis. Fruits treated with a combination of 0.02 MeJA and 0.02 α-aminoisobutyric acid (AIB, a competitive inhibitor of ACO) exhibited a lower ethylene production comparing to that by 0.02mM MeJA alone. The increased activities of defense enzymes and subsequent control of disease incidence caused by Botrytis cinerea with 0.2mM MeJA treatment was impaired by AIB as well. A close relationship (P<0.05) was found between the activity alterations of ACO and that of chitinase (CHI) and β-1,3-glucanase (GLU). In addition, this study further detected the changes of gene expressions and enzyme kinetics of ACO to different concentrations of MeJA. LeACO1 was found the principal member from the ACO gene family to respond to MeJA. Accumulation of LeACO1/3/4 transcripts followed the concentration pattern of MeJA treatments, where the largest elevations were reached by 0.2mM. For kinetic analysis, K(m) values of ACO stepped up during the experiment and reached the maximums at 0.2mM MeJA with ascending concentrations of treatments. V(max) exhibited a gradual increase from 3h to 24h, and the largest induction appeared with 1.0mM MeJA. The results suggested that ACO is involved in MeJA-induced resistance in tomato, and the concentration influence of MeJA on ACO was attributable to the variation of gene transcripts and enzymatic properties. Copyright © 2011 Elsevier GmbH. All rights reserved.
THE BRIGHTEST CLUSTER GALAXY IN A85: THE LARGEST CORE KNOWN SO FAR
DOE Office of Scientific and Technical Information (OSTI.GOV)
López-Cruz, O.; Añorve, C.; Ibarra-Medel, H. J.
2014-11-10
We have found that the brightest cluster galaxy (BCG) in A85, Holm 15A, displays the largest core known so far. Its cusp radius, r {sub γ} = 4.57 ± 0.06 kpc (4.''26 ± 0.''06), is more than 18 times larger than the mean for BCGs and ≳ 1 kpc larger than A2261-BCG, hitherto the largest-cored BCG. Holm 15A hosts the luminous amorphous radio source 0039-095B and has the optical signature of a LINER. Scaling laws indicate that this core could host a supermassive black hole (SMBH) of mass M {sub •} ∼ (10{sup 9}-10{sup 11}) M {sub ☉}. We suggestmore » that cores this large represent a relatively short phase in the evolution of BCGs, whereas the masses of their associated SBMH might be set by initial conditions.« less
Bourret, S.C.; Swansen, J.E.
1982-07-02
A stepping motor is microprocessor controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.
One False Step: "Detroit," "Step" and Movies of Rising and Falling
ERIC Educational Resources Information Center
Beck, Bernard
2018-01-01
"Detroit" and "Step" are two recent movies in the context of urban riots in protest of police brutality. They refer to time periods separated by half a century, but there are common themes in the two that seem appropriate to both times. The movies are not primarily concerned with the riot events, but the riot is a major…
Comparative analysis of peak-detection techniques for comprehensive two-dimensional chromatography.
Latha, Indu; Reichenbach, Stephen E; Tao, Qingping
2011-09-23
Comprehensive two-dimensional gas chromatography (GC×GC) is a powerful technology for separating complex samples. The typical goal of GC×GC peak detection is to aggregate data points of analyte peaks based on their retention times and intensities. Two techniques commonly used for two-dimensional peak detection are the two-step algorithm and the watershed algorithm. A recent study [4] compared the performance of the two-step and watershed algorithms for GC×GC data with retention-time shifts in the second-column separations. In that analysis, the peak retention-time shifts were corrected while applying the two-step algorithm but the watershed algorithm was applied without shift correction. The results indicated that the watershed algorithm has a higher probability of erroneously splitting a single two-dimensional peak than the two-step approach. This paper reconsiders the analysis by comparing peak-detection performance for resolved peaks after correcting retention-time shifts for both the two-step and watershed algorithms. Simulations with wide-ranging conditions indicate that when shift correction is employed with both algorithms, the watershed algorithm detects resolved peaks with greater accuracy than the two-step method. Copyright © 2011 Elsevier B.V. All rights reserved.
Zhao, Xiaohui; Oppler, Scott; Dunleavy, Dana; Kroopnick, Marc
2010-10-01
This study investigated the validity of four approaches (the average, most recent, highest-within-administration, and highest-across-administration approaches) of using repeaters' Medical College Admission Test (MCAT) scores to predict Step 1 scores. Using the differential predication method, this study investigated the magnitude of differences in the expected Step 1 total scores between MCAT nonrepeaters and three repeater groups (two-time, three-time, and four-time test takers) for the four scoring approaches. For the average score approach, matriculants with the same MCAT average are expected to achieve similar Step 1 total scores regardless of whether the individual attempted the MCAT exam one or multiple times. For the other three approaches, repeaters are expected to achieve lower Step 1 scores than nonrepeaters; for a given MCAT score, as the number of attempts increases, the expected Step 1 decreases. The effect was strongest for the highest-across-administration approach, followed by the highest-within-administration approach, and then the most recent approach. Using the average score is the best approach for considering repeaters' MCAT scores in medical school admission decisions.
Wafa, Sharifah Wajihah; Aziz, Nur Nadzirah; Shahril, Mohd Razif; Halib, Hasmiza; Rahim, Marhasiyah; Janssen, Xanne
2017-04-01
This study describes the patterns of objectively measured sitting, standing and stepping in obese children using the activPALTM and highlights possible differences in sedentary levels and patterns during weekdays and weekends. Sixty-five obese children, aged 9-11 years, were recruited from primary schools in Terengganu, Malaysia. Sitting, standing and stepping were objectively measured using an activPALTM accelerometer over a period of 4-7 days. Obese children spent an average of 69.6% of their day sitting/lying, 19.1% standing and 11.3% stepping. Weekdays and weekends differed significantly in total time spent sitting/lying, standing, stepping, step count, number of sedentary bouts and length of sedentary bouts (p < 0.05, respectively). Obese children spent a large proportion of their time sedentarily, and they spent more time sedentarily during weekends compared with weekdays. This study on sedentary behaviour patterns presents valuable information for designing and implementing strategies to decrease sedentary time among obese children, particularly during weekends. © The Author [2016]. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Kahleová, Hana; Lloren, Jan Irene; Mashchak, Andrew; Hill, Martin; Fraser, Gary
2016-01-01
Our study focuses on examining the relationship between the frequency and timing of meals and changes in BMI in the Adventist Health Study-2 (AHS-2) which represents a relatively healthy population in North America. A longitudinal analysis was undertaken using data from 48 673 individuals monitored over an average period of 7.43 ± 1.24 years. The number of meals per day, length of nighttime fasting, eating breakfast and timing of the largest meal of the day (breakfast 5-11 a.m., lunch noon-4 p.m. or supper/dinner 5-11 p.m.) were used as independent variables. The primary output was the change in body mass index (BMI) once in a year. Linear regression analyses were adjusted for all important demographic factors and lifestyle factors. Consumption of 1 and 2 meals a day was associated with decrease in BMI (-0.04; 95% CI -0.06 to -0.03 and -0.02; 95% CI -0.03 to -0,01 kg.m-2 per year, respectively). On the other hand, consumption of 3 or more meals a day was associated with increase in BMI, in a linear relation (p < 0.001). BMI of those who skipped breakfast increased (0.029; 95% CI 0.021-0.037 kg.m-2 per year; p = 0.002) as compared to no BMI change in those who had breakfast (-0.0002; 95% CI -0.005 to + 0.004 kg.m-2 per year). Those, whose largest meal of the day was breakfast, recorded no significant change in BMI (-0.002 95% CI -0.008 to +0.004 kg.m-2 per year). On the contrary, the largest supper was associated with the greatest increase in BMI (0.034; 95% CI 0.029-0.040 kg.m-2 per year). Our results indicate that eating less frequently, consuming breakfast and having the largest meal in the morning hours may be effective measures to prevent weight gain.Key words: body mass index (BMI) - frequency and timing of meals - body mass regulation - breakfast.
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Cannizzaro, Frank; Melson, N. D.
1991-01-01
A general multiblock method for the solution of the three-dimensional, unsteady, compressible, thin-layer Navier-Stokes equations has been developed. The convective and pressure terms are spatially discretized using Roe's flux differencing technique while the viscous terms are centrally differenced. An explicit Runge-Kutta method is used to advance the solution in time. Local time stepping, adaptive implicit residual smoothing, and the Full Approximation Storage (FAS) multigrid scheme are added to the explicit time stepping scheme to accelerate convergence to steady state. Results for three-dimensional test cases are presented and discussed.
NASA Astrophysics Data System (ADS)
Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.
2016-01-01
The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e., the observation that large-scale rainfall structures are more persistent and predictable than small-scale convective cells. This paper presents the development, adaptation and verification of the STEPS system for Belgium (STEPS-BE). STEPS-BE provides in real-time 20-member ensemble precipitation nowcasts at 1 km and 5 min resolutions up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h-1 are reliable up to 60-90 min lead time, while the ones of exceeding 5.0 mm h-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 75-90 % of the forecast errors.
NASA Astrophysics Data System (ADS)
Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.
2015-07-01
The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e. the observation that large scale rainfall structures are more persistent and predictable than small scale convective cells. This paper presents the development, adaptation and verification of the system STEPS for Belgium (STEPS-BE). STEPS-BE provides in real-time 20 member ensemble precipitation nowcasts at 1 km and 5 min resolution up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h-1 are reliable up to 60-90 min lead time, while the ones of exceeding 5.0 mm h-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 80-90 % of the forecast errors.
Improving arrival time identification in transient elastography
NASA Astrophysics Data System (ADS)
Klein, Jens; McLaughlin, Joyce; Renzi, Daniel
2012-04-01
In this paper, we improve the first step in the arrival time algorithm used for shear wave speed recovery in transient elastography. In transient elastography, a shear wave is initiated at the boundary and the interior displacement of the propagating shear wave is imaged with an ultrasound ultra-fast imaging system. The first step in the arrival time algorithm finds the arrival times of the shear wave by cross correlating displacement time traces (the time history of the displacement at a single point) with a reference time trace located near the shear wave source. The second step finds the shear wave speed from the arrival times. In performing the first step, we observe that the wave pulse decorrelates as it travels through the medium, which leads to inaccurate estimates of the arrival times and ultimately to blurring and artifacts in the shear wave speed image. In particular, wave ‘spreading’ accounts for much of this decorrelation. Here we remove most of the decorrelation by allowing the reference wave pulse to spread during the cross correlation. This dramatically improves the images obtained from arrival time identification. We illustrate the improvement of this method on phantom and in vivo data obtained from the laboratory of Mathias Fink at ESPCI, Paris.
NASA Astrophysics Data System (ADS)
Suryono, T. J.; Gofuku, A.
2018-02-01
One of the important thing in the mitigation of accidents in nuclear power plant accidents is time management. The accidents should be resolved as soon as possible in order to prevent the core melting and the release of radioactive material to the environment. In this case, operators should follow the emergency operating procedure related with the accident, in step by step order and in allowable time. Nowadays, the advanced main control rooms are equipped with computer-based procedures (CBPs) which is make it easier for operators to do their tasks of monitoring and controlling the reactor. However, most of the CBPs do not include the time remaining display feature which informs operators of time available for them to execute procedure steps and warns them if the they reach the time limit. Furthermore, the feature will increase the awareness of operators about their current situation in the procedure. This paper investigates this issue. The simplified of emergency operating procedure (EOP) of steam generator tube rupture (SGTR) accident of PWR plant is applied. In addition, the sequence of actions on each step of the procedure is modelled using multilevel flow modelling (MFM) and influenced propagation rule. The prediction of action time on each step is acquired based on similar case accidents and the Support Vector Regression. The derived time will be processed and then displayed on a CBP user interface.
FAQ HURRICANES, TYPHOONS, AND TROPICAL CYCLONES
Time ? How do I tell at what time a satellite picture was taken ? A14) How do I convert from mph to largest number of hurricanes in the Atlantic Ocean at the same time? E19) How many direct hits by hurricane and how does this feedback to the storm itself? I : REAL TIME INFORMATION I1) Where can I get real
NASA Astrophysics Data System (ADS)
Little, Duncan A.; Tennyson, Jonathan; Plummer, Martin; Noble, Clifford J.; Sunderland, Andrew G.
2017-06-01
TIMEDELN implements the time-delay method of determining resonance parameters from the characteristic Lorentzian form displayed by the largest eigenvalues of the time-delay matrix. TIMEDELN constructs the time-delay matrix from input K-matrices and analyses its eigenvalues. This new version implements multi-resonance fitting and may be run serially or as a high performance parallel code with three levels of parallelism. TIMEDELN takes K-matrices from a scattering calculation, either read from a file or calculated on a dynamically adjusted grid, and calculates the time-delay matrix. This is then diagonalized, with the largest eigenvalue representing the longest time-delay experienced by the scattering particle. A resonance shows up as a characteristic Lorentzian form in the time-delay: the programme searches the time-delay eigenvalues for maxima and traces resonances when they pass through different eigenvalues, separating overlapping resonances. It also performs the fitting of the calculated data to the Lorentzian form and outputs resonance positions and widths. Any remaining overlapping resonances can be fitted jointly. The branching ratios of decay into the open channels can also be found. The programme may be run serially or in parallel with three levels of parallelism. The parallel code modules are abstracted from the main physics code and can be used independently.
Brakenridge, C L; Fjeldsoe, B S; Young, D C; Winkler, E A H; Dunstan, D W; Straker, L M; Healy, G N
2016-11-04
Office workers engage in high levels of sitting time. Effective, context-specific, and scalable strategies are needed to support widespread sitting reduction. This study aimed to evaluate organisational-support strategies alone or in combination with an activity tracker to reduce sitting in office workers. From one organisation, 153 desk-based office workers were cluster-randomised (by team) to organisational support only (e.g., manager support, emails; 'Group ORG', 9 teams, 87 participants), or organisational support plus LUMOback activity tracker ('Group ORG + Tracker', 9 teams, 66 participants). The waist-worn tracker provided real-time feedback and prompts on sitting and posture. ActivPAL3 monitors were used to ascertain primary outcomes (sitting time during work- and overall hours) and other activity outcomes: prolonged sitting time (≥30 min bouts), time between sitting bouts, standing time, stepping time, and number of steps. Health and work outcomes were assessed by questionnaire. Changes within each group (three- and 12 months) and differences between groups were analysed by linear mixed models. Missing data were multiply imputed. At baseline, participants (46 % women, 23-58 years) spent (mean ± SD) 74.3 ± 9.7 % of their workday sitting, 17.5 ± 8.3 % standing and 8.1 ± 2.7 % stepping. Significant (p < 0.05) reductions in sitting time (both work and overall) were observed within both groups, but only at 12 months. For secondary activity outcomes, Group ORG significantly improved in work prolonged sitting, time between sitting bouts and standing time, and overall prolonged sitting time (12 months), and in overall standing time (three- and 12 months); while Group ORG + Tracker, significantly improved in work prolonged sitting, standing, stepping and overall standing time (12 months). Adjusted for confounders, the only significant between-group differences were a greater stepping time and step count for Group ORG + Tracker relative to Group ORG (+20.6 min/16 h day, 95 % CI: 3.1, 38.1, p = 0.021; +846.5steps/16 h day, 95 % CI: 67.8, 1625.2, p = 0.033) at 12 months. Observed changes in health and work outcomes were small and not statistically significant. Organisational-support strategies with or without an activity tracker resulted in improvements in sitting, prolonged sitting and standing; adding a tracker enhanced stepping changes. Improvements were most evident at 12 months, suggesting the organisational-support strategies may have taken time to embed within the organisation. Australian New Zealand Clinical Trial Registry: ACTRN12614000252617 . Registered 10 March 2014.
Analysis of Modeling Assumptions used in Production Cost Models for Renewable Integration Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoll, Brady; Brinkman, Gregory; Townsend, Aaron
2016-01-01
Renewable energy integration studies have been published for many different regions exploring the question of how higher penetration of renewable energy will impact the electric grid. These studies each make assumptions about the systems they are analyzing; however the effect of many of these assumptions has not been yet been examined and published. In this paper we analyze the impact of modeling assumptions in renewable integration studies, including the optimization method used (linear or mixed-integer programming) and the temporal resolution of the dispatch stage (hourly or sub-hourly). We analyze each of these assumptions on a large and a small systemmore » and determine the impact of each assumption on key metrics including the total production cost, curtailment of renewables, CO2 emissions, and generator starts and ramps. Additionally, we identified the impact on these metrics if a four-hour ahead commitment step is included before the dispatch step and the impact of retiring generators to reduce the degree to which the system is overbuilt. We find that the largest effect of these assumptions is at the unit level on starts and ramps, particularly for the temporal resolution, and saw a smaller impact at the aggregate level on system costs and emissions. For each fossil fuel generator type we measured the average capacity started, average run-time per start, and average number of ramps. Linear programming results saw up to a 20% difference in number of starts and average run time of traditional generators, and up to a 4% difference in the number of ramps, when compared to mixed-integer programming. Utilizing hourly dispatch instead of sub-hourly dispatch saw no difference in coal or gas CC units for either start metric, while gas CT units had a 5% increase in the number of starts and 2% increase in the average on-time per start. The number of ramps decreased up to 44%. The smallest effect seen was on the CO2 emissions and total production cost, with a 0.8% and 0.9% reduction respectively when using linear programming compared to mixed-integer programming and 0.07% and 0.6% reduction, respectively, in the hourly dispatch compared to sub-hourly dispatch.« less
Son, Jeong-Whan; Lee, Min Sun; Lee, Jae Sung
2017-01-21
Positron emission tomography (PET) detectors with the ability to encode depth-of-interaction (DOI) information allow us to simultaneously improve the spatial resolution and sensitivity of PET scanners. In this study, we propose a DOI PET detector based on a stair-pattern reflector arrangement inserted between pixelated crystals and a single-ended scintillation light readout. The main advantage of the proposed method is its simplicity; DOI information is decoded from a flood map and the data can be simply acquired by using a single-ended readout system. Another potential advantage is that the two-step DOI detectors can provide the largest peak position distance in a flood map because two-dimensional peak positions can be evenly distributed. We conducted a Monte Carlo simulation and obtained flood maps. Then, we conducted experimental studies using two-step DOI arrays of 5 × 5 Lu 1.9 Y 0.1 SiO 5 :Ce crystals with a cross-section of 1.7 × 1.7 mm 2 and different detector configurations: an unpolished single-layer ( U S) array, a polished single-layer ( P S) array and a polished stacked two-layer ( P T) array. For each detector configuration, both air gaps and room-temperature vulcanization (RTV) silicone gaps were tested. Detectors U S and P T showed good peak separation in each scintillator with an average peak-to-valley ratio (PVR) and distance-to-width ratio (DWR) of 2.09 and 1.53, respectively. Detector P S RTV showed lower PVR and DWR (1.65 and 1.34, respectively). The configuration of detector P T Air is preferable for the construction of time-of-flight-DOI detectors because timing resolution was degraded by only about 40 ps compared with that of a non-DOI detector. The performance of detectors U S Air and P S RTV was lower than that of a non-DOI detector, and thus these designs are favorable when the manufacturing cost is more important than timing performance. The results demonstrate that the proposed DOI-encoding method is a promising candidate for PET scanners that require high resolution and sensitivity and operate with conventional acquisition systems.
Changes in Tsunami Risk Perception in Northern Chile After the April 1 2014 Tsunami
NASA Astrophysics Data System (ADS)
Carvalho, L.; Lagos, M.
2016-12-01
Tsunamis are a permanent risk in the coast of Chile. Apart from that, the coastal settlements and the Chilean State, historically, have underestimated the danger of tsunamis. On April 1 2014, a magnitude Mw 8.2 earthquake and a minor tsunami occurred off the coast of northern Chile. Considering that over decades this region has been awaiting an earthquake that would generate a large tsunami, in this study we inquired if the familiarity with the subject tsunami and the lack of frequent tsunamis or occurrence of non-hazardous tsunamis for people could lead to adaptive responses to underestimate the danger. The purpose of this study was to evaluate the perceived risk of tsunami in the city of Arica, before and after the April 1 2014 event. A questionnaire was designed and applied in two time periods to 547 people living in low coastal areas in Arica. In the first step, the survey was applied in March 2014. While in step 2, new questions were included and the survey was reapplied, a year after the minor tsunami. A descriptive analysis of data was performed, followed by a comparison between means. We identified illusion of invulnerability, especially regarding to assessment that preparedness and education actions are enough. Answers about lack of belief in the occurrence of future tsunamis were also reported. At the same time, there were learning elements identified. After April 1, a larger number of participants described self-protection actions for emergency, as well as performing of preventive actions. In addition, we mapped answers about the tsunami danger degree in different locations in the city, where we observed a high knowledge of it. When compared with other hazards, the concern about tsunamis were very high, lower than earthquakes hazard, but higher than pollution, crime and rain. Moreover, we identified place attachment in answers about sense of security and affective bonds with home and their location. We discussed the relationship between risk perception, illusion of invulnerability and place attachment. Finally, we questioned whether learning elements will remain in time, or if this elements are related to short-term public interest. The April 1 event was not the largest earthquake expected in this subduction zone, therefore, it is extremely important that communities are educated and prepared to live with risk.
How many steps/day are enough? For adults.
Tudor-Locke, Catrine; Craig, Cora L; Brown, Wendy J; Clemes, Stacy A; De Cocker, Katrien; Giles-Corti, Billie; Hatano, Yoshiro; Inoue, Shigeru; Matsudo, Sandra M; Mutrie, Nanette; Oppert, Jean-Michel; Rowe, David A; Schmidt, Michael D; Schofield, Grant M; Spence, John C; Teixeira, Pedro J; Tully, Mark A; Blair, Steven N
2011-07-28
Physical activity guidelines from around the world are typically expressed in terms of frequency, duration, and intensity parameters. Objective monitoring using pedometers and accelerometers offers a new opportunity to measure and communicate physical activity in terms of steps/day. Various step-based versions or translations of physical activity guidelines are emerging, reflecting public interest in such guidance. However, there appears to be a wide discrepancy in the exact values that are being communicated. It makes sense that step-based recommendations should be harmonious with existing evidence-based public health guidelines that recognize that "some physical activity is better than none" while maintaining a focus on time spent in moderate-to-vigorous physical activity (MVPA). Thus, the purpose of this review was to update our existing knowledge of "How many steps/day are enough?", and to inform step-based recommendations consistent with current physical activity guidelines. Normative data indicate that healthy adults typically take between 4,000 and 18,000 steps/day, and that 10,000 steps/day is reasonable for this population, although there are notable "low active populations." Interventions demonstrate incremental increases on the order of 2,000-2,500 steps/day. The results of seven different controlled studies demonstrate that there is a strong relationship between cadence and intensity. Further, despite some inter-individual variation, 100 steps/minute represents a reasonable floor value indicative of moderate intensity walking. Multiplying this cadence by 30 minutes (i.e., typical of a daily recommendation) produces a minimum of 3,000 steps that is best used as a heuristic (i.e., guiding) value, but these steps must be taken over and above habitual activity levels to be a true expression of free-living steps/day that also includes recommendations for minimal amounts of time in MVPA. Computed steps/day translations of time in MVPA that also include estimates of habitual activity levels equate to 7,100 to 11,000 steps/day. A direct estimate of minimal amounts of MVPA accumulated in the course of objectively monitored free-living behaviour is 7,000-8,000 steps/day. A scale that spans a wide range of incremental increases in steps/day and is congruent with public health recognition that "some physical activity is better than none," yet still incorporates step-based translations of recommended amounts of time in MVPA may be useful in research and practice. The full range of users (researchers to practitioners to the general public) of objective monitoring instruments that provide step-based outputs require good reference data and evidence-based recommendations to be able to design effective health messages congruent with public health physical activity guidelines, guide behaviour change, and ultimately measure, track, and interpret steps/day.
How many steps/day are enough? for adults
2011-01-01
Physical activity guidelines from around the world are typically expressed in terms of frequency, duration, and intensity parameters. Objective monitoring using pedometers and accelerometers offers a new opportunity to measure and communicate physical activity in terms of steps/day. Various step-based versions or translations of physical activity guidelines are emerging, reflecting public interest in such guidance. However, there appears to be a wide discrepancy in the exact values that are being communicated. It makes sense that step-based recommendations should be harmonious with existing evidence-based public health guidelines that recognize that "some physical activity is better than none" while maintaining a focus on time spent in moderate-to-vigorous physical activity (MVPA). Thus, the purpose of this review was to update our existing knowledge of "How many steps/day are enough?", and to inform step-based recommendations consistent with current physical activity guidelines. Normative data indicate that healthy adults typically take between 4,000 and 18,000 steps/day, and that 10,000 steps/day is reasonable for this population, although there are notable "low active populations." Interventions demonstrate incremental increases on the order of 2,000-2,500 steps/day. The results of seven different controlled studies demonstrate that there is a strong relationship between cadence and intensity. Further, despite some inter-individual variation, 100 steps/minute represents a reasonable floor value indicative of moderate intensity walking. Multiplying this cadence by 30 minutes (i.e., typical of a daily recommendation) produces a minimum of 3,000 steps that is best used as a heuristic (i.e., guiding) value, but these steps must be taken over and above habitual activity levels to be a true expression of free-living steps/day that also includes recommendations for minimal amounts of time in MVPA. Computed steps/day translations of time in MVPA that also include estimates of habitual activity levels equate to 7,100 to 11,000 steps/day. A direct estimate of minimal amounts of MVPA accumulated in the course of objectively monitored free-living behaviour is 7,000-8,000 steps/day. A scale that spans a wide range of incremental increases in steps/day and is congruent with public health recognition that "some physical activity is better than none," yet still incorporates step-based translations of recommended amounts of time in MVPA may be useful in research and practice. The full range of users (researchers to practitioners to the general public) of objective monitoring instruments that provide step-based outputs require good reference data and evidence-based recommendations to be able to design effective health messages congruent with public health physical activity guidelines, guide behaviour change, and ultimately measure, track, and interpret steps/day. PMID:21798015
NASA Astrophysics Data System (ADS)
de Graaf, I. E. M.
2014-12-01
The world's largest accessible source of freshwater is hidden underground. However it remains difficult to estimate its volume, and we still cannot answer the question; will there be enough for everybody? In many places of the world groundwater abstraction is unsustainable: more water is used than refilled, leading to decreasing river discharges and declining groundwater levels. It is predicted that for many regions in the world unsustainable water use will increase in the coming decades, due to rising human water use under a changing climate. It would not take long before water shortage causes widespread droughts and the first water war begins. Improving our knowledge about our hidden water is the first step to prevent such large water conflicts. The world's largest aquifers are mapped, but these maps do not mention how much water these aquifers contain or how fast water levels decline. If we can add thickness and geohydrological information to these aquifer maps, we can estimate how much water is stored and its flow direction. Also, data on groundwater age and how fast the aquifer is refilled is needed to predict the impact of human water use and climate change on the groundwater resource. Ultimately, if we can provide this knowledge water conflicts will focus more on a fair distribution instead of absolute amounts of water.
Meyers, Robert W; Oliver, Jon L; Hughes, Michael G; Lloyd, Rhodri S; Cronin, John B
2017-04-01
Meyers, RW, Oliver, JL, Hughes, MG, Lloyd, RS, and Cronin, JB. Influence of age, maturity, and body size on the spatiotemporal determinants of maximal sprint speed in boys. J Strength Cond Res 31(4): 1009-1016, 2017-The aim of this study was to investigate the influence of age, maturity, and body size on the spatiotemporal determinants of maximal sprint speed in boys. Three-hundred and seventy-five boys (age: 13.0 ± 1.3 years) completed a 30-m sprint test, during which maximal speed, step length, step frequency, contact time, and flight time were recorded using an optical measurement system. Body mass, height, leg length, and a maturity offset represented somatic variables. Step frequency accounted for the highest proportion of variance in speed (∼58%) in the pre-peak height velocity (pre-PHV) group, whereas step length explained the majority of the variance in speed (∼54%) in the post-PHV group. In the pre-PHV group, mass was negatively related to speed, step length, step frequency, and contact time; however, measures of stature had a positive influence on speed and step length yet a negative influence on step frequency. Speed and step length were also negatively influence by mass in the post-PHV group, whereas leg length continued to positively influence step length. The results highlighted that pre-PHV boys may be deemed step frequency reliant, whereas those post-PHV boys may be marginally step length reliant. Furthermore, the negative influence of body mass, both pre-PHV and post-PHV, suggests that training to optimize sprint performance in youth should include methods such as plyometric and strength training, where a high neuromuscular focus and the development force production relative to body weight are key foci.
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2014-01-01
At end of the 2012 hurricane season the National Hurricane Center retired the original HURDAT dataset and replaced it with the newer version HURDAT2, which reformatted the original data and included additional information, in particular, estimates of the 34-, 50, and 64-kt wind radii for the interval 2004-2013. During the brief 10-year interval, some 164 tropical cyclones are noted to have formed in the North Atlantic basin, with 77 becoming hurricanes. Hurricane Sandy (2012) stands out as being the largest individual storm that occurred in the North Atlantic basin during the 2004 -2013 timeframe, both in terms of its 34- and 64-kt wind radii and wind areas, having maximum 34- and 64-kt wind radii, maximum wind areas, and average wind areas each more than 2 standard deviations larger than the corresponding means. In terms of the largest yearly total 34-kt wind area (i.e., the sum of all individual storm 34-kt wind areas during the year), the year 2010 stands out as being the largest (about 423 × 10(exp 6) nmi(exp 2)), compared to the mean of about 174 × 10(exp 6) nmi(exp 2)), surpassing the year 2005 (353 x 10(exp 6) nmi(exp 2)) that had the largest number of individual storms (28). However, in terms of the largest yearly total 64-kt wind area, the year 2005 was the largest (about 9 × 10(exp 6) nmi(exp 2)), compared to the mean of about 3 × 106 nmi(exp 2)). Interesting is that the ratio of total 64-kt wind area to total 34-kt wind area has decreased over time, from 0.034 in 2004 to 0.008 in 2013.
A Coordinated Initialization Process for the Distributed Space Exploration Simulation
NASA Technical Reports Server (NTRS)
Crues, Edwin Z.; Phillips, Robert G.; Dexter, Dan; Hasan, David
2007-01-01
A viewgraph presentation on the federate initialization process for the Distributed Space Exploration Simulation (DSES) is described. The topics include: 1) Background: DSES; 2) Simulation requirements; 3) Nine Step Initialization; 4) Step 1: Create the Federation; 5) Step 2: Publish and Subscribe; 6) Step 3: Create Object Instances; 7) Step 4: Confirm All Federates Have Joined; 8) Step 5: Achieve initialize Synchronization Point; 9) Step 6: Update Object Instances With Initial Data; 10) Step 7: Wait for Object Reflections; 11) Step 8: Set Up Time Management; 12) Step 9: Achieve startup Synchronization Point; and 13) Conclusions
Mapping Rice Cropping Patterns Using Multi-temporal Sentinel-1A Data
NASA Astrophysics Data System (ADS)
Nguyen, S. T.; Chen, C. F.; Chen, C. R.; Chiang, S. H.; Khin, L. V.
2016-12-01
Rice is the world's third largest crop behind maize and wheat, providing food for more than half of the world's population. Rice agriculture has been a key driver of socioeconomic development in Vietnam as it provides food for more than 90 million people and is considered as a main source of income for the majority of rural populations. Vietnam has approximately 7.5 million ha, annually producing roughly 39 million tons of grain rice making this nation become one of the largest rice suppliers on earth with approximately 7.4 million tons of grain rice exported annually. Thus, monitoring rice-growing areas to meet people's food needs while safeguarding the environment is important to developing strategies for national food security and rice grain exports. Previous studies of rice crop monitoring are often carried using coarse resolution optical satellite data such as MODIS data. Because rice fields in Vietnam are generally small and fragmental, the use of coarse resolution optical satellite data reveals disadvantages due to mixed-pixel issues and data contamination caused by cloud cover. The Sentinel-1A satellite launched on 3 April 2014 provides opportunities to collectively map small patches of rice fields at different scales owing to its high spatial resolution of 10 m and temporal resolution of 12 days. The main objective of this study is to develop an approach to map rice-cropping systems in An Giang and Dong Thap provinces, South Vietnam using multi-temporal Sentinel-1A VH data. We processed the data following four main steps: (1) data pre-processing, (2) constructing smooth time-series VH backscatter data, (3) rice crop classification using the support vector machines (SVM), and (4) accuracy assessment. The mapping results validated with the ground the ground reference data indicated that the overall accuracy and Kappa coefficient were 83.4% and 0.7, respectively. The mapping results also compared with the government's rice area statistics at the district level reaffirmed the consistency between these two datasets with the correlation coefficient (R2) of 0.93 and the relative error in area of 2.2%. This study demonstrates the potential application of time-series Sentinel-1A data for rice crop mapping and the methods are thus proposed for large-scale rice crop monitoring in the country.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.
2014-01-15
Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s{sup 2} times larger than amore » single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.« less
NASA Astrophysics Data System (ADS)
Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.
2014-01-01
Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge-Kutta-like time-steps to advance the parabolic terms by a time-step that is s2 times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge-Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems - a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.
Analysis of 3D poroelastodynamics using BEM based on modified time-step scheme
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Petrov, A. N.; Vorobtsov, I. V.
2017-10-01
The development of 3d boundary elements modeling of dynamic partially saturated poroelastic media using a stepping scheme is presented in this paper. Boundary Element Method (BEM) in Laplace domain and the time-stepping scheme for numerical inversion of the Laplace transform are used to solve the boundary value problem. The modified stepping scheme with a varied integration step for quadrature coefficients calculation using the symmetry of the integrand function and integral formulas of Strongly Oscillating Functions was applied. The problem with force acting on a poroelastic prismatic console end was solved using the developed method. A comparison of the results obtained by the traditional stepping scheme with the solutions obtained by this modified scheme shows that the computational efficiency is better with usage of combined formulas.
Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.
1995-01-01
The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical time subiterations are subject to time-step limitations in practice that are removed by pseudo time sub-iterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.
Ejupi, Andreas; Gschwind, Yves J; Brodie, Matthew; Zagler, Wolfgang L; Lord, Stephen R; Delbaere, Kim
2016-01-01
Quick protective reactions such as reaching or stepping are important to avoid a fall or minimize injuries. We developed Kinect-based choice reaching and stepping reaction time tests (Kinect-based CRTs) and evaluated their ability to differentiate between older fallers and non-fallers and the feasibility of administering them at home. A total of 94 community-dwelling older people were assessed on the Kinect-based CRTs in the laboratory and were followed-up for falls for 6 months. Additionally, a subgroup (n = 20) conducted the Kinect-based CRTs at home. Signal processing algorithms were developed to extract features for reaction, movement and the total time from the Kinect skeleton data. Nineteen participants (20.2 %) reported a fall in the 6 months following the assessment. The reaction time (fallers: 797 ± 136 ms, non-fallers: 714 ± 89 ms), movement time (fallers: 392 ± 50 ms, non-fallers: 358 ± 51 ms) and total time (fallers: 1189 ± 170 ms, non-fallers: 1072 ± 109 ms) of the reaching reaction time test differentiated well between the fallers and non-fallers. The stepping reaction time test did not significantly discriminate between the two groups in the prospective study. The correlations between the laboratory and in-home assessments were 0.689 for the reaching reaction time and 0.860 for stepping reaction time. The study findings indicate that the Kinect-based CRT tests are feasible to administer in clinical and in-home settings, and thus represents an important step towards the development of sensor-based fall risk self-assessments. With further validation, the assessments may prove useful as a fall risk screen and home-based assessment measures for monitoring changes over time and effects of fall prevention interventions.
Frazier, Zachary
2012-01-01
Abstract Particle-based Brownian dynamics simulations offer the opportunity to not only simulate diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle based reaction-diffusion methods often are hampered by the relatively small time step needed for accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm, which detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We can show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction diffusion modelling. PMID:22697237
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) highorder CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
NASA Astrophysics Data System (ADS)
Murray, William Breen
Boca de Potrerillos is an archaeological site located in the municipio of Mina, Nuevo León, about 60 km. northwest of Monterrey, Mexicós third largest city. Its principal feature is one of the largest concentrations of petroglyphs in the country. Archaeoastronomical features include petroglyphic markers of the cardinal directions, dot configurations which count lunar synodic periods, and one of the earliest horizon calendars in North America. They indicate that the site was probably used for sky observation from the Middle Archaic time period onward and may represent evidence of the initial stages in the development of Mesoamerican numeration and astronomy.
Wave Driven Fluid-Sediment Interactions over Rippled Beds
NASA Astrophysics Data System (ADS)
Foster, Diane; Nichols, Claire
2008-11-01
Empirical investigations relating vortex shedding over rippled beds to oscillatory flows date back to Darwin in 1883. Observations of the shedding induced by oscillating forcing over fixed beds have shown vortical structures to reach maximum strength at 90 degrees when the horizontal velocity is largest. The objective of this effort is to examine the vortex generation and ejection over movable rippled beds in a full-scale, free surface wave environment. Observations of the two-dimensional time-varying velocity field over a movable sediment bed were obtained with a submersible Particle Image Velocimetry (PIV) system in two wave flumes. One wave flume was full scale and had a natural sand bed and the other flume had an artificial sediment bed with a specific gravity of 1.6. Full scale observations over an irregularly rippled bed show that the vortices generated during offshore directed flow over the steeper bed form slope were regularly ejected into the water column and were consistent with conceptual models of the oscillatory flow over a backward facing step. The results also show that vortices remain coherent during ejection when the background flow stalls (i.e. both the velocity and acceleration temporarily approach zero). These results offer new insight into fluid sediment interaction over rippled beds.
Foot force production and asymmetries in elite rowers.
Buckeridge, Erica M; Bull, Anthony M J; McGregor, Alison H
2014-03-01
The rowing stroke is a leg-driven action, in which forces developed by the lower limbs provide a large proportion of power delivered to the oars. In terms of both performance and injury, it is important to initiate each stroke with powerful and symmetrical loading of the foot stretchers. The aims of this study were to assess the reliability of foot force measured by footplates developed for the Concept2 indoor ergometer and to examine the magnitude and symmetry of bilateral foot forces in different groups of rowers. Five heavyweight female scullers, six heavyweight female sweep rowers, and six lightweight male (LWM) rowers performed an incremental step test on the Concept2 ergometer. Vertical, horizontal, and resultant forces were recorded bilaterally, and asymmetries were quantified using the absolute symmetry index. Foot force was measured with high consistency (coefficient of multiple determination > 0.976 +/- 0.010). Relative resultant, vertical, and horizontal forces were largest in LWM rowers, whilst average foot forces significantly increased across stroke rates for all three groups of rowers. Asymmetries ranged from 5.3% for average resultant force to 28.9% for timing of peak vertical force. Asymmetries were not sensitive to stroke rate or rowing group, however, large inter-subject variability in asymmetries was evident.
Quicker Q-Learning in Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Agogino, Adrian K.; Tumer, Kagan
2005-01-01
Multi-agent learning in Markov Decisions Problems is challenging because of the presence ot two credit assignment problems: 1) How to credit an action taken at time step t for rewards received at t' greater than t; and 2) How to credit an action taken by agent i considering the system reward is a function of the actions of all the agents. The first credit assignment problem is typically addressed with temporal difference methods such as Q-learning OK TD(lambda) The second credit assi,onment problem is typically addressed either by hand-crafting reward functions that assign proper credit to an agent, or by making certain independence assumptions about an agent's state-space and reward function. To address both credit assignment problems simultaneously, we propose the Q Updates with Immediate Counterfactual Rewards-learning (QUICR-learning) designed to improve both the convergence properties and performance of Q-learning in large multi-agent problems. Instead of assuming that an agent s value function can be made independent of other agents, this method suppresses the impact of other agents using counterfactual rewards. Results on multi-agent grid-world problems over multiple topologies show that QUICR-learning can achieve up to thirty fold improvements in performance over both conventional and local Q-learning in the largest tested systems.
Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.
Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich
2016-01-01
We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.
Weatherbee, Courtney R.; Pechal, Jennifer L.; Stamper, Trevor; Benbow, M. Eric
2017-01-01
Common forensic entomology practice has been to collect the largest Diptera larvae from a scene and use published developmental data, with temperature data from the nearest weather station, to estimate larval development time and post-colonization intervals (PCIs). To evaluate the accuracy of PCI estimates among Calliphoridae species and spatially distinct temperature sources, larval communities and ambient air temperature were collected at replicate swine carcasses (N = 6) throughout decomposition. Expected accumulated degree hours (ADH) associated with Cochliomyia macellaria and Phormia regina third instars (presence and length) were calculated using published developmental data sets. Actual ADH ranges were calculated using temperatures recorded from multiple sources at varying distances (0.90 m–7.61 km) from the study carcasses: individual temperature loggers at each carcass, a local weather station, and a regional weather station. Third instars greatly varied in length and abundance. The expected ADH range for each species successfully encompassed the average actual ADH for each temperature source, but overall under-represented the range. For both calliphorid species, weather station data were associated with more accurate PCI estimates than temperature loggers associated with each carcass. These results provide an important step towards improving entomological evidence collection and analysis techniques, and developing forensic error rates. PMID:28375172
NASA Astrophysics Data System (ADS)
Dinger, R.; Kinzel, G.; Lam, W.; Jones, S.
1993-01-01
Studies were conducted of the enhanced radar cross section (RCS) and improved inverse synthetic aperture radar (ISAR) image quality that may result at millimeter-wave (mmw) frequencies. To study the potential for mmw radar in these areas, a program was initiated in FY-90 to design and fabricate a 49.0- to 49.5-GHz stepped-frequency radar. After conducting simultaneous measurements of the RCS of an airborne Piper Navajo twin-engine aircraft at 9.0 and 49.0 GHz, the RCS at 49.0 GHz was always found to be higher than at 9.0 GHz by an amount that depended on the target aspect angle. The largest increase was 19 dB and was measured at nose-on incidence; at other angles of incidence, the increase ranged from 3 to 10 dB. The increase averaged over a 360-degree aspect-angle change was 7.2 dB. The 49.0-GHz radar has demonstrated a capability to gather well-calibrated millimeter-wave RCS data of flying targets. In addition, the successful ISAR images obtainable with short aperture time suggest that 49.0-GHz radar may have a role to play in noncooperative target identification (NCTI).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giacomelli, L.; Department of Physics, Università degli Studi di Milano-Bicocca, Milano; Conroy, S.
The Joint European Torus (JET, Culham, UK) is the largest tokamak in the world devoted to nuclear fusion experiments of magnetic confined Deuterium (D)/Deuterium-Tritium (DT) plasmas. Neutrons produced in these plasmas are measured using various types of neutron detectors and spectrometers. Two of these instruments on JET make use of organic liquid scintillator detectors. The neutron emission profile monitor implements 19 liquid scintillation counters to detect the 2.45 MeV neutron emission from D plasmas. A new compact neutron spectrometer is operational at JET since 2010 to measure the neutron energy spectra from both D and DT plasmas. Liquid scintillation detectorsmore » are sensitive to both neutron and gamma radiation but give light responses of different decay time such that pulse shape discrimination techniques can be applied to identify the neutron contribution of interest from the data. The most common technique consists of integrating the radiation pulse shapes within different ranges of their rising and/or trailing edges. In this article, a step forward in this type of analysis is presented. The method applies a tomographic analysis of the 3-dimensional neutron and gamma pulse shape and pulse height distribution data obtained from liquid scintillation detectors such that n/γ discrimination can be improved to lower energies and additional information can be gained on neutron contributions to the gamma events and vice versa.« less
NASA Astrophysics Data System (ADS)
Woeger, Julia; Kinoshita, Shunichi; Wolfgang, Eder; Briguglio, Antonino; Hohenegger, Johann
2016-04-01
Operculina complanata was collected in 20 and 50 m depth around the Island of Sesoko belonging to Japans southernmost prefecture Okinawa in a series of monthly sampling over a period of 16 months (Apr.2014-July2015). A minimum of 8 specimens (4 among the smallest and 4 among the largest) per sampling were cultured in a long term experiment that was set up to approximate conditions in the field as closely as possible. A set up allowing recognition of individual specimens enabled consistent documentation of chamber formation, which in combination with μ-CT-scanning after the investigation period permitted the assignment of growth steps to specific time periods. These data were used to fit various mathematical models to describe growth (exponential-, logistic-, generalized logistic-, Gompertz-function) and chamber building rate (Michaelis-Menten-, Bertalanffy- function) of Operculina complanata. The mathematically retrieved maximum lifespan and mean chamber building rate found in cultured Operculina complanata were further compared to first results obtained by the simultaneously conducted "natural laboratory approach". Even though these comparisons hint at a somewhat stunted growth and truncated life spans of Operculina complanata in culture, they represent a possibility to assess and improve the quality of further cultivation set ups, opening new prospects to a better understanding of the their theoretical niches.
Oudenhoven, Laura M; Boes, Judith M; Hak, Laura; Faber, Gert S; Houdijk, Han
2017-01-25
Running specific prostheses (RSP) are designed to replicate the spring-like behaviour of the human leg during running, by incorporating a real physical spring in the prosthesis. Leg stiffness is an important parameter in running as it is strongly related to step frequency and running economy. To be able to select a prosthesis that contributes to the required leg stiffness of the athlete, it needs to be known to what extent the behaviour of the prosthetic leg during running is dominated by the stiffness of the prosthesis or whether it can be regulated by adaptations of the residual joints. The aim of this study was to investigate whether and how athletes with an RSP could regulate leg stiffness during distance running at different step frequencies. Seven endurance runners with an unilateral transtibial amputation performed five running trials on a treadmill at a fixed speed, while different step frequencies were imposed (preferred step frequency (PSF) and -15%, -7.5%, +7.5% and +15% of PSF). Among others, step time, ground contact time, flight time, leg stiffness and joint kinetics were measured for both legs. In the intact leg, increasing step frequency was accompanied by a decrease in both contact and flight time, while in the prosthetic leg contact time remained constant and only flight time decreased. In accordance, leg stiffness increased in the intact leg, but not in the prosthetic leg. Although a substantial contribution of the residual leg to total leg stiffness was observed, this contribution did not change considerably with changing step frequency. Amputee athletes do not seem to be able to alter prosthetic leg stiffness to regulate step frequency during running. This invariant behaviour indicates that RSP stiffness has a large effect on total leg stiffness and therefore can have an important influence on running performance. Nevertheless, since prosthetic leg stiffness was considerably lower than stiffness of the RSP, compliance of the residual leg should not be ignored when selecting RSP stiffness. Copyright © 2016 Elsevier Ltd. All rights reserved.
Finite cohesion due to chain entanglement in polymer melts.
Cheng, Shiwang; Lu, Yuyuan; Liu, Gengxin; Wang, Shi-Qing
2016-04-14
Three different types of experiments, quiescent stress relaxation, delayed rate-switching during stress relaxation, and elastic recovery after step strain, are carried out in this work to elucidate the existence of a finite cohesion barrier against free chain retraction in entangled polymers. Our experiments show that there is little hastened stress relaxation from step-wise shear up to γ = 0.7 and step-wise extension up to the stretching ratio λ = 1.5 at any time before or after the Rouse time. In contrast, a noticeable stress drop stemming from the built-in barrier-free chain retraction is predicted using the GLaMM model. In other words, the experiment reveals a threshold magnitude of step-wise deformation below which the stress relaxation follows identical dynamics whereas the GLaMM or Doi-Edwards model indicates a monotonic acceleration of the stress relaxation dynamics as a function of the magnitude of the step-wise deformation. Furthermore, a sudden application of startup extension during different stages of stress relaxation after a step-wise extension, i.e. the delayed rate-switching experiment, shows that the geometric condensation of entanglement strands in the cross-sectional area survives beyond the reptation time τd that is over 100 times the Rouse time τR. Our results point to the existence of a cohesion barrier that can prevent free chain retraction upon moderate deformation in well-entangled polymer melts.
Retail Trade. Industry Training Monograph No. 7.
ERIC Educational Resources Information Center
Dumbrell, Tom
Australia's retailing sector is the largest single industry of employment, with more than 1.2 million workers. It is characterized by high levels of part-time and casual employment; a young work force, including many young people still in full-time education; and employment widely distributed geographically. Over the past 10 years, employment has…
26 CFR 1.403(b)-5 - Nondiscrimination rules.
Code of Federal Regulations, 2010 CFR
2010-04-01
...)(1) of this section, an employee is not treated as being permitted to have section 403(b) elective... under the contract with the largest limitation, and applies to part-time employees as well as full-time...) General rule. Under section 403(b)(12)(A)(i), employer contributions and after-tax employee contributions...
Kim, Hong-Seok; Choi, Dasom; Kang, Il-Byeong; Kim, Dong-Hyeon; Yim, Jin-Hyeok; Kim, Young-Ji; Chon, Jung-Whan; Oh, Deog-Hwan; Seo, Kun-Ho
2017-02-01
Culture-based detection of nontyphoidal Salmonella spp. in foods requires at least four working days; therefore, new detection methods that shorten the test time are needed. In this study, we developed a novel single-step Salmonella enrichment broth, SSE-1, and compared its detection capability with that of commercial single-step ONE broth-Salmonella (OBS) medium and a conventional two-step enrichment method using buffered peptone water and Rappaport-Vassiliadis soy broth (BPW-RVS). Minimally processed lettuce samples were artificially inoculated with low levels of healthy and cold-injured Salmonella Enteritidis (10 0 or 10 1 colony-forming unit/25 g), incubated in OBS, BPW-RVS, and SSE-1 broths, and streaked on xylose lysine deoxycholate (XLD) agar. Salmonella recoverability was significantly higher in BPW-RVS (79.2%) and SSE-1 (83.3%) compared to OBS (39.3%) (p < 0.05). Our data suggest that the SSE-1 single-step enrichment broth could completely replace two-step enrichment with reduced enrichment time from 48 to 24 h, performing better than commercial single-step enrichment medium in the conventional nonchromogenic Salmonella detection, thus saving time, labor, and cost.
NASA Astrophysics Data System (ADS)
Jeffery, David J.; Mazzali, Paolo A.
2007-08-01
Giant steps is a technique to accelerate Monte Carlo radiative transfer in optically-thick cells (which are isotropic and homogeneous in matter properties and into which astrophysical atmospheres are divided) by greatly reducing the number of Monte Carlo steps needed to propagate photon packets through such cells. In an optically-thick cell, packets starting from any point (which can be regarded a point source) well away from the cell wall act essentially as packets diffusing from the point source in an infinite, isotropic, homogeneous atmosphere. One can replace many ordinary Monte Carlo steps that a packet diffusing from the point source takes by a randomly directed giant step whose length is slightly less than the distance to the nearest cell wall point from the point source. The giant step is assigned a time duration equal to the time for the RMS radius for a burst of packets diffusing from the point source to have reached the giant step length. We call assigning giant-step time durations this way RMS-radius (RMSR) synchronization. Propagating packets by series of giant steps in giant-steps random walks in the interiors of optically-thick cells constitutes the technique of giant steps. Giant steps effectively replaces the exact diffusion treatment of ordinary Monte Carlo radiative transfer in optically-thick cells by an approximate diffusion treatment. In this paper, we describe the basic idea of giant steps and report demonstration giant-steps flux calculations for the grey atmosphere. Speed-up factors of order 100 are obtained relative to ordinary Monte Carlo radiative transfer. In practical applications, speed-up factors of order ten and perhaps more are possible. The speed-up factor is likely to be significantly application-dependent and there is a trade-off between speed-up and accuracy. This paper and past work suggest that giant-steps error can probably be kept to a few percent by using sufficiently large boundary-layer optical depths while still maintaining large speed-up factors. Thus, giant steps can be characterized as a moderate accuracy radiative transfer technique. For many applications, the loss of some accuracy may be a tolerable price to pay for the speed-ups gained by using giant steps.
Confirming criticality safety of TRU waste with neutron measurements and risk analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winn, W.G.; Hochel, R.D.
1992-04-01
The criticality safety of {sup 239}Pu in 55-gallon drums stored in TRU waste containers (culverts) is confirmed using NDA neutron measurements and risk analyses. The neutron measurements yield a {sup 239}Pu mass and k{sub eff} for a culvert, which contains up to 14 drums. Conservative probabilistic risk analyses were developed for both drums and culverts. Overall {sup 239}Pu mass estimates are less than a calculated safety limit of 2800 g per culvert. The largest measured k{sub eff} is 0.904. The largest probability for a critical drum is 6.9 {times} 10{sup {minus}8} and that for a culvert is 1.72 {times} 10{supmore » {minus}7}. All examined suspect culverts, totaling 118 in number, are appraised as safe based on these observations.« less
Trabant, Dennis C.
1999-01-01
The volume of four of the largest glaciers on Iliamna Volcano was estimated using the volume model developed for evaluating glacier volumes on Redoubt Volcano. The volume model is controlled by simulated valley cross sections that are constructed by fitting third-order polynomials to the shape of the valley walls exposed above the glacier surface. Critical cross sections were field checked by sounding with ice-penetrating radar during July 1998. The estimated volumes of perennial snow and glacier ice for Tuxedni, Lateral, Red, and Umbrella Glaciers are 8.6, 0.85, 4.7, and 0.60 cubic kilometers respectively. The estimated volume of snow and ice on the upper 1,000 meters of the volcano is about 1 cubic kilometer. The volume estimates are thought to have errors of no more than ?25 percent. The volumes estimated for the four largest glaciers are more than three times the total volume of snow and ice on Mount Rainier and about 82 times the total volume of snow and ice that was on Mount St. Helens before its May 18, 1980 eruption. Volcanoes mantled by substantial snow and ice covers have produced the largest and most catastrophic lahars and floods. Therefore, it is prudent to expect that, during an eruptive episode, flooding and lahars threaten all of the drainages heading on Iliamna Volcano. On the other hand, debris avalanches can happen any time. Fortunately, their influence is generally limited to the area within a few kilometers of the summit.
NASA Astrophysics Data System (ADS)
Setiawan, A.; Wangsaputra, R.; Martawirya, Y. Y.; Halim, A. H.
2016-02-01
This paper deals with Flexible Manufacturing System (FMS) production rescheduling due to unavailability of cutting tools caused either of cutting tool failure or life time limit. The FMS consists of parallel identical machines integrated with an automatic material handling system and it runs fully automatically. Each machine has a same cutting tool configuration that consists of different geometrical cutting tool types on each tool magazine. The job usually takes two stages. Each stage has sequential operations allocated to machines considering the cutting tool life. In the real situation, the cutting tool can fail before the cutting tool life is reached. The objective in this paper is to develop a dynamic scheduling algorithm when a cutting tool is broken during unmanned and a rescheduling needed. The algorithm consists of four steps. The first step is generating initial schedule, the second step is determination the cutting tool failure time, the third step is determination of system status at cutting tool failure time and the fourth step is the rescheduling for unfinished jobs. The approaches to solve the problem are complete-reactive scheduling and robust-proactive scheduling. The new schedules result differences starting time and completion time of each operations from the initial schedule.
Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.
2018-01-30
In this study, an optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubicmore » "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a condition on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.« less
Light propagation in the averaged universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagheri, Samae; Schwarz, Dominik J., E-mail: s_bagheri@physik.uni-bielefeld.de, E-mail: dschwarz@physik.uni-bielefeld.de
Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of themore » null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.« less
U.S. metric board 1979 survey of selected large U.S. firms and industries
NASA Astrophysics Data System (ADS)
King, L. L.
1980-05-01
A mail survey of randomly chosen 202 of the 1000 largest manufacturing and mining firms, as listed by Fortune magazine, was conducted in late 1979 and early 1980. About 64 percent (112 firms) responded with useful data. Among the findings are: about 63 percent of the largest firms produce at least one metric product; about 48 percent of exported sales are of metric products; about three quarters of the firms selling metric products sell products labelled in customary and metric units (soft conversion); about half the firms selling metric products sell hard converted products (products manufactured in metric units); little corporate coordination and planning seems to accompany conversion to the metric system; about one-third of the firms see laws and reputation impeding conversion; over 50 percent see lack of customer demand as inhibiting conversion; and the most realistic time period for conversion is 10 years, the minimum time for conversion (under pressure) is three years, and the perferred time (at the firm's own pace) is eight years.
NASA Astrophysics Data System (ADS)
Sangwal, K.; Torrent-Burgues, J.; Sanz, F.; Gorostiza, P.
1997-02-01
The experimental results of the formation of step bunches and macrosteps on the {100} face of L-arginine phosphate monohydrate crystals grown from aqueous solutions at different supersaturations studied by using atomic force microscopy are described and discussed. It was observed that (1) the step height does not remain constant with increasing time but fluctuates within a particular range of heights, which depends on the region of step bunches, (2) the maximum height and the slope of bunched steps increases with growth time as well as supersaturation used for growth, and that (3) the slope of steps of relatively small heights is usually low with a value of about 8° and does not depend on the region of formation of step bunches, but the slope of steps of large heights is up to 21°. Analysis of the experimental results showed that (1) at a particular value of supersaturation the ratio of the average step height to the average step spacing is a constant, suggesting that growth of the {100} face of L-arginine phosphate monohydrate crystals occurs by direct integration of growth entities to growth steps, and that (2) the formation of step bunches and macrosteps follows the dynamic theory of faceting, advanced by Vlachos et al.
Lin, Cheng-Chieh; Creath, Robert A; Rogers, Mark W
2016-01-01
In people with Parkinson disease (PD), difficulties with initiating stepping may be related to impairments of anticipatory postural adjustments (APAs). Increased variability in step length and step time has been observed in gait initiation in individuals with PD. In this study, we investigated whether the ability to generate consistent APAs during gait initiation is compromised in these individuals. Fifteen subjects with PD and 8 healthy control subjects were instructed to take rapid forward steps after a verbal cue. The changes in vertical force and ankle marker position were recorded via force platforms and a 3-dimensional motion capture system, respectively. Means, standard deviations, and coefficients of variation of both timing and magnitude of vertical force, as well as stepping variables, were calculated. During the postural phase of gait initiation the interval was longer and the force modulation was smaller in subjects with PD. Both the variability of timing and force modulation were larger in subjects with PD. Individuals with PD also had a longer time to complete the first step, but no significant differences were found for the variability of step time, length, and speed between groups. The increased variability of APAs during gait initiation in subjects with PD could affect posture-locomotion coupling, and lead to start hesitation, and even falls. Future studies are needed to investigate the effect of rehabilitation interventions on the variability of APAs during gait initiation in individuals with PD.Video abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A119).
Konik, Anita; Kuklewicz, Stanisław; Rosłoniec, Ewelina; Zając, Marcin; Spannbauer, Anna; Nowobilski, Roman; Mika, Piotr
2016-01-01
The purpose of the study was to evaluate selected temporal and spatial gait parameters in patients with intermittent claudication after completion of 12-week supervised treadmill walking training. The study included 36 patients (26 males and 10 females) aged: mean 64 (SD 7.7) with intermittent claudication. All patients were tested on treadmill (Gait Trainer, Biodex). Before the programme and after its completion, the following gait biomechanical parameters were tested: step length (cm), step cycle (cycle/s), leg support time (%), coefficient of step variation (%) as well as pain-free walking time (PFWT) and maximal walking time (MWT) were measured. Training was conducted in accordance with the current TASC II guidelines. After 12 weeks of training, patients showed significant change in gait biomechanics consisting in decreased frequency of step cycle (p < 0.05) and extended step length (p < 0.05). PFWT increased by 96% (p < 0.05). MWT increased by 100% (p < 0.05). After completing the training, patients' gait was more regular, which was expressed via statistically significant decrease of coefficient of variation (p < 0.05) for both legs. No statistically significant relation between the post-training improvement of PFWT and MWT and step length increase and decreased frequency of step cycle was observed (p > 0.05). Twelve-week treadmill walking training programme may lead to significant improvement of temporal and spatial gait parameters in patients with intermittent claudication. Twelve-week treadmill walking training programme may lead to significant improvement of pain-free walking time and maximum walking time in patients with intermittent claudication.