Implicit time accurate simulation of unsteady flow
NASA Astrophysics Data System (ADS)
van Buuren, René; Kuerten, Hans; Geurts, Bernard J.
2001-03-01
Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution was determined for comparison with the implicit second-order Crank-Nicolson scheme. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme the time step has to obey temporal resolution requirements and numerical convergence conditions. The nonlinear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted, and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for setting numerical time integration parameters that yield an efficient time-accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems may occur that are closely related to a highly complex structure of the basins of attraction of the iterative method.
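A minimal sketch of the dual time stepping idea described above, on a scalar model problem: each Crank-Nicolson step is converged by explicit relaxation in a pseudo-time variable. The paper's actual solver uses a quasi-Newton iteration with symmetric block Gauss-Seidel rather than this plain relaxation, and all step sizes here are illustrative.

```python
import numpy as np

# Crank-Nicolson advanced in pseudo-time (dual time stepping). The physical
# step dt can exceed the explicit stability limit; each step is converged by
# iterating in pseudo-time tau until the unsteady residual is small.

lam = -50.0                      # stiff model problem du/dt = lam * u
f = lambda u: lam * u

dt, dtau = 0.1, 0.02             # physical and pseudo time steps (assumed values)
u = 1.0
for n in range(10):              # physical time steps
    u_old = u
    for k in range(200):         # pseudo-time iterations
        # unsteady residual of the Crank-Nicolson discretization
        res = -(u - u_old) / dt + 0.5 * (f(u) + f(u_old))
        u += dtau * res          # explicit pseudo-time relaxation
        if abs(res) < 1e-10:     # convergence condition per physical step
            break
    print(f"t = {(n + 1) * dt:4.1f}  u = {u:.6f}  pseudo-iters = {k + 1}")
```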
Aparna, Deshpande; Kumar, Sunil; Kamalkumar, Shukla
2017-10-27
To determine the percentage of patients with necrotizing pancreatitis (NP) requiring intervention and the types of interventions performed. Outcomes of patients undergoing step-up necrosectomy were compared with those undergoing direct necrosectomy. Operative mortality, overall mortality, morbidity and overall length of stay were determined. After institutional ethics committee clearance and waiver of consent, records of patients with pancreatitis were reviewed. After excluding patients as per criteria, epidemiologic and clinical data of patients with NP were noted. The treatment protocol was reviewed. Data of patients in whom the step-up approach was used were compared with those in whom it was not. A total of 41 interventions were required in 39% of patients. About 60% of interventions targeted the pancreatic necrosis while the rest were required to deal with complications of the necrosis. Image-guided percutaneous catheter drainage was done in 9 patients for infected necrosis, all of whom required further necrosectomy, and in 3 patients with sterile necrosis. Direct retroperitoneal or anterior necrosectomy was performed in 15 patients. The average time to first intervention was 19.6 d in the non-step-up group (range 11-36) vs 18.22 d in the step-up group (range 13-25). The average hospital stay was 33.3 d in the non-step-up group vs 38 d in the step-up group. Mortality in the step-up group was 0% (0/9) vs 13% (2/15) in the non-step-up group. Overall mortality was 10.3% while post-operative mortality was 8.3%. Average hospital stay was 22.25 d. Early conservative management plays an important role in the management of NP. In patients who require intervention, the approach used and the timing of intervention should be based upon the clinical condition and the local expertise available. Delaying intervention and using minimally invasive means when intervention is necessary is desirable. The step-up approach should be used whenever possible. Even when classical retroperitoneal catheter drainage is not feasible, there should be an attempt to follow the principles of the step-up technique to buy time. The outcomes of patients in the step-up group and the non-step-up group are comparable in our series. Interventions for bowel diversion, bypass and hemorrhage control should be done at the appropriate times.
Impaired Response Selection During Stepping Predicts Falls in Older People-A Cohort Study.
Schoene, Daniel; Delbaere, Kim; Lord, Stephen R
2017-08-01
Response inhibition, an important executive function, has been identified as a risk factor for falls in older people. This study investigated whether step tests that include different levels of response inhibition differ in their ability to predict falls and whether such associations are mediated by measures of attention, speed, and/or balance. A cohort study with a 12-month follow-up was conducted in community-dwelling older people without major cognitive and mobility impairments. Participants underwent 3 step tests: (1) choice stepping reaction time (CSRT) requiring rapid decision making and step initiation; (2) inhibitory choice stepping reaction time (iCSRT) requiring additional response inhibition and response selection (go/no-go); and (3) a Stroop Stepping Test (SST) under congruent and incongruent conditions requiring conflict resolution. Participants also completed tests of processing speed, balance, and attention as potential mediators. Ninety-three of the 212 participants (44%) fell in the follow-up period. Of the step tests, only components of the iCSRT task predicted falls over this period, with a relative risk per standard deviation of reaction time (iCSRT-RT) of 1.23 (95% CI = 1.10-1.37). Multiple mediation analysis indicated that the iCSRT-RT was independently associated with falls and not mediated through slow processing speed, poor balance, or inattention. Combined stepping and response inhibition as measured in a go/no-go stepping paradigm predicted falls in older people. This suggests that integrity of the response-selection component of a voluntary stepping response is crucial for minimizing fall risk. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
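A hedged sketch of the mode-splitting idea (not the UCLA model's code): each linear mode is advanced with its own near-optimal stable step and the modes are recombined at a fixed synchronization interval. Frequencies, the recombination interval, and the per-mode step rule are all assumed for illustration.

```python
import numpy as np

# Two modes of very different frequency, each integrated with its own step
# and recombined periodically, mimicking the EMTSS split of gravity-wave
# terms into vertical modes. Values below are illustrative only.

omega = np.array([50.0, 2.0])          # fast (gravity-wave-like) and slow mode
dt_sync = 0.1                          # recombination interval
substeps = np.ceil(omega * dt_sync / 0.5).astype(int)  # per-mode CFL-like limit

y = np.array([1.0 + 0j, 1.0 + 0j])     # modal amplitudes, dy/dt = i*omega*y
for n in range(5):
    for m in range(2):                 # integrate each mode separately
        h = dt_sync / substeps[m]      # near-optimal step for this mode
        for _ in range(substeps[m]):
            # trapezoidal (Cayley) update, exactly norm-preserving
            y[m] *= (1 + 0.5j * omega[m] * h) / (1 - 0.5j * omega[m] * h)
    combined = y.sum().real            # periodic recombination of the modes
    print(f"t = {(n + 1) * dt_sync:.1f}  combined field = {combined:+.4f}")
```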
Next Steps in Network Time Synchronization For Navy Shipboard Applications
2008-12-01
…dynamic manner than in previous designs. This new paradigm creates significant network time synchronization challenges. The Navy has been deploying the Network Time Protocol (NTP) in shipboard computing infrastructures to meet the current network time synchronization requirements…
El-Gohary, Mahmoud; Peterson, Daniel; Gera, Geetanjali; Horak, Fay B; Huisinga, Jessie M
2017-07-01
To test the validity of wearable inertial sensors for providing objective measures of postural stepping responses to the push and release clinical test in people with multiple sclerosis. Cross-sectional study. University medical center balance disorder laboratory. Total sample N=73; persons with multiple sclerosis (PwMS) n=52; healthy controls n=21. Stepping latency, time and number of steps required to reach stability, and initial step length were calculated using 3 inertial measurement units placed on participants' lumbar spine and feet. Correlations between inertial sensor measures and measures obtained from the laboratory-based systems were moderate to strong and statistically significant for all variables: time to release (r=.992), latency (r=.655), time to stability (r=.847), time of first heel strike (r=.665), number of steps (r=.825), and first step length (r=.592). Compared with healthy controls, PwMS demonstrated a longer time to stability and required a larger number of steps to reach stability. The instrumented push and release test is a valid measure of postural responses in PwMS and could be used as a clinical outcome measure for patient care decisions or for clinical trials aimed at improving postural control in PwMS. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems
NASA Astrophysics Data System (ADS)
Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo
With the currently available methods of computational fluid dynamics (CFD), the task of simulating full scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the size of the calculation cells should be small and the calculation should be transient with a small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, so that simulation is difficult in practice. This study investigates the cell size and time step size required for accurate simulations, and the filtering effects caused by a coarser mesh and a longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.
NASA Technical Reports Server (NTRS)
Chan, Daniel C.; Darian, Armen; Sindir, Munir
1992-01-01
We have applied and compared the efficiency and accuracy of two commonly used numerical methods for the solution of the Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one can afford a large temporal integration step at the expense of a higher memory requirement and a larger operation count per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. Its memory requirement and operation count per time step are low; however, the restriction on the size of the time marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them to compute a two-dimensional driven cavity flow at Reynolds numbers of 100 and 1000. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged after the L2-norm of the change in the dependent variables between two consecutive time steps had fallen below 10^-5.
A quick response four decade logarithmic high-voltage stepping supply
NASA Technical Reports Server (NTRS)
Doong, H.
1978-01-01
An improved high-voltage stepping supply for space instrumentation is described, for applications where low power consumption and fast settling time between steps are required. The high-voltage stepping supply, utilizing an average power of 750 milliwatts, delivers a pair of mirror-image, 64-level logarithmic outputs. It covers a four-decade range of ±2500 to ±0.29 volts with an output stability of ±0.5 percent or ±20 millivolts over all line, load, and temperature variations. The supply provides a typical step settling time of 1 millisecond, with 100 microseconds for the lower two decades. The versatile design features of the high-voltage stepping supply provide a quick-response staircase generator as described, or a fixed voltage with the option to change levels as required over large dynamic ranges without circuit modifications. The concept can be implemented up to ±5000 volts. With these design features, the high-voltage stepping supply should find numerous applications in charged particle detection, electro-optical systems, and high-voltage scientific instruments.
Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems
NASA Technical Reports Server (NTRS)
Majumdar, Alok K.; Ravindran, S. S.
2017-01-01
Fluid and thermal transients found in rocket propulsion systems, such as propellant feedline systems, are complex processes involving fast phases followed by slow phases. Their time-accurate computation therefore requires the use of a short time step initially, followed by a much larger time step. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback-control-based adaptive time stepping algorithm and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. In order to demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chilldown in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
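The abstract does not give the control law, so the following is an assumed PI-style sketch of feedback-controlled step size adaptation: a monitored change per step is driven toward a tolerance, shrinking the step in fast phases and growing it in slow ones. The monitored quantity and all constants are invented for illustration.

```python
import numpy as np

# Feedback-based adaptive time stepping on a toy fast-slow-fast transient:
# the step grows in slow phases and shrinks when a monitored variable
# changes rapidly, as in the water-hammer and chilldown cases above.

def monitored_change(t, dt):
    """Relative change of a key variable over one step (toy transient:
    fast near t = 0, slow in between, fast again near t = 1.5)."""
    rate = 5.0 * (np.exp(-10 * t) + np.exp(-10 * abs(t - 1.5)))
    return rate * dt

tol, dt, t = 0.05, 1e-3, 0.0
while t < 2.0:
    err = monitored_change(t, dt)
    if err > tol:                       # reject: change too large, cut the step
        dt *= 0.5
        continue
    t += dt                             # accept the step
    # feedback: drive the monitored change toward the tolerance
    dt = min(dt * min(2.0, 0.9 * (tol / max(err, 1e-12)) ** 0.5), 0.1)
print(f"finished at t = {t:.3f} with final dt = {dt:.4f}")
```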
Optimal subinterval selection approach for power system transient stability simulation
Kim, Soobae; Overbye, Thomas J.
2015-10-21
Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined, because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used with a focus on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. The performance of the proposed method is demonstrated with the GSO 37-bus system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry
The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase, and tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort needed to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics such as the highest and lowest voltage occurring on the feeder, the number of voltage regulator tap operations, and the total losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
Single step optimization of manipulator maneuvers with variable structure control
NASA Technical Reports Server (NTRS)
Chen, N.; Dwyer, T. A. W., III
1987-01-01
One step ahead optimization has been recently proposed for spacecraft attitude maneuvers as well as for robot manipulator maneuvers. Such a technique yields a discrete time control algorithm implementable as a sequence of state-dependent, quadratic programming problems for acceleration optimization. Its sensitivity to model accuracy, for the required inversion of the system dynamics, is shown in this paper to be alleviated by a fast variable structure control correction, acting between the sampling intervals of the slow one step ahead discrete time acceleration command generation algorithm. The slow and fast looping concept chosen follows that recently proposed for optimal aiming strategies with variable structure control. Accelerations required by the VSC correction are reserved during the slow one step ahead command generation so that the ability to overshoot the sliding surface is guaranteed.
Gama-Arachchige, N. S.; Baskin, J. M.; Geneve, R. L.; Baskin, C. C.
2013-01-01
Background and Aims Physical dormancy (PY)-break in some annual plant species is a two-step process controlled by two different temperature and/or moisture regimes. The thermal time model has been used to quantify PY-break in several species of Fabaceae, but not to describe stepwise PY-break. The primary aims of this study were to quantify the thermal requirement for sensitivity induction by developing a thermal time model and to propose a mechanism for stepwise PY-breaking in the winter annual Geranium carolinianum. Methods Seeds of G. carolinianum were stored under dry conditions at different constant and alternating temperatures to induce sensitivity (step I). Sensitivity induction was analysed based on the thermal time approach using the Gompertz function. The effect of temperature on step II was studied by incubating sensitive seeds at low temperatures. Scanning electron microscopy, penetrometer techniques, and different humidity levels and temperatures were used to explain the mechanism of stepwise PY-break. Key Results The base temperature (Tb) for sensitivity induction was 17.2 °C and constant for all seed fractions of the population. Thermal time for sensitivity induction during step I in the PY-breaking process agreed with the three-parameter Gompertz model. Step II (PY-break) did not agree with the thermal time concept. Q10 values for the rate of sensitivity induction and PY-break were between 2.0 and 3.5 and between 0.02 and 0.1, respectively. The force required to separate the water gap palisade layer from the sub-palisade layer was significantly reduced after sensitivity induction. Conclusions Step I and step II in PY-breaking of G. carolinianum are controlled by chemical and physical processes, respectively. This study indicates the feasibility of applying the developed thermal time model to predict or manipulate sensitivity induction in seeds with two-step PY-breaking processes. The model is the first and most detailed one yet developed for sensitivity induction in PY-break. PMID:23456728
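A toy illustration of a three-parameter Gompertz thermal-time model for step I (sensitivity induction). Only the base temperature Tb = 17.2 °C comes from the abstract; the function names and the parameters A, b, and k are hypothetical placeholders, not the fitted values from the paper.

```python
import numpy as np

# Thermal time accumulates as degree-days above the base temperature Tb;
# the fraction of seeds made sensitive follows a Gompertz curve in that
# accumulated thermal time. Parameter values are invented for illustration.

Tb = 17.2                                   # base temperature (deg C), from the abstract

def thermal_time(temps_c, days_per_obs=1.0):
    """Accumulated thermal time: degree-days above Tb."""
    return np.cumsum(np.maximum(np.asarray(temps_c) - Tb, 0.0) * days_per_obs)

def gompertz(theta, A=1.0, b=3.0, k=0.05):
    """Fraction of seeds made sensitive after thermal time theta."""
    return A * np.exp(-np.exp(b - k * theta))

temps = [25.0] * 60                          # 60 d of dry storage at a constant 25 C
theta = thermal_time(temps)
print(f"after {theta[-1]:.0f} degree-days: {gompertz(theta[-1]):.1%} sensitive")
```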
Newmark local time stepping on high-performance computing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large-scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
Geometric multigrid for an implicit-time immersed boundary method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guy, Robert D.; Philip, Bobby; Griffith, Boyce E.
2014-10-12
The immersed boundary (IB) method is an approach to fluid-structure interaction that uses Lagrangian variables to describe the deformations and resulting forces of the structure and Eulerian variables to describe the motion and forces of the fluid. Explicit time stepping schemes for the IB method require solvers only for Eulerian equations, for which fast Cartesian grid solution methods are available. Such methods are relatively straightforward to develop and are widely used in practice but often require very small time steps to maintain stability. Implicit-time IB methods permit the stable use of large time steps, but efficient implementations of such methods require significantly more complex solvers that effectively treat both Lagrangian and Eulerian variables simultaneously. Moreover, several different approaches to solving the coupled Lagrangian-Eulerian equations have been proposed, but a complete understanding of this problem is still emerging. This paper presents a geometric multigrid method for an implicit-time discretization of the IB equations. This multigrid scheme uses a generalization of box relaxation that is shown to handle problems in which the physical stiffness of the structure is very large. Numerical examples are provided to illustrate the effectiveness and efficiency of the algorithms described herein. Finally, these tests show that using multigrid as a preconditioner for a Krylov method yields improvements in both robustness and efficiency as compared to using multigrid as a solver. They also demonstrate that with a time step 100-1000 times larger than that permitted by an explicit IB method, the multigrid-preconditioned implicit IB method is approximately 50-200 times more efficient than the explicit method.
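The closing observation, multigrid as a preconditioner for a Krylov method beating multigrid as a solver, can be reproduced in miniature. The sketch below assumes the pyamg package and uses algebraic multigrid on a model Poisson problem as a stand-in for the paper's geometric multigrid on the implicit IB equations.

```python
import numpy as np
import pyamg
from scipy.sparse.linalg import gmres

# Compare multigrid used standalone with the same hierarchy used as a
# V-cycle preconditioner inside GMRES, on a model Poisson problem.

A = pyamg.gallery.poisson((64, 64), format='csr')   # model elliptic operator
b = np.random.default_rng(0).standard_normal(A.shape[0])

ml = pyamg.ruge_stuben_solver(A)                    # multigrid hierarchy

# (a) multigrid as a standalone solver
x_mg = ml.solve(b, tol=1e-8)

# (b) multigrid V-cycle as a preconditioner for GMRES
M = ml.aspreconditioner(cycle='V')
x_kry, info = gmres(A, b, M=M)

print("standalone MG residual  :", np.linalg.norm(b - A @ x_mg))
print("MG-preconditioned GMRES :", np.linalg.norm(b - A @ x_kry), "info =", info)
```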
Rational reduction of periodic propagators for off-period observations.
Blanton, Wyndham B; Logan, John W; Pines, Alexander
2004-02-01
Many common solid-state nuclear magnetic resonance problems take advantage of the periodicity of the underlying Hamiltonian to simplify the computation of an observation. Most of the time-domain methods used, however, require the time step between observations to be some integer or reciprocal-integer multiple of the period, thereby restricting the observation bandwidth. Calculations of off-period observations are usually reduced to brute force direct methods resulting in many demanding matrix multiplications. For large spin systems, the matrix multiplication becomes the limiting step. A simple method that can dramatically reduce the number of matrix multiplications required to calculate the time evolution when the observation time step is some rational fraction of the period of the Hamiltonian is presented. The algorithm implements two different optimization routines. One uses pattern matching and additional memory storage, while the other recursively generates the propagators via time shifting. The net result is a significant speed improvement for some types of time-domain calculations.
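A sketch of the core reduction: when the observation step is a rational fraction tau = (p/q)T of the Hamiltonian period T, the propagator over a step depends only on the start time modulo T, so only q distinct step propagators exist and can be cached. The piecewise-constant toy Hamiltonian below stands in for a real periodic spin Hamiltonian; neither it nor the slice count comes from the paper.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
H0 = rng.standard_normal((4, 4)); H0 = (H0 + H0.T) / 2   # Hermitian pieces
H1 = rng.standard_normal((4, 4)); H1 = (H1 + H1.T) / 2

T = 1.0
p, q = 3, 5                      # observation step tau = (3/5) T
tau = p * T / q

def H(t):
    """Toy T-periodic Hamiltonian (piecewise constant)."""
    return H0 if (t % T) < T / 2 else H1

def step_propagator(t0, n_slices=64):
    """Brute-force propagator over [t0, t0 + tau]."""
    U = np.eye(4, dtype=complex)
    h = tau / n_slices
    for j in range(n_slices):
        U = expm(-1j * H(t0 + (j + 0.5) * h) * h) @ U
    return U

# Only q distinct step propagators exist: start times k*tau mod T, k = 0..q-1.
cache = [step_propagator((k * tau) % T) for k in range(q)]

psi = np.zeros(4, dtype=complex); psi[0] = 1.0
for k in range(20):              # 20 observations, reusing the cached propagators
    psi = cache[k % q] @ psi
print("state norm after 20 steps:", np.linalg.norm(psi))
```

After the cache is built, each observation costs a single matrix-vector product, whereas the brute-force approach would rebuild a 64-slice propagator at every step.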
Code of Federal Regulations, 2010 CFR
2010-04-01
... significant steps an H-1C employer must take to recruit and retain U.S. nurses? 655.1114 Section 655.1114... Workers as Registered Nurses? § 655.1114 Element IV—What are the timely and significant steps an H-1C employer must take to recruit and retain U.S. nurses? (a) The fourth attestation element requires that the...
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Kwak, Dochan
2001-01-01
Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
Decker, Leslie M; Cignetti, Fabien; Hunt, Nathaniel; Potter, Jane F; Stergiou, Nicholas; Studenski, Stephanie A
2016-08-01
A U-shaped relationship between cognitive demand and gait control may exist in dual-task situations, reflecting opposing effects of external focus of attention and attentional resource competition. The purpose of the study was twofold: to examine whether gait control, as evaluated from step-to-step variability, is related to cognitive task difficulty in a U-shaped manner and to determine whether age modifies this relationship. Young and older adults walked on a treadmill without attentional requirement and while performing a dichotic listening task under three attention conditions: non-forced (NF), forced-right (FR), and forced-left (FL). The conditions increased in their attentional demand and requirement for inhibitory control. Gait control was evaluated by the variability of step parameters related to balance control (step width) and rhythmic stepping pattern (step length and step time). A U-shaped relationship was found for step width variability in both young and older adults and for step time variability in older adults only. Cognitive performance during dual tasking was maintained in both young and older adults. The U-shaped relationship, which presumably results from a trade-off between an external focus of attention and competition for attentional resources, implies that higher-level cognitive processes are involved in walking in young and older adults. Specifically, while these processes are initially involved only in the control of (lateral) balance during gait, they become necessary for the control of (fore-aft) rhythmic stepping pattern in older adults, suggesting that attentional resources turn out to be needed in all facets of walking with aging. Finally, despite the cognitive resources required by walking, both young and older adults spontaneously adopted a "posture second" strategy, prioritizing the cognitive task over the gait task.
A Coordinated Initialization Process for the Distributed Space Exploration Simulation
NASA Technical Reports Server (NTRS)
Crues, Edwin Z.; Phillips, Robert G.; Dexter, Dan; Hasan, David
2007-01-01
A viewgraph presentation on the federate initialization process for the Distributed Space Exploration Simulation (DSES) is described. The topics include: 1) Background: DSES; 2) Simulation requirements; 3) Nine Step Initialization; 4) Step 1: Create the Federation; 5) Step 2: Publish and Subscribe; 6) Step 3: Create Object Instances; 7) Step 4: Confirm All Federates Have Joined; 8) Step 5: Achieve initialize Synchronization Point; 9) Step 6: Update Object Instances With Initial Data; 10) Step 7: Wait for Object Reflections; 11) Step 8: Set Up Time Management; 12) Step 9: Achieve startup Synchronization Point; and 13) Conclusions
Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik
The criticality condition of a reactor is one of the important factors for evaluating reactor operation, and the nuclear fuel breeding ratio is another factor that shows nuclear fuel sustainability. This study analyzes the effect of burnup steps and cycle operation steps on the evaluated criticality condition of the reactor as well as on the performance of nuclear fuel breeding (breeding ratio, BR). The burnup step is performed based on a day-step analysis, varied from 10 days up to 800 days, and the cycle operation from 1 cycle up to 8 cycles of reactor operation. In addition, the calculation efficiency with respect to the variation of computer processors used to run the analysis (time efficiency of the calculation) has been investigated. The optimization method for reactor design analysis, using a large fast breeder reactor type as the reference case, was performed by adopting the established reactor design code JOINT-FR. The results show that the criticality condition becomes higher for smaller burnup steps (days) and the breeding ratio becomes smaller for smaller burnup steps (days). Some nuclides contribute to better criticality at smaller burnup steps due to individual nuclide half-lives. The calculation time for different burnup steps correlates with the time required for more detailed step calculations, although the calculation time is not directly proportional to the number of divisions of the burnup time step.
Postural adjustment errors during lateral step initiation in older and younger adults
Sparto, Patrick J.; Fuhrman, Susan I.; Redfern, Mark S.; Perera, Subashan; Jennings, J. Richard; Furman, Joseph M.
2016-01-01
The purpose was to examine age differences and varying levels of step response inhibition on the performance of a voluntary lateral step initiation task. Seventy older adults (70 – 94 y) and twenty younger adults (21 – 58 y) performed visually-cued step initiation conditions based on direction and spatial location of arrows, ranging from a simple choice reaction time task to a perceptual inhibition task that included incongruous cues about which direction to step (e.g. a left pointing arrow appearing on the right side of a monitor). Evidence of postural adjustment errors and step latencies were recorded from vertical ground reaction forces exerted by the stepping leg. Compared with younger adults, older adults demonstrated greater variability in step behavior, generated more postural adjustment errors during conditions requiring inhibition, and had greater step initiation latencies that increased more than younger adults as the inhibition requirements of the condition became greater. Step task performance was related to clinical balance test performance more than executive function task performance. PMID:25595953
Molecular dynamics based enhanced sampling of collective variables with very large time steps.
Chen, Pei-Yang; Tuckerman, Mark E
2018-01-14
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
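For context, here is a plain multiple-time-step (r-RESPA style) velocity Verlet sketch on a toy one-particle system: cheap stiff forces take a small inner step, expensive soft forces a large outer step. The isokinetic Nosé-Hoover machinery that removes the resonance limit on the outer step, the actual subject of the paper, is not reproduced here; force constants and steps are illustrative.

```python
import numpy as np

def fast_force(x):   # stiff bond-like force (evaluated often)
    return -100.0 * x

def slow_force(x):   # soft, expensive force (evaluated rarely)
    return -1.0 * x

def respa_step(x, v, dt_outer, n_inner, m=1.0):
    v += 0.5 * dt_outer * slow_force(x) / m          # slow half-kick
    h = dt_outer / n_inner
    for _ in range(n_inner):                         # inner velocity Verlet
        v += 0.5 * h * fast_force(x) / m
        x += h * v
        v += 0.5 * h * fast_force(x) / m
    v += 0.5 * dt_outer * slow_force(x) / m          # slow half-kick
    return x, v

x, v = 1.0, 0.0
for step in range(1000):
    x, v = respa_step(x, v, dt_outer=0.05, n_inner=10)
# total energy 0.5*v^2 + (50 + 0.5)*x^2 should stay near its initial 50.5
print(f"energy-like invariant: {0.5 * v**2 + 50.5 * x**2:.4f}")
```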
Implementation of Competency-Based Pharmacy Education (CBPE)
Koster, Andries; Schalekamp, Tom; Meijerman, Irma
2017-01-01
Implementation of competency-based pharmacy education (CBPE) is a time-consuming, complicated process, which requires agreement on the tasks of a pharmacist, commitment, institutional stability, and a goal-directed developmental perspective of all stakeholders involved. In this article the main steps in the development of a fully-developed competency-based pharmacy curriculum (bachelor, master) are described and tips are given for a successful implementation. After the choice for entering into CBPE is made and a competency framework is adopted (step 1), intended learning outcomes are defined (step 2), followed by analyzing the required developmental trajectory (step 3) and the selection of appropriate assessment methods (step 4). Designing the teaching-learning environment involves the selection of learning activities, student experiences, and instructional methods (step 5). Finally, an iterative process of evaluation and adjustment of individual courses, and the curriculum as a whole, is entered (step 6). Successful implementation of CBPE requires a system of effective quality management and continuous professional development as a teacher. In this article suggestions for the organization of CBPE and references to more detailed literature are given, hoping to facilitate the implementation of CBPE. PMID:28970422
On the correct use of stepped-sine excitations for the measurement of time-varying bioimpedance.
Louarroudi, E; Sanchez, B
2017-02-01
When a linear time-varying (LTV) bioimpedance is measured using stepped-sine excitations, a compromise must be made: the temporal distortions affecting the data depend on the experimental time, which in turn sets the data accuracy and limits the temporal bandwidth of the system that needs to be measured. Here, the experimental time required to measure linear time-invariant bioimpedance with a specified accuracy is analyzed for different stepped-sine excitation setups. We provide simple equations that allow the reader to know whether LTV bioimpedance can be measured through repeated time-invariant stepped-sine experiments. Bioimpedance technology is on the rise thanks to a plethora of healthcare monitoring applications. The results presented can help to avoid distortions in the data while accurately measuring non-stationary physiological phenomena. The impact of the work presented is broad, including the potential of enhancing bioimpedance studies and healthcare devices using bioimpedance technology.
Hoogkamer, Wouter; Potocanac, Zrinka; Van Calenbergh, Frank; Duysens, Jacques
2017-10-01
Online gait corrections are frequently used to restore gait stability and prevent falling. They require shorter response times than voluntary movements which suggests that subcortical pathways contribute to the execution of online gait corrections. To evaluate the potential role of the cerebellum in these pathways we tested the hypotheses that online gait corrections would be less accurate in individuals with focal cerebellar damage than in neurologically intact controls and that this difference would be more pronounced for shorter available response times and for short step gait corrections. We projected virtual stepping stones on an instrumented treadmill while some of the approaching stepping stones were shifted forward or backward, requiring participants to adjust their foot placement. Varying the timing of those shifts allowed us to address the effect of available response time on foot placement error. In agreement with our hypothesis, individuals with focal cerebellar lesions were less accurate in adjusting their foot placement in reaction to suddenly shifted stepping stones than neurologically intact controls. However, the cerebellar lesion group's foot placement error did not increase more with decreasing available response distance or for short step versus long step adjustments compared to the control group. Furthermore, foot placement error for the non-shifting stepping stones was also larger in the cerebellar lesion group as compared to the control group. Consequently, the reduced ability to accurately adjust foot placement during walking in individuals with focal cerebellar lesions appears to be a general movement control deficit, which could contribute to increased fall risk. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Harp, J. L., Jr.; Oatway, T. P.
1975-01-01
A research effort was conducted with the goal of reducing the computer time of a Navier-Stokes computer code for the prediction of viscous flow fields about lifting bodies. A two-dimensional, time-dependent, laminar, transonic computer code (STOKES) was modified to incorporate a non-uniform time-step procedure. The non-uniform time step requires updating a zone only as often as required by its own stability criteria or those of its immediate neighbors. In the uniform time-step scheme, each zone is updated as often as required by the least stable zone of the finite difference mesh. Because program variables are updated less frequently, it was expected that the non-uniform time step would reduce execution time by a factor of five to ten. Available funding was exhausted prior to successful demonstration of the benefits to be derived from the non-uniform time-step method.
Adaptive Implicit Non-Equilibrium Radiation Diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Philip, Bobby; Wang, Zhen; Berrill, Mark A
2013-01-01
We describe methods for accurate and efficient long-term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long-term integration of stiff multiphysics systems, local-control-theory-based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian-free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level-independent solver convergence.
Interactive real time flow simulations
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1990-01-01
An interactive real time flow simulation technique is developed for an unsteady channel flow. A finite-volume algorithm in conjunction with a Runge-Kutta time stepping scheme was developed for two-dimensional Euler equations. A global time step was used to accelerate convergence of steady-state calculations. A raster image generation routine was developed for high speed image transmission which allows the user to have direct interaction with the solution development. In addition to theory and results, the hardware and software requirements are discussed.
Exoplanet Direct Imaging: Coronagraph Probe Mission Study EXO-C
NASA Technical Reports Server (NTRS)
Stapelfeldt, Karl R.
2013-01-01
Flagship mission for spectroscopy of ExoEarths is a long-term priority for space astrophysics (Astro2010). Requires 10^10 contrast at 3 lambda/D separation (more than 10,000 times beyond HST performance) and a large telescope (greater than 4 m aperture). Big step. A mission for spectroscopy of giant planets and imaging of disks requires 10^9 contrast at 3 lambda/D (already demonstrated in the lab) and an approximately 1.5 m telescope. Should be much more affordable, a good intermediate step. Various PIs have proposed many versions of the latter mission 17 times since 1999; no unified approach.
A local time stepping algorithm for GPU-accelerated 2D shallow water models
NASA Astrophysics Data System (ADS)
Dazzi, Susanna; Vacondio, Renato; Dal Palù, Alessandro; Mignosa, Paolo
2018-01-01
In the simulation of flooding events, mesh refinement is often required to capture local bathymetric features and/or to detail areas of interest; however, if an explicit finite volume scheme is adopted, the presence of small cells in the domain can restrict the allowable time step due to the stability condition, thus reducing the computational efficiency. With the aim of overcoming this problem, the paper proposes the application of a Local Time Stepping (LTS) strategy to a GPU-accelerated 2D shallow water numerical model able to handle non-uniform structured meshes. The algorithm is specifically designed to exploit the computational capability of GPUs, minimizing the overheads associated with the LTS implementation. The results of theoretical and field-scale test cases show that the LTS model guarantees appreciable reductions in the execution time compared to the traditional Global Time Stepping strategy, without compromising the solution accuracy.
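A schematic of the LTS bookkeeping (not the GPU implementation): refined cells take two half-steps per global step while coarse cells take one full step, and the groups synchronize at the end of each global step. The interface treatment is deliberately simplified (first-order upwind with frozen neighbors), and the mesh and CFL number are illustrative.

```python
import numpy as np

# 1D advection on a mesh with a locally refined region: the global step dt
# is set by the coarse cells; fine cells sub-cycle twice with dt/2 and the
# two groups synchronize at the end of each global step.

a = 1.0                                  # advection speed
dx = np.array([0.1] * 20 + [0.05] * 20)  # coarse cells, then refined cells
u = np.exp(-((np.cumsum(dx) - 1.0) / 0.2) ** 2)

cfl = 0.8
dt = cfl * dx.max() / a                  # global step, set by the coarse cells
levels = (dx < dx.max()).astype(int)     # 0 = coarse (1 substep), 1 = fine (2)

def upwind_rhs(u):
    flux = a * u                          # first-order upwind flux, periodic
    return -(flux - np.roll(flux, 1)) / dx

for n in range(50):                      # global steps
    for sub in range(2):                 # two sub-cycles per global step
        rhs = upwind_rhs(u)
        # sub 0: everyone advances (coarse by dt, fine by dt/2);
        # sub 1: only fine cells take their second dt/2 substep
        active = (levels == 1) if sub == 1 else np.ones_like(u, bool)
        u[active] += (dt / 2 ** levels[active]) * rhs[active]
print("max(u) after 50 global steps:", u.max())
```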
33 CFR 230.17 - Filing requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... supplement, district commanders will establish a time schedule for each step of the process based upon considerations listed in 40 CFR 1501.8 and upon other management considerations. The time required from the... reviews by division and the incorporation of division's comments in the EIS. HQUSACE and/or division will...
Caron, Jessica; Light, Janice; Drager, Kathryn
2016-01-01
Typically, the vocabulary in augmentative and alternative communication (AAC) technologies is pre-programmed by manufacturers or by parents and professionals outside of daily interactions. Because vocabulary needs are difficult to predict, young children who use aided AAC often do not have access to vocabulary concepts as the need and interest arises in their daily interactions, limiting their vocabulary acquisition and use. Ideally, parents and professionals would be able to add vocabulary to AAC technologies "just-in-time" as required during daily interactions. This study compared the effects of two AAC applications for mobile technologies, GoTalk Now (which required more programming steps) and EasyVSD (which required fewer programming steps), on the number of visual scene displays (VSDs) and hotspots created in 10-min interactions between eight professionals and preschool-aged children with typical development. The results indicated that all of the professionals were able to create VSDs and add vocabulary during interactions with the children, but they created more VSDs and hotspots with the app with fewer programming steps. Child engagement and programming participation levels were high with both apps, with higher levels for both variables observed with the app with fewer programming steps. These results suggest that apps with fewer programming steps may reduce operational demands and better support professionals to (a) respond to the child's input, (b) use just-in-time programming during interactions, (c) provide access to more vocabulary, and (d) increase participation.
NASA Technical Reports Server (NTRS)
Frost, J. D., Jr.
1970-01-01
Electronic instrument automatically monitors the stages of sleep of a human subject. The analyzer provides a series of discrete voltage steps, each step corresponding to a clinical assessment of the level of consciousness. It operates on the EEG signal and requires very little telemetry bandwidth or time.
Morisse Pradier, H; Sénéchal, A; Philit, F; Tronc, F; Maury, J-M; Grima, R; Flamens, C; Paulus, S; Neidecker, J; Mornex, J-F
2016-02-01
Lung transplantation (LT) is now considered an excellent treatment option for selected patients with end-stage pulmonary diseases, such as COPD, cystic fibrosis, idiopathic pulmonary fibrosis, and pulmonary arterial hypertension. The 2 goals of LT are to provide a survival benefit and to improve quality of life. The 3-step decision process leading to LT is discussed in this review. The first step is the selection of candidates, which requires a careful examination in order to check absolute and relative contraindications. The second step is the timing of listing for LT; it requires knowledge of the disease-specific prognostic factors that are available in international guidelines and discussed in this paper. The third step is the choice of procedure: indications for heart-lung, single-lung, and bilateral-lung transplantation are described. In conclusion, this document provides guidelines to help pulmonologists in the referral and selection of candidates for transplantation in order to optimize the outcome of LT. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
The "Motor" in Implicit Motor Sequence Learning: A Foot-stepping Serial Reaction Time Task.
Du, Yue; Clark, Jane E
2018-05-03
This protocol describes a modified serial reaction time (SRT) task used to study implicit motor sequence learning. Unlike the classic SRT task that involves finger-pressing movements while sitting, the modified SRT task requires participants to step with both feet while maintaining a standing posture. This stepping task necessitates whole body actions that impose postural challenges. The foot-stepping task complements the classic SRT task in several ways. The foot-stepping SRT task is a better proxy for the daily activities that require ongoing postural control, and thus may help us better understand sequence learning in real-life situations. In addition, response time serves as an indicator of sequence learning in the classic SRT task, but it is unclear whether response time, reaction time (RT) representing mental process, or movement time (MT) reflecting the movement itself, is a key player in motor sequence learning. The foot-stepping SRT task allows researchers to disentangle response time into RT and MT, which may clarify how motor planning and movement execution are involved in sequence learning. Lastly, postural control and cognition are interactively related, but little is known about how postural control interacts with learning motor sequences. With a motion capture system, the movement of the whole body (e.g., the center of mass (COM)) can be recorded. Such measures allow us to reveal the dynamic processes underlying discrete responses measured by RT and MT, and may aid in elucidating the relationship between postural control and the explicit and implicit processes involved in sequence learning. Details of the experimental set-up, procedure, and data processing are described. The representative data are adopted from one of our previous studies. Results are related to response time, RT, and MT, as well as the relationship between the anticipatory postural response and the explicit processes involved in implicit motor sequence learning.
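The RT/MT decomposition described above reduces to simple timestamp arithmetic; a minimal sketch, with illustrative event times in place of real force-plate or motion-capture output:

```python
# Reaction time (RT) runs from stimulus onset to foot-off; movement time
# (MT) from foot-off to foot contact. Timestamps are in seconds and are
# invented for illustration.

trials = [
    {"stimulus": 0.000, "foot_off": 0.412, "foot_contact": 0.734},
    {"stimulus": 0.000, "foot_off": 0.398, "foot_contact": 0.701},
]

for i, tr in enumerate(trials, 1):
    rt = tr["foot_off"] - tr["stimulus"]        # mental/planning component
    mt = tr["foot_contact"] - tr["foot_off"]    # movement-execution component
    print(f"trial {i}: RT = {rt * 1000:.0f} ms, MT = {mt * 1000:.0f} ms, "
          f"response time = {(rt + mt) * 1000:.0f} ms")
```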
NASA Astrophysics Data System (ADS)
Salatino, Maria
2017-06-01
In the current submm and mm cosmology experiments the focal planes are populated by kilopixel transition edge sensors (TESes). Varying incoming power load requires frequent rebiasing of the TESes through standard current-voltage (IV) acquisition. The time required to perform IVs on such large arrays and the resulting transient heating of the bath reduces the sky observation time. We explore a bias step method that significantly reduces the time required for the rebiasing process. This exploits the detectors' responses to the injection of a small square wave signal on top of the dc bias current and knowledge of the shape of the detector transition R(T,I). This method has been tested on two detector arrays of the Atacama Cosmology Telescope (ACT). In this paper, we focus on the first step of the method, the estimate of the TES %Rn.
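A rough sketch of how such a bias-step measurement can yield %Rn from a simple shunt current divider, ignoring electrothermal feedback, which a real analysis using the transition shape R(T,I) would include. All component values and signal amplitudes are invented, not ACT values.

```python
# A small square wave delta_Ib is added to the dc bias; the TES branch
# current response delta_Ites, read out by the SQUID, fixes the operating
# resistance via the shunt divider. Electrothermal feedback (loop gain)
# is neglected here; all numbers are illustrative.

R_sh = 0.7e-3        # shunt resistance (ohm), assumed
R_n = 8.0e-3         # TES normal resistance (ohm), assumed
delta_Ib = 20e-6     # amplitude of the injected square wave (A), assumed
delta_Ites = 4.4e-6  # measured TES current response (A), illustrative

# current divider: delta_Ites = delta_Ib * R_sh / (R_sh + R_tes)
R_tes = R_sh * (delta_Ib - delta_Ites) / delta_Ites
print(f"R_tes = {R_tes * 1e3:.2f} mOhm  ->  %Rn = {R_tes / R_n:.1%}")
```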
2012-06-07
…scheme for the VOF requires the use of the explicit solver to advance the solution in time. The drawback of using the explicit solver is that such an approach required much smaller time steps to guarantee that a converged and stable solution is obtained during each fractional time step (Global…). Comparable results were obtained for the solutions with the RSM model.
Sumner, Walton; Xu, Jin Zhong
2002-01-01
The American Board of Family Practice is developing a patient simulation program to evaluate diagnostic and management skills. The simulator must give temporally and physiologically reasonable answers to symptom questions such as "Have you been tired?" A three-step process generates symptom histories. In the first step, the simulator determines points in time where it should calculate instantaneous symptom status. In the second step, a Bayesian network implementing a roughly physiologic model of the symptom generates a value on a severity scale at each sampling time. Positive, zero, and negative values represent increased, normal, and decreased status, as applicable. The simulator plots these values over time. In the third step, another Bayesian network inspects this plot and reports how the symptom changed over time. This mechanism handles major trends, multiple and concurrent symptom causes, and gradually effective treatments. Other temporal insights, such as observations about short-term symptom relief, require complementary mechanisms.
Agglomeration Multigrid for an Unstructured-Grid Flow Solver
NASA Technical Reports Server (NTRS)
Frink, Neal; Pandya, Mohagna J.
2004-01-01
An agglomeration multigrid scheme has been implemented into the sequential version of USM3Dns, a NASA tetrahedral cell-centered finite volume Euler/Navier-Stokes flow solver. Efficiency and robustness of the multigrid-enhanced flow solver have been assessed for three configurations assuming inviscid flow and one configuration assuming viscous, fully turbulent flow. The inviscid studies include transonic flow over the ONERA M6 wing, a generic business jet with flow-through nacelles, and low subsonic flow over a high-lift trapezoidal wing. The viscous case includes fully turbulent flow over the RAE 2822 rectangular wing. The multigrid solutions converged in 12%-33% of the Central Processing Unit (CPU) time required by the solutions obtained without multigrid. For all of the inviscid cases, multigrid in conjunction with an explicit time-stepping scheme performed best with regard to run-time memory and CPU time requirements. However, for the viscous case, multigrid had to be used with an implicit backward Euler time-stepping scheme, which increased the run-time memory requirement by 22% compared to the run made without multigrid.
Rapee, Ronald M; Lyneham, Heidi J; Wuthrich, Viviana; Chatterton, Mary Lou; Hudson, Jennifer L; Kangas, Maria; Mihalopoulos, Cathrine
2017-10-01
Stepped care is embraced as an ideal model of service delivery but has been minimally evaluated. The aim of this study was to evaluate the efficacy of cognitive-behavioral therapy (CBT) for child anxiety delivered via a stepped-care framework compared with a single, empirically validated program. A total of 281 youth with anxiety disorders (6-17 years of age) were randomly allocated to receive either empirically validated treatment or stepped care involving: (1) low-intensity treatment; (2) standard CBT; and (3) individually tailored treatment. Therapist qualifications increased at each step. Interventions did not differ significantly on any outcome measures. Total therapist time per child was significantly shorter for stepped care (774 minutes) than for best practice (897 minutes). Within stepped care, the first 2 steps returned the strongest treatment gains. Stepped care and a single empirically validated program for youth with anxiety produced similar efficacy, but stepped care required slightly less therapist time. Restricting stepped care to only steps 1 and 2 would have led to considerable time savings with a modest loss in efficacy. Clinical trial registration information - A Randomised Controlled Trial of Standard Care Versus Stepped Care for Children and Adolescents With Anxiety Disorders; http://anzctr.org.au/; ACTRN12612000351819. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.
Parallel Multi-Step/Multi-Rate Integration of Two-Time Scale Dynamic Systems
NASA Technical Reports Server (NTRS)
Chang, Johnny T.; Ploen, Scott R.; Sohl, Garett. A,; Martin, Bryan J.
2004-01-01
Increasing demands on the fidelity of real-time and high-fidelity simulations are stressing the capacity of modern processors. New integration techniques are required that provide maximum efficiency for systems that are parallelizable. However, many current techniques make assumptions that are at odds with non-cascadable systems. A new serial multi-step/multi-rate integration algorithm for dual-timescale continuous state systems is presented which applies to these systems, and is extended to a parallel multi-step/multi-rate algorithm. The superior performance of both algorithms is demonstrated through a representative example.
Tetraethylene glycol promoted two-step, one-pot rapid synthesis of indole-3-[1-11C]acetic acid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Sojeong; Qu, Wenchao; Alexoff, David L.
2014-12-12
An operationally friendly, two-step, one-pot process has been developed for the rapid synthesis of carbon-11 labeled indole-3-acetic acid ([11C]IAA or [11C]auxin). By replacing an aprotic polar solvent with tetraethylene glycol, nucleophilic [11C]cyanation and alkaline hydrolysis reactions were performed consecutively in a single pot without a time-consuming intermediate purification step. The entire production time for this updated procedure is 55 min, which dramatically simplifies the entire synthesis and reduces the starting radioactivity required for a whole-plant imaging study.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1991-01-01
Efficient iterative solution methods are being developed for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes, and the extra work they require can be designed to perform efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach based on the classical conjugate gradient method, known as the GMRES (Generalized Minimum Residual) algorithm, is investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arises in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By keeping the number of vectors to a reasonably small value N (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a noniterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies and matrix additions and subtractions, can be vectorized and parallelized efficiently.
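A hedged sketch of the restarted-GMRES usage pattern at a single implicit time step, with SciPy standing in for the authors' solver and a toy convection-diffusion operator (the operator, time step, and restart length are assumptions, chosen to echo the abstract's small N):

```python
# Solve (I - dt*J) x = b, the kind of linear system an implicit time step
# produces, with restarted GMRES and a small Krylov dimension.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, dt = 1000, 0.1
# Toy Jacobian: diffusion (symmetric) plus convection (antisymmetric).
J = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * 100.0 \
    + sp.diags([-1.0, 1.0], [-1, 1], shape=(n, n)) * 10.0
A = sp.identity(n) - dt * J
b = np.random.default_rng(0).standard_normal(n)

# Keep the Krylov subspace small (N vectors), as in the abstract's 5-20 range.
x, info = spla.gmres(A, b, restart=10, maxiter=200)
print(info, np.linalg.norm(A @ x - b))   # info == 0 signals convergence
```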
Results of a State-Wide Evaluation of “Paperwork Burden” in Addiction Treatment
Carise, Deni; Love, Meghan; Zur, Julia; McLellan, A. Thomas; Kemp, Jack
2009-01-01
This article chronicles three steps taken by research, clinical, and state staff towards assessing, evaluating, and streamlining clinical and administrative paperwork at all public outpatient addiction treatment programs in one state. The first step was an accounting of all paperwork requirements at each program. Step two included the development of time estimates for the paperwork requirements, synthesis of information across sites, provision of a written evaluation of the need, utility, and redundancy of all forms (paperwork) collected, and suggestions for eliminating unused or unnecessary data collection and streamlining the remaining data collection. Thirdly, the state agency hosted a meeting with the state staff, researchers, and staff from all programs and agencies with state-funded contracts and took action. Paperwork reductions over the course of a 6-month outpatient treatment episode were estimated at 4-6 hours, with most of the time burden being eliminated from the intake process. PMID:19150201
Rapid gait termination: effects of age, walking surfaces and footwear characteristics.
Menant, Jasmine C; Steele, Julie R; Menz, Hylton B; Munro, Bridget J; Lord, Stephen R
2009-07-01
The aim of this study was to systematically investigate the influence of various walking surfaces and footwear characteristics on the ability to terminate gait rapidly in 10 young and 26 older people. Subjects walked at a self-selected speed in eight randomized shoe conditions (standard versus elevated heel, soft sole, hard sole, high-collar, flared sole, bevelled heel and tread sole) on three surfaces: control, irregular and wet. In response to an audible cue, subjects were required to stop as quickly as possible in three out of eight walking trials in each condition. Time to last foot contact, total stopping time, stopping distance, number of steps to stop, step length and step width post-cue and base of support length at total stop were calculated from kinematic data collected using two CODA scanner units. The older subjects took more time and a longer distance to last foot contact and were more frequently classified as using a stopping strategy of three or more steps compared to the young subjects. The wet surface impeded gait termination, as indicated by greater total stopping time and stopping distance. Subjects required more time to terminate gait in the soft sole shoes compared to the standard shoes. In contrast, the high-collar shoes reduced total stopping time on the wet surface. These findings suggest that older adults have more difficulty terminating gait rapidly than their younger counterparts and that footwear is likely to influence whole-body stability during challenging postural tasks on wet surfaces.
Ohrt, Thomas; Odenwälder, Peter; Dannenberg, Julia; Prior, Mira; Warkocki, Zbigniew; Schmitzová, Jana; Karaduman, Ramazan; Gregor, Ingo; Enderlein, Jörg; Fabrizio, Patrizia; Lührmann, Reinhard
2013-01-01
Step 2 catalysis of pre-mRNA splicing entails the excision of the intron and ligation of the 5′ and 3′ exons. The tasks of the splicing factors Prp16, Slu7, Prp18, and Prp22 in the formation of the step 2 active site of the spliceosome and in exon ligation, and the timing of their recruitment, remain poorly understood. Using a purified yeast in vitro splicing system, we show that only the DEAH-box ATPase Prp16 is required for formation of a functional step 2 active site and for exon ligation. Efficient docking of the 3′ splice site (3′SS) to the active site requires only Slu7/Prp18 but not Prp22. Spliceosome remodeling by Prp16 appears to be subtle as only the step 1 factor Cwc25 is dissociated prior to step 2 catalysis, with its release dependent on docking of the 3′SS to the active site and Prp16 action. We show by fluorescence cross-correlation spectroscopy that Slu7/Prp18 and Prp16 bind early to distinct, low-affinity binding sites on the step-1-activated B* spliceosome, which are subsequently converted into high-affinity sites. Our results shed new light on the factor requirements for step 2 catalysis and the dynamics of step 1 and 2 factors during the catalytic steps of splicing. PMID:23685439
Running DNA Mini-Gels in 20 Minutes or Less Using Sodium Boric Acid Buffer
ERIC Educational Resources Information Center
Jenkins, Kristin P.; Bielec, Barbara
2006-01-01
Providing a biotechnology experience for students can be challenging on several levels, and time is a real constraint for many experiments. Many DNA-based methods require a gel electrophoresis step, and although some biotechnology procedures have convenient break points, gel electrophoresis does not. In addition to the time required for loading…
Role of step size and max dwell time in anatomy based inverse optimization for prostate implants
Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha
2013-01-01
In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323
Systems Maintenance Automated Repair Tasks (SMART)
NASA Technical Reports Server (NTRS)
Schuh, Joseph; Mitchell, Brent; Locklear, Louis; Belson, Martin A.; Al-Shihabi, Mary Jo Y.; King, Nadean; Norena, Elkin; Hardin, Derek
2010-01-01
SMART is a uniform automated discrepancy analysis and repair-authoring platform that improves technical accuracy and timely delivery of repair procedures for a given discrepancy (see figure a). SMART will minimize data errors, create uniform repair processes, and enhance the existing knowledge base of engineering repair processes. This innovation is the first tool developed that links the hardware specification requirements with the actual repair methods, sequences, and required equipment. SMART is flexibly designed to be usable by multiple engineering groups requiring decision analysis, and by any work authorization and disposition platform (see figure b). The organizational logic creates the link between the specification requirements of the hardware and the specific procedures required to repair discrepancies. The first segment in the SMART process uses a decision analysis tree to define all the permutations between component/subcomponent/discrepancy/repair on the hardware. The second segment uses a repair matrix to define what the steps and sequences are for any repair defined in the decision tree. This segment also allows for the selection of specific steps from multivariable steps. SMART will also be able to interface with outside databases and to store information from them to be inserted into the repair-procedure document. Some of the steps will be identified as optional and would only be used based on the location and the current configuration of the hardware. The output from this analysis would be sent to a work authoring system in the form of a predefined sequence of steps containing required actions, tools, parts, materials, certifications, and specific requirements controlling quality, functional requirements, and limitations.
Dynamic Pathfinders: Leveraging Your OPAC to Create Resource Guides
ERIC Educational Resources Information Center
Hunter, Ben
2008-01-01
Library pathfinders are a time-tested method of leading library users to important resources. However, paper-based pathfinders suffer from space limitations, and both paper-based and Web-based pathfinders require frequent updates to keep up with new library acquisitions. This article details a step-by-step method to create an online dynamic…
Monteiro, Kristina A; George, Paul; Dollase, Richard; Dumenco, Luba
2017-01-01
The use of multiple academic indicators to identify students at risk of experiencing difficulty completing licensure requirements provides an opportunity to increase support services prior to high-stakes licensure examinations, including the United States Medical Licensure Examination (USMLE) Step 2 clinical knowledge (CK). Step 2 CK is becoming increasingly important in decision-making by residency directors because of increasing undergraduate medical enrollment and limited available residency vacancies. We created and validated a regression equation to predict students' Step 2 CK scores from previous academic indicators to identify students at risk, with sufficient time to intervene with additional support services as necessary. Data from three cohorts of students (N=218) with preclinical mean course exam score, National Board of Medical Examination subject examinations, and USMLE Step 1 and Step 2 CK between 2011 and 2013 were used in analyses. The authors created models capable of predicting Step 2 CK scores from academic indicators to identify at-risk students. In model 1, preclinical mean course exam score and Step 1 score accounted for 56% of the variance in Step 2 CK score. The second series of models included mean preclinical course exam score, Step 1 score, and scores on three NBME subject exams, and accounted for 67%-69% of the variance in Step 2 CK score. The authors validated the findings on the most recent cohort of graduating students (N=89) and predicted Step 2 CK score within a mean of four points (SD=8). The authors suggest using the first model as a needs assessment to gauge the level of future support required after completion of preclinical course requirements, and rescreening after three of six clerkships to identify students who might benefit from additional support before taking USMLE Step 2 CK.
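A hedged sketch of this kind of two-predictor screening model on synthetic data (the coefficients, score distributions, and risk threshold below are invented, not the study's):

```python
# Fit Step 2 CK ~ preclinical mean + Step 1 by least squares, then flag
# students whose predicted score falls below a chosen support threshold.
import numpy as np

rng = np.random.default_rng(42)
n = 218
precl = rng.normal(80, 6, n)                    # mean preclinical exam score
step1 = 0.8 * precl + rng.normal(160, 8, n)     # synthetic Step 1 link
step2ck = 0.5 * precl + 0.6 * step1 + rng.normal(0, 8, n)

X = np.column_stack([np.ones(n), precl, step1])
beta, *_ = np.linalg.lstsq(X, step2ck, rcond=None)
pred = X @ beta

resid = step2ck - pred
print("R^2:", 1 - resid.var() / step2ck.var())
# Flag the lowest-predicted decile for additional support (threshold assumed):
at_risk = np.where(pred < np.quantile(pred, 0.1))[0]
print("flagged for additional support:", at_risk[:10])
```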
Using lean methodology to improve productivity in a hospital oncology pharmacy.
Sullivan, Peter; Soefje, Scott; Reinhart, David; McGeary, Catherine; Cabie, Eric D
2014-09-01
Quality improvements achieved by a hospital pharmacy through the use of lean methodology to guide i.v. compounding workflow changes are described. The outpatient oncology pharmacy of Yale-New Haven Hospital conducted a quality-improvement initiative to identify and implement workflow changes to support a major expansion of chemotherapy services. Applying concepts of lean methodology (i.e., elimination of non-value-added steps and waste in the production process), the pharmacy team performed a failure mode and effects analysis, workflow mapping, and impact analysis; staff pharmacists and pharmacy technicians identified 38 opportunities to decrease waste and increase efficiency. Three workflow processes (order verification, compounding, and delivery) accounted for 24 of 38 recommendations and were targeted for lean process improvements. The workflow was decreased to 14 steps, eliminating 6 non-value-added steps, and pharmacy staff resources and schedules were realigned with the streamlined workflow. The time required for pharmacist verification of patient-specific oncology orders was decreased by 33%; the time required for product verification was decreased by 52%. The average medication delivery time was decreased by 47%. The results of baseline and postimplementation time trials indicated a decrease in overall turnaround time to about 70 minutes, compared with a baseline time of about 90 minutes. The use of lean methodology to identify non-value-added steps in oncology order processing and the implementation of staff-recommended workflow changes resulted in an overall reduction in the turnaround time per dose. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Semi-autonomous remote sensing time series generation tool
NASA Astrophysics Data System (ADS)
Babu, Dinesh Kumar; Kaufmann, Christof; Schmidt, Marco; Dhams, Thorsten; Conrad, Christopher
2017-10-01
High spatial and temporal resolution data are vital for crop monitoring and phenology change detection. Due to limitations of satellite architectures and frequent cloud cover, the availability of daily data at high spatial resolution is still far from reality. Remote sensing time series generation of high spatial and temporal data by data fusion seems to be a practical alternative. However, it is not an easy process, since it involves multiple steps and also requires multiple tools. In this paper, a Geographic Information System (GIS) based tool framework is presented for semi-autonomous time series generation. This tool eliminates the difficulties by automating all the steps and enables users to generate synthetic time series data with ease. Firstly, all the steps required for the time series generation process are identified and grouped into blocks based on their functionalities. Then two main frameworks are created, one to perform all the pre-processing steps on various satellite data and the other to perform data fusion to generate time series. The two frameworks can be used individually to perform specific tasks, or they can be combined to perform both processes in one go. This tool can handle most of the known geo data formats currently available, which makes it a generic tool for time series generation from various remote sensing satellite data. The tool is developed as a common platform with a good interface and provides many functions to enable further development of more remote sensing applications. A detailed description of the capabilities and advantages of the frameworks is given in this paper.
Extension of a streamwise upwind algorithm to a moving grid system
NASA Technical Reports Server (NTRS)
Obayashi, Shigeru; Goorjian, Peter M.; Guruswamy, Guru P.
1990-01-01
A new streamwise upwind algorithm was derived to compute unsteady flow fields with the use of a moving-grid system. The temporally nonconservative LU-ADI (lower-upper-factored, alternating-direction-implicit) method was applied for time marching computations. A comparison of the temporally nonconservative method with a time-conservative implicit upwind method indicates that the solutions are insensitive to the conservative properties of the implicit solvers when practical time steps are used. Using this new method, computations were made for an oscillating wing at a transonic Mach number. The computed results confirm that the present upwind scheme captures the shock motion better than the central-difference scheme based on the Beam-Warming algorithm. The new upwind option of the code allows larger time steps and thus is more efficient, even though it requires slightly more computational time per time step than the central-difference option.
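For flavor only, the upwind biasing at the heart of such schemes can be shown on a 1D scalar advection model; this is a generic first-order upwind step, not the paper's streamwise upwind or LU-ADI scheme:

```python
# First-order upwind step for u_t + c u_x = 0 on a periodic grid.
# The stencil is biased toward the direction the information comes from,
# which keeps a discontinuity bounded and oscillation-free.
import numpy as np

def upwind_step(u, c, dt, dx):
    if c >= 0:
        return u - c * dt / dx * (u - np.roll(u, 1))
    return u - c * dt / dx * (np.roll(u, -1) - u)

nx = 200
dx = 1.0 / nx
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # square pulse as a "shock"
c, cfl = 1.0, 0.8
dt = cfl * dx / abs(c)
for _ in range(100):
    u = upwind_step(u, c, dt, dx)
print(u.min(), u.max())   # stays within [0, 1]: no spurious oscillations
```

A central-difference step without added dissipation would produce oscillations around the discontinuity, which is the robustness contrast the abstract draws.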
Visualization of time-varying MRI data for MS lesion analysis
NASA Astrophysics Data System (ADS)
Tory, Melanie K.; Moeller, Torsten; Atkins, M. Stella
2001-05-01
Conventional methods to diagnose and follow treatment of Multiple Sclerosis require radiologists and technicians to compare current images with older images of a particular patient on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color) or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
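A rough scalar sketch of an adaptive-step equalizer update (an LMS stand-in for the MMSE MIMO equalizer of the paper; the channel, decay schedule, and lookup-table analogue below are assumptions):

```python
# LMS equalizer with a step-size schedule: large steps early for fast
# convergence, small steps later for a low error floor. A scalar channel
# stands in for the 6x6 few-mode MIMO case.
import numpy as np

rng = np.random.default_rng(1)
n, taps = 5000, 7
h = np.array([0.2, 1.0, 0.3])                 # unknown channel response
s = rng.choice([-1.0, 1.0], n)                # training symbols
r = np.convolve(s, h)[:n] + 0.01 * rng.standard_normal(n)

w = np.zeros(taps)
mu_max, mu_min = 0.05, 0.005
for k in range(taps, n):
    x = r[k - taps:k][::-1]                   # regressor, most recent first
    e = s[k - taps // 2 - 1] - w @ x          # error vs. delayed training symbol
    mu = max(mu_min, mu_max * 0.999 ** k)     # scheduled step size (a stand-in
                                              # for the paper's lookup table)
    w += mu * e * x                           # LMS update
print("final |e|:", abs(e))
```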
Mathematical Modelling of Waveguiding Techniques and Electron Transport. Volume 1.
1984-01-01
…at the end of each output time step. The difficulty here is that the last working time step is then simply what is required to hit the output time… Tabata(22) curve fit algorithm. The comparison of the energy deposition profiles for the 1.0 MeV case is given in Table 4. More complete tables are…
Quick, Jacob A; MacIntyre, Allan D; Barnes, Stephen L
2014-02-01
Surgical airway creation has a high potential for disaster. Conventional methods can be cumbersome and require special instruments. A simple method utilizing three steps and readily available equipment exists, but has yet to be adequately tested. Our objective was to compare conventional cricothyroidotomy with the three-step method utilizing high-fidelity simulation. Utilizing a high-fidelity simulator, 12 experienced flight nurses and paramedics performed both methods after a didactic lecture, simulator briefing, and demonstration of each technique. Six participants performed the three-step method first, and the remaining 6 performed the conventional method first. Each participant was filmed and timed. We analyzed videos with respect to the number of hand repositions, number of airway instrumentations, and technical complications. Times to successful completion were measured from incision to balloon inflation. The three-step method was completed faster (52.1 s vs. 87.3 s; p = 0.007) as compared with conventional surgical cricothyroidotomy. The two methods did not differ statistically regarding number of hand movements (3.75 vs. 5.25; p = 0.12) or instrumentations of the airway (1.08 vs. 1.33; p = 0.07). The three-step method resulted in 100% successful airway placement on the first attempt, compared with 75% of the conventional method (p = 0.11). Technical complications occurred more with the conventional method (33% vs. 0%; p = 0.05). The three-step method, using an elastic bougie with an endotracheal tube, was shown to require fewer total hand movements, took less time to complete, resulted in more successful airway placement, and had fewer complications compared with traditional cricothyroidotomy. Published by Elsevier Inc.
Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.
Shelley, M J; Tao, L
2001-01-01
To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time step of Delta t = 0.5 x 10^-3 seconds, whereas to achieve comparable accuracy using a recalibrated second-order or a first-order algorithm requires time steps of 10^-5 seconds or 10^-9 seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
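A minimal sketch of the interpolate-and-recalibrate idea on a single leaky integrate-and-fire cell (the parameters and the second-order variant are ours, not the paper's network):

```python
# RK2 step with a linear interpolant for the spike time: when the potential
# crosses threshold within a step, estimate the crossing time, reset, and
# integrate the remainder of the step from the reset potential.

V_TH, V_RESET, TAU, I_EXT = 1.0, 0.0, 0.02, 60.0

def dvdt(v):
    return -v / TAU + I_EXT

def rk2_step(v, dt):
    k1 = dvdt(v)
    k2 = dvdt(v + dt * k1)
    return v + 0.5 * dt * (k1 + k2)

def step_with_reset(v, t, dt):
    v_new = rk2_step(v, dt)
    if v_new < V_TH:
        return v_new, t + dt, None
    frac = (V_TH - v) / (v_new - v)          # linear interpolant for spike time
    t_spike = t + frac * dt
    v_rest = rk2_step(V_RESET, dt * (1.0 - frac))   # recalibrated remainder
    return v_rest, t + dt, t_spike

v, t, spikes = 0.0, 0.0, []
dt = 5e-4
while t < 0.1:
    v, t, ts = step_with_reset(v, t, dt)
    if ts is not None:
        spikes.append(ts)
print(len(spikes), spikes[:3])
```

The same pattern extends to fourth order by replacing the RK2 sub-steps and the linear interpolant with higher-order counterparts, which is the direction the abstract describes.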
Does the use of automated fetal biometry improve clinical work flow efficiency?
Espinoza, Jimmy; Good, Sara; Russell, Evie; Lee, Wesley
2013-05-01
This study was designed to compare the work flow efficiency of manual measurements of 5 fetal parameters with a novel technique that automatically measures these parameters from 2-dimensional sonograms. This prospective study included 200 singleton pregnancies between 15 and 40 weeks' gestation. Patients were randomly allocated to either manual (n = 100) or automatic (n = 100) fetal biometry. The automatic measurement was performed using a commercially available software application. A digital video recorder captured all on-screen activity associated with the sonographic examination. The examination time and number of steps required to obtain fetal measurements were compared between manual and automatic methods. The mean time required to obtain the biometric measurements was significantly shorter using the automated technique than the manual approach (P < .001 for all comparisons). Similarly, the mean number of steps required to perform these measurements was significantly fewer with automatic measurements compared to the manual technique (P < .001). In summary, automated biometry reduced the examination time required for standard fetal measurements. This approach may improve work flow efficiency in busy obstetric sonography practices.
Kim, Hong-Seok; Choi, Dasom; Kang, Il-Byeong; Kim, Dong-Hyeon; Yim, Jin-Hyeok; Kim, Young-Ji; Chon, Jung-Whan; Oh, Deog-Hwan; Seo, Kun-Ho
2017-02-01
Culture-based detection of nontyphoidal Salmonella spp. in foods requires at least four working days; therefore, new detection methods that shorten the test time are needed. In this study, we developed a novel single-step Salmonella enrichment broth, SSE-1, and compared its detection capability with that of commercial single-step ONE broth-Salmonella (OBS) medium and a conventional two-step enrichment method using buffered peptone water and Rappaport-Vassiliadis soy broth (BPW-RVS). Minimally processed lettuce samples were artificially inoculated with low levels of healthy and cold-injured Salmonella Enteritidis (10^0 or 10^1 colony-forming units/25 g), incubated in OBS, BPW-RVS, and SSE-1 broths, and streaked on xylose lysine deoxycholate (XLD) agar. Salmonella recoverability was significantly higher in BPW-RVS (79.2%) and SSE-1 (83.3%) than in OBS (39.3%) (p < 0.05). Our data suggest that the SSE-1 single-step enrichment broth could completely replace two-step enrichment, reducing the enrichment time from 48 to 24 h and performing better than the commercial single-step enrichment medium in conventional nonchromogenic Salmonella detection, thus saving time, labor, and cost.
Automating the evaluation of flood damages: methodology and potential gains
NASA Astrophysics Data System (ADS)
Eleutério, Julian; Martinez, Edgar Daniel
2010-05-01
The evaluation of flood damage potential consists of three main steps: assessing and processing data, combining data and calculating potential damages. The first step consists of modelling hazard and assessing vulnerability. In general, this step of the evaluation demands more time and investments than the others. The second step of the evaluation consists of combining spatial data on hazard with spatial data on vulnerability. Geographic Information System (GIS) is a fundamental tool in the realization of this step. GIS software allows the simultaneous analysis of spatial and matrix data. The third step of the evaluation consists of calculating potential damages by means of damage-functions or contingent analysis. All steps demand time and expertise. However, the last two steps must be realized several times when comparing different management scenarios. In addition, uncertainty analysis and sensitivity test are made during the second and third steps of the evaluation. The feasibility of these steps could be relevant in the choice of the extent of the evaluation. Low feasibility could lead to choosing not to evaluate uncertainty or to limit the number of scenario comparisons. Several computer models have been developed over time in order to evaluate the flood risk. GIS software is largely used to realise flood risk analysis. The software is used to combine and process different types of data, and to visualise the risk and the evaluation results. The main advantages of using a GIS in these analyses are: the possibility of "easily" realising the analyses several times, in order to compare different scenarios and study uncertainty; the generation of datasets which could be used any time in future to support territorial decision making; the possibility of adding information over time to update the dataset and make other analyses. However, these analyses require personnel specialisation and time. The use of GIS software to evaluate the flood risk requires personnel with a double professional specialisation. The professional should be proficient in GIS software and in flood damage analysis (which is already a multidisciplinary field). Great effort is necessary in order to correctly evaluate flood damages, and the updating and the improvement of the evaluation over time become a difficult task. The automation of this process should bring great advance in flood management studies over time, especially for public utilities. This study has two specific objectives: (1) show the entire process of automation of the second and third steps of flood damage evaluations; and (2) analyse the induced potential gains in terms of time and expertise needed in the analysis. A programming language is used within GIS software in order to automate hazard and vulnerability data combination and potential damages calculation. We discuss the overall process of flood damage evaluation. The main result of this study is a computational tool which allows significant operational gains on flood loss analyses. We quantify these gains by means of a hypothetical example. The tool significantly reduces the time of analysis and the needs for expertise. An indirect gain is that sensitivity and cost-benefit analyses can be more easily realized.
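A toy sketch of the second and third steps as described, combining a hazard layer with a vulnerability layer and applying a damage function (NumPy arrays stand in for GIS rasters; the depth-damage curve and asset values are invented):

```python
# Combine water depth (hazard) with exposed asset value (vulnerability)
# and apply a depth-damage curve to get potential damages per cell.
import numpy as np

depth = np.array([[0.0, 0.5, 1.2],      # flood depth per cell (m)
                  [0.2, 0.9, 2.0]])
value = np.array([[100, 250, 250],      # exposed asset value per cell
                  [0,   400, 400]])     # (thousand EUR, illustrative)

def damage_fraction(d):
    """Toy depth-damage curve: zero below 0.1 m, saturating at 2 m."""
    return np.clip((d - 0.1) / 1.9, 0.0, 1.0)

damage = damage_fraction(depth) * value
print("total potential damage:", damage.sum())
```

Once wrapped in a script, the same computation can be re-run for each management scenario or perturbed input, which is precisely the repeatability gain the authors quantify.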
A collaborative approach to lean laboratory workstation design reduces wasted technologist travel.
Yerian, Lisa M; Seestadt, Joseph A; Gomez, Erron R; Marchant, Kandice K
2012-08-01
Lean methodologies have been applied in many industries to reduce waste. We applied Lean techniques to redesign laboratory workstations with the aim of reducing the number of times employees must leave their workstations to complete their tasks. At baseline in 68 workflows (aggregates or sequence of process steps) studied, 251 (38%) of 664 tasks required workers to walk away from their workstations. After analysis and redesign, only 59 (9%) of the 664 tasks required technologists to leave their workstations to complete these tasks. On average, 3.4 travel events were removed for each workstation. Time studies in a single laboratory section demonstrated that workers spend 8 to 70 seconds in travel each time they step away from the workstation. The redesigned workstations will allow employees to spend less time travelling around the laboratory. Additional benefits include employee training in waste identification, improved overall laboratory layout, and identification of other process improvement opportunities in our laboratory.
Tan, Swee Jin; Phan, Huan; Gerry, Benjamin Michael; Kuhn, Alexandre; Hong, Lewis Zuocheng; Min Ong, Yao; Poon, Polly Suk Yean; Unger, Marc Alexander; Jones, Robert C.; Quake, Stephen R.; Burkholder, William F.
2013-01-01
Library preparation for next-generation DNA sequencing (NGS) remains a key bottleneck in the sequencing process which can be relieved through improved automation and miniaturization. We describe a microfluidic device for automating laboratory protocols that require one or more column chromatography steps and demonstrate its utility for preparing Next Generation sequencing libraries for the Illumina and Ion Torrent platforms. Sixteen different libraries can be generated simultaneously with significantly reduced reagent cost and hands-on time compared to manual library preparation. Using an appropriate column matrix and buffers, size selection can be performed on-chip following end-repair, dA tailing, and linker ligation, so that the libraries eluted from the chip are ready for sequencing. The core architecture of the device ensures uniform, reproducible column packing without user supervision and accommodates multiple routine protocol steps in any sequence, such as reagent mixing and incubation; column packing, loading, washing, elution, and regeneration; capture of eluted material for use as a substrate in a later step of the protocol; and removal of one column matrix so that two or more column matrices with different functional properties can be used in the same protocol. The microfluidic device is mounted on a plastic carrier so that reagents and products can be aliquoted and recovered using standard pipettors and liquid handling robots. The carrier-mounted device is operated using a benchtop controller that seals and operates the device with programmable temperature control, eliminating any requirement for the user to manually attach tubing or connectors. In addition to NGS library preparation, the device and controller are suitable for automating other time-consuming and error-prone laboratory protocols requiring column chromatography steps, such as chromatin immunoprecipitation. PMID:23894273
Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems
NASA Technical Reports Server (NTRS)
Cerro, J. A.; Scotti, S. J.
1991-01-01
Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently, attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.
A road map to the new frontier: finding ETI
NASA Astrophysics Data System (ADS)
Bertaux, J. L.
2014-04-01
An obvious New Frontier for humanity is to locate our nearest technically advanced neighbors (ETI, extra-terrestrial intelligence). This quest can be achieved in three steps: (1) find the nearest exoplanets in the habitable zone (HZ); (2) find biosignatures in their spectra; (3) find signs of advanced technology. We argue that steps 2 and 3 will require space telescopes that need to be oriented to targets already identified in step 1 as hosting exoplanets of Earth or super-Earth size in the habitable zone. We show that non-transiting planets in the HZ are 3 to 9 times nearer the Sun than transiting planets, the gain factor being a function of star temperature. The requirement for step 1 is within the reach of a network of 2.5 m diameter ground-based automated telescopes associated with HARPS-type spectrometers.
Mass production of silicon pore optics for ATHENA
NASA Astrophysics Data System (ADS)
Wille, Eric; Bavdaz, Marcos; Collon, Maximilien
2016-07-01
Silicon Pore Optics (SPO) provide high angular resolution with low effective area density, as required for the Advanced Telescope for High Energy Astrophysics (Athena). The x-ray telescope consists of several hundred SPO mirror modules. During the development of the process steps of the SPO technology, specific requirements of a future mass production have been considered right from the beginning. The manufacturing methods heavily utilise off-the-shelf equipment from the semiconductor industry, robotic automation, and parallel processing. This allows the present production flow to be scaled up cost-effectively to produce hundreds of mirror modules per year. Based on manufacturing predictions for the current technology status, we present an analysis of the time and resources required for the Athena flight programme. This includes the full production process, starting with Si wafers up to the integration of the mirror modules. We present the times required for the individual process steps and identify the equipment required to produce two mirror modules per day. A preliminary timeline for building and commissioning the required infrastructure, and for flight model production of about 1000 mirror modules, is presented.
Keilholz, L; Willner, J; Thiel, H-J; Zamboglou, N; Sack, H; Popp, W
2014-01-01
In order to evaluate resource requirements, the German Society of Radiation Oncology (DEGRO) recorded the times needed for core procedures in the radio-oncological treatment of various cancer types within the scope of its QUIRO trial. The present study investigated the personnel and infrastructural resources required in radiotherapy of prostate cancer. The investigation was carried out in the setting of definitive radiotherapy of prostate cancer patients between July and October 2008 at two radiotherapy centers, both with well-trained staff and modern technical facilities at their disposal. Personnel attendance times and room occupancy times required for core procedures (modules) were each measured prospectively by two independently trained observers, with time measurements differentiated by professional group (physician, physicist, and technician) and by technique, 3D conformal (3D-cRT) versus intensity-modulated radiotherapy (IMRT). For the technicians, total time requirements of 983 min for 3D-cRT and 1485 min for step-and-shoot IMRT were measured across all recorded modules and over the entire course of radiotherapy for prostate cancer (72-76 Gy). Times needed for the medical specialist/physician were 255 min (3D-cRT) and 271 min (IMRT); times for the physicist were 181 min (3D-cRT) and 213 min (IMRT). The difference in time was significant, although variations in time spans arose primarily from various problems during patient treatment. This investigation has permitted, for the first time, a realistic estimation of average personnel and infrastructural requirements for core procedures in quality-assured definitive radiotherapy of prostate cancer. The increased time needed for IMRT applies to the step-and-shoot procedure with verification measurements for each irradiation plan.
Nickman, Nancy A; Haak, Sandra W; Kim, Jaewhan
2010-04-08
Numerous pen devices are available to administer recombinant Human Growth Hormone (rhGH), and both patients and health plans have varying issues to consider when selecting a particular product and device for daily use. Therefore, the present study utilized multi-dimensional product analysis to assess potential time involvement, required weekly administration steps, and utilization costs relative to daily rhGH administration. Study objectives were to conduct 1) Time-and-Motion (TM) simulations in a randomized block design that allowed time and steps comparisons related to rhGH preparation, administration and storage, and 2) a Cost Minimization Analysis (CMA) relative to opportunity and supply costs. Nurses naïve to rhGH administration and devices were recruited to evaluate four rhGH pen devices (2 in liquid form, 2 requiring reconstitution) via TM simulations. Five videotaped and timed trials for each product were evaluated based on: 1) Learning (initial use instructions), 2) Preparation (arrange device for use), 3) Administration (actual simulation manikin injection), and 4) Storage (maintain product viability between doses), in addition to assessment of steps required for weekly use. The CMA applied micro-costing techniques related to opportunity costs for caregivers (categorized as wages), non-drug medical supplies, and drug product costs. Norditropin® NordiFlex and Norditropin® NordiPen (NNF and NNP, Novo Nordisk, Inc., Bagsværd, Denmark) took less weekly Total Time (p < 0.05) to use than either of the comparator products, Genotropin® Pen (GTP, Pfizer, Inc, New York, New York) or HumatroPen® (HTP, Eli Lilly and Company, Indianapolis, Indiana). Time savings were directly related to differences in new package Preparation times (NNF, 1.35 minutes; NNP, 2.48 minutes; GTP, 4.11 minutes; HTP, 8.64 minutes; p < 0.05). Administration and Storage times were not statistically different. NNF (15.8 minutes) and NNP (16.2 minutes) also took less time to Learn than HTP (24.0 minutes) and GTP (26.0 minutes; p < 0.05). The number of weekly required administration steps was also least with NNF and NNP. Opportunity cost savings were greater in devices that were easier to prepare for use; GTP represented an 11.8% drug product savings over NNF, NNP and HTP at time of study. Overall supply costs represented <1% of drug costs for all devices. Time-and-motion simulation data used to support a micro-cost analysis demonstrated that the pen device with the greater time demand has highest net costs.
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.
1990-01-01
The current work is initiated in an effort to obtain an efficient, accurate, and robust algorithm for the numerical solution of the incompressible Navier-Stokes equations in two- and three-dimensional generalized curvilinear coordinates for both steady-state and time-dependent flow problems. This is accomplished with the use of the method of artificial compressibility and a high-order flux-difference splitting technique for the differencing of the convective terms. Time accuracy is obtained in the numerical solutions by subiterating the equations in pseudo-time for each physical time step. The system of equations is solved with a line-relaxation scheme which allows the use of very large pseudo-time steps, leading to fast convergence for steady-state problems as well as for the subiterations of time-dependent problems. Numerous laminar test flow problems are computed and presented with a comparison against analytically known solutions or experimental results. These include the flow in a driven cavity, the flow over a backward-facing step, the steady and unsteady flow over a circular cylinder, flow over an oscillating plate, flow through a one-dimensional inviscid channel with oscillating back pressure, the steady-state flow through a square duct with a 90 degree bend, and the flow through an artificial heart configuration with moving boundaries. An adequate comparison with the analytical or experimental results is obtained in all cases. Numerical comparisons of the upwind differencing with central differencing plus artificial dissipation indicate that the upwind differencing provides a much more robust algorithm, which requires significantly less computing time. The time-dependent problems require on the order of 10 to 20 subiterations, indicating that the elliptic nature of the problem does require a substantial amount of computing effort.
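As a hedged scalar analogue of the pseudo-time subiteration (dual time stepping) idea, assuming backward Euler in physical time and an explicit pseudo-time march (the model equation and counts are ours, not the paper's solver):

```python
# Dual time stepping in miniature: each physical step of an implicit scheme
# is converged by marching the step residual to zero in pseudo-time tau.

def f(u):                        # physical right-hand side du/dt = f(u)
    return -u * abs(u)           # mildly nonlinear model

def physical_step(u_n, dt, dtau=0.05, iters=200):
    u = u_n
    for _ in range(iters):       # pseudo-time subiterations
        res = (u - u_n) / dt - f(u)   # backward-Euler step residual
        u = u - dtau * res            # explicit march in pseudo-time
    return u

u, dt = 1.0, 0.1
for _ in range(10):
    u = physical_step(u, dt)
print(u)
```

When the pseudo-time iteration converges, the result satisfies the implicit physical-time discretization exactly, which is what lets the physical step be chosen on accuracy grounds alone.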
Training Rapid Stepping Responses in an Individual With Stroke
Inness, Elizabeth L.; Komar, Janice; Biasin, Louis; Brunton, Karen; Lakhani, Bimal; McIlroy, William E.
2011-01-01
Background and Purpose Compensatory stepping reactions are important responses to prevent a fall following a postural perturbation. People with hemiparesis following a stroke show delayed initiation and execution of stepping reactions and often are found to be unable to initiate these steps with the more-affected limb. This case report describes a targeted training program involving repeated postural perturbations to improve control of compensatory stepping in an individual with stroke. Case Description Compensatory stepping reactions of a 68-year-old man were examined 52 days after left hemorrhagic stroke. He required assistance to prevent a fall in all trials administered during his initial examination because he showed weight-bearing asymmetry (with more weight borne on the more-affected right side), was unable to initiate stepping with the right leg (despite blocking of the left leg in some trials), and demonstrated delayed response times. The patient completed 6 perturbation training sessions (30–60 minutes per session) that aimed to improve preperturbation weight-bearing symmetry, to encourage stepping with the right limb, and to reduce step initiation and completion times. Outcomes Improved efficacy of compensatory stepping reactions with training and reduced reliance on assistance to prevent falling were observed. Improvements were noted in preperturbation asymmetry and step timing. Blocking the left foot was effective in encouraging stepping with the more-affected right foot. Discussion This case report demonstrates potential short-term adaptations in compensatory stepping reactions following perturbation training in an individual with stroke. Future work should investigate the links between improved compensatory step characteristics and fall risk in this vulnerable population. PMID:21511992
Ye, Jianchu; Tu, Song; Sha, Yong
2010-10-01
For two-step transesterification biodiesel production from sunflower oil, based on a kinetics model of the homogeneous base-catalyzed transesterification and the liquid-liquid phase equilibrium of the transesterification product, the total methanol/oil mole ratio, the total reaction time, and the split ratios of methanol and reaction time between the two reactors of the two-step reaction stage are determined quantitatively. In consideration of the transesterification intermediate product, both the traditional distillation separation process and an improved separation process for the two-step reaction product are investigated in detail by means of rigorous process simulation. In comparison with the traditional distillation process, the improved separation process has a distinct advantage in energy duty and equipment requirements because it replaces the costly methanol-biodiesel distillation column. Copyright 2010 Elsevier Ltd. All rights reserved.
More realistic power estimation for new user, active comparator studies: an empirical example.
Gokhale, Mugdha; Buse, John B; Pate, Virginia; Marquis, M Alison; Stürmer, Til
2016-04-01
Pharmacoepidemiologic studies are often expected to be sufficiently powered to study rare outcomes, but there is a sequential loss of power as study design options that minimize bias are implemented. We illustrate this using a study comparing pancreatic cancer incidence after initiating dipeptidyl-peptidase-4 inhibitors (DPP-4i) versus thiazolidinediones or sulfonylureas. We identified Medicare beneficiaries with at least one claim of DPP-4i or comparators during 2007-2009 and then applied the following steps: (i) exclude prevalent users, (ii) require a second prescription of the same drug, (iii) exclude prevalent cancers, (iv) exclude patients aged <66 years, and (v) censor for treatment changes during follow-up. Power to detect hazard ratios (an effect measure strongly driven by the number of events) ≥ 2.0 estimated after step 5 was compared with the naïve power estimated prior to step 1. There were 19,388 and 28,846 DPP-4i and thiazolidinedione initiators during 2007-2009. The number of drug initiators dropped most after requiring a second prescription, the number of outcomes dropped most after excluding patients with prevalent cancer, and person-time dropped most after requiring a second prescription and as-treated censoring. The naïve power (>99%) was considerably higher than the power obtained after the final step (~75%). In designing new-user, active-comparator studies, one should be mindful of how steps minimizing bias affect sample size, the number of outcomes, and person-time. While actual numbers will depend on specific settings, applying generic percentage losses will improve estimates of power compared with the naïve approach, which largely ignores the steps taken to increase validity. Copyright © 2015 John Wiley & Sons, Ltd.
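To make the events-to-power link concrete, a back-of-envelope calculation using Schoenfeld's approximation for time-to-event studies (the event counts below are illustrative, not the study's):

```python
# Power to detect a hazard ratio depends (approximately) only on the number
# of events and the exposure split: se(log HR) ~ 1/sqrt(d * p * (1 - p)).
from math import erf, log, sqrt

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def power_hr(events, hr, p_exposed=0.5, alpha=0.05):
    z_alpha = 1.959963984540054          # Phi^-1(1 - alpha/2) for alpha = 0.05
    se = 1.0 / sqrt(events * p_exposed * (1.0 - p_exposed))
    return norm_cdf(abs(log(hr)) / se - z_alpha)

# Each design step that trims events trims power with it:
for events in (60, 40, 25, 15):
    print(events, round(power_hr(events, hr=2.0), 3))
```

With 60 events and a balanced split, power to detect HR = 2.0 is roughly 0.77, close to the ~75% in the abstract; with 15 events it collapses toward 0.3, which is why sequential exclusions matter.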
Design of a laser rangefinder for Martian terrain measurements. M.S. Thesis
NASA Technical Reports Server (NTRS)
Palumbo, D. L.
1973-01-01
Methods for using a laser for rangefinding are discussed. These are: (1) Optical Focusing, (2) the Phase Difference Method, and (3) Timed Pulse. For application on a Mars Rover, the Timed Pulse Method proves to be the best choice in view of the requirements set down. This is made possible by pulse expansion techniques described in detail. Initial steps taken toward building the rangefinder are given, followed by a conclusion which is actually a proposal for future steps.
Modeling laser-plasma acceleration in the laboratory frame
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2011-01-01
A simulation of laser-plasma acceleration in the laboratory frame. Both the laser and the wakefield buckets must be resolved over the entire domain of the plasma, requiring many cells and many time steps. While researchers often use a simulation window that moves with the pulse, this reduces only the multitude of cells, not the multitude of time steps. For an artistic impression of how to solve the simulation by using the boosted-frame method, watch the video "Modeling laser-plasma acceleration in the wakefield frame".
Advances in Visualization of 3D Time-Dependent CFD Solutions
NASA Technical Reports Server (NTRS)
Lane, David A.; Lasinski, T. A. (Technical Monitor)
1995-01-01
Numerical simulations of complex 3D time-dependent (unsteady) flows are becoming increasingly feasible because of the progress in computing systems. Unfortunately, many existing flow visualization systems were developed for time-independent (steady) solutions and do not adequately depict solutions from unsteady flow simulations. Furthermore, most systems only handle one time step of the solutions individually and do not consider the time-dependent nature of the solutions. For example, instantaneous streamlines are computed by tracking the particles using one time step of the solution. However, for streaklines and timelines, particles need to be tracked through all time steps. Streaklines can reveal quite different information about the flow than those revealed by instantaneous streamlines. Comparisons of instantaneous streamlines with dynamic streaklines are shown. For a complex 3D flow simulation, it is common to generate a grid system with several millions of grid points and to have tens of thousands of time steps. The disk requirement for storing the flow data can easily be tens of gigabytes. Visualizing solutions of this magnitude is a challenging problem with today's computer hardware technology. Even interactive visualization of one time step of the flow data can be a problem for some existing flow visualization systems because of the size of the grid. Current approaches for visualizing complex 3D time-dependent CFD solutions are described. The flow visualization system developed at NASA Ames Research Center to compute time-dependent particle traces from unsteady CFD solutions is described. The system computes particle traces (streaklines) by integrating through the time steps. This system has been used by several NASA scientists to visualize their CFD time-dependent solutions. The flow visualization capabilities of this system are described, and visualization results are shown.
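A small sketch of streakline construction by integrating seeded particles through successive time steps (an analytic 2D velocity field stands in for stored CFD snapshots; the field and seeding rate are assumptions):

```python
# Streakline: release a new particle from the seed point at every time step
# and advect all previously released particles through the unsteady field.
import numpy as np

def velocity(p, t):
    """Unsteady toy field: uniform flow plus a time-varying crossflow."""
    u = np.ones(len(p))
    v = 0.3 * np.sin(2.0 * np.pi * (p[:, 0] - t))
    return np.column_stack([u, v])

seed = np.array([0.0, 0.0])
dt, nsteps = 0.02, 200
particles = np.empty((0, 2))
for k in range(nsteps):
    t = k * dt
    if len(particles):
        particles += dt * velocity(particles, t)   # advect earlier releases
    particles = np.vstack([particles, seed])       # release a new particle
print(particles[:5])   # the particle set traces the streakline
```

An instantaneous streamline, by contrast, would integrate through a single frozen time step, which is why the two can look quite different in unsteady flow.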
NASA Astrophysics Data System (ADS)
Djaman, Koffi; Irmak, Suat; Sall, Mamadou; Sow, Abdoulaye; Kabenge, Isa
2017-10-01
The objective of this study was to quantify differences associated with using 24-h time step reference evapotranspiration (ETo), as compared with the sum of hourly ETo computations with the standardized ASCE Penman-Monteith (ASCE-PM) model for semi-arid dry conditions at Fanaye and Ndiaye (Senegal) and semi-arid humid conditions at Sapu (The Gambia) and Kankan (Guinea). The results showed good agreement between the sum of hourly ETo and daily time step ETo at all four locations. The daily time step overestimated the daily ETo relative to the sum of hourly ETo by 1.3 to 8% over the whole study periods. However, the magnitude of the ETo values and the ratio of the two estimates depend on location and month. The sum of hourly ETo tends to be higher during winter at Fanaye and Sapu, while the daily ETo was higher from March to November at the same weather stations. At Ndiaye and Kankan, daily time step estimates of ETo were higher throughout the year. The simple linear regression slopes between the sum of hourly ETo and the daily time step ETo at all weather stations varied from 1.02 to 1.08 with a high coefficient of determination (R² ≥ 0.87). Applying the hourly ETo estimation method might improve the accuracy of ETo estimates used to meet irrigation requirements under precision agriculture.
Simplified jet-A kinetic mechanism for combustor application
NASA Technical Reports Server (NTRS)
Lee, Chi-Ming; Kundu, Krishna; Ghorashi, Bahman
1993-01-01
Successful modeling of combustion and emissions in gas turbine engine combustors requires an adequate description of the reaction mechanism. For hydrocarbon oxidation, detailed mechanisms are only available for the simplest types of hydrocarbons such as methane, ethane, acetylene, and propane. These detailed mechanisms contain a large number of chemical species participating simultaneously in many elementary kinetic steps. Current computational fluid dynamic (CFD) models must include fuel vaporization, fuel-air mixing, chemical reactions, and complicated boundary geometries. Simulating these conditions demands a very sophisticated computer model with large memory capacity and long run times. Therefore, gas turbine combustion modeling has frequently been simplified by using global reaction mechanisms, which can predict only the quantities of interest: heat release rates, flame temperature, and emissions. Jet fuels are wide-boiling-range hydrocarbons with ranges extending through those of gasoline and kerosene. These fuels are chemically complex, often containing more than 300 components. Jet fuel typically can be characterized as containing 70 vol pct paraffin compounds and 25 vol pct aromatic compounds. A five-step Jet-A fuel mechanism involving pyrolysis and subsequent oxidation of paraffin and aromatic compounds is presented here. This mechanism is verified by comparison with experimental Jet-A ignition delay times and with species concentrations obtained from flametube experiments. This five-step mechanism appears to be better than the current one- and two-step mechanisms.
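Each step in such a global mechanism contributes a rate of Arrhenius form k(T)·∏Cᵢⁿⁱ. A hedged Python sketch of evaluating one global-step rate (the pre-exponential factor, activation energy, concentrations, and reaction orders below are placeholders for illustration, not the published five-step Jet-A parameters):

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def global_rate(A, Ea, T, conc, orders):
    """Generic global-step rate: k(T) * prod(C_i ** n_i).
    A, Ea, conc and orders here are illustrative placeholders, not the
    published Jet-A mechanism parameters."""
    k = A * np.exp(-Ea / (R * T))
    return k * np.prod([c ** n for c, n in zip(conc, orders)])

# e.g. one pyrolysis-like step at 1500 K with assumed fuel/O2 concentrations
print(global_rate(A=1.0e9, Ea=1.3e5, T=1500.0, conc=[0.02, 0.1], orders=[1.0, 0.5]))
```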
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi- and conjugate gradient-based iterative methods using iteration on data is presented. In the new computing technique, the multiplication of a vector by a matrix is reorganized into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient-based methods in solving large breeding value problems is supported by our findings.
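For context, the solver wrapped around the three-step multiply is a standard preconditioned conjugate gradient iteration. A compact Python sketch with a Jacobi preconditioner (in the actual program the matrix-vector product is accumulated by iterating over the data records instead of storing the coefficient matrix; that detail is omitted here):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
    """Textbook preconditioned conjugate gradient for A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M_inv = np.diag(1.0 / np.diag(A))  # Jacobi preconditioner
print(pcg(A, b, M_inv))
```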
ERIC Educational Resources Information Center
Sammataro, Diana
This set of lesson plans, prepared for use by Peace Corps volunteers in the Philippines, has been designed as a step-by-step guide to teaching beekeeping. Each of the eight lesson plans contained in the manual consists of an objective, time requirements, materials needed, and information about various aspects of beekeeping. Lessons are illustrated…
Multi-site Stochastic Simulation of Daily Streamflow with Markov Chain and KNN Algorithm
NASA Astrophysics Data System (ADS)
Mathai, J.; Mujumdar, P.
2017-12-01
A key focus of this study is to develop a method that is physically consistent with the hydrologic processes and can capture short-term characteristics of the daily hydrograph as well as the correlation of streamflow in the temporal and spatial domains. In complex water resource systems, flow fluctuations at small time intervals require that discretisation be done at small time scales such as daily scales. Also, simultaneous generation of synthetic flows at different sites in the same basin is required. We propose a method to equip water managers with a streamflow generator within a stochastic streamflow simulation framework. The motivation for the proposed method is to generate sequences that extend beyond the variability represented in the historical record of the streamflow time series. The method has two steps: in step 1, daily flow is generated independently at each station by a two-state Markov chain, with rising limb increments randomly sampled from a Gamma distribution and the falling limb modelled as exponential recession, and in step 2, the streamflow generated in step 1 is input to a nonparametric K-nearest neighbor (KNN) time series bootstrap resampler. The KNN model, being data driven, does not require assumptions on the dependence structure of the time series. A major limitation of KNN-based streamflow generators is that they do not produce new values, but merely reshuffle the historical data to generate realistic streamflow sequences. However, daily flow generated using the Markov chain approach is capable of generating a rich variety of streamflow sequences. Furthermore, the rising and falling limbs of the daily hydrograph represent different physical processes, and hence they need to be modelled individually. Thus, our method combines the strengths of the two approaches. We show the utility of the method and the improvement over the traditional KNN by simulating daily streamflow sequences at 7 locations in the Godavari River basin in India.
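Step 1 of the proposed generator is simple to sketch. The following Python fragment is our schematic reading of it, with illustrative parameter values; step 2 (the KNN bootstrap resampler applied to the generated series) is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_flow(n_days, q0=50.0, p_rise=0.3, p_stay_rise=0.5,
                  gamma_shape=2.0, gamma_scale=5.0, recession_k=0.9):
    """Two-state Markov chain picks 'rising' or 'falling' each day;
    rising limbs add Gamma-distributed increments, falling limbs decay
    exponentially. All parameter values are illustrative only."""
    q = np.empty(n_days)
    q[0] = q0
    rising = False
    for t in range(1, n_days):
        p = p_stay_rise if rising else p_rise
        rising = rng.random() < p          # Markov transition
        if rising:
            q[t] = q[t - 1] + rng.gamma(gamma_shape, gamma_scale)
        else:
            q[t] = q[t - 1] * recession_k  # exponential recession
    return q

print(generate_flow(10).round(1))
```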
Norris, Michelle; Anderson, Ross; Motl, Robert W; Hayes, Sara; Coote, Susan
2017-03-01
The purpose of this study was to examine the minimum number of days needed to reliably estimate daily step count and energy expenditure (EE), in people with multiple sclerosis (MS) who walked unaided. Seven days of activity monitor data were collected for 26 participants with MS (age = 44.5 ± 11.9 years; time since diagnosis = 6.5 ± 6.2 years; Patient Determined Disease Steps ≤ 3). Mean daily step count and mean daily EE (kcal) were calculated for all combinations of days (127 combinations), and compared to the respective 7-day mean daily step count or mean daily EE using intra-class correlations (ICC), the Generalizability Theory and Bland-Altman. For step count, ICC values of 0.94-0.98 and a G-coefficient of 0.81 indicate a minimum of any random 2-day combination is required to reliably calculate mean daily step count. For EE, ICC values of 0.96-0.99 and a G-coefficient of 0.83 indicate a minimum of any random 4-day combination is required to reliably calculate mean daily EE. For Bland-Altman analyses all combinations of days, bar single-day combinations, resulted in a mean bias within ±10%, when expressed as a percentage of the 7-day mean daily step count or mean daily EE. A minimum of 2 days for step count and 4 days for EE, regardless of day type, is needed to reliably estimate daily step count and daily EE in people with MS who walk unaided. Copyright © 2017 Elsevier B.V. All rights reserved.
Two Independent Contributions to Step Variability during Over-Ground Human Walking
Collins, Steven H.; Kuo, Arthur D.
2013-01-01
Human walking exhibits small variations in both step length and step width, some of which may be related to active balance control. Lateral balance is thought to require integrative sensorimotor control through adjustment of step width rather than length, contributing to greater variability in step width. Here we propose that step length variations are largely explained by the typical human preference for step length to increase with walking speed, which itself normally exhibits some slow and spontaneous fluctuation. In contrast, step width variations should have little relation to speed if they are produced more for lateral balance. As a test, we examined hundreds of overground walking steps by healthy young adults (N = 14, age < 40 yrs.). We found that slow fluctuations in self-selected walking speed (2.3% coefficient of variation) could explain most of the variance in step length (59%, P < 0.01). The residual variability not explained by speed was small (1.5% coefficient of variation), suggesting that step length is actually quite precise if not for the slow speed fluctuations. Step width varied over faster time scales and was independent of speed fluctuations, with variance 4.3 times greater than that for step length (P < 0.01) after accounting for the speed effect. That difference was further magnified by walking with eyes closed, which appears detrimental to control of lateral balance. Humans appear to modulate fore-aft foot placement in precise accordance with slow fluctuations in walking speed, whereas the variability of lateral foot placement appears more closely related to balance. Step variability is separable in both direction and time scale into balance- and speed-related components. The separation of factors not related to balance may reveal which aspects of walking are most critical for the nervous system to control. PMID:24015308
Jones, Brian A; Hull, Melissa A; Potanos, Kristina M; Zurakowski, David; Fitzgibbons, Shimae C; Ching, Y Avery; Duggan, Christopher; Jaksic, Tom; Kim, Heung Bae
2016-01-01
Background The International Serial Transverse Enteroplasty (STEP) Data Registry is a voluntary online database created in 2004 to collect information on patients undergoing the STEP procedure. The aim of this study was to identify preoperative factors significantly associated with 1) transplantation or death, or 2) attainment of enteral autonomy following STEP. Study Design Data were collected from September 2004 to January 2010. Univariate and multivariate logistic regression analyses were applied to determine predictors of transplantation/death or enteral autonomy post-STEP. Time to reach full enteral nutrition was estimated using a Kaplan-Meier curve. Results Fourteen of the 111 patients in the Registry were excluded due to inadequate follow-up. Of the remaining 97 patients, 11 patients died, and 5 progressed to intestinal transplantation. On multivariate analysis, higher direct bilirubin and shorter pre-STEP bowel length were independently predictive of progression to transplantation or death (p = .05 and p < .001, respectively). Of the 78 patients who were ≥7 days of age and required parenteral nutrition (PN) at the time of STEP, 37 (47%) achieved enteral autonomy after the first STEP. Longer pre-STEP bowel length was also independently associated with enteral autonomy (p = .002). The median time to reach enteral autonomy based on Kaplan-Meier analysis was 21 months (95% CI: 12-30). Conclusions Overall mortality post-STEP was 11%. Pre-STEP risk factors for progressing to transplantation or death were higher direct bilirubin and shorter bowel length. Among patients who underwent STEP for short bowel syndrome, 47% attained full enteral nutrition post-STEP. Patients with longer pre-STEP bowel length were significantly more likely to achieve enteral autonomy. PMID:23357726
Symplectic molecular dynamics simulations on specially designed parallel computers.
Borstnik, Urban; Janezic, Dusanka
2005-01-01
We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which treats high-frequency vibrational motion analytically and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. Together, these approaches require fewer and cheaper time steps and so enable fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
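The splitting idea behind SISM can be illustrated on a one-dimensional model: the stiff harmonic part is propagated analytically as a phase-space rotation, and the remaining soft force is applied as numerical half-kicks. A schematic Python sketch (oscillator frequency, slow force, and step size are invented for illustration; the real method works on molecular internal coordinates):

```python
import numpy as np

def sism_step(x, v, dt, omega, slow_force, m=1.0):
    """One split step: the stiff harmonic part exp(dt*L_harm) is solved
    analytically as a rotation, the soft 'slow' force as half-kicks.
    This allows dt well beyond the explicit limit set by omega."""
    v += 0.5 * dt * slow_force(x) / m          # half kick (numerical)
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    x, v = x * c + (v / omega) * s, -x * omega * s + v * c  # analytic rotation
    v += 0.5 * dt * slow_force(x) / m          # half kick
    return x, v

x, v = 1.0, 0.0
for _ in range(100):
    x, v = sism_step(x, v, dt=0.1, omega=50.0, slow_force=lambda x: -0.1 * x**3)
print(x, v)
```

Note that dt = 0.1 here is five periods' worth of the fast motion; a plain explicit integrator would be unstable at this step size, which is the point of the analytic treatment.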
Okubo, Yoshiro; Schoene, Daniel; Lord, Stephen R
2017-04-01
To examine the effects of stepping interventions on fall risk factors and fall incidence in older people. Electronic databases (PubMed, EMBASE, CINAHL, Cochrane, CENTRAL) and reference lists of included articles from inception to March 2015. Randomised (RCT) or clinical controlled trials (CCT) of volitional and reactive stepping interventions that included older (minimum age 60) people providing data on falls or fall risk factors. Meta-analyses of seven RCTs (n=660) showed that the stepping interventions significantly reduced the rate of falls (rate ratio=0.48, 95% CI 0.36 to 0.65, p<0.0001, I²=0%) and the proportion of fallers (risk ratio=0.51, 95% CI 0.38 to 0.68, p<0.0001, I²=0%). Subgroup analyses stratified by reactive and volitional stepping interventions revealed a similar efficacy for rate of falls and proportion of fallers. A meta-analysis of two RCTs (n=62) showed that stepping interventions significantly reduced laboratory-induced falls, and meta-analysis findings of up to five RCTs and CCTs (n=36-416) revealed that stepping interventions significantly improved simple and choice stepping reaction time, single leg stance, and timed up and go performance (p<0.05), but not measures of strength. The findings indicate that both reactive and volitional stepping interventions reduce falls among older adults by approximately 50%. This clinically significant reduction may be due to improvements in reaction time, gait, balance and balance recovery but not in strength. Further high-quality studies aimed at maximising the effectiveness and feasibility of stepping interventions are required. CRD42015017357. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
NASA Astrophysics Data System (ADS)
Zhu, Ying; Herbert, John M.
2018-01-01
The "real time" formulation of time-dependent density functional theory (TDDFT) involves integration of the time-dependent Kohn-Sham (TDKS) equation in order to describe the time evolution of the electron density following a perturbation. This approach, which is complementary to the more traditional linear-response formulation of TDDFT, is more efficient for computation of broad-band spectra (including core-excited states) and for systems where the density of states is large. Integration of the TDKS equation is complicated by the time-dependent nature of the effective Hamiltonian, and we introduce several predictor/corrector algorithms to propagate the density matrix, one of which can be viewed as a self-consistent extension of the widely used modified-midpoint algorithm. The predictor/corrector algorithms facilitate larger time steps and are shown to be more efficient despite requiring more than one Fock build per time step, and furthermore can be used to detect a divergent simulation on-the-fly, which can then be halted or else the time step modified.
Fast intersection detection algorithm for PC-based robot off-line programming
NASA Astrophysics Data System (ADS)
Fedrowitz, Christian H.
1994-11-01
This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model which is managed by a simplified constructive solid geometry model (CSG model). The collision detection problem is divided into two steps. In the first step the complexity of the problem is reduced in linear time. In the second step the remaining solids are tested for intersection. For this the Simplex algorithm, known from linear optimization, is used: it computes a point common to two convex polyhedra, and the polyhedra intersect if and only if such a point exists. For the simplified geometric model used in Ropsus, this step also runs in linear time. Combining the two steps yields a collision detection algorithm that requires linear time overall. Moreover, it computes the resultant intersection polyhedron using the dual transformation.
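The second step reduces to a linear programming feasibility question: two convex polyhedra {x : Ax ≤ b} intersect exactly when the stacked inequality system admits a point. A Python sketch using scipy's LP solver in place of the paper's own Simplex implementation (the box geometry is an invented example):

```python
import numpy as np
from scipy.optimize import linprog

def polyhedra_intersect(A1, b1, A2, b2):
    """Two convex polyhedra {x : A x <= b} collide iff the stacked
    system has a feasible point, decidable by a simplex-type LP solver."""
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    n = A.shape[1]
    res = linprog(c=np.zeros(n), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * n, method="highs")
    return res.status == 0, res.x  # feasible => common point found

# Two unit boxes, the second shifted by 0.5 along x: they overlap.
A_box = np.vstack([np.eye(3), -np.eye(3)])
b1 = np.ones(6)
b2 = np.concatenate([np.ones(3) + [0.5, 0, 0], np.ones(3) - [0.5, 0, 0]])
print(polyhedra_intersect(A_box, b1, A_box, b2))
```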
Mizota, Tomoko; Kurashima, Yo; Poudel, Saseem; Watanabe, Yusuke; Shichinohe, Toshiaki; Hirano, Satoshi
2018-07-01
Despite its advantages, few trainees outside of North America have access to simulation training. We hypothesized that a stepwise training method using a tele-mentoring system would be an efficient technique for training in basic laparoscopic skills. Residents were randomized into two groups and trained to proficiency in intracorporeal suturing. The stepwise group (SG) practiced the task step-by-step, while the comprehensive group (CG) practiced the task as a whole. Each participant received weekly coaching via two-way web conferencing software. The duration of the coaching sessions and self-practice time were compared between the two groups. Twenty residents from 15 institutions participated, and all achieved proficiency. Coaching sessions using the tele-mentoring system were completed without difficulties. The SG required significantly shorter coaching time per session than the CG (p = .002). There was no significant difference in self-practice time. The stepwise training method with the tele-mentoring system appears to make efficient use of surgical trainees' and trainers' time. Copyright © 2017 Elsevier Inc. All rights reserved.
Thermostating extended Lagrangian Born-Oppenheimer molecular dynamics.
Martínez, Enrique; Cawkwell, Marc J; Voter, Arthur F; Niklasson, Anders M N
2015-04-21
Extended Lagrangian Born-Oppenheimer molecular dynamics is developed and analyzed for applications in canonical (NVT) simulations. Three different approaches are considered: the Nosé and Andersen thermostats and Langevin dynamics. We have tested the temperature distribution under different conditions of self-consistent field (SCF) convergence and time step and compared the results to analytical predictions. We find that the simulations based on the extended Lagrangian Born-Oppenheimer framework provide accurate canonical distributions even under approximate SCF convergence, often requiring only a single diagonalization per time step, whereas regular Born-Oppenheimer formulations exhibit unphysical fluctuations unless a sufficiently high degree of convergence is reached at each time step. The thermostated extended Lagrangian framework thus offers an accurate approach to sample processes in the canonical ensemble at a fraction of the computational cost of regular Born-Oppenheimer molecular dynamics simulations.
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) a high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. This approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...
2015-01-20
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
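As a baseline for comparison, the standard second-order trapezoidal step solved with Newton's method looks as follows. A minimal Python sketch on a stiff linear test problem (the problem and tolerances are illustrative; production dislocation dynamics force evaluations are far more expensive):

```python
import numpy as np

def trapezoidal_step(f, jac, y, t, dt, tol=1e-10, max_newton=20):
    """One trapezoidal step y1 = y + dt/2*(f(t,y) + f(t+dt,y1)), with
    the nonlinear system for y1 solved by Newton's method."""
    fy = f(t, y)
    y1 = y + dt * fy                      # explicit Euler predictor
    for _ in range(max_newton):
        g = y1 - y - 0.5 * dt * (fy + f(t + dt, y1))   # residual
        if np.linalg.norm(g) < tol:
            break
        J = np.eye(len(y)) - 0.5 * dt * jac(t + dt, y1)
        y1 = y1 - np.linalg.solve(J, g)   # Newton update
    return y1

# Stiff linear test problem y' = A y.
A = np.array([[-1000.0, 0.0], [0.0, -1.0]])
f = lambda t, y: A @ y
jac = lambda t, y: A
print(trapezoidal_step(f, jac, np.array([1.0, 1.0]), 0.0, 0.1))
```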
NASA Astrophysics Data System (ADS)
Santi, S. S.; Renanto; Altway, A.
2018-01-01
The energy-use system of a production process, in this case the heat exchanger network (HEN), is one element that plays a role in the smoothness and sustainability of the industry itself. Optimizing heat exchanger networks (HENs) built from process streams can have a major effect on the economic value of an industry as a whole, so solving design problems with heat integration is an important requirement. In a plant, heat integration can be carried out internally or in combination between process units. However, determining a suitable heat integration technique conventionally requires long calculations and much time. In this paper, we propose an alternative procedure for determining the heat integration technique by investigating 6 hypothetical units using a Pinch Analysis approach with the energy target and total annual cost target as objective functions. The six hypothetical units, A through F, differ in the location of their process streams relative to the pinch temperature. The result is a potential heat integration (ΔH') formula that trims the conventional procedure from 7 steps to just 3. The preferred heat integration technique is then determined by calculating the heat integration potential (ΔH') between the hypothetical process units. Calculations were completed in the MATLAB programming language.
Mechanical analysis of statolith action in roots and rhizoids.
Todd, P
1994-01-01
Published observations on the response times following gravistimulation (horizontal positioning) of Chara rhizoids and developing roots of vascular plants with normal and "starchless" amyloplasts were reviewed and compared. Statolith motion was found to be consistent with gravitational sedimentation opposed by elastic deformation of an intracellular material. The time required for a statolith to sediment to equilibrium was calculated on the basis of its buoyant density and compared with observed sedimentation times. In the examples chosen, the response time following gravistimulation (from horizontal positioning to the return of downward growth) could be related to the statolith sedimentation time. Such a relationship implies that the transduction step is rapid in comparison with the perception step following gravistimulation of rhizoids and developing roots.
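The sedimentation-time estimate rests on Stokes drag and the statolith's buoyant density. A back-of-the-envelope Python sketch with rough, order-of-magnitude values for an amyloplast-sized particle (all numbers are our assumptions, not the paper's data, and the elastic opposition inferred in the paper is omitted):

```python
# Free Stokes sedimentation of a statolith in cytoplasm. All values are
# rough order-of-magnitude assumptions, not the paper's data.
g = 9.81       # gravitational acceleration, m/s^2
r = 1.0e-6     # statolith radius, m
drho = 500.0   # buoyant density excess over cytoplasm, kg/m^3
mu = 0.1       # effective cytoplasm viscosity, Pa*s
d = 5.0e-6     # distance to sediment, m

v = 2.0 * drho * g * r**2 / (9.0 * mu)  # Stokes terminal velocity
print(f"v = {v:.2e} m/s; time to settle {d*1e6:.0f} um: {d/v/60:.1f} min")
```

With these assumptions the settling time lands on the minutes scale, which is the scale on which the paper compares sedimentation and response times.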
Faria, Eliney F; Caputo, Peter A; Wood, Christopher G; Karam, Jose A; Nogueras-González, Graciela M; Matin, Surena F
2014-02-01
Outcomes of laparoscopic and robotic partial nephrectomy (LPN and RPN) are strongly influenced by tumor complexity and the learning curve. We analyzed a consecutive experience with RPN and LPN to discern whether warm ischemia time (WIT) is in fact improved when accounting for these two confounding variables and, if so, by which particular component of WIT. This is a retrospective analysis of consecutive procedures performed by a single surgeon between 2002-2008 (LPN) and 2008-2012 (RPN). Specifically, the durations of individual steps, including tumor excision and suturing of the intrarenal defect and parenchyma, were recorded at the time of surgery. Multivariate and univariate analyses were used to evaluate the influence of the learning curve, tumor complexity, and the time kinetics of individual steps on WIT. Additionally, we considered the effect of RPN on the learning curve. A total of 146 LPNs and 137 RPNs were included. For renal function, WIT, suturing time, and renorrhaphy time, statistically significant differences were found in favor of RPN (p < 0.05). In the univariate analysis, surgical procedure, learning curve, clinical tumor size, and RENAL nephrometry score were statistically significant predictors of WIT (p < 0.05). RPN decreased the WIT on average by approximately 7 min compared to LPN even when adjusting for learning curve, tumor complexity, and both together (p < 0.001). We found RPN was associated with a shorter WIT when controlling for the influence of the learning curve and tumor complexity. The time required for tumor excision was not shortened, but the time required for the suturing steps was significantly shortened.
A high-throughput semi-automated preparation for filtered synaptoneurosomes.
Murphy, Kathryn M; Balsor, Justin; Beshara, Simon; Siu, Caitlin; Pinto, Joshua G A
2014-09-30
Synaptoneurosomes have become an important tool for studying synaptic proteins. The filtered synaptoneurosomes preparation originally developed by Hollingsworth et al. (1985) is widely used and is an easy method to prepare synaptoneurosomes. The hand processing steps in that preparation, however, are labor intensive and have become a bottleneck for current proteomic studies using synaptoneurosomes. For this reason, we developed new steps for tissue homogenization and filtration that transform the preparation of synaptoneurosomes to a high-throughput, semi-automated process. We implemented a standardized protocol with easy to follow steps for homogenizing multiple samples simultaneously using a FastPrep tissue homogenizer (MP Biomedicals, LLC) and then filtering all of the samples in centrifugal filter units (EMD Millipore, Corp). The new steps dramatically reduce the time to prepare synaptoneurosomes from hours to minutes, increase sample recovery, and nearly double enrichment for synaptic proteins. These steps are also compatible with biosafety requirements for working with pathogen infected brain tissue. The new high-throughput semi-automated steps to prepare synaptoneurosomes are timely technical advances for studies of low abundance synaptic proteins in valuable tissue samples. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Iurashev, Dmytro; Campa, Giovanni; Anisimov, Vyacheslav V.; Cosatto, Ezio
2017-11-01
Currently, gas turbine manufacturers frequently face the problem of strong acoustic combustion-driven oscillations inside combustion chambers. These combustion instabilities can cause extensive wear and sometimes even catastrophic damage to combustion hardware. Preventing combustion instabilities, in turn, requires reliable and fast predictive tools. This work presents a three-step method to find stability margins within which gas turbines can be operated without going into self-excited pressure oscillations. As a first step, a set of unsteady Reynolds-averaged Navier-Stokes simulations with the Flame Speed Closure (FSC) model implemented in the OpenFOAM® environment are performed to obtain the flame describing function of the combustor set-up. The standard FSC model is extended in this work to take into account the combined effect of strain and heat losses on the flame. As a second step, a linear three-time-lag-distributed model for a perfectly premixed swirl-stabilized flame is extended to the nonlinear regime. The factors causing changes in the model parameters when applying high-amplitude velocity perturbations are analysed. As a third step, time-domain simulations employing a low-order network model implemented in Simulink® are performed. In this work, the proposed method is applied to a laboratory test rig. The proposed method permits not only the unsteady frequencies of acoustic oscillations to be computed, but the amplitudes of such oscillations as well. Knowing the amplitudes of unstable pressure oscillations, it is possible to determine how harmful these oscillations are to the combustor equipment. The proposed method has a low cost because it does not require any license for computational fluid dynamics software.
Magnetic timing valves for fluid control in paper-based microfluidics.
Li, Xiao; Zwanenburg, Philip; Liu, Xinyu
2013-07-07
Multi-step analytical tests, such as an enzyme-linked immunosorbent assay (ELISA), require delivery of multiple fluids into a reaction zone and timing of the incubation at different steps. This paper presents a new type of paper-based magnetic valve that can count time and turn a fluidic flow on or off accordingly, enabling timed fluid control in paper-based microfluidics. The timing capability of these valves is realized using a paper timing channel with an ionic resistor, which detects the event of a solution flowing through the resistor and triggers an electromagnet (through a simple circuit) to open or close a paper cantilever valve. Based on this principle, we developed normally-open and normally-closed valves with a timing period up to 30.3 ± 2.1 min (sufficient for an ELISA on paper-based platforms). Using the normally-open valve, we performed an enzyme-based colorimetric reaction commonly used for signal readout of ELISAs, which requires a timed delivery of an enzyme substrate to a reaction zone. This design adds a new fluid-control component to the tool set for developing paper-based microfluidic devices, and has the potential to improve the user-friendliness of these devices.
Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.
Ouyang, Yicun; Yin, Hujun
2018-05-01
Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently one-step predictors, forecasting one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and the accumulation of uncertainty or error. The main existing approaches, iterative and independent, either apply a one-step model recursively or treat each forecasting horizon as an independent model. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented to various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in component AR models of various predicting horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over the existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
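The iterative and independent baselines the paper contrasts are easy to state concretely. A minimal Python sketch with ordinary least-squares AR models (the AR order, data, and horizon are illustrative; VLM itself is not reproduced here):

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares AR(p): y[t] ~ w @ y[t-p:t]."""
    X = np.array([y[t - p:t] for t in range(p, len(y))])
    return np.linalg.lstsq(X, y[p:], rcond=None)[0]

def iterative_forecast(y, w, horizon):
    """Iterative approach: one-step model applied recursively, so
    errors feed back into later predictions."""
    hist = list(y[-len(w):])
    out = []
    for _ in range(horizon):
        out.append(np.dot(w, hist[-len(w):]))
        hist.append(out[-1])
    return np.array(out)

def direct_forecast(y, p, horizon):
    """Independent approach: a separate model per horizon h, trained on
    (y[t-p:t] -> y[t+h-1]) pairs, ignoring dependencies across h."""
    out = []
    for h in range(1, horizon + 1):
        X = np.array([y[t - p:t] for t in range(p, len(y) - h + 1)])
        w = np.linalg.lstsq(X, y[p + h - 1:], rcond=None)[0]
        out.append(np.dot(w, y[-p:]))
    return np.array(out)

y = np.sin(0.3 * np.arange(200)) + 0.05 * np.random.default_rng(0).standard_normal(200)
w = fit_ar(y, p=5)
print(iterative_forecast(y, w, 5), direct_forecast(y, p=5, horizon=5))
```

VLM replaces the per-horizon independent models with mixture components of varied lengths, so the dependencies across the prediction horizon are retained rather than discarded.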
Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration
NASA Technical Reports Server (NTRS)
Scott, James R.; Martini, Michael C.
2011-01-01
A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic for central body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time. The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables and sets up initial conditions and integrates; (2) a routine that calculates system reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
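The mechanics of Taylor series integration show up already on a scalar ODE: the coefficients follow from a recurrence (here a Cauchy product, the kind of product arithmetic the second subroutine implements), and the step size is computed directly from the last retained coefficient, so no step is ever repeated. A minimal Python sketch on y' = y² (the test equation, order, and tolerance are illustrative, not SNAP's implementation):

```python
import numpy as np

def taylor_step(y0, order, tol):
    """One Taylor step for the test ODE y' = y^2 (exact: 1/(1/y0 - t)).
    Coefficients come from a Cauchy-product recurrence; the step size is
    computed directly from the last coefficient, so no step is repeated."""
    a = np.zeros(order + 1)
    a[0] = y0
    for k in range(order):
        a[k + 1] = sum(a[j] * a[k - j] for j in range(k + 1)) / (k + 1)
    h = (tol / abs(a[order])) ** (1.0 / order)       # direct step-size choice
    y1 = sum(a[k] * h**k for k in range(order + 1))  # sum the series
    return y1, h

y, t = 1.0, 0.0
while t < 0.5:
    y, h = taylor_step(y, order=12, tol=1e-12)
    t += h
print(y, 1.0 / (1.0 - t))   # compare against the exact solution
```

The step size shrinks automatically as the solution approaches its singularity at t = 1, which is the variable-step behavior described above.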
Sen. Coburn, Tom [R-OK]
2010-09-15
Senate - 09/16/2010 Read the second time. Placed on Senate Legislative Calendar under General Orders. Calendar No. 565.
Lucius, Aaron L; Maluf, Nasib K; Fischer, Christopher J; Lohman, Timothy M
2003-10-01
Helicase-catalyzed DNA unwinding is often studied using "all or none" assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using "n-step" sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the "kinetic step size", m, defined as the average number of basepairs unwound between two successive rate-limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using "n-step" sequential mechanisms has previously been limited by an inability to float the number of "unwinding steps", n, and m, in the fitting algorithm. Here we discuss the behavior of single-turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, f_ss(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain f_ss(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation.
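For identical sequential steps with rate k per step, the "all or none" signal has a closed form: the fraction fully unwound at time t is the regularized incomplete gamma function P(n, kt), which is exactly what lets n (and hence m = L/n) float as a continuous parameter. A minimal Python sketch (rate constant and step sizes are illustrative; the paper's full model includes additional kinetic phases):

```python
import numpy as np
from scipy.special import gammainc

def fraction_unwound(t, n, k):
    """n-step sequential mechanism with identical rate k per step: the
    fraction fully unwound is P(n, k t), the regularized lower incomplete
    gamma function, valid for non-integer (continuous) n."""
    return gammainc(n, k * np.asarray(t))

t = np.linspace(0, 10, 6)
for L, m in [(8, 4.0), (16, 4.0)]:       # duplex length, kinetic step size
    n = L / m                            # number of rate-limiting steps
    print(L, fraction_unwound(t, n, k=1.0).round(3))
```

The growing lag phase with increasing duplex length L is what makes the number of intermediates identifiable from the family of time courses.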
Software for Automated Reading of STEP Files by I-DEAS(trademark)
NASA Technical Reports Server (NTRS)
Pinedo, John
2003-01-01
A program called "readstep" enables the I-DEAS(tm) computer-aided-design (CAD) software to automatically read Standard for the Exchange of Product Model Data (STEP) files. (The STEP format is one of several used to transfer data between dissimilar CAD programs.) Prior to the development of "readstep," it was necessary to read STEP files into I-DEAS(tm) one at a time in a slow process that required repeated intervention by the user. In operation, "readstep" prompts the user for the location of the desired STEP files and the names of the I-DEAS(tm) project and model file, then generates an I-DEAS(tm) program file called "readstep.prg" and two Unix shell programs called "runner" and "controller." The program "runner" runs I-DEAS(tm) sessions that execute readstep.prg, while "controller" controls the execution of "runner" and edits readstep.prg if necessary. The user sets "runner" and "controller" into execution simultaneously, and then no further intervention by the user is required. When "runner" has finished, the user should see only parts from successfully read STEP files present in the model file. STEP files that could not be read successfully (e.g., because of format errors) should be regenerated before attempting to read them again.
Two-Step Incision for Periarterial Sympathectomy of the Hand.
Jeon, Seung Bae; Ahn, Hee Chang; Ahn, Yong Su; Choi, Matthew Seung Suk
2015-11-01
Surgical scars on the palmar surface of the hand may lead to functional as well as aesthetic and psychological consequences. The objective of this study was to introduce a new incision technique for periarterial sympathectomy of the hand and to compare the results of the new two-step incision technique with those of a Koman incision by using an objective questionnaire. A total of 40 patients (17 men and 23 women) with intractable Raynaud's disease or syndrome underwent surgery in our hospital, conducted by a single surgeon, between January 2008 and January 2013. Patients who had undergone extended sympathectomy or vessel graft were excluded. Clinical evaluation of postoperative scars was performed in both groups one year after surgery using the patient and observer scar assessment scale (POSAS) and the Wake Forest University rating scale. The total patient score was 8.59 (range, 6-15) in the two-step incision group and 9.62 (range, 7-18) in the Koman incision group. A significant difference was found between the groups in the total patient score (P = 0.034) but not in the total observer score. Our analysis found no significant difference in preoperative and postoperative Wake Forest University rating scale scores between the two-step and Koman incision groups. The time required for recovery before returning to work was shorter in the two-step incision group, with a mean of 29.48 days versus 34.15 days in the Koman incision group (P = 0.03). Compared to the Koman incision, the new two-step incision technique provides better aesthetic results, similar symptom improvement, and a reduction in the recovery time required before returning to work. Furthermore, this incision allows the surgeon to access a wide surgical field and a sufficient exposure of anatomical structures.
Immobilization techniques to avoid enzyme loss from oxidase-based biosensors: a one-year study.
House, Jody L; Anderson, Ellen M; Ward, W Kenneth
2007-01-01
Continuous amperometric sensors that measure glucose or lactate require a stable sensitivity, and glutaraldehyde crosslinking has been used widely to avoid enzyme loss. Nonetheless, little data has been published on the effectiveness of enzyme immobilization with glutaraldehyde. A combination of electrochemical testing and spectrophotometric assays was used to study the relationship between enzyme shedding and the fabrication procedure. In addition, we studied the relationship between the glutaraldehyde concentration and sensor performance over a period of one year. The enzyme immobilization process by glutaraldehyde crosslinking to glucose oxidase appears to require at least 24 hours at room temperature to reach completion. In addition, excess free glucose oxidase can be removed by soaking sensors in purified water for 20 minutes. Even with the addition of these steps, however, it appears that some free glucose oxidase remains entrapped within the enzyme layer, which contributes to a decline in sensitivity over time. Although it reduces the ultimate sensitivity (probably via a change in the enzyme's natural conformation), the glutaraldehyde concentration in the enzyme layer can be increased in order to minimize this instability. After exposure of oxidase enzymes to glutaraldehyde, effective crosslinking requires a rinse step and a 24-hour incubation step. In order to minimize the loss of sensor sensitivity over time, the glutaraldehyde concentration can be increased.
Alternative Attitude Commanding and Control for Precise Spacecraft Landing
NASA Technical Reports Server (NTRS)
Singh, Gurkirpal
2004-01-01
A report proposes an alternative method of control for precision landing on a remote planet. In the traditional method, the attitude of a spacecraft is required to track a commanded translational acceleration vector, which is generated at each time step by solving a two-point boundary value problem. No requirement of continuity is imposed on the acceleration. The translational acceleration does not necessarily vary smoothly. Tracking of a non-smooth acceleration causes the vehicle attitude to exhibit undesirable transients and poor pointing stability behavior. In the alternative method, the two-point boundary value problem is not solved at each time step. A smooth reference position profile is computed. The profile is recomputed only when the control errors get sufficiently large. The nominal attitude is still required to track the smooth reference acceleration command. A steering logic is proposed that controls the position and velocity errors about the reference profile by perturbing the attitude slightly about the nominal attitude. The overall pointing behavior is therefore smooth, greatly reducing the degree of pointing instability.
40 CFR 35.937-9 - Required solicitation and subagreement provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
...; (2) The time for performance and completion of the contract work, including where appropriate, dates for completion of significant project tasks; (3) Personnel and facilities necessary to accomplish the... for later tasks or steps at the time of contract execution, the contract should not include the...
Isaacson, Dylan; Ahmad, Tessnim; Metzler, Ian; Tzou, David T; Taguchi, Kazumi; Usawachintachit, Manint; Zetumer, Samuel; Sherer, Benjamin; Stoller, Marshall; Chi, Thomas
2017-10-01
Careful decontamination and sterilization of reusable flexible ureteroscopes used in ureterorenoscopy cases prevent the spread of infectious pathogens to patients and technicians. However, inefficient reprocessing and the unavailability of ureteroscopes sent out for repair can contribute to expensive operating room (OR) delays. Time-driven activity-based costing (TDABC) was applied to describe the time and costs involved in reprocessing. Direct observation and timing were performed for all steps in the reprocessing of reusable flexible ureteroscopes following operative procedures. Estimated times for each step by which damaged ureteroscopes identified during reprocessing are sent for repair were characterized through interviews with purchasing analyst staff. Process maps were created for reprocessing and repair detailing individual step times and their variances. Cost data for labor and disposables used were applied to calculate per-minute and average step costs. Ten ureteroscopes were followed through reprocessing. Process mapping for ureteroscope reprocessing averaged 229.0 ± 74.4 minutes, whereas sending a ureteroscope for repair required an estimated 143 minutes per repair. Most steps demonstrated low variance between timed observations. Ureteroscope drying was the longest and highest-variance step at 126.5 ± 55.7 minutes and was highly dependent on manual air flushing through the ureteroscope working channel and ureteroscope positioning in the drying cabinet. Total costs for reprocessing were $96.13 per episode, including the cost of labor and disposable items. Utilizing TDABC delineates the full spectrum of costs associated with ureteroscope reprocessing and identifies areas for process improvement to drive value-based care. At our institution, ureteroscope drying was one clearly identified target area. Implementing training in ureteroscope drying technique could save up to 2 hours per reprocessing event, potentially preventing expensive OR delays.
Towards real-time verification of CO2 emissions
NASA Astrophysics Data System (ADS)
Peters, Glen P.; Le Quéré, Corinne; Andrew, Robbie M.; Canadell, Josep G.; Friedlingstein, Pierre; Ilyina, Tatiana; Jackson, Robert B.; Joos, Fortunat; Korsbakken, Jan Ivar; McKinley, Galen A.; Sitch, Stephen; Tans, Pieter
2017-12-01
The Paris Agreement has increased the incentive to verify reported anthropogenic carbon dioxide emissions with independent Earth system observations. Reliable verification requires a step change in our understanding of carbon cycle variability.
Arts Require Timely Service (ARTS) Act
Rep. Berman, Howard L. [D-CA-28]
2009-03-30
House - 04/27/2009 Referred to the Subcommittee on Immigration, Citizenship, Refugees, Border Security, and International Law.
NASA Technical Reports Server (NTRS)
Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke
1989-01-01
Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated at every time step. However, Do-all and Do-across techniques cannot be applied to parallel processing of the simulation, since data dependencies exist from the end of one iteration to the beginning of the next, and data input and output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, a large basic block consisting of arithmetic assignment statements, must be exploited. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract this parallelism and are assigned to processors using optimal static scheduling at compile time, in order to reduce the large run-time overhead that the use of near-fine-grain tasks would otherwise incur. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantages of static scheduling algorithms to the maximum extent.
A kinematic analysis of the rapid step test in balance-impaired and unimpaired older women.
Schulz, Brian W; Ashton-Miller, James A; Alexander, Neil B
2007-04-01
Little is known about the kinematic and kinetic determinants that might explain age- and balance-impairment-related alterations in the results of volitional stepping performance tests. Maximal unipedal stance time (UST) was used to distinguish "balance-impaired" old (BI, UST<10s, N=15, mean age=76 years) from unimpaired old (O, UST>30s, N=12, mean age=71 years) before they and healthy young females (Y, UST>30s, N=13, mean age=23 years) performed the rapid step test (RST). The RST evaluates the time required to take volitional front, side, and back steps of at least 80% maximum step length in response to verbal commands. Kinematic and kinetic data were recorded during the RST. The results indicate that the initiation phase of the step was the major source of age- and balance-impairment-related delays. The delays in BI were primarily caused by increased postural adjustments prior to step initiation, as measured by center-of-pressure (COP) path length (p<0.003). The step landing phase showed similar, but non-significant, temporal trends. Step length and peak center-of-mass (COM) deceleration during the Step-Out landing decreased in O by 18% (p=0.0002) and 24% (p=0.001), respectively, and a further 12% (p=0.04) and 18% (p=0.08) in BI. We conclude that the delay in BI step initiation was due to the increase in their postural adjustments prior to step initiation.
NASA Technical Reports Server (NTRS)
Gottlieb, D.; Turkel, E.
1980-01-01
New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.
Spatial Data Integration Using Ontology-Based Approach
NASA Astrophysics Data System (ADS)
Hasani, S.; Sadeghi-Niaraki, A.; Jelokhani-Niaraki, M.
2015-12-01
In today's world, the need for spatial data has become so crucial that many organizations have begun to produce such data themselves. In some circumstances, obtaining integrated data in real time requires a sustainable mechanism for real-time integration. A case in point is disaster management, which requires obtaining real-time data from various sources of information. One of the problematic challenges in such situations is the high degree of heterogeneity between the data of different organizations. To solve this issue, we introduce an ontology-based method to provide sharing and integration capabilities for the existing databases. In addition to resolving semantic heterogeneity, our proposed method also provides better access to information. Our approach consists of three steps. The first step is identification of the objects in a relational database; the semantic relationships between them are then modelled and, subsequently, the ontology of each database is created. In the second step, the relative ontology is inserted into the database and the relationship of each ontology class is inserted into a newly created column in the database tables. The last step consists of a platform based on a service-oriented architecture, which allows integration of data using the concept of ontology mapping. The proposed approach, in addition to being fast and low cost, makes the process of data integration easy while leaving the data unchanged, thus taking advantage of the existing legacy applications.
Woodward, Carol S.; Gardner, David J.; Evans, Katherine J.
2015-01-01
Efficient solutions of global climate models require effectively handling disparate length and time scales. Implicit solution approaches allow time integration of the physical system with a step size governed by accuracy of the processes of interest rather than by stability of the fastest time scales present. Implicit approaches, however, require the solution of nonlinear systems within each time step. Usually, a Newton's method is applied to solve these systems. Each iteration of the Newton's method, in turn, requires the solution of a linear model of the nonlinear system. This model employs the Jacobian of the problem-defining nonlinear residual, but this Jacobian can be costly to form. If a Krylov linear solver is used for the solution of the linear system, the action of the Jacobian matrix on a given vector is required. In the case of spectral element methods, the Jacobian is not calculated but only implemented through matrix-vector products. The matrix-vector multiply can also be approximated by a finite difference approximation which may introduce inaccuracy in the overall nonlinear solver. In this paper, we review the advantages and disadvantages of finite difference approximations of these matrix-vector products for climate dynamics within the spectral element shallow water dynamical core of the Community Atmosphere Model.
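The finite-difference approximation in question replaces the Jacobian action with one extra residual evaluation. A minimal Python sketch, checked against an analytic Jacobian on a toy residual (the perturbation-size heuristic shown is one common choice, not necessarily the one used in the paper):

```python
import numpy as np

def jac_vec_fd(F, u, v, eps=None):
    """Finite-difference Jacobian action J(u) v ~ (F(u + eps v) - F(u)) / eps,
    as needed by a Krylov solver when J is never formed explicitly."""
    if eps is None:
        # A common heuristic balancing truncation and round-off error.
        eps = (np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u))
               / max(np.linalg.norm(v), 1e-30))
    return (F(u + eps * v) - F(u)) / eps

# Check against an analytic Jacobian for F(u) = (u0^2 + u1, sin(u1)).
F = lambda u: np.array([u[0]**2 + u[1], np.sin(u[1])])
u = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])
J = np.array([[2 * u[0], 1.0], [0.0, np.cos(u[1])]])
print(jac_vec_fd(F, u, v), J @ v)
```

The choice of eps is the trade-off the abstract alludes to: too large and the directional derivative is inaccurate, too small and round-off in F dominates, either way perturbing the outer Newton-Krylov convergence.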
Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, S.
2002-07-01
As the result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches. Therefore, the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. The Leap Frog scheme has been used to reduce the run time. The extent of the decoupling is usually determined based on a trial-and-error process for a specific analysis. It is the intent of this paper to develop a set of general criteria, which can be used to invoke the automatic Leap Frog algorithm. The algorithm will not only provide the run time reduction but also preserve the accuracy. The criteria will also serve as the base of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step. (authors)
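The flavor of such a leap-frog coupling can be sketched as freezing the neutronics solution while the thermal-hydraulic code marches, refreshing it only when a drift criterion trips. The sketch below is a toy illustration with invented stand-in models, not the paper's codes or criteria:

```python
def coupled_leapfrog(th_step, nk_solve, T0, t_end, dt, tol=5.0):
    """Leap-frog coupling sketch: the thermal-hydraulic (TH) state T marches
    with a frozen neutronics (NK) power, and NK is re-solved only when T has
    drifted more than `tol` since the last NK call (a toy trigger criterion,
    not the paper's reactivity-based criteria)."""
    t, T, T_last_nk = 0.0, T0, T0
    power, nk_calls = nk_solve(T0), 1
    while t < t_end:
        T = th_step(T, power, dt)
        if abs(T - T_last_nk) > tol:        # sub-cycle trigger
            power, T_last_nk = nk_solve(T), T
            nk_calls += 1
        t += dt
    return T, nk_calls

# Toy stand-ins: lumped fuel-temperature balance and Doppler-like feedback.
th_step  = lambda T, P, dt: T + dt * (P - 0.05 * (T - 300.0))
nk_solve = lambda T: 10.0 / (1.0 + 1e-3 * (T - 300.0))
print(coupled_leapfrog(th_step, nk_solve, T0=300.0, t_end=200.0, dt=0.1))
```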
Regression Analysis of a Disease Onset Distribution Using Diagnosis Data
Young, Jessica G.; Jewell, Nicholas P.; Samuels, Steven J.
2008-01-01
Summary We consider methods for estimating the effect of a covariate on a disease onset distribution when the observed data structure consists of right-censored data on diagnosis times and current status data on onset times amongst individuals who have not yet been diagnosed. Dunson and Baird (2001, Biometrics 57, 306–403) approached this problem using maximum likelihood, under the assumption that the ratio of the diagnosis and onset distributions is monotonic nondecreasing. As an alternative, we propose a two-step estimator, an extension of the approach of van der Laan, Jewell, and Petersen (1997, Biometrika 84, 539–554) in the single sample setting, which is computationally much simpler and requires no assumptions on this ratio. A simulation study is performed comparing estimates obtained from these two approaches, as well as that from a standard current status analysis that ignores diagnosis data. Results indicate that the Dunson and Baird estimator outperforms the two-step estimator when the monotonicity assumption holds, but the reverse is true when the assumption fails. The simple current status estimator loses only a small amount of precision in comparison to the two-step procedure but requires monitoring time information for all individuals. In the data that motivated this work, a study of uterine fibroids and chemical exposure to dioxin, the monotonicity assumption is seen to fail. Here, the two-step and current status estimators both show no significant association between the level of dioxin exposure and the hazard for onset of uterine fibroids; the two-step estimator of the relative hazard associated with increasing levels of exposure has the least estimated variance amongst the three estimators considered. PMID:17680832
Non-equilibrium calculations of atmospheric processes initiated by electron impact.
NASA Astrophysics Data System (ADS)
Campbell, L.; Brunger, M. J.
2007-05-01
Electron impact in the atmosphere produces ionisation, dissociation, electronic excitation and vibrational excitation of atoms and molecules. The products can then take part in chemical reactions, recombination with electrons, or radiative or collisional deactivation. While most such processes are fast, some longer-lived species do not reach equilibrium. The electron source (photoelectrons or auroral electrons) also varies over time, and longer-lived species can move substantially in altitude by molecular, ambipolar or eddy diffusion. Hence non-equilibrium calculations are required in some circumstances. Such time-step calculations need to have sufficiently short steps so that the fastest processes are still calculated correctly, but this can lead to computation times that are too large. Hence techniques to allow for longer time steps by incorporating equilibrium calculations are described. Examples are given for results of atmospheric non-equilibrium calculations, including the populations of the vibrational levels of ground state N2, the electron density and its dependence on vibrationally excited N2, predictions of nitric oxide density, and detailed processes during short duration auroral events.
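One common way to allow longer time steps, as described above, is to advance slow species explicitly while holding fast species at their instantaneous production/loss equilibrium. A toy two-species sketch follows (all rate expressions invented for illustration, not an atmospheric mechanism):

```python
def step_mixed(slow, fast_prod, fast_loss, d_slow, dt):
    """One combined step: the fast species is held at its production/loss
    equilibrium while the slow species is advanced explicitly (the longer-
    time-step technique sketched above; the toy rates below are invented)."""
    fast_eq = fast_prod(slow) / fast_loss(slow)   # equilibrium shortcut
    return slow + dt * d_slow(slow, fast_eq), fast_eq

# Toy two-species model: S produced at a constant rate, destroyed by fast F.
fast_prod = lambda S: 1.0 + 0.1 * S   # production of F grows with S
fast_loss = lambda S: 50.0            # large sink, so F equilibrates quickly
d_slow    = lambda S, F: 0.2 - 0.5 * F * S

S = 1.0
for _ in range(2000):                 # dt far larger than F's lifetime 1/50
    S, F = step_mixed(S, fast_prod, fast_loss, d_slow, dt=0.5)
print(S, F)                           # S approaches its slow equilibrium ~10
```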
Brandstetter, Markus; Genner, Andreas; Schwarzer, Clemens; Mujagic, Elvis; Strasser, Gottfried; Lendl, Bernhard
2014-02-10
We present the time-resolved comparison of pulsed 2nd order ring cavity surface emitting (RCSE) quantum cascade lasers (QCLs) and pulsed 1st order ridge-type distributed feedback (DFB) QCLs using a step-scan Fourier transform infrared (FT-IR) spectrometer. Laser devices were part of QCL arrays and fabricated from the same laser material. Required grating periods were adjusted to account for the grating order. The step-scan technique provided a spectral resolution of 0.1 cm⁻¹ and a time resolution of 2 ns. As a result, it was possible to gain information about the tuning behavior and potential mode-hops of the investigated lasers. Different cavity lengths were compared, including 0.9 mm and 3.2 mm long ridge-type and 0.97 mm (circumference) ring-type cavities. RCSE QCLs were found to have improved emission properties in terms of line stability, tuning rate and maximum emission time compared to ridge-type lasers.
Simplified jet fuel reaction mechanism for lean burn combustion application
NASA Technical Reports Server (NTRS)
Lee, Chi-Ming; Kundu, Krishna; Ghorashi, Bahman
1993-01-01
Successful modeling of combustion and emissions in gas turbine engine combustors requires an adequate description of the reaction mechanism. Detailed mechanisms contain a large number of chemical species participating simultaneously in many elementary kinetic steps. Current computational fluid dynamic models must include fuel vaporization, fuel-air mixing, chemical reactions, and complicated boundary geometries. A five-step Jet-A fuel mechanism which involves pyrolysis and subsequent oxidation of paraffin and aromatic compounds is presented. This mechanism is verified by comparing with Jet-A fuel ignition delay time experimental data, and species concentrations obtained from flametube experiments. This five-step mechanism appears to be better than the current one- and two-step mechanisms.
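Ignition delay, one of the validation targets mentioned above, is often summarized by a global Arrhenius correlation of the form τ = A exp(Ea/RT)[fuel]^a[O2]^b. A sketch with placeholder constants (these are illustrative, not the paper's five-step Jet-A mechanism parameters):

```python
import math

def ignition_delay(T, fuel, o2, A=1e-9, Ea=1.3e5, a=-0.5, b=-1.0):
    """Global ignition-delay correlation
        tau = A * exp(Ea / (R*T)) * [fuel]**a * [O2]**b   (seconds),
    the kind of quantity used to validate reduced mechanisms. All constants
    are illustrative placeholders, not Jet-A values from the paper."""
    R = 8.314  # J/(mol K)
    return A * math.exp(Ea / (R * T)) * fuel ** a * o2 ** b

# Illustrative lean condition, concentrations in mol/m^3:
print(ignition_delay(T=1200.0, fuel=0.5, o2=4.0))   # ~1e-4 s at 1200 K
```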
Does It Really Matter Where You Look When Walking on Stairs? Insights from a Dual-Task Study
Miyasike-daSilva, Veronica; McIlroy, William E.
2012-01-01
Although the visual system is known to provide relevant information to guide stair locomotion, there is less understanding of the specific contributions of foveal and peripheral visual field information. The present study investigated the specific role of foveal vision during stair locomotion and ground-stairs transitions by using a dual-task paradigm to influence the ability to rely on foveal vision. Fifteen healthy adults (26.9±3.3 years; 8 females) ascended a 7-step staircase under four conditions: no secondary tasks (CONTROL); gaze fixation on a fixed target located at the end of the pathway (TARGET); visual reaction time task (VRT); and auditory reaction time task (ART). Gaze fixations towards stair features were significantly reduced in TARGET and VRT compared to CONTROL and ART. Despite the reduced fixations, participants were able to successfully ascend stairs and rarely used the handrail. Step time was increased during VRT compared to CONTROL in most stair steps. Navigating on the transition steps did not require more gaze fixations than the middle steps. However, reaction time tended to increase during locomotion on transitions suggesting additional executive demands during this phase. These findings suggest that foveal vision may not be an essential source of visual information regarding stair features to guide stair walking, despite the unique control challenges at transition phases as highlighted by phase-specific challenges in dual-tasking. Instead, the tendency to look at the steps in usual conditions likely provides a stable reference frame for extraction of visual information regarding step features from the entire visual field. PMID:22970297
Personal computer study of finite-difference methods for the transonic small disturbance equation
NASA Technical Reports Server (NTRS)
Bland, Samuel R.
1989-01-01
Calculation of unsteady flow phenomena requires careful attention to the numerical treatment of the governing partial differential equations. The personal computer provides a convenient and useful tool for the development of meshes, algorithms, and boundary conditions needed to provide time accurate solution of these equations. The one-dimensional equation considered provides a suitable model for the study of wave propagation in the equations of transonic small disturbance potential flow. Numerical results for effects of mesh size, extent, and stretching, time step size, and choice of far-field boundary conditions are presented. Analysis of the discretized model problem supports these numerical results. Guidelines for suitable mesh and time step choices are given.
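A model-problem study of this kind can be reproduced in a few lines: the sketch below integrates 1D linear advection, a stand-in for the wave-propagation model equation, on a mildly stretched mesh with an explicit upwind step whose size is limited by the smallest cell (illustrative parameters, not the paper's scheme):

```python
import numpy as np

# Model problem: 1D linear advection u_t + c u_x = 0 (a stand-in for the
# wave-propagation behavior of the transonic small disturbance equation).
c, n = 1.0, 101
x = np.linspace(0.0, 1.0, n) ** 1.2            # mildly stretched mesh
u = np.exp(-200.0 * (x - 0.3) ** 2)            # initial pulse
dt = 0.8 * np.min(np.diff(x)) / c              # CFL-limited explicit step

for _ in range(200):
    un = u.copy()
    u[1:] = un[1:] - c * dt * (un[1:] - un[:-1]) / (x[1:] - x[:-1])  # upwind
    u[0] = 0.0          # inflow boundary; upwind needs no outflow condition
print(float(u.max()))   # pulse amplitude after propagation (some diffusion)
```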
Light regulation of the growth response in corn root gravitropism
NASA Technical Reports Server (NTRS)
Kelly, M. O.; Leopold, A. C.
1992-01-01
Roots of Merit variety corn (Zea mays L.) require red light for orthogravitropic curvature. Experiments were undertaken to identify the step in the pathway from gravity perception to asymmetric growth on which light may act. Red light was effective in inducing gravitropism whether it was supplied concomitant with or as long as 30 minutes after the gravity stimulus (GS). The presentation time was the same whether the GS was supplied in red light or in darkness. Red light given before the GS slightly enhanced the rate of curvature but had little effect on the lag time or on the final curvature. This enhancement was expanded by a delay between the red light pulse and the GS. These results indicate that gravity perception and at least the initial transduction steps proceed in the dark. Light may regulate the final growth (motor) phase of gravitropism. The time required for full expression of the light enhancement of curvature is consistent with its involvement in some light-stimulated biosynthetic event.
5 CFR 531.504 - Level of performance required for quality step increase.
Code of Federal Regulations, 2010 CFR
2010-01-01
§ 531.504 Level of performance required for quality step increase. A quality step increase shall not be required but may be granted only...
Functional Fault Modeling of a Cryogenic System for Real-Time Fault Detection and Isolation
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara
2010-01-01
The purpose of this paper is to present the model development process used to create a Functional Fault Model (FFM) of a liquid hydrogen (LH2) system that will be used for real-time fault isolation in a Fault Detection, Isolation and Recovery (FDIR) system. The paper explains the steps in the model development process and the data products required at each step, including examples of how the steps were performed for the LH2 system. It also shows the relationship between the FDIR requirements and steps in the model development process. The paper concludes with a description of a demonstration of the LH2 model developed using the process and future steps for integrating the model in a live operational environment.
Making a Computer Model of the Most Complex System Ever Built - Continuum
…Eastern Interconnection, all as a function of time. All told, that's about 1,000 gigabytes of data. … As the modeling software steps forward in time, those decisions affect how the grid operates. … Simulating the Interconnection at five-minute intervals for one year would have required more than 400 days of computing time.
Brower, Kevin P; Ryakala, Venkat K; Bird, Ryan; Godawat, Rahul; Riske, Frank J; Konstantinov, Konstantin; Warikoo, Veena; Gamble, Jean
2014-01-01
Downstream sample purification for quality attribute analysis is a significant bottleneck in process development for non-antibody biologics. Multi-step chromatography purification trains are typically required prior to many critical analytical tests. This prerequisite leads to limited throughput, long lead times to obtain purified product, and significant resource requirements. In this work, immunoaffinity purification technology has been leveraged to achieve single-step affinity purification of two different enzyme biotherapeutics (Fabrazyme® [agalsidase beta] and Enzyme 2) with polyclonal and monoclonal antibodies, respectively, as ligands. Target molecules were rapidly isolated from cell culture harvest in sufficient purity to enable analysis of critical quality attributes (CQAs). Most importantly, this is the first study that demonstrates the application of predictive analytics techniques to predict critical quality attributes of a commercial biologic. The data obtained using the affinity columns were used to generate appropriate models to predict quality attributes that would be obtained after traditional multi-step purification trains. These models empower process development decision-making with drug substance-equivalent product quality information without generation of actual drug substance. Optimization was performed to ensure maximum target recovery and minimal target protein degradation. The methodologies developed for Fabrazyme were successfully reapplied for Enzyme 2, indicating platform opportunities. The impact of the technology is significant, including reductions in time and personnel requirements, rapid product purification, and substantially increased throughput. Applications are discussed, including upstream and downstream process development support to achieve the principles of Quality by Design (QbD) as well as integration with bioprocesses as a process analytical technology (PAT). © 2014 American Institute of Chemical Engineers.
Azar, Nabih; Leblond, Veronique; Ouzegdouh, Maya; Button, Paul
2017-12-01
The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from the use of multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. The total number of ECP procedures performed 2011-2015 was derived from department records. The time taken to complete a single ECP treatment using a multi-step technique and the fully integrated system at our department was assessed. Resource costs (2014 €) were obtained for materials and calculated for personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. The number of ECP treatments per year increased from 225 (2012) to 727 (2015). The single multi-step procedure took 270 min compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP using the multi-step procedure was greater than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX® allow for more patient treatments per year. © 2017 The Authors Journal of Clinical Apheresis Published by Wiley Periodicals, Inc.
Rapid oxidation/stabilization technique for carbon foams, carbon fibers and C/C composites
Tan, Seng; Tan, Cher-Dip
2004-05-11
An enhanced method for the post-processing, i.e., oxidation or stabilization, of carbon materials including, but not limited to, carbon foams, carbon fibers, dense carbon-carbon composites, carbon/ceramic and carbon/metal composites, which requires much shorter and more effective processing steps. The introduction of an "oxygen spill-over catalyst" into the carbon precursor, by blending with the carbon starting material or by exposure of the carbon precursor to such a material, supplies the required oxygen at the atomic level and permits oxidation/stabilization of carbon materials in a fraction of the time and with a fraction of the energy normally required to accomplish such carbon processing steps. Carbon-based foams, solids, composites and fiber products made utilizing this method are also described.
Progress in development of HEDP capabilities in FLASH's Unsplit Staggered Mesh MHD solver
NASA Astrophysics Data System (ADS)
Lee, D.; Xia, G.; Daley, C.; Dubey, A.; Gopal, S.; Graziani, C.; Lamb, D.; Weide, K.
2011-11-01
FLASH is a publicly available astrophysical community code designed to solve highly compressible multi-physics reactive flows. We are adding capabilities to FLASH that will make it an open science code for the academic HEDP community. Among many important numerical requirements, we consider the following features to be important components necessary to meet our goals for FLASH as an HEDP open toolset. First, we are developing computationally efficient time-stepping integration methods that overcome the stiffness that arises in the equations describing a physical problem when there are disparate time scales. To this end, we are adding two different time-stepping schemes to FLASH that relax the time step limit when diffusive effects are present: an explicit super-time-stepping algorithm (Alexiades et al. in Com. Num. Mech. Eng. 12:31-42, 1996) and a Jacobian-Free Newton-Krylov implicit formulation. These two methods will be integrated into a robust, efficient, and high-order accurate Unsplit Staggered Mesh MHD (USM) solver (Lee and Deane in J. Comput. Phys. 227, 2009). Second, we have implemented an anisotropic Spitzer-Braginskii conductivity model to treat thermal heat conduction along magnetic field lines. Finally, we are implementing the Biermann Battery term to account for spontaneous generation of magnetic fields in the presence of non-parallel temperature and density gradients.
A Method for Response Time Measurement of Electrosensitive Protective Devices.
Dźwiarek, Marek
1996-01-01
A great step toward the improvement of safety at work was made when electrosensitive protective devices (ESPDs) were applied to the protection of press and robot-assisted manufacturing system operators. The way the device is mounted is crucial. The parameters of ESPD mounting that ensure safe distance from the controlled dangerous zone are response time, sensitivity, and the dimensions of the detection zone. The proposed experimental procedure of response time measurement is realized in two steps, with a test piece penetrating the detection zone twice. In the first step, low-speed penetration (at a speed v_m) enables the detection zone border to be localized. In the second step of the measurement, the probe is injected at a high speed V_d. The actuator rod position is measured, and when it is equal to the value L registered by the earlier measurements, time counting begins, as does monitoring of the state of the equipment under test (EUT) output relays. After the state changes, the time t_p is registered. The experimental procedure is realized on a special experimental stand. Because the stand has been constructed for certification purposes, the design satisfies the requirements imposed by Polski Komitet Normalizacyjny (PKN, 1995). The experimental results prove the measurement error to be smaller than ±0.6 ms.
Hornby, T George; Holleran, Carey L; Leddy, Abigail L; Hennessy, Patrick; Leech, Kristan A; Connolly, Mark; Moore, Jennifer L; Straube, Donald; Lovell, Linda; Roth, Elliot
2015-01-01
Optimal physical therapy strategies to maximize locomotor function in patients early poststroke are not well established. Emerging data indicate that substantial amounts of task-specific stepping practice may improve locomotor function, although stepping practice provided during inpatient rehabilitation is limited (<300 steps/session). The purpose of this investigation was to determine the feasibility of providing focused stepping training to patients early poststroke and its potential association with walking and other mobility outcomes. Daily stepping was recorded on 201 patients <6 months poststroke (80% < 1 month) during inpatient rehabilitation following implementation of a focused training program to maximize stepping practice during clinical physical therapy sessions. Primary outcomes included distance and physical assistance required during a 6-minute walk test (6MWT) and balance using the Berg Balance Scale (BBS). Retrospective data analysis included multiple regression techniques to evaluate the contributions of demographics, training activities, and baseline motor function to primary outcomes at discharge. Median stepping activity recorded from patients was 1516 steps/d, which is 5 to 6 times greater than that typically observed. The number of steps per day was positively correlated with both discharge 6MWT and BBS and improvements from baseline (changes; r = 0.40-0.87), independently contributing 10% to 31% of the total variance. Stepping activity also predicted level of assistance at discharge and discharge location (home vs other facility). Providing focused, repeated stepping training was feasible early poststroke during inpatient rehabilitation and was related to mobility outcomes. Further research is required to evaluate the effectiveness of these training strategies on short- or long-term mobility outcomes as compared with conventional interventions. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Pawar, V.; Weaver, C.; Jani, S.
2011-05-01
Zirconium, and particularly the Zr-2.5 wt%Nb (Zr2.5Nb) alloy, is useful for engineering bearing applications because it can be oxidized in air to form a hard surface ceramic. Oxidized zirconium (OxZr), with its abrasion-resistant ceramic surface and biocompatible substrate alloy, has been used as a bearing surface in total joint arthroplasty for several years. OxZr is characterized by a hard zirconium oxide (oxide) formed on Zr2.5Nb using one-step thermal oxidation carried out in air. Because the oxide is only at the surface, the bulk material behaves like a metal, with high toughness. The oxide, furthermore, exhibits high adhesion to the substrate because of an oxygen-rich diffusion-hardened zone (DHZ) interposed between the oxide and the substrate. In this study, we demonstrate a two-step process that forms a thicker DHZ, and thus a greater depth of hardening, than can be obtained using a one-step oxidation process. The first step is thermal oxidation in air and the second step is a heat treatment in vacuum. The second step drives oxygen from the oxide formed in the first step deeper into the substrate to form a thicker DHZ. During the process only a portion of the oxide is dissolved. This new composition (DHOxZr) has an approximately 4-6 μm oxide, similar to that of OxZr. The nano-hardness of the oxide is similar, but the DHZ is approximately 10 times thicker. The stoichiometry of the oxide is similar, and a secondary phase rich in oxygen is present through the entire thickness. Due to the increased depth of hardening, the critical load required for the onset of oxide cracking is approximately 1.6 times that of the OxZr oxide. This new composition has potential for use as a bearing surface in applications where a greater depth of hardening is required.
Effective teaching of manual skills to physiotherapy students: a randomised clinical trial.
Rossettini, Giacomo; Rondoni, Angie; Palese, Alvisa; Cecchetto, Simone; Vicentini, Marco; Bettale, Fernanda; Furri, Laura; Testa, Marco
2017-08-01
To date, despite the relevance of manual skills laboratories in physiotherapy education, evidence on the effectiveness of different teaching methods is limited. Peyton's four-step and the 'See one, do one' approaches were compared for their effectiveness in teaching manual skills. A cluster randomised controlled trial was performed among final-year, right-handed physiotherapy students without prior experience in manual therapy or skills laboratories. The manual technique of C1-C2 passive right rotation was taught by experienced physiotherapists using Peyton's four-step approach (intervention group) and the 'See one, do one' approach (control group). Participants, teachers and assessors were blinded to the aims of the study. Primary outcomes were quality of performance at the end of the skills laboratories, and after 1 week and 1 month. Secondary outcomes were the time required to teach, the time required to perform the procedure and student satisfaction. A total of 39 students were included in the study (21 in the intervention group and 18 in the control group). Their main characteristics were homogeneous at baseline. The intervention group showed better quality of performance in the short, medium and long terms (F(1,111) = 35.91, p < 0.001). Both groups demonstrated decreased quality of performance over time (F(2,111) = 12.91, p < 0.001). The intervention group reported significantly greater mean ± standard deviation satisfaction (4.31 ± 1.23) than the control group (4.03 ± 1.31) (p < 0.001). Although there was no significant difference between the two methods in the time required for teaching, the time required by the intervention group to perform the procedure was significantly lower immediately after the skills laboratories and over time (p < 0.001). Peyton's four-step approach is more effective than the 'See one, do one' approach in skills laboratories aimed at developing physiotherapy student competence in C1-C2 passive mobilisation. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
NASA Astrophysics Data System (ADS)
Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph
2017-04-01
Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variabilities on these processes, long time series of rainfall with high resolution are required. Yet, observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and interstorm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new coupling approach of the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model that preserves the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscale rainfall data with a constant time step, such as daily precipitation data. Without starting from a fixed time step duration (e.g. daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998): Firstly, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative weights, which we implemented through sigmoid functions. Secondly, the branching of the first and last box is constrained to preserve the rainfall event durations generated by the Poisson rectangular pulse model. The event-based continuous time step rainfall generator has been developed and tested using 10 min and hourly rainfall data of four stations in North-Eastern Germany. The model performs well in comparison to observed rainfall in terms of event durations and mean event intensities as well as wet spell and dry spell durations. It is currently being tested using data from other stations across Germany and in different climate zones. Furthermore, the rainfall event generator is being applied in modelling approaches aimed at understanding the impact of rainfall variability on hydrological processes. Reference: Olsson, J.: Evaluation of a scaling cascade model for temporal rainfall disaggregation, Hydrology and Earth System Sciences, 2, 19–30, 1998.
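The microcanonical cascade step described above conserves event volume exactly at every branching. A minimal sketch with a fixed zero-weight probability and uniform weights (the paper's sigmoid parameterization over event durations is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)

def microcanonical_cascade(volume, levels, p0=0.2):
    """Disaggregate an event volume over 2**levels sub-intervals. At each
    branching, with probability p0 all mass goes to one (random) half;
    otherwise it splits by a uniform weight W. Each split is microcanonical:
    the two halves always sum exactly to the parent. (The fixed p0 and
    uniform W are simplifications of the sigmoid parameterization above.)"""
    series = np.array([float(volume)])
    for _ in range(levels):
        children = np.empty(2 * series.size)
        for i, v in enumerate(series):
            u = rng.random()
            w = 0.0 if u < p0 / 2 else 1.0 if u < p0 else rng.random()
            children[2 * i], children[2 * i + 1] = w * v, (1.0 - w) * v
        series = children
    return series

event = microcanonical_cascade(12.0, levels=5)  # e.g. 12 mm over 32 intervals
print(event.sum())                              # 12.0: volume is conserved
```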
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wemhoff, A P; Burnham, A K; Nichols III, A L
The reduction of the number of reactions in kinetic models for both the HMX beta-delta phase transition and thermal cookoff provides an attractive alternative to traditional multi-stage kinetic models due to reduced calibration effort requirements. In this study, we use the LLNL code ALE3D to provide calibrated kinetic parameters for a two-reaction bidirectional beta-delta HMX phase transition model based on Sandia Instrumented Thermal Ignition (SITI) and Scaled Thermal Explosion (STEX) temperature history curves, and a Prout-Tompkins cookoff model based on One-Dimensional Time to Explosion (ODTX) data. Results show that the two-reaction bidirectional beta-delta transition model presented here agrees as well with STEX and SITI temperature history curves as a reversible four-reaction Arrhenius model, yet requires an order of magnitude less computational effort. In addition, a single-reaction Prout-Tompkins model calibrated to ODTX data provides better agreement with ODTX data than a traditional multi-step Arrhenius model, and can require up to 90% fewer chemistry-limited time steps for low-temperature ODTX simulations. Manual calibration methods for the Prout-Tompkins kinetics provide much better agreement with ODTX experimental data than parameters derived from Differential Scanning Calorimetry (DSC) measurements at atmospheric pressure. The predicted surface temperature at explosion for STEX cookoff simulations is a weak function of the cookoff model used, and a reduction of up to 15% in chemistry-limited time steps can be achieved by neglecting the beta-delta transition for this type of simulation. Finally, the inclusion of the beta-delta transition model in the overall kinetics model can affect the predicted time to explosion by 1% for the traditional multi-step Arrhenius approach, and by up to 11% using a Prout-Tompkins cookoff model.
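The extended Prout-Tompkins rate law referred to above is commonly written dα/dt = k(T) α^m (1−α)^n with an Arrhenius rate constant k(T) = A exp(−Ea/RT). A sketch integrating it at constant temperature (all constants illustrative placeholders, not calibrated HMX values):

```python
import math

def prout_tompkins(T, t_end, dt=0.05, A=1e12, Ea=1.5e5, m=1.0, n=1.0, alpha0=1e-4):
    """Integrate the extended Prout-Tompkins law
        d(alpha)/dt = k(T) * alpha**m * (1 - alpha)**n,  k(T) = A*exp(-Ea/(R*T)),
    with forward Euler at a constant temperature T [K]. All kinetic constants
    here are illustrative placeholders, not calibrated HMX parameters."""
    k = A * math.exp(-Ea / (8.314 * T))
    alpha, t = alpha0, 0.0
    while t < t_end and alpha < 0.999:
        alpha += dt * k * alpha**m * (1.0 - alpha)**n
        t += dt
    return alpha, t   # t approximates a time-to-event when alpha reaches ~1

print(prout_tompkins(T=550.0, t_end=1e4))   # autocatalytic S-curve completion
```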
Oudenhoven, Laura M; Boes, Judith M; Hak, Laura; Faber, Gert S; Houdijk, Han
2017-01-25
Running specific prostheses (RSP) are designed to replicate the spring-like behaviour of the human leg during running, by incorporating a real physical spring in the prosthesis. Leg stiffness is an important parameter in running as it is strongly related to step frequency and running economy. To be able to select a prosthesis that contributes to the required leg stiffness of the athlete, it needs to be known to what extent the behaviour of the prosthetic leg during running is dominated by the stiffness of the prosthesis or whether it can be regulated by adaptations of the residual joints. The aim of this study was to investigate whether and how athletes with an RSP could regulate leg stiffness during distance running at different step frequencies. Seven endurance runners with an unilateral transtibial amputation performed five running trials on a treadmill at a fixed speed, while different step frequencies were imposed (preferred step frequency (PSF) and -15%, -7.5%, +7.5% and +15% of PSF). Among others, step time, ground contact time, flight time, leg stiffness and joint kinetics were measured for both legs. In the intact leg, increasing step frequency was accompanied by a decrease in both contact and flight time, while in the prosthetic leg contact time remained constant and only flight time decreased. In accordance, leg stiffness increased in the intact leg, but not in the prosthetic leg. Although a substantial contribution of the residual leg to total leg stiffness was observed, this contribution did not change considerably with changing step frequency. Amputee athletes do not seem to be able to alter prosthetic leg stiffness to regulate step frequency during running. This invariant behaviour indicates that RSP stiffness has a large effect on total leg stiffness and therefore can have an important influence on running performance. Nevertheless, since prosthetic leg stiffness was considerably lower than stiffness of the RSP, compliance of the residual leg should not be ignored when selecting RSP stiffness. Copyright © 2016 Elsevier Ltd. All rights reserved.
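Leg stiffness in such spring-mass analyses is typically the ratio of peak leg force to leg compression. A trivial sketch of that definition (the numbers are illustrative, not the study's data):

```python
def leg_stiffness(peak_force, leg_compression):
    """Spring-mass leg stiffness k_leg = F_max / delta_L (N/m). The inputs
    below are illustrative, not values measured in the study."""
    return peak_force / leg_compression

# e.g. 1800 N peak leg force with 0.04 m of leg compression during stance:
print(leg_stiffness(1800.0, 0.04))   # 45000.0 N/m
```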
Epicenter location by analysis for interictal spikes
NASA Technical Reports Server (NTRS)
Hand, C.
2001-01-01
The MEG recording is a quick and painless process that requires no surgery. This approach has the potential to save time, reduce patient discomfort, and eliminate a painful and potentially dangerous surgical step in the treatment procedure.
Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.
Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra
2016-11-20
The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
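The simplest special case of these calculations is the familiar single-level design-effect inflation, n_clustered = n_individual × [1 + (m − 1)ρ]. A sketch (the full stepped-wedge formulae, with cluster and individual autocorrelations, are in the paper):

```python
import math

def inflate_for_clustering(n_individual, cluster_size, icc):
    """Inflate an individually randomised sample size by the design effect
    1 + (m - 1) * rho, the simplest single-level special case of the
    formulae discussed above (stepped-wedge designs add autocorrelations)."""
    deff = 1.0 + (cluster_size - 1) * icc
    return math.ceil(n_individual * deff)

# 300 participants needed under individual randomisation, clusters of 20,
# intracluster correlation 0.05:
print(inflate_for_clustering(300, 20, 0.05))   # -> 585
```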
Neural Correlates of Temporal Credit Assignment in the Parietal Lobe
Eisenberg, Ian; Gottlieb, Jacqueline
2014-01-01
Empirical studies of decision making have typically assumed that value learning is governed by time, such that a reward prediction error arising at a specific time triggers temporally-discounted learning for all preceding actions. However, in natural behavior, goals must be acquired through multiple actions, and each action can have different significance for the final outcome. As is recognized in computational research, carrying out multi-step actions requires the use of credit assignment mechanisms that focus learning on specific steps, but little is known about the neural correlates of these mechanisms. To investigate this question we recorded neurons in the monkey lateral intraparietal area (LIP) during a serial decision task where two consecutive eye movement decisions led to a final reward. The underlying decision trees were structured such that the two decisions had different relationships with the final reward, and the optimal strategy was to learn based on the final reward at one of the steps (the “F” step) but ignore changes in this reward at the remaining step (the “I” step). In two distinct contexts, the F step was either the first or the second in the sequence, controlling for effects of temporal discounting. We show that LIP neurons had the strongest value learning and strongest post-decision responses during the transition after the F step regardless of the serial position of this step. Thus, the neurons encode correlates of temporal credit assignment mechanisms that allocate learning to specific steps independently of temporal discounting. PMID:24523935
Effectiveness of en masse versus two-step retraction: a systematic review and meta-analysis.
Rizk, Mumen Z; Mohammed, Hisham; Ismael, Omar; Bearn, David R
2018-01-05
This review aims to compare the effectiveness of en masse and two-step retraction methods during orthodontic space closure regarding anchorage preservation and anterior segment retraction and to assess their effect on the duration of treatment and root resorption. An electronic search for potentially eligible randomized controlled trials and prospective controlled trials was performed in five electronic databases up to July 2017. The process of study selection, data extraction, and quality assessment was performed by two reviewers independently. A narrative review is presented in addition to a quantitative synthesis of the pooled results where possible. The Cochrane risk of bias tool and the Newcastle-Ottawa Scale were used for the methodological quality assessment of the included studies. Eight studies were included in the qualitative synthesis in this review. Four studies were included in the quantitative synthesis. The en masse/miniscrew combination showed a statistically significant standardized mean difference regarding anchorage preservation, −2.55 mm (95% CI −2.99 to −2.11), and the amount of upper incisor retraction, −0.38 mm (95% CI −0.70 to −0.06), when compared to a two-step/conventional anchorage combination. Qualitative synthesis suggested that en masse retraction requires less time than two-step retraction with no difference in the amount of root resorption. Both en masse and two-step retraction methods are effective during the space closure phase. The en masse/miniscrew combination is superior to the two-step/conventional anchorage combination with regard to anchorage preservation and amount of retraction. Limited evidence suggests that anchorage reinforcement with a headgear produces similar results with both retraction methods. Limited evidence also suggests that en masse retraction may require less time and that no significant differences exist in the amount of root resorption between the two methods.
Immobilization Techniques to Avoid Enzyme Loss from Oxidase-Based Biosensors: A One-Year Study
House, Jody L.; Anderson, Ellen M.; Ward, W. Kenneth
2007-01-01
Background: Continuous amperometric sensors that measure glucose or lactate require a stable sensitivity, and glutaraldehyde crosslinking has been used widely to avoid enzyme loss. Nonetheless, little data has been published on the effectiveness of enzyme immobilization with glutaraldehyde. Methods: A combination of electrochemical testing and spectrophotometric assays was used to study the relationship between enzyme shedding and the fabrication procedure. In addition, we studied the relationship between the glutaraldehyde concentration and sensor performance over a period of one year. Results: The enzyme immobilization process by glutaraldehyde crosslinking to glucose oxidase appears to require at least 24 hours at room temperature to reach completion. In addition, excess free glucose oxidase can be removed by soaking sensors in purified water for 20 minutes. Even with the addition of these steps, however, it appears that there is some free glucose oxidase entrapped within the enzyme layer, which contributes to a decline in sensitivity over time. Although it reduces the ultimate sensitivity (probably via a change in the enzyme's natural conformation), the glutaraldehyde concentration in the enzyme layer can be increased in order to minimize this instability. Conclusions: After exposure of oxidase enzymes to glutaraldehyde, effective crosslinking requires a rinse step and a 24-hour incubation step. In order to minimize the loss of sensor sensitivity over time, the glutaraldehyde concentration can be increased. PMID:19888375
A General Method for Solving Systems of Non-Linear Equations
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)
1995-01-01
The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root finding method not infrequently diverges if the starting point is far from the root. However, in these regions the current method merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root finding method, since both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for the coefficients of linear equations. The current method, which does not require the solution of linear equations, requires more time for additional function and gradient evaluations. The classic trade-off of time for space separates the two methods.
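The descent-with-adaptive-step part of such a scheme is easy to sketch: minimize f(x) = ½‖F(x)‖² by steepest descent, halving the step until it produces a decrease. The sketch below omits the paper's quadratic-form eigenvector acceleration and uses a finite-difference gradient (toy system for illustration):

```python
import numpy as np

def sd_root(F, x, max_iter=1000, h0=1.0, gtol=1e-12):
    """Solve F(x) = 0 by steepest descent on f(x) = 0.5*||F(x)||**2 with a
    central-difference gradient and an adaptive (halving) step size. This
    sketches only the robust descent fallback discussed above, not the
    paper's quadratic-form eigenvector acceleration."""
    f = lambda y: 0.5 * float(np.dot(F(y), F(y)))
    eps = 1e-7
    for _ in range(max_iter):
        g = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                      for e in np.eye(x.size)])
        if np.linalg.norm(g) < gtol:
            break
        h = h0
        while f(x - h * g) >= f(x) and h > 1e-14:   # shrink until f decreases
            h *= 0.5
        x = x - h * g
    return x

# Toy system with a root at (1, 1): F = (x^2 + y - 2, x + y^2 - 2).
F = lambda v: np.array([v[0] ** 2 + v[1] - 2.0, v[0] + v[1] ** 2 - 2.0])
print(sd_root(F, np.array([3.0, -2.0])))
```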
Autonomous antenna tracking system for mobile symphonie ground stations
NASA Technical Reports Server (NTRS)
Ernsberger, K.; Lorch, G.; Waffenschmidt, E.
1982-01-01
The implementation of a satellite tracking and antenna control system is described. Due to the loss of inclination control for the Symphonie satellites, it became necessary to equip the parabolic antennas of the mobile Symphonie ground stations with tracking facilities. For the relatively low required tracking accuracy of 0.5 dB, a low-cost step-track system was selected. The step-track system developed for this purpose, and tested over a long period of time in 7 ground stations, is based on a search-step method with subsequent parabola interpolation. As compared with the pure search-step method, the system has the advantage of a higher pointing-angle resolution, and thus a higher tracking accuracy. When the pilot signal has been switched off for a long period of time, as for instance after an eclipse, the antenna is repointed towards the satellite by an automatically initiated spiral search scan. The function and design of the tracking system are detailed, along with its ease of handling and tracking results.
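The parabola-interpolation refinement in a step-track loop amounts to fitting a quadratic through three signal samples at equally spaced pointing angles and steering to its vertex. A generic sketch (not the station firmware; angles and signal values are illustrative):

```python
def parabola_peak(theta, s):
    """Estimate the peak-signal pointing angle from three signal strengths
    s[0..2] sampled at equally spaced angles theta[0..2] by steering to the
    vertex of the interpolating parabola (generic step-track refinement,
    not the Symphonie station implementation)."""
    d = theta[1] - theta[0]
    denom = s[0] - 2.0 * s[1] + s[2]
    if denom >= 0.0:                     # no interior maximum: curvature wrong
        return max(zip(s, theta))[1]     # fall back to the best raw sample
    return theta[1] + 0.5 * d * (s[0] - s[2]) / denom

# Search steps at -0.2, 0.0 and +0.2 degrees around the current pointing:
print(parabola_peak([-0.2, 0.0, 0.2], [0.80, 0.95, 0.90]))   # ~0.05 degrees
```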
Experimental Quantum-Walk Revival with a Time-Dependent Coin
NASA Astrophysics Data System (ADS)
Xue, P.; Zhang, R.; Qin, H.; Zhan, X.; Bian, Z. H.; Li, J.; Sanders, Barry C.
2015-04-01
We demonstrate a quantum walk with time-dependent coin bias. With this technique we realize an experimental single-photon one-dimensional quantum walk with a linearly ramped time-dependent coin flip operation and thereby demonstrate two periodic revivals of the walker distribution. In our beam-displacer interferometer, the walk corresponds to movement between discretely separated transverse modes of the field serving as lattice sites, and the time-dependent coin flip is effected by implementing a different angle between the optical axis of the half-wave plate and the light propagation at each step. Each of the quantum-walk steps required to realize a revival comprises two sequential orthogonal coin-flip operators, with one coin having constant bias and the other coin having a time-dependent ramped coin bias, followed by a conditional translation of the walker.
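A discrete-time quantum walk with a linearly ramped coin angle is straightforward to simulate numerically. A minimal sketch (ramp rate, step count and initial coin state are illustrative, not the experiment's revival parameters):

```python
import numpy as np

def quantum_walk(steps, n_sites, theta_of_t):
    """1D discrete-time quantum walk with a time-dependent coin angle.
    State psi[site, coin]; the coin operator at step t uses theta_of_t(t).
    (Ramp rate and initial state below are illustrative.)"""
    psi = np.zeros((n_sites, 2), dtype=complex)
    psi[n_sites // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # symmetric start
    for t in range(steps):
        th = theta_of_t(t)
        C = np.array([[np.cos(th),  np.sin(th)],
                      [np.sin(th), -np.cos(th)]])          # unitary coin
        psi = psi @ C.T                                    # flip the coin
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]                       # coin 0 steps right
        shifted[:-1, 1] = psi[1:, 1]                       # coin 1 steps left
        psi = shifted                                      # conditional shift
    return (np.abs(psi) ** 2).sum(axis=1)                  # site probabilities

prob = quantum_walk(steps=20, n_sites=61,
                    theta_of_t=lambda t: np.pi / 4 + 0.02 * t)  # linear ramp
print(prob.sum())   # ~1.0: no amplitude has reached the lattice edge
```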
NASA Technical Reports Server (NTRS)
Ayap, Shanti; Fisher, Forest; Gladden, Roy; Khanampompan, Teerapat
2008-01-01
This software tool saves time and reduces risk by automating two labor-intensive and error-prone post-processing steps required for every DKF [DSN (Deep Space Network) Keyword File] that MRO (Mars Reconnaissance Orbiter) produces, and is being extended to post-process the corresponding TSOE (Text Sequence Of Events) as well. The need for this post-processing step stems from limitations in the seq-gen modeling, which result in incorrect DKF generation that must then be cleaned up in post-processing.
The Next Step: A Study on Resiliency in Command and Control
2015-06-01
"The Next Step: A Study on Resiliency in Command and Control," by Lt Col Russ "Bones" Cook. A thesis presented to the faculty of the School of Advanced Air and Space Studies for completion of graduation requirements, School of Advanced Air and Space Studies, Air University, Maxwell... The author (...Aeronautical University; MA, Military Studies, Marine Corps University) is a senior pilot with more than 2,500 hours including combat time in both the HH...
Continuous-Time Bilinear System Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan
2003-01-01
The objective of this paper is to describe a new method for identification of a continuous-time multi-input and multi-output bilinear system. The approach is to make judicious use of the linear-model properties of the bilinear system when subjected to a constant input. Two steps are required in the identification process. The first step is to use a set of pulse responses resulting from a constant input of one sample period to identify the state matrix, the output matrix, and the direct transmission matrix. The second step is to use another set of pulse responses with the same constant input over multiple sample periods to identify the input matrix and the coefficient matrices associated with the coupling terms between the state and the inputs. Numerical examples are given to illustrate the concept and the computational algorithm for the identification method.
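The property such a method exploits is that for a constant input u the bilinear dynamics ẋ = Ax + Nxu + Bu reduce to a linear system with effective state matrix A + uN. A toy simulation illustrating this reduction (matrices invented for illustration, not the paper's identification algorithm):

```python
import numpy as np

def simulate_bilinear(A, N, B, u, x0, dt, steps):
    """Forward-Euler simulation of xdot = A x + N x u + B u for a constant
    input u. With u fixed this is the *linear* system xdot = (A + u N) x + B u,
    the linear-model property the two-step identification above exploits.
    (Toy matrices, not the paper's algorithm.)"""
    Aeff = A + u * N                      # effective linear state matrix
    x, xs = x0.astype(float), [x0.astype(float)]
    for _ in range(steps):
        x = x + dt * (Aeff @ x + B * u)
        xs.append(x)
    return np.array(xs)

A = np.array([[0.0, 1.0], [-4.0, -0.4]])
N = np.array([[0.0, 0.0], [-1.0, 0.0]])   # bilinear state-input coupling
B = np.array([0.0, 1.0])
traj = simulate_bilinear(A, N, B, u=0.5, x0=np.zeros(2), dt=0.01, steps=500)
print(traj[-1])                            # settles near the linear fixed point
```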
Evaluating a primary care psychology service in Ireland: a survey of stakeholders and psychologists.
Corcoran, Mark; Byrne, Michael
2017-05-01
Primary care psychology services (PCPS) represent an important resource in meeting the various health needs of our communities. This study evaluated the PCPS in a two-county area within the Republic of Ireland. The objectives were to (i) examine the viewpoints of the service for both psychologists and stakeholders (healthcare professionals only) and (ii) examine the enactment of the stepped care model of service provision. Separate surveys were sent to primary care psychologists (n = 8), general practitioners (GPs; n = 69) and other stakeholders in the two counties. GPs and stakeholders were required to rate the current PCPS. The GP survey specifically examined referrals to the PCPS and service configuration, while the stakeholder survey also requested suggestions for future service provision. Psychologists were required to provide information regarding their workload, time spent on certain tasks and productivity ideas. Referral numbers, waiting lists and waiting times were also obtained. All 8 psychologists, 23 GPs (33% response rate) and 37 stakeholders (unknown response rate) responded. GPs and stakeholders reported access to the PCPS as a primary concern, with waiting times of up to 80 weeks in some areas. Service provision to children and adults was uneven between counties. A stepped care model of service provision was not observed. Access can be improved by further implementation of a stepped care service, developing a high-throughput service for adults (based on a stepped care model), and employing a single waiting list for each county to ensure equal access. © 2016 John Wiley & Sons Ltd.
40 CFR 147.2925 - Standard permit conditions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... take all reasonable steps to mitigate any adverse environmental impact resulting from noncompliance. (d...) The permittee shall furnish, within a reasonable time, information that the Regional Administrator...) Signatory requirements. All applications, reports or information submitted to the Regional Administrator or...
40 CFR 147.2925 - Standard permit conditions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... take all reasonable steps to mitigate any adverse environmental impact resulting from noncompliance. (d...) The permittee shall furnish, within a reasonable time, information that the Regional Administrator...) Signatory requirements. All applications, reports or information submitted to the Regional Administrator or...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wynne, Adam S.
2011-05-05
In many application domains in science and engineering, data produced by sensors, instruments and networks is naturally processed by software applications structured as a pipeline. Pipelines comprise a sequence of software components that progressively process discrete units of data to produce a desired outcome. For example, in a Web crawler that is extracting semantics from text on Web sites, the first stage in the pipeline might be to remove all HTML tags to leave only the raw text of the document. The second step may parse the raw text to break it down into its constituent grammatical parts, such as nouns, verbs and so on. Subsequent steps may look for names of people or places, or interesting events or times, so documents can be sequenced on a time line. Each of these steps can be written as a specialized program that works in isolation from other steps in the pipeline. In many applications, simple linear software pipelines are sufficient. However, more complex applications require topologies that contain forks and joins, creating pipelines comprising branches where parallel execution is desirable. It is also increasingly common for pipelines to process very large files or high-volume data streams which impose end-to-end performance constraints. Additionally, processes in a pipeline may have specific execution requirements and hence need to be distributed as services across a heterogeneous computing and data management infrastructure. From a software engineering perspective, these more complex pipelines become problematic to implement. While simple linear pipelines can be built using minimal infrastructure such as scripting languages, complex topologies and large, high-volume data processing require suitable abstractions, run-time infrastructures and development tools to construct pipelines with the desired qualities-of-service and flexibility to evolve to handle new requirements. The above summarizes the reasons we created the MeDICi Integration Framework (MIF), which is designed for creating high-performance, scalable and modifiable software pipelines. MIF exploits a low-friction, robust, open source middleware platform and extends it with component- and service-based programmatic interfaces that make implementing complex pipelines simple. The MIF run-time automatically handles queues between pipeline elements in order to handle request bursts, and automatically executes multiple instances of pipeline elements to increase pipeline throughput. Distributed pipeline elements are supported using a range of configurable communications protocols, and the MIF interfaces provide efficient mechanisms for moving data directly between two distributed pipeline elements.
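The general shape of such a queued, multi-element pipeline can be illustrated with standard-library threads and queues; this is a generic sketch of the pattern, not the MIF API (the stage functions are toy stand-ins for the crawler stages described above):

```python
import queue, threading

def stage(fn, inbox, outbox):
    """One pipeline element: apply fn to each data unit; None is a sentinel
    that shuts the stage down and is propagated downstream."""
    while (item := inbox.get()) is not None:
        outbox.put(fn(item))
    outbox.put(None)

# Toy elements standing in for the crawler stages described above.
strip_tags = lambda doc: doc.replace("<p>", "").replace("</p>", "")
tokenize   = lambda text: text.split()

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
workers = [threading.Thread(target=stage, args=(strip_tags, q1, q2)),
           threading.Thread(target=stage, args=(tokenize, q2, q3))]
for w in workers:
    w.start()
for doc in ["<p>alpha beta</p>", "<p>gamma</p>", None]:   # None ends the run
    q1.put(doc)
while (tokens := q3.get()) is not None:
    print(tokens)
for w in workers:
    w.join()
```

The queues decouple the stages, so a burst of input simply deepens a queue rather than stalling an upstream producer, which is the behavior the MIF run-time is described as automating.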
Real-time traffic sign detection and recognition
NASA Astrophysics Data System (ADS)
Herbschleb, Ernst; de With, Peter H. N.
2009-01-01
The continuous growth of imaging databases increasingly requires analysis tools for the extraction of features. In this paper, a new architecture for the detection of traffic signs is proposed. The architecture is designed to process a large database with tens of millions of images at resolutions up to 4,800×2,400 pixels. Because of the size of the database, high reliability as well as high throughput is required. The novel architecture consists of a three-stage algorithm with multiple steps per stage, combining both color and specific spatial information. The first stage contains an area-limitation step which is performance-critical for both the detection rate and the overall processing time. The second stage locates suggestions for traffic signs using recently published feature processing. The third stage contains a validation step to enhance the reliability of the algorithm. During this stage, the traffic signs are recognized. Experiments show a convincing detection rate of 99%. With respect to computational speed, the throughput for line-of-sight images of 800×600 pixels is 35 Hz and for panorama images it is 4 Hz. Our novel architecture outperforms existing algorithms with respect to both detection rate and throughput.
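The color-based area-limitation stage can be pictured as a cheap per-pixel mask that discards most of the frame before the costlier spatial stages run. A generic sketch over HSV pixels (thresholds invented for illustration; not the paper's exact color model):

```python
import numpy as np

def red_sign_candidates(hsv):
    """Boolean mask of pixels whose hue/saturation/value plausibly belong to
    a red traffic sign (area-limitation step; thresholds are illustrative).
    hsv: float array of shape (H, W, 3) with hue in [0, 360)."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    red_hue = (h < 20) | (h > 340)            # red hue wraps around 0 degrees
    return red_hue & (s > 0.5) & (v > 0.3)

# Random frame as a stand-in for a database image:
hsv = np.random.default_rng(0).random((600, 800, 3)) * np.array([360, 1, 1])
mask = red_sign_candidates(hsv)
print(mask.mean())   # fraction of the frame passed on to the next stage
```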
A step in time: Changes in standard-frequency and time-signal broadcasts, 1 January 1972
NASA Technical Reports Server (NTRS)
Chi, A. R.; Fosque, H. S.
1973-01-01
An improved coordinated universal time (UTC) system has been adopted by the International Radio Consultative Committee. It was implemented internationally by the standard-frequency and time-broadcast stations on 1 Jan. 1972. The new UTC system eliminates the frequency offset of 300 parts in 10 to the 10th power between the old UTC and atomic time, thus making the broadcast time interval (the UTC second) constant and defined by the resonant frequency of cesium atoms. The new time scale is kept in synchronism with the rotation of the Earth within plus or minus 0.7 s by step-time adjustments of exactly 1 s, when needed. A time code has been added to the disseminated time signals to permit universal time to be obtained from the broadcasts to the nearest 0.1 s for users requiring such precision. The texts of the International Radio Consultative Committee recommendation and report to implement the new UTC system are given. The coding formats used by various standard time broadcast services to transmit the difference between universal time (UT1) and UTC are also given. For users' convenience, worldwide primary VLF and HF transmission stations, frequencies, and schedules of time emissions are also included. Actual time-step adjustments made by various stations on 1 Jan. 1972, are provided for future reference.
Nagano, Hanatsu; Levinger, Pazit; Downie, Calum; Hayes, Alan; Begg, Rezaul
2015-09-01
Falls during walking reflect susceptibility to balance loss and the individual's capacity to recover stability. Balance can be recovered using either one step or multiple steps, but both responses are impaired with ageing. To investigate older adults' (n=15, 72.5±4.8 yrs) recovery step control, a tether-release procedure was devised to induce unanticipated forward balance loss. Three-dimensional position-time data combined with foot-ground reaction forces were used to measure balance recovery. Dependent variables were: margin of stability (MoS) and available response time (ART) as spatial and temporal balance measures in the transverse and sagittal planes; lower limb joint angles and joint negative/positive work; and spatio-temporal gait parameters. Relative to multi-step responses, single-step recovery was more effective in maintaining balance, indicated by greater MoS and longer ART. MoS in the sagittal plane and ART in the transverse plane distinguished single-step responses from multiple steps. When MoS and ART were negative (<0), balance was not secured and additional steps would be required to establish the new base of support for balance recovery. Single-step responses demonstrated greater step length and velocity and, when the recovery foot landed, greater centre of mass downward velocity. Single-step strategies also showed greater ankle dorsiflexion, increased knee maximum flexion and more negative work at the ankle and knee. Collectively these findings suggest that single-step responses are more effective in forward balance recovery by directing falling momentum downward to be absorbed as lower limb eccentric work. Copyright © 2015 Elsevier B.V. All rights reserved.
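A margin of stability in the sense used above is commonly computed from the extrapolated centre of mass, XcoM = x + v/ω₀ with ω₀ = √(g/l): a negative value means the base of support must be re-established by further stepping. A sketch of that standard formulation (the numbers are illustrative, not the study's data):

```python
import math

def margin_of_stability(com_pos, com_vel, bos_edge, leg_length, g=9.81):
    """Anteroposterior margin of stability from the extrapolated centre of
    mass: MoS = BoS edge - XcoM, XcoM = x + v/omega0, omega0 = sqrt(g/l).
    Negative MoS means balance is not secured by the current foot placement.
    (Standard formulation; the numbers below are illustrative.)"""
    omega0 = math.sqrt(g / leg_length)
    return bos_edge - (com_pos + com_vel / omega0)

# CoM above the toe (0 m) moving forward at 0.9 m/s, toe edge 0.10 m ahead:
print(margin_of_stability(0.0, 0.9, bos_edge=0.10, leg_length=0.95))  # < 0
```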
Siboni, Renaud; Joseph, Etienne; Blasco, Laurent; Barbe, Coralie; Bajolet, Odile; Diallo, Saïdou; Ohl, Xavier
2018-06-07
Management of septic non-union of the tibia requires debridement and excision of all infected bone and soft tissues. Various surgical techniques have been described to fill the bone defect. The "Induced Membrane" technique, described by A. C. Masquelet in 1986, is a two-step procedure using a PMMA cement spacer around which an induced membrane develops, to be used in the second step as a holder for the bone graft. The purpose of this study was to assess our clinical and radiological results with this technique in a series managed in our department. Nineteen traumatic septic non-unions of the tibia were included in a retrospective single-center study between November 2007 and November 2014. All patients were followed up clinically and radiologically to assess bone union time. Multivariate analysis was used to identify factors influencing union. The series comprised 4 women and 14 men (19 legs); mean age was 53.9 years. Vascularized flap transfer was required in 26% of cases before the first stage of treatment. All patients underwent a two-step procedure, with a mean interval of 7.9 weeks. Mean bone defect after the first step was 52.4 mm. The bone graft was harvested from the iliac crest in the majority of cases (18/19). The bone was stabilized with an external fixator, locking plate or plaster cast after the second step. Mean follow-up was 34 months. Bony union rate was 89% (17/19), at a mean of 16 months after step 2. Eleven patients underwent one or more (mean 2.1) complementary procedures. Severity of index fracture skin opening was significantly correlated with union time (Gustilo III vs. Gustilo I or II, p=0.028). A trend was found for a negative impact of smoking on union (p=0.06). Bone defect size did not correlate with union rate or time. The union rate was acceptable, at 89%, but with longer union time than reported in the literature. Many factors could explain this: lack of rigid fixation after step 2 (in case of plaster cast or external fixator), or failure to cease smoking. The results showed that the induced membrane technique is effective in treating tibial septic non-union, but could be improved by stable fixation after the second step and by cessation of smoking. Level of evidence: IV (retrospective study). Copyright © 2018 Elsevier Masson SAS. All rights reserved.
Planning and setting objectives in field studies: Chapter 2
Fisher, Robert N.; Dodd, C. Kenneth
2016-01-01
This chapter enumerates the steps required in designing and planning field studies on the ecology and conservation of reptiles, as these involve a high level of uncertainty and risk. To this end, the chapter differentiates between goals (descriptions of what one intends to accomplish) and objectives (the measurable steps required to achieve the established goals). Thus, meeting a specific goal may require many objectives. It may not be possible to define some of them until certain experiments have been conducted; often evaluations of sampling protocols are needed to increase certainty in the biological results. And if sampling locations are fixed and sampling events are repeated over time, then both study-specific covariates and sampling-specific covariates should exist. Additionally, other critical design considerations for field study include obtaining permits, as well as researching ethics and biosecurity issues.
Shiravi, AH; Mostafavi, R; Akbarzadeh, K; Oshaghi, MA
2011-01-01
Background: The aim of this study was to determine the development time and thermal requirements of three myiasis flies: Chrysomya albiceps, Lucilia sericata, and Sarcophaga sp. Methods: The rate of development (ROD) and accumulated degree days (ADD) of these three forensically important flies in Iran were calculated, by rearing individuals at a single constant temperature (28 °C), using specific formulae for four developmental events: egg hatching, larval stages, pupation, and eclosion. Results: Rates of development decreased step by step as the flies grew from egg to larvae and then to the adult stage; however, this rate was greater for the blowflies (C. albiceps and L. sericata) than for the flesh fly Sarcophaga sp. Egg hatching, larval stages, and pupation took about one fourth to one half of the total pre-adult development time for all three species. In general, the flesh fly Sarcophaga sp. required more heat for development than the blowflies. The thermal constants (K) were 130–195, 148–222, and 221–323 degree-days (DD) for egg hatching to adult stages of C. albiceps, L. sericata, and Sarcophaga sp., respectively. Conclusion: This is the first report on the thermal requirements of three forensic flies in Iran. The data of this study provide preliminary information for forensic entomologists to estimate the postmortem interval (PMI) in the area of study. PMID:22808410
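The thermal-constant arithmetic behind these results is simple degree-day bookkeeping. The sketch below assumes an illustrative 10 °C developmental threshold, which the abstract does not report, to show how a thermal constant K translates into development time at the rearing temperature.

```python
def accumulated_degree_days(daily_mean_temps_c, base_temp_c):
    """ADD: sum of daily mean temperature excess above the developmental
    threshold (the threshold value below is an assumption, not the study's)."""
    return sum(max(t - base_temp_c, 0.0) for t in daily_mean_temps_c)

K = 195.0                  # upper thermal constant for C. albiceps, egg to adult (DD)
T, T_base = 28.0, 10.0     # rearing temperature and assumed threshold, degC
print(K / (T - T_base))                           # ~10.8 days to accumulate K
print(accumulated_degree_days([T] * 11, T_base))  # 198 DD >= K after 11 days
```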
40 CFR 141.133 - Compliance requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... specified by § 141.135(c). Systems may begin monitoring to determine whether Step 1 TOC removals can be met... the Step 1 requirements in § 141.135(b)(2) and must therefore apply for alternate minimum TOC removal (Step 2) requirements, is not eligible for retroactive approval of alternate minimum TOC removal (Step 2...
15 CFR 732.6 - Steps for other requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Steps for other requirements. 732.6...) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE EXPORT ADMINISTRATION REGULATIONS STEPS FOR USING THE EAR § 732.6 Steps for other requirements. Sections 732.1 through 732.4 of this part are useful in...
QUICR-learning for Multi-Agent Coordination
NASA Technical Reports Server (NTRS)
Agogino, Adrian K.; Tumer, Kagan
2006-01-01
Coordinating multiple agents that need to perform a sequence of actions to maximize a system-level reward requires solving two distinct credit assignment problems. First, credit must be assigned for an action taken at time step t that results in a reward at a later time step t′ > t. Second, credit must be assigned for the contribution of agent i to the overall system performance. The first credit assignment problem is typically addressed with temporal difference methods such as Q-learning. The second credit assignment problem is typically addressed by creating custom reward functions. To address both credit assignment problems simultaneously, we propose "Q Updates with Immediate Counterfactual Rewards learning" (QUICR-learning), designed to improve both the convergence properties and performance of Q-learning in large multi-agent problems. QUICR-learning is based on previous work on single-time-step counterfactual rewards described by the collectives framework. Results on a traffic congestion problem show that QUICR-learning is significantly better than a Q-learner using collectives-based (single-time-step counterfactual) rewards. In addition, QUICR-learning provides significant gains over conventional and local Q-learning. Additional results on a multi-agent grid-world problem show that the improvements due to QUICR-learning are not domain specific and can provide up to a tenfold increase in performance over existing methods.
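A minimal tabular sketch of the idea follows, assuming a generic difference-reward operator g(z) minus g(z with agent i's action counterfactually removed); the paper's exact counterfactual construction from the collectives framework is not reproduced here.

```python
from collections import defaultdict

def quicr_update(Q, state, action, next_state, g_actual, g_counterfactual,
                 alpha=0.1, gamma=0.95):
    """One Q-learning step driven by an immediate counterfactual (difference)
    reward, in the spirit of QUICR-learning."""
    d_reward = g_actual - g_counterfactual      # agent i's credited contribution
    best_next = max(Q[next_state].values(), default=0.0)
    Q[state][action] += alpha * (d_reward + gamma * best_next - Q[state][action])

Q = defaultdict(lambda: defaultdict(float))
quicr_update(Q, state=0, action=1, next_state=1,
             g_actual=3.0, g_counterfactual=2.2)   # credited reward: 0.8
print(Q[0][1])  # 0.08
```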
NASA Astrophysics Data System (ADS)
Yang, Haijian; Sun, Shuyu; Yang, Chao
2017-03-01
Most existing methods for solving two-phase flow problems in porous media do not take the physically feasible saturation fractions between 0 and 1 into account, which often destroys the numerical accuracy and physical interpretability of the simulation. To calculate the solution without the loss of this basic requirement, we introduce a variational inequality formulation of the saturation equilibrium with a box inequality constraint, and use a conservative finite element method for the spatial discretization and a backward differentiation formula with adaptive time stepping for the temporal integration. The resulting variational inequality system at each time step is solved by using a semismooth Newton algorithm. To accelerate the Newton convergence and improve the robustness, we employ a family of adaptive nonlinear elimination methods as a nonlinear preconditioner. Some numerical results are presented to demonstrate the robustness and efficiency of the proposed algorithm. A comparison is also included to show the superiority of the proposed fully implicit approach over the classical IMplicit Pressure-Explicit Saturation (IMPES) method in terms of the time step size and the total execution time measured on a parallel computer.
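As a toy illustration of the box-constrained solve, the sketch below runs a semismooth Newton iteration on the clipped residual Phi(s) = s - clip(s - F(s), 0, 1) for a scalar residual F; the paper's solver operates on the full discretized system with nonlinear-elimination preconditioning, which is not reproduced here.

```python
import numpy as np

def semismooth_newton_box(F, dF, s0, lo=0.0, hi=1.0, tol=1e-12, it_max=50):
    """Scalar semismooth Newton on Phi(s) = s - clip(s - F(s), lo, hi);
    Phi(s) = 0 encodes the box-constrained complementarity conditions."""
    s = s0
    for _ in range(it_max):
        inner = s - F(s)
        phi = s - np.clip(inner, lo, hi)
        if abs(phi) < tol:
            break
        # Generalized derivative: the clip contributes (1 - dF) inside the box
        # and nothing on an active bound.
        chi = 1.0 if lo < inner < hi else 0.0
        dphi = 1.0 - chi * (1.0 - dF(s))
        if dphi == 0.0:
            dphi = 1.0   # safeguard against a degenerate step
        s -= phi / dphi
    return s

# Toy "saturation equilibrium": F(s) = s**3 - 0.5 has its root inside [0, 1].
print(semismooth_newton_box(lambda s: s**3 - 0.5, lambda s: 3 * s**2, 0.9))
```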
An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm
NASA Astrophysics Data System (ADS)
Chen, G.; Chacón, L.; Barnes, D. C.
2011-08-01
This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.
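The nonlinearly converged implicit push at the heart of such schemes can be demonstrated on a single particle. The sketch below iterates a Crank-Nicolson (implicit midpoint) push to a tight tolerance in a fixed linear electric field, for which the update conserves the particle energy to round-off; the self-consistent field solve, orbit averaging, and sub-stepping of the actual scheme are omitted.

```python
def cn_push(x, v, dt, efield, tol=1e-14, it_max=300):
    """One Crank-Nicolson (implicit midpoint) particle push, Picard-iterated
    to a tight nonlinear tolerance (single-particle toy, q/m = 1)."""
    x_new, v_new = x, v
    for _ in range(it_max):
        x_next = x + dt * 0.5 * (v + v_new)
        v_next = v + dt * efield(0.5 * (x + x_new))
        if abs(x_next - x_new) + abs(v_next - v_new) < tol:
            break
        x_new, v_new = x_next, v_next
    return x_next, v_next

efield = lambda x: -x          # linear field: energy is exactly conserved
x, v, dt = 1.0, 0.0, 1.5       # large step; explicit leapfrog needs dt << 1 here
for _ in range(1000):
    x, v = cn_push(x, v, dt, efield)
print(0.5 * v**2 + 0.5 * x**2)  # stays at 0.5 up to the iteration tolerance
```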
Exact charge and energy conservation in implicit PIC with mapped computational meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Guangye; Barnes, D. C.
This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.
A new method to characterize the kinetics of cholinesterases inhibited by carbamates.
Xiao, Qiaoling; Zhou, Huimin; Wei, Hong; Du, Huaqiao; Tan, Wen; Zhan, Yiyi; Pistolozzi, Marco
2017-09-10
The inhibition of cholinesterases (ChEs) by carbamates includes a carbamylation (inhibition) step, in which the drug transfers its carbamate moiety to the active site of the enzyme, and a decarbamylation (activity recovery) step, in which the carbamyl group is hydrolyzed from the enzyme. The carbamylation and decarbamylation kinetics determine the extent and the duration of the inhibition; thus the full characterization of candidate carbamate inhibitors requires the measurement of the kinetic constants describing both steps. Carbamylation and decarbamylation rate constants are traditionally measured in two separate sets of experiments, making the full characterization of candidate inhibitors time-consuming. In this communication we show that, by analysis of the area under the inhibition-time curve of cholinesterases inhibited by carbamates, it is possible to calculate the decarbamylation rate constant from the same data traditionally used to characterize only the carbamylation kinetics; a full characterization of the inhibition can therefore be obtained with a single set of experiments. The characterization of the inhibition kinetics of human and dog plasma butyrylcholinesterase and of human acetylcholinesterase by bambuterol and bambuterol monocarbamate enantiomers was used to demonstrate the validity of the approach. The results showed that the proposed method provides reliable estimations of carbamylation and decarbamylation rate constants, thus representing a simple and useful approach to reduce the time required for the characterization of carbamate inhibitors. Copyright © 2017 Elsevier B.V. All rights reserved.
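A toy pseudo-first-order rendering of the two-step scheme makes the quantities concrete: the inhibited fraction rises at a rate set by carbamylation and relaxes at the decarbamylation rate, and the inhibition-time curve (whose area the authors exploit) can be integrated numerically. The rate constants below are illustrative assumptions, and the snippet does not reproduce the paper's AUC-based estimator.

```python
import numpy as np

def inhibited_fraction(t, k_carb, k_decarb, inhibitor_conc):
    """Analytic solution of dy/dt = k_on*(1 - y) - k_decarb*y with y(0) = 0,
    where k_on = k_carb * [I] (pseudo-first-order carbamylation)."""
    k_on = k_carb * inhibitor_conc
    k_sum = k_on + k_decarb
    return (k_on / k_sum) * (1.0 - np.exp(-k_sum * t))

t = np.linspace(0.0, 3600.0, 20001)                       # seconds
y = inhibited_fraction(t, k_carb=1e3, k_decarb=1e-3, inhibitor_conc=1e-6)
auc = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))  # area under the curve
print(y[-1], auc)   # plateau at k_on / (k_on + k_decarb) = 0.5 here
```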
The SCUBA-2 Data Reduction Cookbook
NASA Astrophysics Data System (ADS)
Thomas, Holly S.; Currie, Malcolm J.
This cookbook provides a short introduction to Starlink facilities, especially SMURF, the Sub-Millimetre User Reduction Facility, for reducing, displaying, and calibrating SCUBA-2 data. It describes some of the data artefacts present in SCUBA-2 time-series and methods to mitigate them. In particular, this cookbook illustrates the various steps required to reduce the data; and gives an overview of the Dynamic Iterative Map-Maker, which carries out all of these steps using a single command controlled by a configuration file. Specialised configuration files are presented.
The SCUBA-2 SRO data reduction cookbook
NASA Astrophysics Data System (ADS)
Chapin, Edward; Dempsey, Jessica; Jenness, Tim; Scott, Douglas; Thomas, Holly; Tilanus, Remo P. J.
This cookbook provides a short introduction to Starlink facilities, especially SMURF, the Sub-Millimetre User Reduction Facility, for reducing and displaying SCUBA-2 SRO data. We describe some of the data artefacts present in SCUBA-2 time series and methods we employ to mitigate them. In particular, we illustrate the various steps required to reduce the data, and the Dynamic Iterative Map-Maker, which carries out all of these steps using a single command. For information on SCUBA-2 data reduction since SRO, please see SC/21.
It pays to have a spring in your step
Sawicki, Gregory S.; Lewis, Cara L.; Ferris, Daniel P.
2010-01-01
A large portion of the mechanical work required for walking comes from muscles and tendons crossing the ankle joint. By storing and releasing elastic energy in the Achilles tendon during each step, humans greatly enhance the efficiency of ankle joint work far beyond what is possible for work performed at the knee and hip joints. Summary: Humans produce mechanical work at the ankle joint during walking with an efficiency two to six times greater than isolated muscle efficiency. PMID:19550204
Website Redesign: A Case Study.
Wu, Jin; Brown, Janis F
2016-01-01
A library website redesign is a complicated and at times arduous task, requiring many different steps including determining user needs, analyzing past user behavior, examining other websites, defining design preferences, testing, marketing, and launching the site. Many different types of expertise are required over the entire process. Lessons learned from the Norris Medical Library's experience with the redesign effort may be useful to others undertaking a similar project.
Role of transient water pressure in quarrying: A subglacial experiment using acoustic emissions
Cohen, D.; Hooyer, T.S.; Iverson, N.R.; Thomason, J.F.; Jackson, M.
2006-01-01
Probably the most important mechanism of glacial erosion is quarrying: the growth and coalescence of cracks in subglacial bedrock and dislodgement of resultant rock fragments. Although evidence indicates that erosion rates depend on sliding speed, rates of crack growth in bedrock may be enhanced by changing stresses on the bed caused by fluctuating basal water pressure in zones of ice-bed separation. To study quarrying in real time, a granite step, 12 cm high with a crack in its stoss surface, was installed at the bed of Engabreen, Norway. Acoustic emission sensors monitored crack growth events in the step as ice slid over it. Vertical stresses, water pressure, and cavity height in the lee of the step were also measured. Water was pumped to the lee of the step several times over 8 days. Pumping initially caused opening of a leeward cavity, which then closed after pumping was stopped and water pressure decreased. During cavity closure, acoustic emissions emanating mostly from the vicinity of the base of the crack in the step increased dramatically. With repeated pump tests this crack grew with time until the step's lee surface was quarried. Our experiments indicate that fluctuating water pressure caused stress thresholds required for crack growth to be exceeded. Natural basal water pressure fluctuations should also concentrate stresses on rock steps, increasing rates of crack growth. Stress changes on the bed due to water pressure fluctuations will increase in magnitude and duration with cavity size, which may help explain the effect of sliding speed on erosion rates. Copyright 2006 by the American Geophysical Union.
49 CFR 399.207 - Truck and truck-tractor access requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... requirements: (1) Vertical height. All measurements of vertical height shall be made from ground level with the... vertical height of the first step shall be no more than 609 millimeters (24 inches) from ground level. (3... requirement. The step need not retain the disc at rest. (5) Step strength. Each step must withstand a vertical...
49 CFR 399.207 - Truck and truck-tractor access requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... requirements: (1) Vertical height. All measurements of vertical height shall be made from ground level with the... vertical height of the first step shall be no more than 609 millimeters (24 inches) from ground level. (3... requirement. The step need not retain the disc at rest. (5) Step strength. Each step must withstand a vertical...
49 CFR 399.207 - Truck and truck-tractor access requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... requirements: (1) Vertical height. All measurements of vertical height shall be made from ground level with the... vertical height of the first step shall be no more than 609 millimeters (24 inches) from ground level. (3... requirement. The step need not retain the disc at rest. (5) Step strength. Each step must withstand a vertical...
49 CFR 399.207 - Truck and truck-tractor access requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... requirements: (1) Vertical height. All measurements of vertical height shall be made from ground level with the... vertical height of the first step shall be no more than 609 millimeters (24 inches) from ground level. (3... requirement. The step need not retain the disc at rest. (5) Step strength. Each step must withstand a vertical...
Aging effect on step adjustments and stability control in visually perturbed gait initiation.
Sun, Ruopeng; Cui, Chuyi; Shea, John B
2017-10-01
Gait adaptability is essential for fall avoidance during locomotion. It requires the ability to rapidly inhibit the original motor plan, select and execute alternative motor commands, and maintain the stability of locomotion. This study investigated the aging effect on gait adaptability and dynamic stability control during a visually perturbed gait initiation task. A novel approach was used such that the anticipatory postural adjustments (APA) during gait initiation were used to trigger the unpredictable relocation of a foot-size stepping target. Participants (10 young adults and 10 older adults) completed visually perturbed gait initiation in three adjustment timing conditions (early, intermediate, late; all extracted from the stereotypical APA pattern) and two adjustment direction conditions (medial, lateral). Stepping accuracy, foot rotation at landing, and Margin of Dynamic Stability (MDS) were analyzed and compared across test conditions and groups using a linear mixed model. Stepping accuracy decreased as a function of adjustment timing as well as stepping direction, with older subjects exhibiting a significantly greater undershoot in foot placement in late lateral stepping. Late adjustment also elicited a reaching-like movement (i.e. foot rotation prior to landing in order to step on the target), regardless of stepping direction. MDS measures in the medial-lateral and anterior-posterior directions revealed that both young and older adults exhibited reduced stability in the adjustment step and subsequent steps. However, young adults returned to stable gait faster than older adults. These findings could be useful for future studies screening for deficits in gait adaptability and preventing falls. Copyright © 2017 Elsevier B.V. All rights reserved.
Jindo, Takashi; Kitano, Naruki; Tsunoda, Kenji; Kusuda, Mikiko; Hotta, Kazushi; Okura, Tomohiro
Decreasing daily life physical activity (PA) outside an exercise program might hinder the benefit of that program on lower-extremity physical function (LEPF) in older adults. The purpose of this study was to investigate how daily life PA modulates the effects of an exercise program on LEPF. The participants were 46 community-dwelling older adults (mean age, 70.1 ± 3.5 years) in Kasama City, a rural area in Japan. All participated in a fall-prevention program called square-stepping exercise once a week for 11 weeks. We evaluated their daily life PA outside the exercise program with pedometers and calculated the average daily step counts during the early and late periods of the program. We divided participants into 2 groups on the basis of whether or not they decreased PA by more than 1000 steps per day between the early and late periods. To ascertain the LEPF benefits induced by participating in the exercise program, we administered 5 physical performance tests before and after the intervention: 1-leg stand, 5-time sit-to-stand, Timed Up and Go (TUG), habitual walking speed, and choice-stepping reaction time (CSRT). We used a 2-way analysis of variance to examine the group-by-time interaction before and after the intervention. During the exercise program, 8 participants decreased their daily life PA (early period, 6971 ± 2771; late period, 5175 ± 2132) and 38 participants maintained PA (early period, 6326 ± 2477; late period, 6628 ± 2636). Both groups significantly improved their performance in TUG and CSRT at the posttest compared with the baseline. A significant group-by-time interaction on walking speed (P = .038) was observed: participants who maintained PA improved their performance more than those who decreased their PA. Square-stepping exercise requires and strengthens dynamic balance and agility, which likely contributed to the improvements observed in TUG and CSRT. In contrast, because PA is positively associated with walking speed, maintaining daily life PA outside an exercise program may have a stronger influence on walking speed. To enhance the effectiveness of an exercise program for young-old adults, researchers and instructors should try to maintain the participant's daily life PA outside the program. Regardless of decreasing or maintaining daily life PA, the square-stepping exercise program could improve aspects of LEPF that require complex physical performance. However, a greater effect can be expected when participants maintain their daily life PA outside the exercise program.
Hydes, Theresa; Hansi, Navjyot; Trebble, Timothy M
2012-01-01
Upper gastrointestinal (UGI) endoscopy is a routine healthcare procedure with a defined patient pathway. The objective of this study was to redesign this pathway for unsedated patients using lean thinking transformation to focus on patient-derived value-adding steps, remove waste and create a more efficient process. This was to form the basis of a pathway template that was transferable to other endoscopy units. A literature search of patient expectations for UGI endoscopy identified patient-derived value. A value stream map of the current pathway was created. The minimum and maximum time per step, bottlenecks and staff-staff interactions were recorded. This information was used for service transformation using lean thinking. A patient pathway template was created and implemented into a secondary unit. Questionnaire studies were performed to assess patient satisfaction. In the primary unit the patient pathway reduced from 19 to 11 steps, with a reduction in the maximum lead time from 375 to 80 min following lean thinking transformation. The minimum value/lead time ratio increased from 24% to 49%. The patient pathway was redesigned as a 'cellular' system with minimised patient and staff travelling distances, waiting times, paperwork and handoffs. Nursing staff requirements were reduced by 25%. Patient-prioritised aspects of care were emphasised with increased patient-endoscopist interaction time. The template was successfully introduced into a second unit with an overall positive patient satisfaction rating of 95%. Lean thinking transformation of the unsedated UGI endoscopy pathway results in reduced waiting times, reduced staffing requirements and improved patient flow, and can form the basis of a pathway template which may be successfully transferred into alternative endoscopy environments with high levels of patient satisfaction.
NASA Astrophysics Data System (ADS)
Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin
2017-12-01
The present paper deals with the numerical solution of the incompressible Navier-Stokes equations using high-order discontinuous Galerkin (DG) methods for discretization in space. For DG methods applied to the dual splitting projection method, instabilities have recently been reported that occur for small time step sizes. Since the critical time step size depends on the viscosity and the spatial resolution, these instabilities limit the robustness of the Navier-Stokes solver in case of complex engineering applications characterized by coarse spatial resolutions and small viscosities. By means of numerical investigation we give evidence that these instabilities are related to the discontinuous Galerkin formulation of the velocity divergence term and the pressure gradient term that couple velocity and pressure. Integration by parts of these terms with a suitable definition of boundary conditions is required in order to obtain a stable and robust method. Since the intermediate velocity field does not fulfill the boundary conditions prescribed for the velocity, a consistent boundary condition is derived from the convective step of the dual splitting scheme to ensure high-order accuracy with respect to the temporal discretization. This new formulation is stable in the limit of small time steps for both equal-order and mixed-order polynomial approximations. Although the dual splitting scheme itself includes inf-sup stabilizing contributions, we demonstrate that spurious pressure oscillations appear for equal-order polynomials and small time steps highlighting the necessity to consider inf-sup stability explicitly.
NASA Astrophysics Data System (ADS)
Rohart, François
2017-01-01
In a previous paper [Rohart et al., Phys Rev A 2014;90:042506], the influence of detection-bandwidth properties on observed line shapes in precision spectroscopy was theoretically modeled for the first time using the basic model of a continuous sweep of the laser frequency. Specific experiments confirmed general theoretical trends but also revealed several insufficiencies of the model in the case of stepped frequency scans. Consequently, inasmuch as up-to-date experiments use step-by-step frequency-swept lasers, a new model of the influence of the detection bandwidth is developed, including a realistic timing of signal sampling and frequency changes. Using Fourier transform techniques, the resulting time-domain apparatus function takes a simple analytical form that can be easily implemented in line-shape fitting codes without any significant increase in computation time. This new model is then considered in detail for detection systems characterized by 1st- and 2nd-order bandwidths, underlining the importance of the ratio of the detection time constant to the frequency step duration, notably for the measurement of line frequencies. It also allows a straightforward analysis of the corresponding systematic deviations in retrieved line frequencies and broadenings. Finally, special attention is paid to the consequences of a finite detection bandwidth in Doppler Broadening Thermometry, namely the experimental adjustments required for a spectroscopic determination of the Boltzmann constant at the 1-ppm level of accuracy. In this respect, the interest of implementing a Butterworth 2nd-order filter is emphasized.
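The central effect, a line-centre pull that grows with the ratio of detection time constant to step duration, can be reproduced in a few lines. The sketch below assumes a 1st-order detection filter sampled once at the end of each frequency step of a stepped scan over a Lorentzian line; it is a simplified timing model, not the paper's full apparatus function.

```python
import numpy as np

def stepped_scan_response(line, tau_over_step):
    """Detector output for a step-by-step frequency scan through a 1st-order
    low-pass of time constant tau, sampled once at the end of each step."""
    a = np.exp(-1.0 / tau_over_step)    # memory of the previous step
    out = np.empty_like(line)
    y = line[0]
    for k, s in enumerate(line):
        y = a * y + (1.0 - a) * s       # exact response to a constant input
        out[k] = y
    return out

f = np.linspace(-5.0, 5.0, 401)         # detuning in half-width units
lorentz = 1.0 / (1.0 + f**2)
for r in (0.1, 0.5, 1.0):               # tau / step-duration ratios
    shift = f[np.argmax(stepped_scan_response(lorentz, r))]
    print(r, shift)                     # apparent line-centre pull grows with r
```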
Pedometer determined physical activity tracks in African American adults: the Jackson Heart Study.
Newton, Robert L; Han, Hongmei; Dubbert, Patricia M; Johnson, William D; Hickson, DeMarc A; Ainsworth, Barbara; Carithers, Teresa; Taylor, Herman; Wyatt, Sharon; Tudor-Locke, Catrine
2012-04-18
This study investigated the number of pedometer assessment occasions required to establish habitual physical activity (PA) in African American adults. African American adults (mean age 59.9 ± 0.60 years; 59% female) enrolled in the Diet and Physical Activity Substudy of the Jackson Heart Study wore Yamax pedometers during 3-day monitoring periods, assessed on two to three distinct occasions, each separated by approximately one month. The stability of pedometer-measured PA was described as differences in mean steps/day across time, as intraclass correlation coefficients (ICC) by sex, age, and body mass index (BMI) category, and as the percent of participants changing steps/day quartiles across time. Valid data were obtained for 270 participants on either two or three different assessment occasions. Mean steps/day were not significantly different across assessment occasions (p values > 0.456). The overall ICCs for steps/day assessed on either two or three occasions were 0.57 and 0.76, respectively. In addition, 85% (two assessment occasions) and 76% (three assessment occasions) of all participants remained in the same steps/day quartile or changed one quartile over time. The current study shows that an overall mean steps/day estimate based on a 3-day monitoring period did not differ significantly over 4-6 months. The findings were robust to differences in sex, age, and BMI categories. A single 3-day monitoring period is sufficient to capture habitual physical activity in African American adults.
Cingi Steps for preoperative computer-assisted image editing before reduction rhinoplasty.
Cingi, Can Cemal; Cingi, Cemal; Bayar Muluk, Nuray
2014-04-01
The aim of this work is to provide a stepwise systematic guide for a preoperative photo-editing procedure for rhinoplasty cases involving the cooperation of a graphic artist and a surgeon. One hundred female subjects who planned to undergo a reduction rhinoplasty operation were included in this study. The Cingi Steps for Preoperative Computer Imaging (CS-PCI) program, a stepwise systematic guide for image editing using Adobe Photoshop's "liquify" effect, was applied to the rhinoplasty candidates. The stages of CS-PCI are as follows: (1) lowering the hump; (2) shortening the nose; (3) adjusting the tip projection; (4) perfecting the nasal dorsum; (5) creating a supratip break; and (6) exaggerating the tip projection and/or dorsal slope. Performing the Cingi Steps allows the patient to see what will happen during the operation and observe the final appearance of his or her nose. After the application of the described steps, 71 patients (71%) accepted step 4, and 21 (21%) of them accepted step 5. Only 10 patients (10%) wanted to make additional changes to their operation plans. The main benefits of using this method are that it decreases the time needed by the surgeon to perform a graphic analysis, and it reduces the time required for the patient to reach a decision about the procedure. It is an easy and reliable method that will provide improved physician-patient communication, increased patient confidence, and enhanced surgical planning while limiting the time needed for planning. © 2014 ARS-AAOA, LLC.
Biedermann, Benjamin R.; Wieser, Wolfgang; Eigenwillig, Christoph M.; Palte, Gesa; Adler, Desmond C.; Srinivasan, Vivek J.; Fujimoto, James G.; Huber, Robert
2009-01-01
We demonstrate en face swept-source optical coherence tomography (ss-OCT) without requiring a Fourier transformation step. The electronic optical coherence tomography (OCT) interference signal from a k-space linear Fourier domain mode-locked laser is mixed with an adjustable local oscillator, yielding the analytic reflectance signal from one image depth for each frequency sweep of the laser. Furthermore, a method for arbitrarily shaping the spectral intensity profile of the laser is presented, without requiring the step of numerical apodization. In combination, these two techniques enable sampling of the in-phase and quadrature signal with a slow analog-to-digital converter and allow for real-time display of en face projections even at the highest axial scan rates. Image data generated with this technique are compared to en face images extracted from a three-dimensional OCT data set. This technique can allow for real-time visualization of arbitrarily oriented en face planes for the purpose of alignment, registration, or operator-guided survey scans while simultaneously maintaining the full capability of high-speed volumetric ss-OCT functionality. PMID:18978919
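The depth selection by mixing can be sketched directly: for a k-linear sweep, each depth maps to one fringe frequency, so multiplying by a complex local oscillator at that frequency and low-pass filtering yields the in-phase and quadrature reflectance for that depth only. The snippet below is a numerical illustration with assumed sampling and fringe parameters, using a plain mean as a crude low-pass filter.

```python
import numpy as np

def en_face_sample(fringe, fs, f_depth):
    """Complex reflectance at one depth from a single sweep: mix the fringe
    with a local oscillator at that depth's fringe frequency, then low-pass.
    No Fourier transform over the full sweep is needed."""
    t = np.arange(fringe.size) / fs
    lo = np.exp(-2j * np.pi * f_depth * t)   # in-phase + quadrature LO
    return np.mean(fringe * lo)              # crude low-pass filter

fs = 1e6
t = np.arange(4096) / fs
fringe = 0.8 * np.cos(2 * np.pi * 50e3 * t + 0.3)    # scatterer at one depth
print(abs(en_face_sample(fringe, fs, 50e3)))  # ~0.4: selected depth recovered
print(abs(en_face_sample(fringe, fs, 80e3)))  # ~0: other depths rejected
```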
How many steps/day are enough? For adults.
Tudor-Locke, Catrine; Craig, Cora L; Brown, Wendy J; Clemes, Stacy A; De Cocker, Katrien; Giles-Corti, Billie; Hatano, Yoshiro; Inoue, Shigeru; Matsudo, Sandra M; Mutrie, Nanette; Oppert, Jean-Michel; Rowe, David A; Schmidt, Michael D; Schofield, Grant M; Spence, John C; Teixeira, Pedro J; Tully, Mark A; Blair, Steven N
2011-07-28
Physical activity guidelines from around the world are typically expressed in terms of frequency, duration, and intensity parameters. Objective monitoring using pedometers and accelerometers offers a new opportunity to measure and communicate physical activity in terms of steps/day. Various step-based versions or translations of physical activity guidelines are emerging, reflecting public interest in such guidance. However, there appears to be a wide discrepancy in the exact values that are being communicated. It makes sense that step-based recommendations should be harmonious with existing evidence-based public health guidelines that recognize that "some physical activity is better than none" while maintaining a focus on time spent in moderate-to-vigorous physical activity (MVPA). Thus, the purpose of this review was to update our existing knowledge of "How many steps/day are enough?", and to inform step-based recommendations consistent with current physical activity guidelines. Normative data indicate that healthy adults typically take between 4,000 and 18,000 steps/day, and that 10,000 steps/day is reasonable for this population, although there are notable "low active populations." Interventions demonstrate incremental increases on the order of 2,000-2,500 steps/day. The results of seven different controlled studies demonstrate that there is a strong relationship between cadence and intensity. Further, despite some inter-individual variation, 100 steps/minute represents a reasonable floor value indicative of moderate intensity walking. Multiplying this cadence by 30 minutes (i.e., typical of a daily recommendation) produces a minimum of 3,000 steps that is best used as a heuristic (i.e., guiding) value, but these steps must be taken over and above habitual activity levels to be a true expression of free-living steps/day that also includes recommendations for minimal amounts of time in MVPA. Computed steps/day translations of time in MVPA that also include estimates of habitual activity levels equate to 7,100 to 11,000 steps/day. A direct estimate of minimal amounts of MVPA accumulated in the course of objectively monitored free-living behaviour is 7,000-8,000 steps/day. A scale that spans a wide range of incremental increases in steps/day and is congruent with public health recognition that "some physical activity is better than none," yet still incorporates step-based translations of recommended amounts of time in MVPA may be useful in research and practice. The full range of users (researchers to practitioners to the general public) of objective monitoring instruments that provide step-based outputs require good reference data and evidence-based recommendations to be able to design effective health messages congruent with public health physical activity guidelines, guide behaviour change, and ultimately measure, track, and interpret steps/day.
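The step-based translation in this review is plain arithmetic, sketched below: steps accumulated at the moderate-intensity cadence floor during the recommended MVPA time are added on top of habitual daily steps (the habitual figure used here is an assumed value inside the reported normative range).

```python
def steps_per_day_target(mvpa_min=30, cadence=100, habitual_steps=5000):
    """Heuristic translation: habitual daily steps plus steps taken during
    the recommended MVPA time at a moderate-intensity cadence."""
    return habitual_steps + cadence * mvpa_min

print(steps_per_day_target())                     # 8000, inside 7,100-11,000
print(steps_per_day_target(habitual_steps=4100))  # 7100, the reported lower bound
```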
Kennedy, Curtis E; Turley, James P
2011-10-24
Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting.
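Step 6 of the enumerated process, calculating time series features as latent variables, is the one that departs most from standard multivariable modeling, so a small sketch may help. The feature choices below (rolling mean, variance, and linear trend per window) are illustrative assumptions, not those of the study.

```python
import numpy as np

def window_features(signal, width):
    """Turn a raw vital-sign series into per-window time-series features;
    deterioration preceding an arrest would show up as a sustained trend."""
    feats = []
    for start in range(0, signal.size - width + 1, width):
        w = signal[start:start + width]
        slope = np.polyfit(np.arange(width), w, 1)[0]   # linear trend
        feats.append((w.mean(), w.var(), slope))
    return np.array(feats)

# Synthetic heart-rate series: 240 samples of a slow random drift around 120.
hr = 120 + np.cumsum(np.random.default_rng(0).normal(0, 0.5, 240))
print(window_features(hr, width=60).shape)   # (4, 3): 4 windows x 3 features
```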
An automated workflow for parallel processing of large multiview SPIM recordings
Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel
2016-01-01
Summary: Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive interactive processing via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585
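The "independent time points" parallelism that the pipeline exploits can be illustrated without a cluster. The sketch below uses Python multiprocessing with a stand-in task; in the actual workflow each per-time-point step is a snakemake rule dispatched to HPC nodes.

```python
from multiprocessing import Pool

def process_timepoint(tp):
    """Stand-in for one per-time-point processing chain (registration,
    fusion, deconvolution, ...); each time point is independent, so the
    tasks can run concurrently."""
    return f"timepoint {tp:03d} done"

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        for msg in pool.imap_unordered(process_timepoint, range(12)):
            print(msg)
```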
USE OF TAQMAN TO ENUMERATE ENTEROCOCCUS FAECALIS IN WATER
The Polymerase Chain Reaction (PCR) has become a useful tool in the detection of microorganisms. However, conventional PCR is somewhat time-consuming considering that additional steps (e.g., gel electrophoresis and gene sequencing) are required to confirm the presence of the tar...
Australia's marine virtual laboratory
NASA Astrophysics Data System (ADS)
Proctor, Roger; Gillibrand, Philip; Oke, Peter; Rosebrock, Uwe
2014-05-01
In all modelling studies of realistic scenarios, a researcher has to go through a number of steps to set up a model in order to produce a model simulation of value. The steps are generally the same, independent of the modelling system chosen. These steps include determining the time and space scales and processes of the required simulation; obtaining data for the initial setup and for input during the simulation time; obtaining observation data for validation or data assimilation; implementing scripts to run the simulation(s); and running utilities or custom-built software to extract results. These steps are time-consuming and resource-hungry, and have to be done every time irrespective of the simulation; the more complex the processes, the more effort is required to set up the simulation. The Australian Marine Virtual Laboratory (MARVL) is a new development in modelling frameworks for researchers in Australia. MARVL uses the TRIKE framework, a Java-based control system developed by CSIRO that allows a non-specialist user to configure and run a model, to automate many of the modelling preparation steps needed to bring the researcher faster to the stage of simulation and analysis. The tool is seen as enhancing the efficiency of researchers and marine managers, and is being considered as an educational aid in teaching. In MARVL we are developing a web-based open-source application which provides a number of model choices and provides search and recovery of relevant observations, allowing researchers to: a) efficiently configure a range of different community ocean and wave models for any region, for any historical time period, with model specifications of their choice, through a user-friendly web application, b) access data sets to force a model and nest a model into, c) discover and assemble ocean observations from the Australian Ocean Data Network (AODN, http://portal.aodn.org.au/webportal/) in a format that is suitable for model evaluation or data assimilation, and d) run the assembled configuration in a cloud computing environment, or download the assembled configuration and packaged data to run on any other system of the user's choice. MARVL is now being applied in a number of case studies around Australia ranging in scale from locally confined estuaries to the Tasman Sea between Australia and New Zealand. In time we expect the range of models offered will include biogeochemical models.
Klein, Jan; Teber, Dogu; Frede, Tom; Stock, Christian; Hruza, Marcel; Gözen, Ali; Seemann, Othmar; Schulze, Michael; Rassweiler, Jens
2013-03-01
Development and full validation of a laparoscopic training program for stepwise learning of a reproducible application of a standardized laparoscopic anastomosis technique and its integration into the clinical course. The training of vesicourethral anastomosis (VUA) was divided into six simple standardized steps. To fix the objective criteria, four experienced surgeons performed the stepwise training protocol. Thirty-eight participants with no previous laparoscopic experience were assessed on their training performance. The times needed to manage each training step and the total training time were recorded. The integration into the clinical course was investigated. The training results and the corresponding steps during laparoscopic radical prostatectomy (LRP) were analyzed. Data analysis of corresponding operating room (OR) sections of 793 LRP was performed. Based on these data, validity criteria were determined. In the laboratory section, a significant reduction of OR time for every step was seen in all participants: coordination, 62%; longitudinal incision, 52%; inverted U-shape incision, 43%; plexus, 47%; anastomosis catheter model, 38%; VUA, 38%. The laboratory section required a total time of 29 hours (minimum: 16 hours; maximum: 42 hours). All participants had shorter execution times in the laboratory than under real conditions. The best match was found within the VUA model. To perform an anastomosis under real conditions, 25% more time was needed. By using the training protocol, the performance of the VUA is comparable to that of a surgeon with experience of about 50 laparoscopic VUA. Data analysis proved content, construct, and prognostic validity. The use of stepwise training approaches enables a surgeon to learn and reproduce complex reconstructive surgical tasks, e.g., the VUA, in a safe environment. The validity of the designed system is given at all levels and should be used as a standard in clinical surgical training in laparoscopic reconstructive urology.
A gas-kinetic BGK scheme for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Xu, Kun
2000-01-01
This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the requirement in the previous scheme, such as the particle collision time being less than the time step for the validity of the BGK Navier-Stokes solution, is removed. Therefore, the applicable regime of the current method is much enlarged and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, and it is valid only under such a limiting condition. Also, in this paper, the appropriate implementation of boundary condition for the kinetic scheme, different kinetic limiting cases, and the Prandtl number fix are presented. The connection among artificial dissipative central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.
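The key robustness claim, validity regardless of the ratio between collision time and time step, can be illustrated on the BGK relaxation term alone. The sketch below contrasts the exact one-step relaxation with a forward-Euler update that fails once the time step exceeds the collision time scale; the full flux construction of the gas-kinetic scheme is not reproduced.

```python
import math

def bgk_relax_exact(f, f_eq, dt, tau):
    """Exact one-step solution of df/dt = (f_eq - f)/tau with f_eq frozen;
    well-behaved for any ratio dt/tau."""
    return f_eq + (f - f_eq) * math.exp(-dt / tau)

def bgk_relax_euler(f, f_eq, dt, tau):
    """Forward-Euler update; over-relaxes and diverges for dt > 2*tau."""
    return f + (dt / tau) * (f_eq - f)

f, f_eq, tau, dt = 1.0, 0.0, 0.1, 0.5     # time step five times the collision time
print(bgk_relax_exact(f, f_eq, dt, tau))   # ~0.0067: smooth decay toward f_eq
print(bgk_relax_euler(f, f_eq, dt, tau))   # -4.0: unstable overshoot
```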
Huang, Edward Pei-Chuan; Wang, Hui-Chih; Ko, Patrick Chow-In; Chang, Anna Marie; Fu, Chia-Ming; Chen, Jiun-Wei; Liao, Yen-Chen; Liu, Hung-Chieh; Fang, Yao-De; Yang, Chih-Wei; Chiang, Wen-Chu; Ma, Matthew Huei-Ming; Chen, Shyr-Chyr
2013-09-01
The quality of cardiopulmonary resuscitation (CPR) is important to survival after cardiac arrest. Mechanical devices (MDs) provide constant CPR, but their effectiveness may be affected by deployment timeliness. This study sought to identify the timeliness of the overall deployment of a piston-type MD, and of each essential step, during emergency department (ED) resuscitation, and to identify factors associated with delayed MD deployment, using video recordings. Between December 2005 and December 2008, video clips from resuscitations with CPR sessions using an MD in the ED were reviewed using time-motion analyses. The overall deployment timeliness and the time spent on each essential step of deployment were measured. There were 37 CPR recordings that used an MD. Deployment of the MD took an average of 122.6 ± 57.8 s. The 3 most time-consuming steps were: (1) setting the device (57.8 ± 38.3 s), (2) positioning the patient (33.4 ± 38.0 s), and (3) positioning the device (14.7 ± 9.5 s). Total no-flow time was 89.1 ± 41.2 s (72.7% of total time) and was associated with the 3 most time-consuming steps. There was no difference in total timeliness, no-flow time, or no-flow ratio between different rescuer numbers, time of day of the resuscitation, or body size of patients. Rescuers spent a significant amount of time on MD deployment, leading to long no-flow times. Lack of familiarity with the device and positioning strategy were associated with poor performance. Additional training in device deployment strategies is required to improve the benefits of mechanical CPR. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Mathew, Hanna; Kunde, Wilfried; Herbort, Oliver
2017-05-01
When someone grasps an object, the grasp depends on the intended object manipulation and usually facilitates it. If several object manipulation steps are planned, the first step has been reported to primarily determine the grasp selection. We address whether the grasp can be aligned to the second step, if the second step's requirements exceed those of the first step. Participants grasped and rotated a dial first by a small extent and then by various extents in the opposite direction, without releasing the dial. On average, when the requirements of the first and the second step were similar, participants mostly aligned the grasp to the first step. When the requirements of the second step were considerably higher, participants aligned the grasp to the second step, even though the first step still had a considerable impact. Participants employed two different strategies. One subgroup initially aligned the grasp to the first step and then ceased adjusting the grasp to either step. Another group also initially aligned the grasp to the first step and then switched to aligning it primarily to the second step. The data suggest that participants are more likely to switch to the latter strategy when they experienced more awkward arm postures. In summary, grasp selections for multi-step object manipulations can be aligned to the second object manipulation step, if the requirements of this step clearly exceed those of the first step and if participants have some experience with the task.
Cedeño, M
1995-01-01
Tequila is obtained from the distillation of fermented juice of the agave plant, Agave tequilana, to which up to 49% (w/v) of an adjunct sugar, mainly from cane or corn, may be added. Agave plants require from 8 to 12 years to mature, and during all this time cleaning, pest control, and loosening of the soil are required to produce an initial raw material with the appropriate chemical composition for tequila production. The production process comprises four steps: cooking to hydrolyze inulin into fructose, milling to extract the sugars, fermentation with a strain of Saccharomyces cerevisiae to convert the sugars into ethanol and organoleptic compounds, and, finally, a two-step distillation process. Maturation, if needed, is carried out in white oak barrels to obtain rested or aged tequila after 2 or 12 months, respectively.
The people side of MRP (materiel requirements planning).
Lunn, T
1994-05-01
A montage of ideas and concepts has been successfully used to train and motivate people to use MRP II systems more effectively. This is important today because many companies are striving to achieve World Class Manufacturing status. Closed-loop Materiel Requirements Planning (MRP) systems are an integral part of the process of continuous improvement. Successfully using a formal management planning system, such as MRP II, is a fundamental stepping stone on the path toward World Class Excellence. Included in this article are techniques that companies use to reduce lead time, simplify bills of materiel, and improve schedule adherence. These and other steps all depend on the people who use the system. The focus is on how companies use the MRP tool more effectively.
Gulzar, Naeem; Klussmann, Martin
2014-06-20
The direct functionalization of C-H bonds is an important and long-standing goal in organic chemistry. Such transformations can be very powerful for streamlining synthesis, saving steps, time and material compared with conventional methods that require the introduction and removal of activating or directing groups. Therefore, the functionalization of C-H bonds is also attractive for green chemistry. Under oxidative conditions, two C-H bonds, or one C-H and one heteroatom-H bond, can be transformed into C-C and C-heteroatom bonds, respectively. Often these oxidative coupling reactions require synthetic oxidants, expensive catalysts or high temperatures. Here, we describe a two-step procedure to functionalize indole derivatives, more specifically tetrahydrocarbazoles, by C-H amination using only elemental oxygen as the oxidant. The reaction uses the principle of C-H functionalization via Intermediate PeroxideS (CHIPS). In the first step, a hydroperoxide is generated oxidatively using visible light, a photosensitizer and elemental oxygen. In the second step, the N-nucleophile, an aniline, is introduced by Brønsted-acid-catalyzed activation of the hydroperoxide leaving group. The products of the first and second steps often precipitate and can be conveniently filtered off. The synthesis of a biologically active compound is shown.
NASA Technical Reports Server (NTRS)
Brumfield, M. L. (Compiler)
1984-01-01
A plan to develop a space technology experiments platform (STEP) was examined. NASA Langley Research Center held a STEP Experiment Requirements Workshop on June 29 and 30 and July 1, 1983, at which experiment proposers were invited to present more detailed information on their experiment concepts and requirements. A feasibility and preliminary definition study was conducted, and the preliminary definition of STEP capabilities and experiment concepts and expected requirements for support services are presented. The preliminary definition of STEP capabilities based on detailed review of potential experiment requirements is investigated. Topics discussed include: Shuttle on-orbit dynamics; effects of the space environment on damping materials; erectable beam experiment; technology for development of very large solar array deployers; thermal energy management process experiment; photovoltaic concentrator pointing dynamics and plasma interactions; vibration isolation technology; flight tests of a synthetic aperture radar antenna with use of STEP.
Estimating psychiatric manpower requirements based on patients' needs.
Faulkner, L R; Goldman, C R
1997-05-01
To provide a better understanding of the complexities of estimating psychiatric manpower requirements, the authors describe several approaches to estimation and present a method based on patients' needs. A five-step method for psychiatric manpower estimation is used, with estimates of data pertinent to each step, to calculate the total psychiatric manpower requirements for the United States. The method is also used to estimate the hours of psychiatric service per patient per year that might be available under current psychiatric practice and under a managed care scenario. Depending on assumptions about data at each step in the method, the total psychiatric manpower requirements for the U.S. population range from 2,989 to 358,696 full-time-equivalent psychiatrists. The number of available hours of psychiatric service per patient per year is 14.1 hours under current psychiatric practice and 2.8 hours under the managed care scenario. The key to psychiatric manpower estimation lies in clarifying the assumptions that underlie the specific method used. Even small differences in assumptions mean large differences in estimates. Any credible manpower estimation process must include discussions and negotiations between psychiatrists, other clinicians, administrators, and patients and families to clarify the treatment needs of patients and the roles, responsibilities, and job description of psychiatrists.
van Velzen, Marit H N; Loeve, Arjo J; Niehof, Sjoerd P; Mik, Egbert G
2017-11-01
Photoplethysmography (PPG) is a widely available non-invasive optical technique to visualize pressure pulse waves (PWs). Pulse transit time (PTT) is a physiological parameter that is often derived from calculations on ECG and PPG signals and is based on tightly defined characteristics of the PW shape. PPG signals are sensitive to artefacts: coughing or movement of the subject can affect PW shapes so strongly that the PWs become unsuitable for further analysis. The aim of this study was to develop an algorithm that automatically and objectively eliminates unsuitable PWs. In order to develop a proper algorithm for eliminating unsuitable PWs, a literature study was conducted. Next, a '7Step PW-Filter' algorithm was developed that applies seven criteria to determine whether a PW matches the characteristics required to allow PTT calculation. To validate whether the '7Step PW-Filter' eliminates all unsuitable PWs and only those, its elimination results were compared to the outcome of manual elimination of unsuitable PWs. The '7Step PW-Filter' had a sensitivity of 96.3% and a specificity of 99.3%. The overall accuracy of the '7Step PW-Filter' for detection of unsuitable PWs was 99.3%. Compared to manual elimination, using the '7Step PW-Filter' reduces PW elimination times from hours to minutes and helps to increase the validity, reliability and reproducibility of PTT data.
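The seven criteria themselves are not listed in the abstract, so the Python sketch below is hypothetical: it illustrates only the cascade structure of such a filter, with three stand-in checks (duration, amplitude, upstroke position) in place of the actual seven.

```python
import numpy as np

def keep_pulse_wave(pw, fs, min_dur=0.4, max_dur=1.5, min_amp=0.1):
    """Return True if the pulse wave pw (sampled at fs Hz) passes all checks."""
    dur = len(pw) / fs                      # pulse duration in seconds
    amp = pw.max() - pw.min()               # peak-to-trough amplitude
    upstroke = np.argmax(np.diff(pw))       # index of the steepest rise
    checks = (
        min_dur <= dur <= max_dur,          # stand-in criterion 1: plausible duration
        amp >= min_amp,                     # stand-in criterion 2: sufficient amplitude
        upstroke < len(pw) // 2,            # stand-in criterion 3: upstroke in first half
    )
    return all(checks)

def filter_pws(pws, fs):
    """Keep only the PWs that pass every criterion, as the filter cascade does."""
    return [pw for pw in pws if keep_pulse_wave(pw, fs)]
```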
Qutub, M O; AlBaz, N; Hawken, P; Anoos, A
2011-01-01
To evaluate the usefulness of applying either the two-step algorithm (Ag-EIA and CCNA) or the three-step algorithm (all three assays) for better confirmation of toxigenic Clostridium difficile. The antigen enzyme immunoassay (Ag-EIA) can accurately identify the glutamate dehydrogenase antigen of toxigenic and nontoxigenic Clostridium difficile. Therefore, it is used in combination with a toxin-detecting assay [cell line culture neutralization assay (CCNA), or the enzyme immunoassay for toxins A and B (TOX-A/BII EIA)] to provide specific evidence of Clostridium difficile-associated diarrhoea. A total of 151 nonformed stool specimens were tested by Ag-EIA, TOX-A/BII EIA, and CCNA. All tests were performed according to the manufacturer's instructions, and the results of Ag-EIA and TOX-A/BII EIA were read using a spectrophotometer at a wavelength of 450 nm. A total of 61 (40.7%), 38 (25.3%), and 52 (34.7%) specimens tested positive with Ag-EIA, TOX-A/BII EIA, and CCNA, respectively. Overall, the sensitivity, specificity, negative predictive value, and positive predictive value for Ag-EIA were 94%, 87%, 96.6%, and 80.3%, respectively; for TOX-A/BII EIA the corresponding values were 73.1%, 100%, 87.5%, and 100%. With the two-step algorithm, all 61 Ag-EIA-positive cases required 2 days for confirmation. With the three-step algorithm, 37 (60.7%) cases were reported immediately, and the remaining 24 (39.3%) required further testing by CCNA. By applying the two-step algorithm, the workload and cost could be reduced by 28.2% compared with the three-step algorithm. The two-step algorithm is the most practical for accurately detecting toxigenic Clostridium difficile, but it is time-consuming.
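A minimal Python sketch of the two testing cascades as described above (the assay callables stand in for the laboratory tests; the reporting labels are illustrative):

```python
def two_step(specimen, ag_eia, ccna):
    # Step 1: GDH antigen screen; negatives are reported immediately.
    if not ag_eia(specimen):
        return "negative"
    # Step 2: all screen-positives wait for CCNA confirmation (about 2 days).
    return "toxigenic" if ccna(specimen) else "nontoxigenic"

def three_step(specimen, ag_eia, tox_eia, ccna):
    if not ag_eia(specimen):
        return "negative"
    if tox_eia(specimen):          # rapid toxin EIA lets ~60% be reported same day
        return "toxigenic"
    return "toxigenic" if ccna(specimen) else "nontoxigenic"
```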
Improving the Accuracy of the Chebyshev Rational Approximation Method Using Substeps
Isotalo, Aarno; Pusa, Maria
2016-05-01
The Chebyshev Rational Approximation Method (CRAM) for solving the decay and depletion of nuclides is shown to have a remarkable decrease in error when advancing the system with the same time step and microscopic reaction rates as the previous step. This property is exploited here to achieve high accuracy in any end-of-step solution by dividing a step into equidistant substeps. The computational cost of identical substeps can be reduced significantly below that of an equal number of regular steps, as the LU decompositions for the linear solves required in CRAM only need to be formed on the first substep. The improved accuracy provided by substeps is most relevant in decay calculations, where there have previously been concerns about the accuracy and generality of CRAM. Lastly, with substeps, CRAM can solve any decay or depletion problem with constant microscopic reaction rates to an extremely high accuracy for all nuclides with concentrations above an arbitrary limit.
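The economics of identical substeps can be sketched in Python as follows. For brevity the sketch reuses a single sparse LU factorization across backward-Euler substeps of a toy decay chain; in CRAM the same reuse applies to the factorizations of (A - theta_k I) for each pole theta_k, whose coefficients are not reproduced here.

```python
import numpy as np
from scipy.sparse import csc_matrix, identity
from scipy.sparse.linalg import splu

def advance_with_substeps(A, n0, dt, nsub):
    """Advance dn/dt = A n over dt using nsub identical substeps."""
    h = dt / nsub
    I = identity(A.shape[0], format="csc")
    lu = splu(csc_matrix(I - h * A))   # factor once, on the first substep
    n = n0.copy()
    for _ in range(nsub):              # every later substep reuses `lu`
        n = lu.solve(n)
    return n

# Toy three-nuclide decay chain A -> B -> C with decay constants 1.0 and 0.1.
lam1, lam2 = 1.0, 0.1
A = csc_matrix(np.array([[-lam1,  0.0, 0.0],
                         [ lam1, -lam2, 0.0],
                         [ 0.0,   lam2, 0.0]]))
print(advance_with_substeps(A, np.array([1.0, 0.0, 0.0]), dt=5.0, nsub=64))
```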
Interface Management for a NASA Flight Project Using Model-Based Systems Engineering (MBSE)
NASA Technical Reports Server (NTRS)
Vipavetz, Kevin; Shull, Thomas A.; Infeld, Samatha; Price, Jim
2016-01-01
The goal of interface management is to identify, define, control, and verify interfaces; ensure compatibility; and support an efficient system development that is on time and within budget while meeting stakeholder requirements. This paper will present a successful seven-step approach to interface management used in several NASA flight projects. The seven-step approach using Model-Based Systems Engineering will be illustrated by interface examples from the Materials International Space Station Experiment-X (MISSE-X) project. The MISSE-X was being developed as an International Space Station (ISS) external platform for space environmental studies, designed to advance the technology readiness of materials and devices critical for future space exploration. Emphasis will be given to best practices covering key areas such as interface definition, writing good interface requirements, utilizing interface working groups, developing and controlling interface documents, handling interface agreements, the use of shadow documents, the importance of interface requirement ownership, interface verification, and product transition.
Leblond, Veronique; Ouzegdouh, Maya; Button, Paul
2017-01-01
Introduction: The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from the use of multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. Materials and Methods: The total number of ECP procedures performed 2011-2015 was derived from department records. The time taken to complete a single ECP treatment using a multi-step technique and the fully integrated system at our department was assessed. Resource costs (2014 €) were obtained for materials and calculated for personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. Results: The number of ECP treatments per year increased from 225 (2012) to 727 (2015). The single multi-step procedure took 270 min compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP using the multi-step procedure was greater than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). Conclusions: For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX® allow for more patient treatments per year. PMID:28419561
Improving government regulations: a guidebook for conservation and renewable energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neese, R. J.; Scheer, R. M.; Marasco, A. L.
1981-04-01
An integrated view of the Office of Conservation and Solar Energy (CS) policy making encompassing both administrative procedures and policy analysis is presented. Chapter One very briefly sketches each step in the development of a significant regulation, noting important requirements and participants. Chapter Two expands upon the Overview, providing the details of the process, the rationale and source of requirements, concurrence procedures, and advice on the timing and synchronization of steps. Chapter Three explains the types of analysis documents that may be required for a program. Regulatory Analyses, Environmental Impact Statements, Urban and Community Impact Analyses, and Regulatory Flexibility Analyses are all discussed. Specific information to be included in the documents and the circumstances under which the documents need to be prepared are explained. Chapter Four is a step-by-step discussion of how to do good analysis. Use of models and data bases is discussed. Policy objectives, alternatives, and decision making are explained. In Chapter Five, guidance is provided on identifying the public that would most likely be interested in the regulation, involving its constituents in a dialogue with CS, evaluating and handling comments, and engineering the final response. Chapter Six provides direction on planning the evaluation, monitoring the regulation's success once it has been promulgated, and allowing for constructive support or criticism from outside DOE. (MCW)
SWIFT MODELLER: a Java based GUI for molecular modeling.
Mathur, Abhinav; Shankaracharya; Vidyarthi, Ambarish S
2011-10-01
MODELLER is command-line software that requires tedious formatting of inputs and the writing of Python scripts, which many users are not comfortable with. Visualization of its output is also cumbersome because of verbose files. This makes the whole software protocol complex and requires extensive study of the MODELLER manuals and tutorials. Here we describe SWIFT MODELLER, a GUI that automates the formatting, scripting and data extraction processes and presents them in an interactive way, making MODELLER much easier to use than before. The screens in SWIFT MODELLER are designed with homology modeling in mind, and their flow depicts its steps. It eliminates the formatting of inputs, scripting processes and analysis of verbose output files through automation, making pasting of the target sequence the only prerequisite. Jmol (a 3D structure visualization tool) has been integrated into the GUI, which opens and displays the protein data bank files created by the MODELLER software. All files required and created by the software are saved in a folder named after the work instance's date and time of execution. SWIFT MODELLER lowers the skill level required for the software through automation of many of the steps in the original software protocol, saving an enormous amount of time per instance and making MODELLER very easy to work with.
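For context, the kind of MODELLER script that SWIFT MODELLER generates and runs behind its GUI looks roughly like the following, based on standard MODELLER tutorial usage (the alignment file and structure/sequence codes here are hypothetical):

```python
from modeller import environ
from modeller.automodel import automodel

env = environ()
a = automodel(env,
              alnfile='target_template.ali',  # target/template alignment
              knowns='template_pdb',          # code of the template structure
              sequence='target_seq')          # code of the target sequence
a.starting_model = 1
a.ending_model = 5                            # build five candidate models
a.make()                                      # writes PDB files, viewable in Jmol
```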
2014-01-01
This article attempts to define terminology and to describe a process for writing adaptive, early-phase study protocols that are transparent, self-intuitive and uniform. It provides a step-by-step guide, giving templates from projects that received regulatory authorisation and were successfully performed in the UK. During adaptive studies, evolving data are used to modify the trial design and conduct within the protocol-defined remit. Adaptations within that remit are documented using non-substantial protocol amendments, which do not require regulatory or ethical review. This concept is efficient for gathering relevant data in exploratory early-phase studies, and it is ethical as well as time- and cost-effective. PMID:24980283
32 CFR 775.9 - Documentation and analysis.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of the implementing factors of the program that can be ascertained at the time of impact statement... any environmental studies, surveys and impact analyses required by other environmental review laws and... programmatic environmental impact statement discussing the impacts of a wide ranging or long term stepped...
32 CFR 775.9 - Documentation and analysis.
Code of Federal Regulations, 2012 CFR
2012-07-01
... of the implementing factors of the program that can be ascertained at the time of impact statement... any environmental studies, surveys and impact analyses required by other environmental review laws and... programmatic environmental impact statement discussing the impacts of a wide ranging or long term stepped...
32 CFR 775.9 - Documentation and analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... of the implementing factors of the program that can be ascertained at the time of impact statement... any environmental studies, surveys and impact analyses required by other environmental review laws and... programmatic environmental impact statement discussing the impacts of a wide ranging or long term stepped...
32 CFR 775.9 - Documentation and analysis.
Code of Federal Regulations, 2011 CFR
2011-07-01
... of the implementing factors of the program that can be ascertained at the time of impact statement... any environmental studies, surveys and impact analyses required by other environmental review laws and... programmatic environmental impact statement discussing the impacts of a wide ranging or long term stepped...
32 CFR 775.9 - Documentation and analysis.
Code of Federal Regulations, 2013 CFR
2013-07-01
... of the implementing factors of the program that can be ascertained at the time of impact statement... any environmental studies, surveys and impact analyses required by other environmental review laws and... programmatic environmental impact statement discussing the impacts of a wide ranging or long term stepped...
Manuel, Gerald; Lupták, Andrej; Corn, Robert M.
2017-01-01
A two-step templated, ribosomal biosynthesis/printing method for the fabrication of protein microarrays for surface plasmon resonance imaging (SPRI) measurements is demonstrated. In the first step, a sixteen component microarray of proteins is created in microwells by cell free on chip protein synthesis; each microwell contains both an in vitro transcription and translation (IVTT) solution and 350 femtomoles of a specific DNA template sequence that together are used to create approximately 40 picomoles of a specific hexahistidine-tagged protein. In the second step, the protein microwell array is used to contact print one or more protein microarrays onto nitrilotriacetic acid (NTA)-functionalized gold thin film SPRI chips for real-time SPRI surface bioaffinity adsorption measurements. Even though each microwell array element only contains approximately 40 picomoles of protein, the concentration is sufficiently high for the efficient bioaffinity adsorption and capture of the approximately 100 femtomoles of hexahistidine-tagged protein required to create each SPRI microarray element. As a first example, the protein biosynthesis process is verified with fluorescence imaging measurements of a microwell array containing His-tagged green fluorescent protein (GFP), yellow fluorescent protein (YFP) and mCherry (RFP), and then the fidelity of SPRI chips printed from this protein microwell array is ascertained by measuring the real-time adsorption of various antibodies specific to these three structurally related proteins. This greatly simplified two-step synthesis/printing fabrication methodology eliminates most of the handling, purification and processing steps normally required in the synthesis of multiple protein probes, and enables the rapid fabrication of SPRI protein microarrays from DNA templates for the study of protein-protein bioaffinity interactions. PMID:28706572
Changes in the dielectric properties of a plant stem produced by the application of voltage steps
NASA Astrophysics Data System (ADS)
Hart, F. X.
1983-03-01
Time Domain Dielectric Spectroscopy (TDDS) provides a useful method for monitoring the physiological state of a biological system which may be changing with time. A voltage step is applied to a sample and the Fourier Transform of the resulting current yields the variations of the conductance, capacitance and dielectric loss of the sample with frequency (dielectric spectrum). An important question is whether the application of the voltage step itself can produce changes which obscure those of interest. Long term monitoring of the dielectric properties of plant stems requires the use of needle electrodes with relatively large current densities and field strengths at the electrode-stem interface. Steady currents on the order of those used in TDDS have been observed to modify the distribution of plant growth hormones, to produce wounding at electrode sites, and to cause stem collapse. This paper presents the preliminary results of an investigation into the effects of the application of voltage steps on the observed dielectric spectrum of the stem of the plant Coleus.
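The post-processing step of TDDS can be sketched in Python as follows (illustrative only: the "measured" current is a synthetic series-RC step response, not plant-stem data). For an ideal step V0·u(t), V(w) = V0/(jw), so the admittance spectrum is Y(w) = jw·I(w)/V0, from which conductance and capacitance follow.

```python
import numpy as np

fs, T = 1e5, 0.1                        # sampling rate (Hz), record length (s)
t = np.arange(0, T, 1 / fs)
V0, R, C = 1.0, 1e4, 1e-7               # toy series-RC sample, tau = RC = 1 ms
i_t = (V0 / R) * np.exp(-t / (R * C))   # synthetic step-response current

Iw = np.fft.rfft(i_t) / fs              # approximate Fourier integral of i(t)
w = 2 * np.pi * np.fft.rfftfreq(len(t), 1 / fs)
Yw = 1j * w[1:] * Iw[1:] / V0           # Y(w) = I(w) / (V0 / (jw)); skip w = 0
G, Cap = Yw.real, Yw.imag / w[1:]       # conductance and capacitance spectra
```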
Coherent diffractive imaging of time-evolving samples with improved temporal resolution
Ulvestad, A.; Tripathi, A.; Hruszkewycz, S. O.; ...
2016-05-19
Bragg coherent x-ray diffractive imaging is a powerful technique for investigating dynamic nanoscale processes in nanoparticles immersed in reactive, realistic environments. Its temporal resolution is limited, however, by the oversampling requirements of three-dimensional phase retrieval. Here, we show that incorporating the entire measurement time series, which is typically a continuous physical process, into phase retrieval allows the oversampling requirement at each time step to be reduced, leading to a subsequent improvement in the temporal resolution by a factor of 2-20 times. The increased time resolution will allow imaging of faster dynamics and of radiation-dose-sensitive samples. Furthermore, this approach, which we call "chrono CDI," may find use in improving the time resolution in other imaging techniques.
Non-Invasive Transcranial Brain Therapy Guided by CT Scans: an In Vivo Monkey Study
NASA Astrophysics Data System (ADS)
Marquet, F.; Pernot, M.; Aubry, J.-F.; Montaldo, G.; Tanter, M.; Boch, A.-L.; Kujas, M.; Seilhean, D.; Fink, M.
2007-05-01
Brain therapy using focused ultrasound remains very limited due to the strong aberrations induced by the skull. A minimally invasive technique using time reversal was recently validated in vivo on 20 sheep, but such a technique requires a hydrophone at the focal point for the first step of the time-reversal procedure. A completely noninvasive therapy requires a reliable model of the acoustic properties of the skull in order to simulate this first step. 3-D simulations based on high-resolution CT images of a skull have been successfully performed with a finite-differences code developed in our laboratory. Thanks to the skull porosity, directly extracted from the CT images, we reconstructed acoustic speed, density and absorption maps and performed the computation. Computed wavefronts are in good agreement with experimental wavefronts acquired through the same part of the skull, and this technique was validated in vitro in the laboratory. A stereotactic frame has been designed and built in order to perform noninvasive transcranial focusing in vivo. Here we describe all the steps of our new protocol, from the CT scans to the therapy treatment, and the first in vivo results on a monkey will be presented.
OGUSHI, Sugako; SAITOU, Mitinori
2010-10-01
During oocyte growth in the ovary, the nucleolus is mainly responsible for ribosome biogenesis. However, in the fully-grown oocyte, all transcription ceases, including ribosomal RNA synthesis, and the nucleolus adopts a specific monotonous fibrillar morphology without chromatin. The function of this inactive nucleolus in oocytes and embryos is still unknown. We previously reported that the embryo lacking an inactive nucleolus failed to develop past the first few cleavages, indicating the requirement of a nucleolus for preimplantation development. Here, we reinjected the nucleolus into oocytes and zygotes without nucleoli at various time points to examine the timing of the nucleolus requirement during meiosis and early embryonic development. When we put the nucleolus back into oocytes lacking a nucleolus at the germinal vesicle (GV) stage and at second metaphase (MII), these oocytes were fertilized, formed pronuclei with nucleoli and developed to full term. When the nucleolus was reinjected at the pronucleus (PN) stage, most of the reconstructed zygotes cleaved and formed nuclei with nucleoli at the 2-cell stage, but the rate of blastocyst formation and the numbers of surviving pups were profoundly reduced. Moreover, the zygotes without nucleoli showed a disorder of higher chromatin organization not only in the female pronucleus but also, interestingly, in the male pronucleus. Thus, the critical time point when the nucleolus is required for progression of early embryonic development appears to be at the point of the early step of pronucleus organization.
Effects of Imperfect Dynamic Clamp: Computational and Experimental Results
Bettencourt, Jonathan C.; Lillis, Kyle P.; White, John A.
2008-01-01
In the dynamic clamp technique, a typically nonlinear feedback system delivers electrical current to an excitable cell that represents the actions of "virtual" ion channels (e.g., channels that are gated by local membrane potential or by electrical activity in neighboring biological or virtual neurons). Since the conception of this technique, there have been a number of different implementations of dynamic clamp systems, each with differing levels of flexibility and performance. Embedded hardware-based systems typically offer feedback that is very fast and precisely timed, but these systems are often expensive and sometimes inflexible. PC-based systems, on the other hand, allow the user to write software that defines an arbitrarily complex feedback system, but their accuracy can be deteriorated by imperfect real-time performance. Here we systematically evaluate the performance requirements for artificial dynamic clamp knock-in of transient sodium and delayed rectifier potassium conductances. Specifically, we examine the effects of controller time step duration, differential equation integration method, jitter (variability in time step), and latency (the time lag from reading inputs to updating outputs). Each of these control system flaws is artificially introduced in both simulated and real dynamic clamp experiments. We demonstrate that each of these errors affects dynamic clamp accuracy in a way that depends on the time constants and stiffness of the differential equations being solved. In simulations, time steps above 0.2 ms lead to catastrophic alteration of spike shape, but the frequency-vs.-current relationship is much more robust. Latency (the part of the time step that occurs between measuring membrane potential and injecting re-calculated membrane current) is a crucial factor as well. Experimental data are substantially more sensitive to inaccuracies than simulated data. PMID:18076999
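The flavor of these timing errors can be reproduced with a toy Python sketch (all parameters arbitrary): a virtual leak conductance is injected into a passive membrane, with the command current computed from a voltage sample that is one or more controller steps old. Larger steps and latencies visibly distort the trajectory relative to a near-continuous reference.

```python
import numpy as np

C, gL, EL = 100e-12, 5e-9, -70e-3        # membrane capacitance, leak, reversal
gV, EV = 10e-9, 0.0                      # virtual conductance and its reversal

def run(dt, latency_steps, T=0.05):
    n = int(T / dt)
    V = np.full(n, -70e-3)
    for k in range(1, n):
        V_meas = V[max(k - 1 - latency_steps, 0)]  # stale sample models latency
        I_cmd = -gV * (V_meas - EV)                # virtual-channel current
        V[k] = V[k-1] + dt * (-gL * (V[k-1] - EL) + I_cmd) / C
    return V

for dt, lat in ((1e-5, 0), (2e-4, 0), (2e-4, 2)):
    V = run(dt, lat)
    print(dt, lat, V[int(0.005 / dt)])   # trajectory at 5 ms drifts with dt, latency
```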
Criteria for Handling Qualities of Military Aircraft.
1982-06-01
loop precognitive manner. The pilot is able to apply discrete, step-like inputs which more or less exactly produce the desired aircraft response. Some...While closed loop operation depends upon the frequency domain response characteristics, successful precognitive control requires the time domain...represents the other extreme of the pilot task from the precognitive time response situation. Much work was done in attempting to predict pilot opinion from
Architecture for time or transform domain decoding of reed-solomon codes
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Deutsch, Leslie J. (Inventor); Shao, Howard M. (Inventor)
1989-01-01
Two pipeline (255,223) RS decoders, one a time-domain decoder and the other a transform-domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time-domain decoder and the transform-domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time-domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform-domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm in sequence for the final decoding steps prior to adding the received RS coded message to produce a decoded output message.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arumugam, Kamesh
Efficient parallel implementation of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. It requires exploiting the data-parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of these applications employ irregular algorithms which exhibit data-dependent control-flow and irregular memory accesses. Furthermore, these applications are often iterative with dependency between steps, thus making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application where the distribution of work and the memory access pattern at each time step are irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between different processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control-flow during a single step of the application independent of the other steps, with the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps. In this dissertation, we present novel machine-learning-based optimization techniques to address the parallel implementation challenges of such irregular applications on different HPC architectures. In particular, we use supervised learning to predict the computation structure and use it to address the control-flow and memory access irregularities in the parallel implementation of such applications on GPUs, Xeon Phis, and heterogeneous architectures composed of multi-core CPUs with GPUs or Xeon Phis. We use numerical simulation of charged particle beam dynamics as a motivating example throughout the dissertation to present our new approach, though it should be equally applicable to a wide range of irregular applications. The machine learning approach presented here uses predictive analytics and forecasting techniques to adaptively model and track the irregular memory access pattern at each time step of the simulation to anticipate the future memory access pattern. Access pattern forecasts can then be used to formulate optimization decisions during application execution which improve the performance of the application at a future time step based on the observations from earlier time steps. In heterogeneous architectures, forecasts can also be used to improve the memory performance and resource utilization of all the processing units to deliver a good aggregate performance.
We used these optimization techniques and anticipation strategy to design a cache-aware, memory-efficient parallel algorithm to address the irregularities in the parallel implementation of charged particle beam dynamics simulation on different HPC architectures. Experimental results using a diverse mix of HPC architectures show that our approach of using an anticipation strategy is effective in maximizing data reuse, ensuring workload balance, minimizing branch and memory divergence, and improving resource utilization.
Expedited vocational assessment under the sequential evaluation process. Final rules.
2012-07-25
We are revising our rules to give adjudicators the discretion to proceed to the fifth step of the sequential evaluation process for assessing disability when we have insufficient information about a claimant's past relevant work history to make the findings required for step 4. If an adjudicator finds at step 5 that a claimant may be unable to adjust to other work existing in the national economy, the adjudicator will return to the fourth step to develop the claimant's work history and make a finding about whether the claimant can perform his or her past relevant work. We expect that this new expedited process will not disadvantage any claimant or change the ultimate conclusion about whether a claimant is disabled, but it will promote administrative efficiency and help us make more timely disability determinations and decisions.
Preconditioned conjugate-gradient methods for low-speed flow calculations
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing
1993-01-01
An investigation is conducted into the viability of using a generalized Conjugate Gradient-like method as an iterative solver to obtain steady-state solutions of very low-speed fluid flow problems. Low-speed flow at Mach 0.1 over a backward-facing step is chosen as a representative test problem. The unsteady form of the two dimensional, compressible Navier-Stokes equations is integrated in time using discrete time-steps. The Navier-Stokes equations are cast in an implicit, upwind finite-volume, flux split formulation. The new iterative solver is used to solve a linear system of equations at each step of the time-integration. Preconditioning techniques are used with the new solver to enhance the stability and convergence rate of the solver and are found to be critical to the overall success of the solver. A study of various preconditioners reveals that a preconditioner based on the Lower-Upper Successive Symmetric Over-Relaxation iterative scheme is more efficient than a preconditioner based on Incomplete L-U factorizations of the iteration matrix. The performance of the new preconditioned solver is compared with a conventional Line Gauss-Seidel Relaxation (LGSR) solver. Overall speed-up factors of 28 (in terms of global time-steps required to converge to a steady-state solution) and 20 (in terms of total CPU time on one processor of a CRAY-YMP) are found in favor of the new preconditioned solver, when compared with the LGSR solver.
Preconditioned Conjugate Gradient methods for low speed flow calculations
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing
1993-01-01
An investigation is conducted into the viability of using a generalized Conjugate Gradient-like method as an iterative solver to obtain steady-state solutions of very low-speed fluid flow problems. Low-speed flow at Mach 0.1 over a backward-facing step is chosen as a representative test problem. The unsteady form of the two dimensional, compressible Navier-Stokes equations are integrated in time using discrete time-steps. The Navier-Stokes equations are cast in an implicit, upwind finite-volume, flux split formulation. The new iterative solver is used to solve a linear system of equations at each step of the time-integration. Preconditioning techniques are used with the new solver to enhance the stability and the convergence rate of the solver and are found to be critical to the overall success of the solver. A study of various preconditioners reveals that a preconditioner based on the lower-upper (L-U)-successive symmetric over-relaxation iterative scheme is more efficient than a preconditioner based on incomplete L-U factorizations of the iteration matrix. The performance of the new preconditioned solver is compared with a conventional line Gauss-Seidel relaxation (LGSR) solver. Overall speed-up factors of 28 (in terms of global time-steps required to converge to a steady-state solution) and 20 (in terms of total CPU time on one processor of a CRAY-YMP) are found in favor of the new preconditioned solver, when compared with the LGSR solver.
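A generic Python sketch of the solver structure discussed in these two reports (for simplicity on a symmetric positive-definite test problem with a symmetric Gauss-Seidel/SSOR-type preconditioner; the actual flow Jacobians are nonsymmetric and require a CG-like variant not shown here):

```python
import numpy as np

def ssor_apply(A, r, omega=1.0):
    """Approximate M^-1 r with one forward plus one backward Gauss-Seidel sweep."""
    D, L = np.diag(np.diag(A)), np.tril(A, -1)
    y = np.linalg.solve(D / omega + L, r)                    # forward sweep
    return np.linalg.solve(D / omega + L.T, D @ y) / omega   # backward sweep

def pcg(A, b, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradients for a SPD matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = ssor_apply(A, r)
    p = z.copy()
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = ssor_apply(A, r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# 1-D Laplacian test problem.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = pcg(A, np.ones(n))
print(np.linalg.norm(A @ x - np.ones(n)))   # residual should be tiny
```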
The exact fundamental solution for the Benes tracking problem
NASA Astrophysics Data System (ADS)
Balaji, Bhashyam
2009-05-01
The universal continuous-discrete tracking problem requires the solution of a Fokker-Planck-Kolmogorov forward equation (FPKfe) for an arbitrary initial condition. Using results from quantum mechanics, the exact fundamental solution for the FPKfe is derived for the state model of arbitrary dimension with Benes drift; it requires only the computation of elementary transcendental functions and standard linear algebra techniques, and no ordinary or partial differential equations need to be solved. The measurement process may be an arbitrary, discrete-time nonlinear stochastic process, and the time step size can be arbitrary. Numerical examples are included, demonstrating its utility in practical implementation.
A point implicit time integration technique for slow transient flow problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.
2015-05-01
We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, while the rest of the information related to the same or other variables is handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except that it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
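A scalar Python sketch of the idea (a toy stiff problem, not the paper's flow equations): the variable of interest is treated implicitly and the source term explicitly, so each step is a pointwise solve with no iteration, and the update stays stable for time steps far beyond the explicit limit.

```python
def point_implicit_step(u, k, s, dt):
    # (u_new - u)/dt = -k*u_new + s  =>  solve pointwise, no iteration needed
    return (u + dt * s) / (1.0 + dt * k)

k, s = 1e3, 1e3            # stiff relaxation toward u = s/k = 1
u, dt = 0.0, 1.0           # time step 1000x larger than the 1/k = 1e-3 scale
for n in range(5):
    u = point_implicit_step(u, k, s, dt)
    print(n, u)            # marches stably toward the slow-transient solution
```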
Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion
NASA Astrophysics Data System (ADS)
Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.
2014-04-01
The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
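As one concrete form of the step size control mentioned above, a standard proportional-integral (PI) controller can be sketched in a few lines of Python (the gains are conventional textbook values, not taken from the paper; `err` is the local error estimate normalized by the tolerance, so `err <= 1` means the step is acceptable):

```python
def new_step_size(dt, err, err_prev, kP=0.075, kI=0.175, safety=0.9):
    """PI step size controller: grow dt when error is small, shrink when large."""
    factor = safety * err ** (-kI) * (err_prev / err) ** kP
    return dt * min(5.0, max(0.2, factor))   # limit growth/shrink per step
```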
A discrete classical space-time could require 6 extra-dimensions
NASA Astrophysics Data System (ADS)
Guillemant, Philippe; Medale, Marc; Abid, Cherifa
2018-01-01
We consider a discrete space-time in which conservation laws are computed in such a way that the density of information is kept bounded. We use a 2D billiard as a toy model to compute the uncertainty propagation in ball positions after every shock and the corresponding loss of phase information. Our main result is the computation of a critical time step above which billiard calculations are no longer deterministic, meaning that a multiverse of distinct billiard histories begins to appear, caused by the lack of information. Then, we highlight unexpected properties of this critical time step and the subsequent exponential evolution of the number of histories with time, observing that after a certain duration all billiard states could become possible final states, independent of initial conditions. We conclude that if our space-time is really a discrete one, one would need to introduce extra dimensions in order to provide supplementary constraints that specify which history should be played.
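The order of magnitude of such a critical horizon can be illustrated with a toy Python estimate (all numbers invented, not the paper's): if each shock amplifies the angular uncertainty by roughly (1 + d/r), with d the free path and r the ball radius, an initial uncertainty of one double-precision ulp spans the whole table within a few tens of shocks.

```python
d, r, table = 0.5, 0.05, 1.0      # free path, ball radius, table size (m); toy values
gain = 1 + d / r                  # per-shock amplification of uncertainty
sigma = table * 2.0 ** -52        # initial uncertainty: one 64-bit ulp of the table
n = 0
while sigma < table:
    sigma *= gain
    n += 1
print("uncertainty spans the table after", n, "shocks")   # ~15 with these values
```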
One Step at a Time: SBM as an Incremental Process.
ERIC Educational Resources Information Center
Conrad, Mark
1995-01-01
Discusses incremental SBM budgeting and answers questions regarding resource equity, bookkeeping requirements, accountability, decision-making processes, and purchasing. Approaching site-based management as an incremental process recognizes that every school system engages in some level of site-based decisions. Implementation can be gradual and…
Lessons from Alternative Grading: Essential Qualities of Teacher Feedback
ERIC Educational Resources Information Center
Percell, Jay C.
2017-01-01
One critically important step in the instructional process is providing feedback to students, and yet, providing timely and thorough feedback is often lacking due attention. Reasons for this oversight could range from several factors including increased class sizes, vast content coverage requirements, extracurricular responsibilities, and the…
Divakar, K; Devi, G Nandhini; Gautam, Pennathur
2012-01-01
Protein identification in polyacrylamide gel electrophoresis (PAGE) requires post-electrophoretic steps like fixing, staining, and destaining of the gel, which are time-consuming and cumbersome. A new method for direct visualization of protein bands in PAGE has been developed using meso-tetrakis(4-sulfonatophenyl)porphyrin (TPPS) as a dye without the need for any post-electrophoretic steps; thus, separation and recovery of enzymes become much easier for further analysis. Activity staining was carried out to show that the biochemical activity of the enzymes was preserved after electrophoresis.
Intermediate sanctions for healthcare organizations.
Samuels, David G; Shoretz, Morris
2002-09-01
Intermediate sanctions legislation requires that tax-exempt providers take steps to ensure that their senior staff members are compensated at fair-market value. A first-time violation could subject an individual to an excise tax of 25 percent of the compensation amount deemed to be excess benefit. Failure to correct the violation could subject the individual to an excise tax of 200 percent of the excess benefit. Tax-exempt organizations may invoke a rebuttable presumption of reasonableness that compensation levels are appropriate. Tax-exempt providers should refer to IRS guidance regarding steps to ensuring compliance.
Evaluation of Reaction Cross Section Data Used for Thin Layer Activation Technique
NASA Astrophysics Data System (ADS)
Ditrói, F.; Takács, S.; Tárkányi, F.
2005-05-01
Thin layer activation (TLA) is a widely used nuclear method to investigate and control the loss of material during wear, corrosion and erosion processes. The method requires knowledge of the depth profiles of the radioisotopes produced by charged-particle bombardment. The depth distribution of the activity can be determined either by direct, very time-consuming step-by-step measurement or by calculation from reliable cross section, stopping power and sample composition data. These data were checked experimentally at several points by performing only a couple of measurements.
Evaluation of Reaction Cross Section Data Used for Thin Layer Activation Technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ditroi, F.; Takacs, S.; Tarkanyi, F.
2005-05-24
Thin layer activation (TLA) is a widely used nuclear method to investigate and control the loss of material during wear, corrosion and erosion processes. The method requires knowledge of the depth profiles of the radioisotopes produced by charged-particle bombardment. The depth distribution of the activity can be determined either by direct, very time-consuming step-by-step measurement or by calculation from reliable cross section, stopping power and sample composition data. These data were checked experimentally at several points by performing only a couple of measurements.
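The calculation route named in both records can be sketched in Python (the cross section and stopping power functions below are arbitrary stand-ins for evaluated data): slow the beam through the sample by integrating dE/dx, then evaluate the production cross section at the local energy to obtain the relative activity-depth profile.

```python
import numpy as np

def energy_vs_depth(E0, stopping_power, dx=1e-4, xmax=0.1):
    """March the beam energy E(x) into the sample in slabs of thickness dx (cm)."""
    xs, Es, E = [0.0], [E0], E0
    while E > 0 and xs[-1] < xmax:
        E -= stopping_power(E) * dx        # energy lost crossing one slab
        xs.append(xs[-1] + dx)
        Es.append(max(E, 0.0))
    return np.array(xs), np.array(Es)

def stopping(E):                           # MeV/cm; toy Bethe-like shape
    return 50.0 + 200.0 / (E + 1.0)

def sigma(E):                              # toy cross section with a threshold
    return np.where(E > 5.0, (E - 5.0) * np.exp(-((E - 10.0) ** 2) / 20.0), 0.0)

x, E = energy_vs_depth(E0=18.0, stopping_power=stopping)
activity = sigma(E)                        # relative activity vs depth
```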
Pedometer determined physical activity tracks in African American adults: The Jackson Heart Study
2012-01-01
Background: This study investigated the number of pedometer assessment occasions required to establish habitual physical activity in African American adults. Methods: African American adults (mean age 59.9 ± 0.60 years; 59% female) enrolled in the Diet and Physical Activity Substudy of the Jackson Heart Study wore Yamax pedometers during 3-day monitoring periods, assessed on two to three distinct occasions, each separated by approximately one month. The stability of pedometer-measured PA was described as differences in mean steps/day across time, as intraclass correlation coefficients (ICC) by sex, age, and body mass index (BMI) category, and as the percent of participants changing steps/day quartiles across time. Results: Valid data were obtained for 270 participants on either two or three different assessment occasions. Mean steps/day were not significantly different across assessment occasions (p values > 0.456). The overall ICCs for steps/day assessed on either two or three occasions were 0.57 and 0.76, respectively. In addition, 85% (two assessment occasions) and 76% (three assessment occasions) of all participants remained in the same steps/day quartile or changed one quartile over time. Conclusion: The current study shows that an overall mean steps/day estimate based on a 3-day monitoring period did not differ significantly over 4-6 months. The findings were robust to differences in sex, age, and BMI categories. A single 3-day monitoring period is sufficient to capture habitual physical activity in African American adults. PMID:22512833
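The stability statistic can be reproduced in outline with Python: a one-way random-effects intraclass correlation computed from steps/day data over repeated occasions (the data and variance components below are simulated for illustration; the estimator is the standard ANOVA form).

```python
import numpy as np

def icc_oneway(X):
    """ICC(1) for an n-participants x k-occasions matrix of steps/day."""
    n, k = X.shape
    grand = X.mean()
    ms_between = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((X - X.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(0)
habitual = rng.normal(7000, 2500, size=(270, 1))    # person-level mean steps/day
X = habitual + rng.normal(0, 1500, size=(270, 3))   # three 3-day assessments
print(round(icc_oneway(X), 2))
```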
Das, P; Pandey, P; Harishankar, A; Chandy, M; Bhattacharya, S; Chakrabarti, A
2017-01-01
Standardization of Aspergillus polymerase chain reaction (PCR) poses two technical challenges: (a) standardization of DNA extraction, and (b) optimization of the PCR against various medically important Aspergillus species. Many cases of aspergillosis go undiagnosed because of the relative insensitivity of conventional diagnostic methods such as microscopy, culture or antigen detection. The present study is an attempt to standardize a real-time PCR assay for rapid, sensitive and specific detection of Aspergillus DNA in EDTA whole blood. Three nucleic acid extraction protocols were compared, and a two-step real-time PCR assay was developed and validated following the recommendations of the European Aspergillus PCR Initiative in our setup. In the first PCR step (pan-Aspergillus PCR), the target was the 28S rDNA gene, whereas in the second, species-specific PCR step the targets were the beta-tubulin gene (for Aspergillus fumigatus, Aspergillus flavus and Aspergillus terreus) and the calmodulin gene (for Aspergillus niger). Species-specific identification of four medically important Aspergillus species, namely, A. fumigatus, A. flavus, A. niger and A. terreus, was achieved by this PCR. The specificity of the PCR was tested against 34 different DNA sources, including bacteria, viruses, yeasts, other Aspergillus species, other fungal species and human DNA, and there were no false-positive reactions. The analytical sensitivity of the PCR was found to be 10² CFU/ml. The present protocol of two-step real-time PCR assays for genus- and species-specific identification of commonly isolated species in whole blood for the diagnosis of invasive Aspergillus infections offers a rapid, sensitive and specific assay option and requires clinical validation at multiple centers.
Real-Time Aerodynamic Flow and Data Visualization in an Interactive Virtual Environment
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; Fleming, Gary A.
2005-01-01
Significant advances have been made in non-intrusive flow field diagnostics in the past decade. Camera-based techniques are now capable of determining physical quantities such as surface deformation, surface pressure and temperature, flow velocities, and molecular species concentration. In each case, extracting the pertinent information from the large volume of acquired data requires powerful and efficient data visualization tools. The additional requirement for real-time visualization is fueled by an increased emphasis on minimizing test time in expensive facilities. This paper addresses a capability titled LiveView3D, which is the first step in the development phase of an in-depth, real-time data visualization and analysis tool for use in aerospace testing facilities.
Step-off, vertical electromagnetic responses of a deep resistivity layer buried in marine sediments
NASA Astrophysics Data System (ADS)
Jang, Hangilro; Jang, Hannuree; Lee, Ki Ha; Kim, Hee Joon
2013-04-01
A frequency-domain, marine controlled-source electromagnetic (CSEM) method has been applied successfully in deep water areas for detecting hydrocarbon (HC) reservoirs. However, a typical technique with horizontal transmitters and receivers requires large source-receiver separations with respect to the target depth. A time-domain EM system with vertical transmitters and receivers can be an alternative because vertical electric fields are sensitive to deep resistive layers. In this paper, a time-domain modelling code, with multiple source and receiver dipoles that are finite in length, has been written to investigate transient EM problems. With the use of this code, we calculate step-off responses for one-dimensional HC reservoir models. Although the vertical electric field has much smaller amplitude of signal than the horizontal field, vertical currents resulting from a vertical transmitter are sensitive to resistive layers. The modelling shows a significant difference between step-off responses of HC- and water-filled reservoirs, and the contrast can be recognized at late times at relatively short offsets. A maximum contrast occurs at more than 4 s, being delayed with the depth of the HC layer.
NASA Astrophysics Data System (ADS)
Izat Rashed, Ghamgeen
2018-03-01
This paper presents a way of obtaining operating rules on time steps for the management of a large reservoir with an associated peak hydropower plant. The rules take the form of non-linear regression equations which link a decision variable (here, the water volume in the reservoir at the end of the time step) to several parameters influencing it. The paper considers the Dokan hydroelectric development, KR-Iraq, for which operation data are available. It is shown that both the monthly average inflows and the monthly power demands are random variables. A deterministic dynamic programming model that minimizes the total sum of squared differences between the demanded energy and the generated energy is run with a multitude of annual scenarios of inflows and monthly required energies. The operating rules achieved allow the efficient and safe management of the operation, provided that the forecast of the inflow and of the energy demand for the next time step is known sufficiently accurately.
Hemann, Brian A; Durning, Steven J; Kelly, William F; Dong, Ting; Pangaro, Louis N; Hemmer, Paul A
2015-04-01
To determine whether the Uniformed Services University (USU) system of workplace performance assessment for students in the internal medicine clerkship continues to be a sensitive predictor of subsequent poor performance during internship, when compared with assessments in other USU third-year clerkships. Utilizing Program Director survey results from 2007 through 2011 and U.S. Medical Licensing Examination (USMLE) Step 3 examination results as the outcomes of interest, we compared performance during internship for students who had less than passing performance in the internal medicine clerkship and required remediation, against students whose performance in the internal medicine clerkship was successful. We further analyzed internship ratings for students who received less than passing grades during the same time period on other third-year clerkships such as general surgery, pediatrics, obstetrics and gynecology, family medicine, and psychiatry to evaluate whether poor performance on other individual clerkships was associated with future poor performance at the internship level. Results for this recent cohort of graduates were compared with previously published findings. The overall survey response rate for this 5-year cohort was 81% (689/853). Students who received a less than passing grade in the internal medicine clerkship and required further remediation were 4.5 times more likely to be given poor ratings in the domain of medical expertise and 18.7 times more likely to demonstrate poor professionalism during internship. Further, students requiring internal medicine remediation were 8.5 times more likely to fail USMLE Step 3. No other individual clerkship showed any statistically significant associations with performance at the intern level. On the other hand, 40% of students who successfully remediated and did graduate were not identified during internship as having poor performance. Unsuccessful clinical performance requiring remediation in the third-year internal medicine clerkship at the Uniformed Services University of the Health Sciences continues to be strongly associated with poor performance at the internship level. No significant associations existed between any of the other clerkships and poor performance during internship or Step 3 failure. The strength of this association with the internal medicine clerkship is most likely because of an increased level of sensitivity in detecting poor performance. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.
Contact-aware simulations of particulate Stokesian suspensions
NASA Astrophysics Data System (ADS)
Lu, Libin; Rahimian, Abtin; Zorin, Denis
2017-10-01
We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to be able to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time steps or expensive implicit solves, as well as a large number of discretization points, are required to avoid non-physical contact and intersections between particles, which lead to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they prevent contact explicitly and thereby eliminate a catastrophic loss of accuracy. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of discretization points required for stability.
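To make the notion of an explicit contact constraint concrete, here is a hedged, standalone sketch: after an explicit step, overlapping rigid disks are projected apart along their lines of centers. The paper builds the constraints into the boundary-integral solve itself; this purely geometric Gauss-Seidel-style projection only illustrates the interference-free requirement.

```python
# Minimal sketch of enforcing an interference-free configuration after an
# explicit time step; not the paper's constrained solve.
import numpy as np

def project_contacts(x, radius, n_iter=10):
    """x: (N, 2) float array of disk centers; radius: common disk radius."""
    for _ in range(n_iter):                    # repeated pairwise sweeps
        for i in range(len(x)):
            for j in range(i + 1, len(x)):
                d = x[j] - x[i]
                dist = np.linalg.norm(d)
                overlap = 2.0 * radius - dist
                if overlap > 0.0 and dist > 0.0:
                    n_ij = d / dist            # unit line-of-centers vector
                    x[i] -= 0.5 * overlap * n_ij   # split the correction
                    x[j] += 0.5 * overlap * n_ij
    return x
```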
Experimental studies of systematic multiple-energy operation at HIMAC synchrotron
NASA Astrophysics Data System (ADS)
Mizushima, K.; Katagiri, K.; Iwata, Y.; Furukawa, T.; Fujimoto, T.; Sato, S.; Hara, Y.; Shirai, T.; Noda, K.
2014-07-01
Multiple-energy synchrotron operation providing carbon-ion beams with various energies has been used for scanned particle therapy at NIRS. An energy range from 430 to 56 MeV/u and about 200 steps within this range are required to vary the Bragg peak position for effective treatment. The treatment also demands the slow extraction of beam with highly reliable properties, such as spill, position and size, for all energies. We propose an approach to generating multiple-energy operation meeting these requirements within a short time. In this approach, the device settings at most energy steps are determined without manual adjustments by using systematic parameter tuning depending on the beam energy. Experimental verification was carried out at the HIMAC synchrotron, and its results proved that this approach can greatly reduce the adjustment period.
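A plausible illustration of "systematic parameter tuning depending on the beam energy" is scaling magnet settings with the beam's magnetic rigidity, which is a standard accelerator relation; the 200-step ladder from 430 to 56 MeV/u follows the abstract, while the linear scaling of device settings with rigidity is an assumption for illustration.

```python
# Magnetic rigidity of fully stripped carbon-12 across an energy ladder.
import numpy as np

M_U = 931.494      # MeV per atomic mass unit
A, Q = 12, 6       # mass number and charge state of C-12 6+

def rigidity(t_per_u):
    """Magnetic rigidity (T*m) for kinetic energy t_per_u in MeV/u."""
    pc_per_u = np.sqrt(t_per_u**2 + 2.0 * t_per_u * M_U)  # relativistic pc
    return A * pc_per_u / (299.792458 * Q)

energies = np.linspace(430.0, 56.0, 200)   # MeV/u, ~200 steps as in the paper
scale = rigidity(energies) / rigidity(430.0)   # per-step setting factors
```

At 430 MeV/u this gives a rigidity of about 6.6 T·m, which sets the scale for the magnet settings at the top of the ladder.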
Accelerated step-temperature aging of Al(x)Ga(1-x)As heterojunction laser diodes
NASA Technical Reports Server (NTRS)
Kressel, H.; Ettenberg, M.; Ladany, I.
1978-01-01
Double-heterojunction Al(0.3)Ga(0.7)As/Al(0.08)Ga(0.92)As lasers (oxide-striped and Al2O3 facet-coated) were subjected to step-temperature aging from 60 to 100 C. The change in threshold current and spontaneous output was monitored at 22 C. The average time required for a 20% pulsed threshold current increase ranges from about 500 h when operating at 100 C to about 5000 h in a 70 C ambient. At 22 C, the extrapolated time is about 1 million h. The time needed for a 50% spontaneous emission reduction is of the same order of magnitude. The resulting activation energies are approximately 0.95 eV for laser degradation and approximately 1.1 eV for the spontaneous output decrease.
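The reported lifetimes follow the usual Arrhenius model t ∝ exp(Ea/kT), so a two-point estimate can be checked directly. The sketch below uses the 500 h (100 C) and 5000 h (70 C) figures from the abstract; it yields roughly 0.85 eV and an extrapolated room-temperature life of order 10^5 to 10^6 h, consistent in magnitude with the reported ~0.95 eV fit over all temperatures.

```python
# Two-point Arrhenius fit and room-temperature extrapolation.
import numpy as np

K_B = 8.617e-5                       # Boltzmann constant, eV/K

def activation_energy(t1, T1, t2, T2):
    """t ~ exp(Ea / kT): solve for Ea from two (time, temperature) points."""
    return K_B * np.log(t2 / t1) / (1.0 / T2 - 1.0 / T1)

Ea = activation_energy(500.0, 373.15, 5000.0, 343.15)   # 100 C and 70 C data
t_room = 500.0 * np.exp(Ea / K_B * (1.0 / 295.15 - 1.0 / 373.15))
print(f"Ea ~ {Ea:.2f} eV, extrapolated life at 22 C ~ {t_room:.2e} h")
```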
[Implementation of a rational standard of hygiene for preparation of operating rooms].
Bauer, M; Scheithauer, S; Moerer, O; Pütz, H; Sliwa, B; Schmidt, C E; Russo, S G; Waeschle, R M
2015-10-01
The assurance of high standards of care is a major requirement in German hospitals, while cost reduction and efficient use of resources are mandatory. These requirements are particularly evident in the high-risk and cost-intensive operating theatre field with its multiple process steps. The cleaning of operating rooms (OR) between surgical procedures is of major relevance for patient safety and requires time and human resources. The hygiene procedure plan for OR cleaning between operations at the university hospital in Göttingen was revised and optimized according to the plan-do-check-act principle, because specifications of responsibilities and resource use were not clearly defined and because of prolonged process times and high staff workload. The current status was evaluated in 2012 as part of the first step "plan". The subsequent step "do" included an expert symposium with external consultants, interdisciplinary consensus conferences with an update of the former hygiene procedure plan, and the implementation process. All staff members involved were integrated into this change management process. The penetration rate of the training and information measures, as well as acceptance of and compliance with the new hygiene procedure plan, were reviewed within the step "check". The rates of positive swabs and air samples as well as of postoperative wound infections were analyzed for quality control, and no evidence of reduced effectiveness of the new hygiene plan was found. After the successful implementation of these measures, the next improvement cycle ("act") was performed in 2014, which led to a simplification of the hygiene plan by reducing the number of defined cleaning and disinfection programs for preparation of the OR. The reorganization measures described led to comprehensive adherence to the hygiene procedure plan through distinct specifications of responsibilities, of the course of action and of the use of resources. Furthermore, a simplification of the plan, rational staff assignment and reduced process times were accomplished. Finally, potential conflicts arising from insufficient evidence-based knowledge among personnel were reduced. This project description can be used by other hospitals as a guideline for similar changes in management processes.
Method for network analyzation and apparatus
Bracht, Roger B.; Pasquale, Regina V.
2001-01-01
A portable network analyzer and method having multiple channel transmit and receive capability for real-time monitoring of processes which maintains phase integrity, requires low power, is adapted to provide full vector analysis, provides output frequencies of up to 62.5 MHz and provides fine sensitivity frequency resolution. The present invention includes a multi-channel means for transmitting and a multi-channel means for receiving, both in electrical communication with a software means for controlling. The means for controlling is programmed to provide a signal to a system under investigation which steps consecutively over a range of predetermined frequencies. The resulting received signal from the system provides complete time domain response information by executing a frequency transform of the magnitude and phase information acquired at each frequency step.
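The last step of the claimed method, turning a stepped-frequency magnitude/phase record into a time-domain response, is essentially an inverse Fourier transform. A minimal sketch, assuming a uniformly spaced sweep starting at DC and a hypothetical single-pole device under test:

```python
# Stepped-frequency sweep to time-domain response via inverse real FFT.
import numpy as np

n_steps, f_max = 256, 62.5e6                 # sweep up to 62.5 MHz
f = np.linspace(0.0, f_max, n_steps)         # consecutive frequency steps
H = 1.0 / (1.0 + 1j * f / 5e6)               # hypothetical device response
h_t = np.fft.irfft(H)                        # impulse (time-domain) response
t = np.arange(h_t.size) / (2.0 * f_max)      # time axis implied by the sweep
```

irfft treats the measured complex response as the one-sided spectrum of a real signal, which is why magnitude and phase must both be preserved at each frequency step.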
Autonomous reinforcement learning with experience replay.
Wawrzyński, Paweł; Tanwani, Ajay Kumar
2013-05-01
This paper considers the issues of efficiency and autonomy that are required to make reinforcement learning suitable for real-life control tasks. A real-time reinforcement learning algorithm is presented that repeatedly adjusts the control policy with the use of previously collected samples, and autonomously estimates the appropriate step-sizes for the learning updates. The algorithm is based on the actor-critic with experience replay whose step-sizes are determined on-line by an enhanced fixed point algorithm for on-line neural network training. An experimental study with simulated octopus arm and half-cheetah demonstrates the feasibility of the proposed algorithm to solve difficult learning control problems in an autonomous way within reasonably short time. Copyright © 2012 Elsevier Ltd. All rights reserved.
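As a rough illustration of the experience-replay ingredient only (not the paper's algorithm, which uses neural networks and on-line step-size estimation), a replay buffer feeding repeated TD(0) updates of a linear critic might look like:

```python
# Experience replay sketch: transitions are stored once and reused many times.
import random
import numpy as np

buffer, capacity = [], 10_000

def store(s, a, r, s_next):
    """Append a transition, discarding the oldest once at capacity."""
    if len(buffer) >= capacity:
        buffer.pop(0)
    buffer.append((s, a, r, s_next))

def replay_critic_updates(w, n_updates=100, gamma=0.99, alpha=1e-2):
    """TD(0) updates of a linear value function V(s) = w @ s, where states
    are NumPy feature vectors; alpha stands in for the adaptive step-size."""
    for _ in range(n_updates):
        s, a, r, s_next = random.choice(buffer)
        td_error = r + gamma * w @ s_next - w @ s
        w = w + alpha * td_error * s           # semi-gradient TD step
    return w
```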
Strotman, Lindsay N; Lin, Guangyun; Berry, Scott M; Johnson, Eric A; Beebe, David J
2012-09-07
Extraction and purification of DNA is a prerequisite to detection and analytical techniques. While DNA sample preparation methods have improved over the last few decades, current methods are still time consuming and labor intensive. Here we demonstrate a technology termed IFAST (Immiscible Filtration Assisted by Surface Tension), which relies on immiscible phase filtration to reduce the time and effort required to purify DNA. IFAST replaces the multiple wash and centrifugation steps required by traditional DNA sample preparation methods with a single step. To operate, DNA from lysed cells is bound to paramagnetic particles (PMPs) and drawn through an immiscible fluid phase barrier (i.e. oil) by an external handheld magnet. Purified DNA is then eluted from the PMPs. Here, detection of Clostridium botulinum type A (BoNT/A) in food matrices (milk, orange juice), a bioterrorism concern, was used as a model system to establish IFAST's utility in detection assays. Data validated that the DNA purified by IFAST was functional as a qPCR template to amplify the bont/A gene. The sensitivity limit of IFAST was comparable to the commercially available Invitrogen ChargeSwitch® method. Notably, pathogen detection via IFAST required only 8.5 μL of sample and was accomplished in one-fifth the time. The simplicity, rapidity and portability of IFAST offer significant advantages when compared to existing DNA sample preparation methods.
Kapur, Ajay; Adair, Nilda; O'Brien, Mildred; Naparstek, Nikoleta; Cangelosi, Thomas; Zuvic, Petrina; Joseph, Sherin; Meier, Jason; Bloom, Beatrice; Potters, Louis
Modern external beam radiation therapy treatment delivery processes potentially increase the number of tasks to be performed by therapists, and thus opportunities for errors, yet the need to treat a large number of patients daily requires a balanced allocation of time per treatment slot. The goal of this work was to streamline the underlying workflow in such time-interval constrained processes to enhance both execution efficiency and active safety surveillance using a Kaizen approach. A Kaizen project was initiated by mapping the workflow within each treatment slot for 3 Varian TrueBeam linear accelerators. More than 90 steps were identified, and average execution times for each were measured. The time-consuming steps were stratified into a 2 × 2 matrix arranged by potential workflow improvement versus the level of corrective effort required. A work plan was created to launch initiatives with high potential for workflow improvement but modest effort to implement. Time spent on safety surveillance and average durations of treatment slots were used to assess corresponding workflow improvements. Three initiatives were implemented to mitigate unnecessary therapist motion, overprocessing of data, and wait time for data transfer defects, respectively. A fourth initiative was implemented to make the division of labor by treating therapists, as well as peer review, more explicit. The average duration of treatment slots was reduced by 6.7% in the 9 months following implementation of the initiatives (P = .001). A reduction of 21% in duration of treatment slots was observed on one of the machines (P < .001). Time spent on safety reviews remained the same (20% of the allocated interval), but the peer review component increased. The Kaizen approach has the potential to improve operational efficiency and safety with quick turnaround in radiation therapy practice by addressing non-value-adding steps characteristic of individual department workflows. Higher effort opportunities are identified to guide continual downstream quality improvements. Copyright © 2017 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Gas Chromatographic Determination of Fatty Acid Compositions.
ERIC Educational Resources Information Center
Heinzen, Horacio; And Others
1985-01-01
Describes an experiment that: (1) has a derivatization step using readily available reagents; (2) requires limited manipulative skills, centering attention on methodology; (3) can be completed within the time constraints of a normal laboratory period; and (4) investigates materials that are easy to acquire and are of great technical/biological…
Uncovering the Dark Energy of Aging.
Melov, Simon
2016-10-26
A medically relevant understanding of aging requires an appreciation for how time degrades specific, healthy features of individual organisms over the course of their lives. Zach Pincus and colleagues make a key step in this direction, using C. elegans as a model system. Copyright © 2016 Elsevier Inc. All rights reserved.
Some Steps To Improve Your Spoken English.
ERIC Educational Resources Information Center
Mellor, Jeff
The guide contains suggestions for University of Tennessee, Knoxville foreign teaching assistants to improve their command of spoken English. The first section offers four speech improvement suggestions requiring little or no extra time, including speaking English as much and as often as possible, imitating native speakers, asking for the correct…
29 CFR 1915.74 - Access to vessels.
Code of Federal Regulations, 2010 CFR
2010-07-01
... strength, provided with side boards, well maintained and properly secured. (2) Unless employees can step... employees to board or leave any vessel, except a barge or river towboat, until the following requirements... is not being used. (4) The gangway shall be kept properly trimmed at all times. (5) When a fixed...
ERIC Educational Resources Information Center
Jago, Carol
2012-01-01
Great literature gives students a window to other places and times, but it often requires students to step outside their comfort zones and take on challenges they wouldn't usually attempt. Unfortunately, research shows that many schools are not assigning literature that pushes students beyond their current reading level. Jago encourages teachers…
Sea Stories: A Collaborative Tool for Articulating Tactical Knowledge.
ERIC Educational Resources Information Center
Radtke, Paul H.; Frey, Paul R.
Having subject matter experts (SMEs) identify the skills and knowledge to be taught is among the more difficult and time-consuming steps in the training development process. A procedure has been developed for identifying specific tactical decision-making knowledge requirements and translating SME knowledge into appropriate multimedia…
Barker, Daniel; D'Este, Catherine; Campbell, Michael J; McElduff, Patrick
2017-03-09
Stepped wedge cluster randomised trials frequently involve a relatively small number of clusters. The most common frameworks used to analyse data from these types of trials are generalised estimating equations and generalised linear mixed models. A topic of much research into these methods has been their application to cluster randomised trial data and, in particular, the number of clusters required to make reasonable inferences about the intervention effect. However, for stepped wedge trials, which have been claimed by many researchers to have a statistical power advantage over the parallel cluster randomised trial, the minimum number of clusters required has not been investigated. We conducted a simulation study where we considered the most commonly used methods suggested in the literature to analyse cross-sectional stepped wedge cluster randomised trial data. We compared the per cent bias, the type I error rate and power of these methods in a stepped wedge trial setting with a binary outcome, where there are few clusters available and when the appropriate adjustment for a time trend is made, which by design may be confounding the intervention effect. We found that the generalised linear mixed modelling approach is the most consistent when few clusters are available. We also found that none of the common analysis methods for stepped wedge trials were both unbiased and maintained a 5% type I error rate when there were only three clusters. Of the commonly used analysis approaches, we recommend the generalised linear mixed model for small stepped wedge trials with binary outcomes. We also suggest that in a stepped wedge design with three steps, at least two clusters be randomised at each step, to ensure that the intervention effect estimator maintains the nominal 5% significance level and is also reasonably unbiased.
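For readers who want to reproduce the flavour of such a simulation study, the sketch below generates one cross-sectional stepped wedge dataset with a binary outcome, cluster random effects and a period (time) effect; all effect sizes and variance components are arbitrary illustrative choices, and the model-fitting (GEE/GLMM) step is left to a statistics package.

```python
# One simulated cross-sectional stepped wedge trial with a binary outcome.
import numpy as np

rng = np.random.default_rng(1)

def simulate_sw(n_clusters=3, n_steps=3, n_per=20,
                beta_trt=0.5, beta_time=0.1, sigma_u=0.3):
    """Assumes n_clusters is divisible by n_steps (crossover schedule)."""
    cross = rng.permutation(np.repeat(np.arange(1, n_steps + 1),
                                      n_clusters // n_steps))
    u = rng.normal(0.0, sigma_u, n_clusters)    # cluster random effects
    rows = []
    for c in range(n_clusters):
        for t in range(n_steps + 1):            # baseline period + steps
            treated = int(t >= cross[c])
            eta = -1.0 + beta_time * t + beta_trt * treated + u[c]
            p = 1.0 / (1.0 + np.exp(-eta))      # logistic link
            y = rng.binomial(1, p, n_per)       # binary outcomes
            rows.append((c, t, treated, y.mean()))
    return rows
```

With the default arguments this reproduces the paper's hardest case, three clusters crossing over one at a time, where type I error control proved most fragile.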
The effect of a novel minimally invasive strategy for infected necrotizing pancreatitis.
Tong, Zhihui; Shen, Xiao; Ke, Lu; Li, Gang; Zhou, Jing; Pan, Yiyuan; Li, Baiqiang; Yang, Dongliang; Li, Weiqin; Li, Jieshou
2017-11-01
The step-up approach, consisting of multiple minimally invasive techniques, has gradually become the mainstream for managing infected pancreatic necrosis (IPN). In the present study, we aimed to compare the safety and efficacy of a novel four-step approach and the conventional approach in managing IPN. According to the treatment strategy, consecutive patients fulfilling the inclusion criteria were put into two time intervals to conduct a before-and-after comparison: the conventional group (2010-2011) and the novel four-step group (2012-2013). The conventional approach was essentially open necrosectomy for any patient who failed percutaneous drainage of infected necrosis. The novel approach consisted of four different steps in sequence: percutaneous drainage, negative pressure irrigation, endoscopic necrosectomy and open necrosectomy. The primary endpoint was major complications (new-onset organ failure, sepsis or local complications, etc.). Secondary endpoints included mortality during hospitalization, need for emergency surgery, duration of organ failure and sepsis, etc. Of the 229 recruited patients, 92 were treated with the conventional approach and the remaining 137 were managed with the novel four-step approach. New-onset major complications occurred in 72 patients (78.3%) in the conventional group and 75 patients (54.7%) in the four-step group (p < 0.001). For other important endpoints, although there was no statistical difference in mortality between the two groups (p = 0.403), significantly fewer patients in the four-step group required emergency surgery compared with the conventional group [14.6% (20/137) vs. 45.6% (42/92), p < 0.001]. In addition, stratified analysis revealed that the four-step group presented a significantly lower incidence of new-onset organ failure and other major complications in patients with the most severe type of AP. Compared with the conventional approach, the novel four-step approach significantly reduced the rate of new-onset major complications and the requirement for emergency operations in treating IPN, especially in those with the most severe type of acute pancreatitis.
Espie, Colin A
2009-12-01
There is a large body of evidence that Cognitive Behavioral Therapy for insomnia (CBT) is an effective treatment for persistent insomnia. However, despite two decades of research it is still not readily available, and there are no immediate signs that this situation is about to change. This paper proposes that a service delivery model, based on "stepped care" principles, would enable this relatively scarce healthcare expertise to be applied in a cost-effective way to achieve optimal development of CBT services and best clinical care. The research evidence on methods of delivering CBT, and the associated clinical leadership roles, is reviewed. On this basis, self-administered CBT is posited as the "entry level" treatment for stepped care, with manualized, small group, CBT delivered by nurses, at the next level. Overall, a hierarchy comprising five levels of CBT stepped care is suggested. Allocation to a particular level should reflect assessed need, which in turn represents increased resource requirement in terms of time, cost and expertise. Stepped care models must also be capable of "referring" people upstream where there is an incomplete therapeutic response to a lower level intervention. Ultimately, the challenge is for CBT to be delivered competently and effectively in diversified formats on a whole population basis. That is, it needs to become "scalable". This will require a robust approach to clinical governance.
10 Steps to Building an Architecture for Space Surveillance Projects
NASA Astrophysics Data System (ADS)
Gyorko, E.; Barnhart, E.; Gans, H.
Space surveillance is an increasingly complex task, requiring the coordination of a multitude of organizations and systems, while dealing with competing capabilities, proprietary processes, differing standards, and compliance issues. In order to fully understand space surveillance operations, analysts and engineers need to analyze and break down their operations and systems using what are essentially enterprise architecture processes and techniques. These techniques can be daunting to the first-time architect. This paper provides a summary of simplified steps to analyze a space surveillance system at the enterprise level in order to determine capabilities, services, and systems. These steps form the core of an initial Model-Based Architecting process. For new systems, a well defined, or well architected, space surveillance enterprise leads to an easier transition from model-based architecture to model-based design and provides a greater likelihood that requirements are fulfilled the first time. Both new and existing systems benefit from being easier to manage, and can be sustained more easily using portfolio management techniques, based around capabilities documented in the model repository. The resulting enterprise model helps an architect avoid 1) costly, faulty portfolio decisions; 2) wasteful technology refresh efforts; 3) upgrade and transition nightmares; and 4) non-compliance with DoDAF directives. The Model-Based Architecting steps are based on a process that Harris Corporation has developed from practical experience architecting space surveillance systems and ground systems. Examples are drawn from current work on documenting space situational awareness enterprises. The process is centered on DoDAF 2 and its corresponding meta-model so that terminology is standardized and communicable across any disciplines that know DoDAF architecting, including acquisition, engineering and sustainment disciplines. Each step provides a guideline for the type of data to collect, and also the appropriate views to generate. The steps include 1) determining the context of the enterprise, including active elements and high level capabilities or goals; 2) determining the desired effects of the capabilities and mapping capabilities against the project plan; 3) determining operational performers and their inter-relationships; 4) building information and data dictionaries; 5) defining resources associated with capabilities; 6) determining the operational behavior necessary to achieve each capability; 7) analyzing existing or planned implementations to determine systems, services and software; 8) cross-referencing system behavior to operational behavioral; 9) documenting system threads and functional implementations; and 10) creating any required textual documentation from the model.
DiMango, Emily; Rogers, Linda; Reibman, Joan; Gerald, Lynn B; Brown, Mark; Sugar, Elizabeth A; Henderson, Robert; Holbrook, Janet T
2018-06-04
Although national and international guidelines recommend reduction of asthma controller therapy, or "step-down" therapy, in patients with well controlled asthma, it is expected that some individuals may experience worsening of asthma symptoms or asthma exacerbations during step-down. Characteristics associated with subsequent exacerbations during step-down therapy have not been well defined. The effect of environmental tobacco smoke (ETS) exposure on the risk of treatment failure during asthma step-down therapy has not been reported. To identify baseline characteristics associated with treatment failure and asthma exacerbation during maintenance and guideline-based step-down therapy. The present analysis uses data collected from a completed randomized controlled trial of optimal step-down therapy in patients with well controlled asthma taking moderate-dose combination inhaled corticosteroids/long-acting beta agonists. Participants were 12 years or older with physician-diagnosed asthma and were enrolled between December 2011 and May 2014. An emergency room visit in the previous year was predictive of subsequent treatment failure (HR 1.53, 95% CI 1.06-2.21). For every 10% increase in baseline forced expiratory volume in one second percent predicted, the hazard of treatment failure was reduced by 14% (95% CI 0.74-0.99). There was no difference in risk of treatment failure between adults and children, nor did duration of asthma increase the risk of treatment failure. Age of asthma onset was not associated with increased risk of treatment failure. An unexpected emergency room visit in the previous year was the only risk factor significantly associated with subsequent asthma exacerbations requiring systemic corticosteroids. Time to treatment failure or exacerbation did not differ in participants with and without self-report of ETS exposure. The present findings can help clinicians identify patients more likely to develop treatment failures and exacerbations and who may therefore require closer monitoring during asthma step-down treatment. Individuals with reduced pulmonary function, a history of exacerbations, and early onset disease, even if otherwise well controlled, may require closer observation to prevent treatment failures and asthma exacerbations. Clinical trial registered with ClinicalTrials.gov (NCT01437995).
NASA Astrophysics Data System (ADS)
Suresh Babu, Arun Vishnu; Ramesh, Kiran; Gopalarathnam, Ashok
2017-11-01
In previous research, Ramesh et al. (JFM, 2014) developed a low-order discrete vortex method for modeling unsteady airfoil flows with intermittent leading-edge vortex (LEV) shedding, governed by a leading-edge suction parameter (LESP). LEV shedding is initiated using discrete vortices (DVs) whenever the LESP exceeds a critical value. In subsequent research, the method was successfully employed by Ramesh et al. (JFS, 2015) to predict aeroelastic limit-cycle oscillations in airfoil flows dominated by intermittent LEV shedding. When applied to flows that require a large number of time steps, the computational cost increases with the growing vortex count. In this research, we apply an amalgamation strategy to actively control the DV count and thereby reduce simulation time. A pair each of LEVs and trailing-edge vortices (TEVs) is amalgamated at every time step. The ideal pairs for amalgamation are identified based on the requirement that the flowfield in the vicinity of the airfoil is least affected (Spalart, 1988). Instead of placing the amalgamated vortex at the centroid, we place it at an optimal location that conserves the leading-edge suction and the airfoil bound circulation. Results of the initial study are promising.
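A hedged sketch of the amalgamation idea: pick the vortex pair whose merger least disturbs the far field, using a Spalart-style |Γi Γj|·dij penalty, and replace it with one vortex of the combined strength. For brevity the merged vortex is placed at the circulation-weighted centroid, whereas the paper optimizes its position to conserve leading-edge suction and bound circulation; same-sign pairs are assumed so the combined strength is nonzero.

```python
# One amalgamation pass over a set of point vortices in the complex plane.
import numpy as np

def amalgamate_once(z, gamma):
    """z: complex positions; gamma: circulations. Merge the cheapest pair."""
    n = len(z)
    best, pair = np.inf, (0, 1)
    for i in range(n):
        for j in range(i + 1, n):
            cost = abs(gamma[i] * gamma[j]) * abs(z[i] - z[j])  # merge penalty
            if cost < best:
                best, pair = cost, (i, j)
    i, j = pair
    g = gamma[i] + gamma[j]                    # assumes a same-sign pair
    z_new = (gamma[i] * z[i] + gamma[j] * z[j]) / g   # weighted centroid
    keep = [k for k in range(n) if k not in pair]
    return np.append(z[keep], z_new), np.append(gamma[keep], g)
```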
ASIS v1.0: an adaptive solver for the simulation of atmospheric chemistry
NASA Astrophysics Data System (ADS)
Cariolle, Daniel; Moinat, Philippe; Teyssèdre, Hubert; Giraud, Luc; Josse, Béatrice; Lefèvre, Franck
2017-04-01
This article reports on the development and tests of the adaptive semi-implicit scheme (ASIS) solver for the simulation of atmospheric chemistry. To solve the ordinary differential equation systems associated with the time evolution of the species concentrations, ASIS adopts a one-step linearized implicit scheme with specific treatments of the Jacobian of the chemical fluxes. It conserves mass and has a time-stepping module to control the accuracy of the numerical solution. In idealized box-model simulations, ASIS gives results similar to the higher-order implicit schemes derived from the Rosenbrock and Gear methods and requires less computation and run time at the moderate precision required for atmospheric applications. When implemented in the MOCAGE chemical transport model and the Laboratoire de Météorologie Dynamique Mars general circulation model, the ASIS solver performs well and reveals weaknesses and limitations of the original semi-implicit solvers used by these two models. ASIS can be easily adapted to various chemical schemes, and further developments are foreseen to increase its computational efficiency and to include the computation of the concentrations of species in the aqueous phase in addition to gas-phase chemistry.
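The core of a one-step linearized implicit scheme can be stated in a few lines: each step solves a single linear system built from the chemical Jacobian instead of iterating a full Newton solve. The toy two-species stiff system below is an assumption for illustration; ASIS itself adds mass conservation and adaptive step-size control.

```python
# One linearized implicit (semi-implicit) Euler step: (I - h J) dy = h f(y).
import numpy as np

def semi_implicit_step(y, h, f, jac):
    J = jac(y)
    dy = np.linalg.solve(np.eye(len(y)) - h * J, h * f(y))
    return y + dy

# Toy stiff system: fast interconversion A <-> B plus slow loss of B.
f = lambda y: np.array([-1e3 * y[0] + 1e2 * y[1],
                        1e3 * y[0] - 1e2 * y[1] - 1.0 * y[1]])
jac = lambda y: np.array([[-1e3, 1e2], [1e3, -1e2 - 1.0]])

y = np.array([1.0, 0.0])
for _ in range(100):
    y = semi_implicit_step(y, 0.1, f, jac)   # stable even though h >> 1/1000
```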
Design of a laser rangefinder for Martian terrain measurements. M.S. Thesis
NASA Technical Reports Server (NTRS)
Palumbo, D. L.
1973-01-01
Three methods for using a laser for rangefinding are discussed: optical focusing, the phase difference method, and timed pulse. For application on a Mars Rover, the timed pulse method proves to be the best choice in view of the requirements set down. This is made possible by pulse expansion techniques described in detail. Initial steps taken toward building the rangefinder are given, followed by a conclusion.
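The timed-pulse principle reduces to range = c·Δt/2, so timing resolution translates directly into range resolution. A trivial sketch (the 1 ns figure is an assumption for illustration, not a number from the thesis):

```python
# Time-of-flight ranging: half the round-trip distance at the speed of light.
C = 299_792_458.0                        # speed of light, m/s

def range_from_flight_time(dt_seconds):
    return C * dt_seconds / 2.0          # out-and-back path halved

print(range_from_flight_time(100e-9))    # 100 ns round trip -> ~15 m
print(C * 1e-9 / 2.0)                    # 1 ns timing error -> ~0.15 m range error
```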
Classifying seismic waveforms from scratch: a case study in the alpine environment
NASA Astrophysics Data System (ADS)
Hammer, C.; Ohrnberger, M.; Fäh, D.
2013-01-01
Nowadays, an increasing amount of seismic data is collected by daily observatory routines. The basic step in successfully analyzing those data is the correct detection of various event types. However, visual scanning is a time-consuming task. Applying standard detection techniques such as the STA/LTA trigger still requires manual control for classification. Here, we present a useful alternative. The incoming data stream is scanned automatically for events of interest. A stochastic classifier, called a hidden Markov model, is learned for each class of interest, enabling the recognition of highly variable waveforms. In contrast to other automatic techniques such as neural networks or support vector machines, the algorithm allows classification to start from scratch as soon as interesting events are identified. Neither the tedious process of collecting training samples nor a time-consuming configuration of the classifier is required. An approach originally introduced for the volcanic task force action allows classifier properties to be learned from a single waveform example and some hours of background recording. Besides reducing the required workload, this also enables the detection of very rare events. Especially the latter feature provides a milestone for the use of seismic devices in alpine warning systems. Furthermore, the system offers the opportunity to flag new signal classes that have not been defined before. We demonstrate the application of the classification system using a data set from the Swiss Seismological Survey, achieving very high recognition rates. In detail, we document all refinements of the classifier, providing a step-by-step guide for the fast set-up of a well-working classification system.
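A sketch of the overall scheme, one generative model per class with maximum-likelihood assignment, using the hmmlearn package as a plausible stand-in for the authors' own HMM implementation; the log-energy feature below is a crude assumption, not the feature set used in the paper:

```python
# Per-class Gaussian HMMs; a waveform goes to the highest-likelihood class.
import numpy as np
from hmmlearn import hmm

def features(waveform, win=64):
    """Log-energy in short windows as a minimal 1-D feature sequence."""
    n = len(waveform) // win
    e = np.array([np.log(1e-12 + np.sum(waveform[i*win:(i+1)*win]**2))
                  for i in range(n)])
    return e.reshape(-1, 1)

def train_models(examples_by_class, n_states=4):
    models = {}
    for label, waves in examples_by_class.items():
        X = np.vstack([features(w) for w in waves])
        lengths = [len(features(w)) for w in waves]
        m = hmm.GaussianHMM(n_components=n_states, n_iter=25)
        m.fit(X, lengths)                    # one model per event class
        models[label] = m
    return models

def classify(models, waveform):
    X = features(waveform)
    return max(models, key=lambda k: models[k].score(X))  # max log-likelihood
```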
Endoclip Magnetic Resonance Imaging Screening: A Local Practice Review.
Accorsi, Fabio; Lalonde, Alain; Leswick, David A
2018-05-01
Not all endoscopically placed clips (endoclips) are magnetic resonance imaging (MRI) compatible. At many institutions, endoclip screening is part of the pre-MRI screening process. Our objective is to determine the contribution of each step of this endoclip screening protocol in determining a patient's endoclip status at our institution. A retrospective review of patients' endoscopic histories on general MRI screening forms for patients scanned during a 40-day period was performed to assess the percentage of patients that require endoclip screening at our institution. Following this, a prospective evaluation of 614 patients' endoclip screening determined the percentage of these patients ultimately exposed to each step in the protocol (exposure), and the percentage of patients whose endoclip status was determined with reasonable certainty by each step (determination). Exposure and determination values for each step were calculated as follows (exposure, determination): verbal interview (100%, 86%), review of past available imaging (14%, 36%), review of endoscopy report (9%, 57%), and new abdominal radiograph (4%, 96%), or CT (0.2%, 100%) for evaluation of potential endoclips. Only 1 patient did not receive MRI because of screening (in situ gastrointestinal endoclip identified). Verbal interview is invaluable to endoclip screening, clearing 86% of patients with minimal monetary and time investment. Conversely, the limited availability of endoscopy reports and relevant past imaging somewhat restricts the determination rates of these. New imaging (radiograph or computed tomography) is required <5% of the time, and although costly and associated with patient irradiation, has excellent determination rates (above 96%) when needed. Copyright © 2017 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.
Ultrasound: a subexploited tool for sample preparation in metabolomics.
Luque de Castro, M D; Delgado-Povedano, M M
2014-01-02
Metabolomics, one of the most recently emerged "omics", has taken advantage of ultrasound (US) to improve sample preparation (SP) steps. The metabolomics-US-assisted SP binomial has developed unevenly, depending on the area (vegetal or animal) and the SP step. Thus, vegetal metabolomics and US-assisted leaching have received the greatest attention (encompassing subdisciplines such as metallomics, xenometabolomics and, mainly, lipidomics), but liquid-liquid extraction and (bio)chemical reactions in metabolomics have also taken advantage of US energy. Clinical and animal samples have likewise benefited from US-assisted SP in metabolomics studies, but to a lesser extent. The main effects of US have been a shortening of the time required for the given step and/or an increase in its efficiency or availability for automation; nevertheless, attention paid to potential degradation caused by US has been scant or nil. Achievements and weak points of the metabolomics-US-assisted SP binomial are discussed, and possible solutions to the present shortcomings are proposed. Copyright © 2013 Elsevier B.V. All rights reserved.
Numerical solution methods for viscoelastic orthotropic materials
NASA Technical Reports Server (NTRS)
Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.
1988-01-01
Numerical solution methods for viscoelastic orthotropic materials, specifically fiber-reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM), which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time, and computer memory storage. The Volterra integral allowed the implementation of higher-order solution techniques but had difficulties solving singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
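The practical appeal of a Prony series is the standard recursive stress update it permits: each exponential term carries one internal variable, so no history integral needs to be stored. A minimal linear, one-dimensional sketch with hypothetical coefficients follows; the paper's NDEM additionally makes the series nonlinear.

```python
# Recursive viscoelastic stress update for a Prony-series relaxation modulus.
import numpy as np

E_inf, E_i = 1.0, np.array([0.5, 0.3])   # long-term and Prony moduli (assumed)
tau_i = np.array([1.0, 10.0])            # relaxation times (assumed)

def step_stress(h, d_eps, eps_new, dt):
    """Advance internal variables h by one step; exact if strain varies
    linearly over the step. Returns (h, stress at the new time)."""
    a = np.exp(-dt / tau_i)
    h = a * h + E_i * (tau_i / dt) * (1.0 - a) * d_eps
    return h, E_inf * eps_new + h.sum()

h, eps = np.zeros(2), 0.01                # apply a strain step of 0.01
h, s = step_stress(h, eps, eps, 1e-6)     # near-instantaneous loading
for _ in range(100):                      # hold strain: stress relaxes
    h, s = step_stress(h, 0.0, eps, 0.1)  # s decays toward E_inf * eps
```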
Burns, K E; Haysom, H E; Higgins, A M; Waters, N; Tahiri, R; Rushford, K; Dunstan, T; Saxby, K; Kaplan, Z; Chunilal, S; McQuilten, Z K; Wood, E M
2018-04-10
To describe the methodology to estimate the total cost of administration of a single unit of red blood cells (RBC) in adults with beta thalassaemia major in an Australian specialist haemoglobinopathy centre. Beta thalassaemia major is a genetic disorder of haemoglobin associated with multiple end-organ complications and typically requiring lifelong RBC transfusion therapy. New therapeutic agents are becoming available based on advances in understanding of the disorder and its consequences. Assessment of the true total cost of transfusion, incorporating both product and activity costs, is required in order to evaluate the benefits and costs of these new therapies. We describe the bottom-up, time-driven, activity-based costing methodology used to develop process maps to provide a step-by-step outline of the entire transfusion pathway. Detailed flowcharts for each process are described. Direct observations and timing of the process maps document all activities, resources, staff, equipment and consumables in detail. The analysis will include costs associated with performing these processes, including resources and consumables. Sensitivity analyses will be performed to determine the impact of different staffing levels, timings and probabilities associated with performing different tasks. Thirty-one process maps have been developed, with over 600 individual activities requiring multiple timings. These will be used for future detailed cost analyses. Detailed process maps using bottom-up, time-driven, activity-based costing for determining the cost of RBC transfusion in thalassaemia major have been developed. These could be adapted for wider use to understand and compare the costs and complexities of transfusion in other settings. © 2018 British Blood Transfusion Society.
Gaze shifts during dual-tasking stair descent.
Miyasike-daSilva, Veronica; McIlroy, William E
2016-11-01
To investigate the role of vision in stair locomotion, young adults descended a seven-step staircase during unrestricted walking (CONTROL), and while performing a concurrent visual reaction time (RT) task displayed on a monitor. The monitor was located at either 3.5 m (HIGH) or 0.5 m (LOW) above ground level at the end of the stairway, which either restricted (HIGH) or facilitated (LOW) the view of the stairs in the lower field of view as participants walked downstairs. Downward gaze shifts (recorded with an eye tracker) and gait speed were significantly reduced in HIGH and LOW compared with CONTROL. Gaze and locomotor behaviour were not different between HIGH and LOW. However, inter-individual variability increased in HIGH, in which participants combined different response characteristics including slower walking, handrail use, downward gaze, and/or increasing RTs. The fastest RTs occurred in the midsteps (non-transition steps). While gait and visual task performance were not statistically different prior to the top and bottom transition steps, gaze behaviour and RT were more variable prior to transition steps in HIGH. This study demonstrated that, in the presence of a visual task, people do not look down as often when walking downstairs and require minimum adjustments provided that the view of the stairs is available in the lower field of view. The middle of the stairs seems to require less from executive function, whereas visual attention appears a requirement to detect the last transition via gaze shifts or peripheral vision.
AGU journals should ask authors to publish results
NASA Astrophysics Data System (ADS)
Agnew, Duncan Carr
2012-07-01
The title of this Forum is meant to sound paradoxical: Isn't the publication of results what AGU journals are for? I argue that in some ways they aren't, and suggest how to fix this. Explaining this apparent paradox requires that we look at the structure of a published paper and of the research project that produced it. Any project involves many steps; for those using data to examine some problem the first step (step A) is for researchers to collect the relevant raw data. Next (step B), they analyze these data to learn about some phenomenon of interest; this analysis is very often purely computational. Then (step C), the researchers (we can now call them "the authors") arrange the results of this analysis in a way that shows the reader the evidence for the conclusions of the paper. Sometimes these results appear as a table, but more often they are shown pictorially, as, for example, a plot of a time series, a map, a correlation plot, or a cross-section. Finally (step D), the authors state the conclusions to be drawn from the results presented.
Ötes, Ozan; Flato, Hendrik; Winderl, Johannes; Hubbuch, Jürgen; Capito, Florian
2017-10-10
The protein A capture step is the main cost driver in downstream processing, with high attrition costs especially when protein A resin is not used to the end of its lifetime. Here we describe a feasibility study transferring a batch downstream process to a hybrid process, aimed at replacing batch protein A capture chromatography with a continuous capture step while leaving the polishing steps unchanged, to minimize the required process adaptations compared with a batch process. 35 g of antibody were purified using the hybrid approach, resulting in comparable product quality and step yield compared with the batch process. Productivity for the protein A step could be increased up to 420%, reducing buffer amounts by 30-40% and showing robustness for at least 48 h of continuous run time. Additionally, to enable its potential application in a clinical trial manufacturing environment, cost of goods was compared for the protein A step between the hybrid and batch processes, showing a threefold cost reduction, depending on processed volumes and batch cycles. Copyright © 2017 Elsevier B.V. All rights reserved.
Saito, Maiko; Kurosawa, Yae; Okuyama, Tsuneo
2012-02-01
Antibody purification using proteins A and G has been a standard method for research and industrial processes. The conventional method, however, includes a three-step process, including buffer exchange, before chromatography. In addition, proteins A and G require low-pH elution, which causes antibody aggregation and inactivates the antibody's immunoreactivity. This report proposes a two-step method using hydroxyapatite chromatography and membrane filtration, without proteins A and G. This novel method shortens the running time to one-third that of the conventional method for each cycle. Using our two-step method, 90.2% of the purified monoclonal antibodies were recovered in the elution fraction, the purity achieved was >90%, and most of the antigen-specific activity was retained. This report suggests that the two-step method using hydroxyapatite chromatography and membrane filtration should be considered as an alternative to purification using proteins A and G.
Effect of film-based versus filmless operation on the productivity of CT technologists.
Reiner, B I; Siegel, E L; Hooper, F J; Glasser, D
1998-05-01
To determine the relative time required for a technologist to perform a computed tomographic (CT) examination in a "filmless" versus a film-based environment. Time-motion studies were performed in 204 consecutive CT examinations. Images from 96 examinations were electronically transferred to a picture archiving and communication system (PACS) without being printed to film, and 108 were printed to film. The time required to obtain and electronically transfer the images or print the images to film and make the current and previous studies available to the radiologists for interpretation was recorded. The time required for a technologist to complete a CT examination was reduced by 45% with direct image transfer to the PACS compared with the time required in the film-based mode. This reduction was due to the elimination of a number of steps in the filming process, such as the printing at multiple window or level settings. The use of a PACS can result in the elimination of multiple time-intensive tasks for the CT technologist, resulting in a marked reduction in examination time. This reduction can result in increased productivity, and, hence greater cost-effectiveness with filmless operation.
Umari, A.M.; Gorelick, S.M.
1986-01-01
It is possible to obtain analytic solutions to the groundwater flow and solute transport equations if the space variables are discretized but time is left continuous. From these solutions, hydraulic head and concentration fields for any future time can be obtained without "marching" through intermediate time steps. This analytical approach involves matrix exponentiation and is referred to as the Matrix Exponential Time Advancement (META) method. Two algorithms are presented for the META method, one for symmetric and the other for non-symmetric exponent matrices. A numerical accuracy indicator, referred to as the matrix condition number, was defined and used to determine the maximum number of significant figures that may be lost in the META method computations. The relative computational and storage requirements of the META method with respect to the time-marching method increase with the number of nodes in the discretized problem. The potentially greater accuracy of the META method, and the associated greater reliability through use of the matrix condition number, have to be weighed against these increased relative computational and storage requirements as the number of nodes becomes large. For a particular number of nodes, the META method may be computationally more efficient than the time-marching method, depending on the size of the time steps used in the latter. A numerical example illustrates application of the META method to a sample groundwater-flow problem. (Author's abstract)
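The META idea is easy to demonstrate on a small linear system dh/dt = Ah: the state at any future time follows from a single matrix exponential, with no intermediate steps. The 1-D diffusion operator below is an illustrative stand-in for a discretized groundwater-flow problem, and the condition-number check is only in the spirit of the paper's accuracy indicator.

```python
# Jump directly to a future time via one matrix exponential.
import numpy as np
from scipy.linalg import expm

n, dx, D = 50, 1.0, 1.0
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * D / dx**2    # Dirichlet 1-D Laplacian

h0 = np.zeros(n)
h0[n // 2] = 1.0                                   # initial head anomaly
P = expm(A * 25.0)                                 # one exponential, t = 25
h_future = P @ h0                                  # no intermediate time steps

print(np.log10(np.linalg.cond(P)))                 # rough count of digits at
                                                   # risk in the computation
```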
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.
2005-08-01
The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the "once-through" time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the "once-through" time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase "inner loop" iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.
Special Year on Numerical Linear Algebra
1988-09-01
[Report fragments: acknowledgements to Mary Drake (UT) and Mitzy Denson (ORNL); a note that one method advances the regular cells with a time step carrying no stability restriction; and seminar logistics for the Special Year on Numerical Linear Algebra series (entrance to Y-12 requires a pass; contact Mitzy Denson, (615) 574-3125).]
Li, Zhongjie; Xia, Yingfeng; Chen, Kai; Zhao, Hanchi; Liu, Yang
Prosthodontic oral rehabilitation procedures are time consuming and require efforts to maintain the confirmed maxillomandibular relationship. Several occlusal registrations and impressions are needed, and cross-mounting is performed to transfer the diagnostic wax-up to master working casts. The introduction of a digital workflow protocol reduces steps in the required process, and occlusal registrations with less deformation are used. The outcome is a maintained maxillomandibular position that is accurately and conveniently transferred.
Wolfs, Vincent; Villazon, Mauricio Florencio; Willems, Patrick
2013-01-01
Applications such as real-time control, uncertainty analysis and optimization require an extensive number of model iterations. Full hydrodynamic sewer models are not sufficient for these applications due to the excessive computation time. Simplifications are consequently required. A lumped conceptual modelling approach results in a much faster calculation. The process of identifying and calibrating the conceptual model structure can, however, be time-consuming. Moreover, many conceptual models lack accuracy, or do not account for backwater effects. To overcome these problems, a modelling methodology was developed which is suited for semi-automatic calibration. The methodology is tested for the sewer system of the city of Geel in the Grote Nete river basin in Belgium, using both synthetic design storm events and long time series of rainfall input. A MATLAB/Simulink® tool was developed to guide the modeller through the step-wise model construction, significantly reducing the time required for the conceptual modelling process.
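As a flavour of what a lumped conceptual building block looks like (in Python rather than the authors' MATLAB/Simulink environment), here is a single linear reservoir routing rainfall to outflow; the one-parameter structure and the exact exponential update are illustrative assumptions, not the paper's model structure.

```python
# Linear reservoir: dS/dt = I - Q with Q = S / k.
import numpy as np

def linear_reservoir(inflow, k=5.0, dt=1.0):
    """inflow: array of rates per step; returns the outflow series."""
    a = np.exp(-dt / k)
    q, s = np.zeros(len(inflow)), 0.0
    for t, i_t in enumerate(inflow):
        s = a * s + k * (1.0 - a) * i_t   # exact if inflow is constant over dt
        q[t] = s / k
    return q
```

Chaining a few such reservoirs and calibrating their k values against full hydrodynamic simulations is the general spirit of conceptual sewer modelling, at a tiny fraction of the run time.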
Saving Material with Systematic Process Designs
NASA Astrophysics Data System (ADS)
Kerausch, M.
2011-08-01
Global competition is forcing the stamping industry to further increase quality, shorten time-to-market and reduce total cost. Continuous balancing between these classical time-cost-quality targets throughout the product development cycle is required to ensure future economic success. In today's industrial practice, die layout standards are typically assumed to implicitly ensure the balancing of company-specific time-cost-quality targets. Although die layout standards are a very successful approach, they have two methodical disadvantages. First, the capabilities for tool design have to be continuously adapted to technological innovations, e.g. to take advantage of the full forming capability of new materials. Secondly, the great variety of die design aspects has to be reduced to a generic rule or guideline, e.g. binder shape, draw-in conditions or the use of drawbeads. Therefore, it is important not to overlook cost or quality opportunities when applying die design standards. This paper describes a systematic workflow with a focus on minimizing material consumption. The starting point of the investigation is a full process plan for a typical structural part, in which all requirements defined by a predefined set of die design standards with industrial relevance are fulfilled. In a first step, binder and addendum geometry is systematically checked for material-saving potential. In a second step, blank shape and draw-in are adjusted to meet thinning, wrinkling and springback targets for a minimum blank solution. Finally, the identified die layout is validated with respect to production robustness against splits, wrinkles and springback. For all three steps the applied methodology is based on finite element simulation combined with stochastic variation of input variables. With the proposed workflow, a well-balanced (time-cost-quality) production process assuring minimal material consumption can be achieved.
Design, Development and Testing of Web Services for Multi-Sensor Snow Cover Mapping
NASA Astrophysics Data System (ADS)
Kadlec, Jiri
This dissertation presents the design, development and validation of new data integration methods for mapping the extent of snow cover based on open access ground station measurements, remote sensing images, volunteer observer snow reports, and cross-country ski track recordings from location-enabled mobile devices. The first step of the data integration procedure includes data discovery, data retrieval, and data quality control of snow observations at ground stations. The WaterML R package developed in this work enables hydrologists to retrieve and analyze data from multiple organizations that are listed in the Consortium of Universities for the Advancement of Hydrologic Sciences Inc (CUAHSI) Water Data Center catalog directly within the R statistical software environment. Use of the WaterML R package is demonstrated by running an energy balance snowpack model in R with data inputs from CUAHSI, and by automating uploads of real-time sensor observations to CUAHSI HydroServer. The second step of the procedure requires efficient access to multi-temporal remote sensing snow images. The Snow Inspector web application developed in this research enables users to retrieve a time series of fractional snow cover from the Moderate Resolution Imaging Spectroradiometer (MODIS) for any point on Earth. The time series retrieval method is based on automated data extraction from tile images provided by a Web Map Tile Service (WMTS). The average required time for retrieving 100 days of data using this technique is 5.4 seconds, which is significantly faster than other methods that require the download of large satellite image files. The presented data extraction technique and space-time visualization user interface can be used as a model for working with other multi-temporal hydrologic or climate data WMTS services. The third, final step of the data integration procedure is generating continuous daily snow cover maps. A custom inverse distance weighting method has been developed to combine volunteer snow reports, cross-country ski track reports and station measurements to fill cloud gaps in the MODIS snow cover product. The method is demonstrated by producing a continuous daily time step snow presence probability map dataset for the Czech Republic region. The ability of the presented methodology to reconstruct MODIS snow cover under cloud is validated by simulating cloud cover datasets and comparing estimated snow cover to actual MODIS snow cover. The percent correctly classified indicator showed accuracy between 80 and 90% using this method. Using crowdsourcing data (volunteer snow reports and ski tracks) improves the map accuracy by 0.7-1.2%. The output snow probability map data sets are published online using web applications and web services. Keywords: crowdsourcing, image analysis, interpolation, MODIS, R statistical software, snow cover, snowpack probability, Tethys platform, time series, WaterML, web services, winter sports.
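The gap-filling step rests on inverse distance weighting; a generic IDW estimator of snow presence probability is sketched below. The power parameter and the 0/1 encoding of snow reports are assumptions; the dissertation's weighting scheme is custom.

```python
# Generic inverse distance weighting of point snow reports onto query cells.
import numpy as np

def idw(xy_obs, values, xy_query, power=2.0, eps=1e-9):
    """xy_obs: (M, 2) report locations; values: 1 = snow, 0 = no snow;
    xy_query: (K, 2) cloud-obscured cells. Returns probability estimates."""
    d = np.linalg.norm(xy_obs[None, :, :] - xy_query[:, None, :], axis=2)
    w = 1.0 / (d + eps) ** power              # nearer reports weigh more
    return (w * values).sum(axis=1) / w.sum(axis=1)
```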
NASA Astrophysics Data System (ADS)
Cerchiari, G.; Croccolo, F.; Cardinaux, F.; Scheffold, F.
2012-10-01
We present an implementation of the analysis of dynamic near field scattering (NFS) data using a graphics processing unit. We introduce an optimized data management scheme, thereby limiting the number of operations required. Overall, we reduce the processing time from hours to minutes for typical experimental conditions. The processing time, previously the limiting step in such experiments, is now comparable to the data acquisition time. Our approach is applicable to various dynamic NFS methods, including shadowgraph, Schlieren and differential dynamic microscopy.
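For orientation, the core computation in one of the NFS methods mentioned, differential dynamic microscopy, is an average of power spectra of image differences over time. The numpy sketch below is a plain CPU rendering of that kernel under assumed array shapes, not the authors' GPU implementation.

```python
import numpy as np

def ddm_structure_function(frames, lag):
    """Average |FFT(I(t+lag) - I(t))|^2 over all available frame pairs.

    frames: 3D array (time, ny, nx) of near-field images.
    Returns a 2D map over spatial frequency for the given time lag.
    """
    diffs = frames[lag:] - frames[:-lag]                 # image differences
    spectra = np.abs(np.fft.fft2(diffs, axes=(-2, -1))) ** 2
    return spectra.mean(axis=0)                          # time average

rng = np.random.default_rng(0)
frames = rng.standard_normal((50, 64, 64))               # synthetic stand-in data
d_map = ddm_structure_function(frames, lag=5)
print(d_map.shape)
```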
Bancroft, Matthew J.; Day, Brian L.
2016-01-01
Postural activity normally precedes the lift of a foot from the ground when taking a step, but its function is unclear. The throw-and-catch hypothesis of human gait proposes that the pre-step activity is organized to generate momentum for the body to fall ballistically along a specific trajectory during the step. The trajectory is appropriate for the stepping foot to land at its intended location while at the same time being optimally placed to catch the body and regain balance. The hypothesis therefore predicts a strong coupling between the pre-step activity and step location. Here we examine this coupling when stepping to visually-presented targets at different locations. Ten healthy, young subjects were instructed to step as accurately as possible onto targets placed in five locations that required either different step directions or different step lengths. In 75% of trials, the target location remained constant throughout the step. In the remaining 25% of trials, the intended step location was changed by making the target jump to a new location 96 ms ± 43 ms after initiation of the pre-step activity, long before foot lift. As predicted by the throw-and-catch hypothesis, when the target location remained constant, the pre-step activity led to body momentum at foot lift that was coupled to the intended step location. When the target location jumped, the pre-step activity was adjusted (median latency 223 ms) and prolonged (on average by 69 ms), which altered the body’s momentum at foot lift according to where the target had moved. We conclude that whenever possible the coupling between the pre-step activity and the step location is maintained. This provides further support for the throw-and-catch hypothesis of human gait. PMID:28066208
NASA Astrophysics Data System (ADS)
Lestari, Brina Cindy; Dewi, Dyah Santhi; Widodo, Rusminto Tjatur
2017-11-01
Elderly people with particular diseases need to take medicines every day, at the correct dosages and at the scheduled times. However, the elderly frequently forget to take their medicines because of weakened memory. Consequently, product innovation in elderly healthcare is required to help the elderly take their medicines more easily. This research aims to develop a smart medicine box by applying the quality function deployment method. The first step is identifying elderly requirements through an ethnographic approach, by interviewing thirty-two elderly respondents. The second step translates these requirements into technical parameters for designing a smart medicine box. The design focuses on the two requirements with the highest importance ratings: an alarm reminder for taking medicine and an automatic medicine box. Finally, a prototype was created and tested using a usability method. The results showed that 90% of the ten respondents responded positively to the features of the smart medicine box, and that its voice alarm reminder is easy for elderly people to understand when taking medicines.
Isothermal Amplification Methods for the Detection of Nucleic Acids in Microfluidic Devices
Zanoli, Laura Maria; Spoto, Giuseppe
2012-01-01
Diagnostic tools for biomolecular detection need to fulfill specific requirements in terms of sensitivity, selectivity and high throughput in order to widen their applicability and to minimize the cost of the assay. Nucleic acid amplification is a key step in DNA detection assays. It contributes to improving the assay sensitivity by enabling the detection of a limited number of target molecules. The use of microfluidic devices to miniaturize amplification protocols reduces the required sample volume and the analysis time, and offers new possibilities for process automation and integration in one single device. The vast majority of miniaturized systems for nucleic acid analysis exploit the polymerase chain reaction (PCR) amplification method, which requires repeated cycles of three or two temperature-dependent steps during the amplification of the nucleic acid target sequence. In contrast, low-temperature isothermal amplification methods have no need for thermal cycling, thus requiring simpler microfluidic device features. Here, the use of miniaturized analysis systems employing isothermal amplification reactions for nucleic acid amplification is discussed. PMID:25587397
Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models
NASA Astrophysics Data System (ADS)
Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.
2018-04-01
The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
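To make the implicit-explicit splitting concrete, here is the simplest member of the family, a first-order IMEX Euler step (not one of the ARK methods evaluated in the paper), applied to a toy system u' = Au + f(u): the stiff linear part A, standing in for the fast acoustic terms, is treated implicitly, while the slow term f is explicit. All names and values are illustrative.

```python
import numpy as np

def imex_euler_step(u, dt, A, f):
    """One first-order IMEX Euler step for u' = A u + f(u).

    The stiff linear term A is implicit, the slow term f explicit:
        (I - dt*A) u_new = u + dt*f(u)
    """
    n = len(u)
    return np.linalg.solve(np.eye(n) - dt * A, u + dt * f(u))

# Toy split system: fast linear oscillation (stiff) plus mild damping (nonstiff).
A = np.array([[0.0, 100.0], [-100.0, 0.0]])    # fast "acoustic" part, implicit
f = lambda u: -0.1 * u                          # slow part, explicit
u = np.array([1.0, 0.0])
for _ in range(100):
    u = imex_euler_step(u, dt=0.05, A=A, f=f)   # dt far above the explicit limit
print(u)
```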
Kelly, Elizabeth W; Kelly, Jonathan D; Hiestand, Brian; Wells-Kiser, Kathy; Starling, Stephanie; Hoekstra, James W
2010-01-01
Rapid reperfusion in patients with ST-elevation myocardial infarction (STEMI) is associated with lower mortality. Reduction in door-to-balloon (D2B) time for percutaneous coronary intervention requires multidisciplinary cooperation, process analysis, and quality improvement methodology. Six Sigma methodology was used to reduce D2B times in STEMI patients presenting to a tertiary care center. Specific steps in STEMI care were determined, time goals were established, and processes were changed to reduce each step's duration. Outcomes were tracked, and timely feedback was given to providers. After process analysis and implementation of improvements, mean D2B times decreased from 128 to 90 minutes. Improvement has been sustained; as of June 2010, the mean D2B was 56 minutes, with 100% of patients meeting the 90-minute window for the year. Six Sigma methodology and immediate provider feedback result in significant reductions in D2B times. The lessons learned may be extrapolated to other primary percutaneous coronary intervention centers. Copyright © 2010 Elsevier Inc. All rights reserved.
Adaptive Finite Element Methods for Continuum Damage Modeling
NASA Technical Reports Server (NTRS)
Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.
1995-01-01
The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way, by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm; the time step selection is controlled by the required temporal accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in the accurate prediction of damage levels and failure time.
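The adaptive time-stepping pattern described can be sketched with the standard error-based controller dt_new = dt * (tol/err)^(1/(order+1)). The Python fragment below is a generic sketch with placeholder solver callbacks, not the paper's implementation.

```python
def adaptive_time_march(u0, t0, t_end, dt0, advance, step_error,
                        tol=1e-4, order=2, safety=0.9, max_growth=5.0):
    """March from t0 to t_end, adapting dt from a local error estimate.

    advance(u, t, dt)    -> solution advanced by one step (corrector)
    step_error(u, t, dt) -> estimated local truncation error of that step
    """
    u, t, dt = u0, t0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)                 # do not overshoot the end time
        err = step_error(u, t, dt)
        if err > tol:                           # reject: shrink and retry
            dt *= safety * (tol / err) ** (1.0 / (order + 1))
            continue
        u = advance(u, t, dt)                   # accept the step
        t += dt
        grow = safety * (tol / max(err, 1e-14)) ** (1.0 / (order + 1))
        dt *= min(grow, max_growth)             # grow cautiously for next step
    return u

# Tiny demo: explicit Euler on u' = -u, error estimated by step doubling.
euler = lambda u, t, dt: u + dt * (-u)
half = lambda u, t, dt: euler(euler(u, t, dt / 2), t + dt / 2, dt / 2)
err_est = lambda u, t, dt: abs(euler(u, t, dt) - half(u, t, dt))
print(adaptive_time_march(1.0, 0.0, 5.0, 0.5, euler, err_est, order=1))
```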
NASA Astrophysics Data System (ADS)
Liu, Lang; Li, Han-Yu; Yu, Yao; Liu, Lin; Wu, Yue
2018-02-01
The fabrication of a current collector-contained in-plane micro-supercapacitor (MSC) usually requires patterning the current collector first, followed by patterning of the active material with the assistance of a photoresist and mask. However, this two-step patterning process is complicated, and the photoresist used is harmful to the properties of nanomaterials. Here, we demonstrate a one-step, mask-free strategy to pattern the current collector and the active material at the same time, for the fabrication of an all-solid-state flexible in-plane MSC. Silver nanowires (AgNWs) are used as the current collector. An atmospheric pressure pulsed cold micro-plasma-jet is used to realize the one-step, mask-free production of interdigitated multi-walled carbon nanotube (MWCNT)/AgNW electrodes. Remarkably, the fabricated MWCNT/AgNW-based MSC shows good flexibility and excellent rate capability. Moreover, properties of the MWCNT/AgNW-based MSC including cyclic stability, equivalent series resistance, relaxation time and energy/power densities are significantly enhanced by the presence of the AgNW current collector.
Design and characterization of an irradiation facility with real-time monitoring
NASA Astrophysics Data System (ADS)
Braisted, Jonathan David
Radiation causes performance degradation in electronics by inducing atomic displacements and ionizations. While radiation-hardened components are available, non-hardened electronics can be preferable because they are generally more compact, require less power, and are less expensive than radiation-tolerant equivalents. It is therefore important to characterize the performance of electronics, both hardened and non-hardened, to prevent costly system or mission failures. Radiation effects tests for electronics generally involve a handful of step irradiations, leading to poorly resolved data. Step irradiations also introduce uncertainties in electrical measurements due to temperature annealing effects; this effect may be intensified if the time between exposure and measurement is significant. Induced activity in test samples further complicates data collection for step-irradiated samples. The University of Texas at Austin operates a 1.1 MW Mark II TRIGA research reactor. An in-core irradiation facility for radiation effects testing with a real-time monitoring capability has been designed for the UT TRIGA reactor. The facility is larger than any currently available non-central location in a TRIGA, supporting testing of larger electronic components as well as other in-core irradiation applications requiring significant volume, such as isotope production or neutron transmutation doping of silicon. This dissertation describes the design and testing of the large in-core irradiation facility and the experimental campaign developed to test the real-time monitoring capability at various reactor power levels. The device chosen for characterization was the 4N25 general-purpose optocoupler. The current transfer ratio, an important electrical parameter for optocouplers, was calculated as a function of neutron fluence and gamma dose from the real-time voltage measurements. The resulting radiation effects data were repeatable and exceptionally finely resolved. The capability at UT TRIGA has therefore proven competitive with world-class effects characterization facilities.
Pater, Mackenzie L; Rosenblatt, Noah J; Grabiner, Mark D
2015-01-01
Tripping during locomotion, the leading cause of falls in older adults, generally occurs without prior warning and often while performing a secondary task. Prior warning can alter the state of physiological preparedness and beneficially influence the response to the perturbation. Previous studies have examined how altering the initial "preparedness" for an upcoming perturbation can affect kinematic responses following small disturbances that did not require a stepping response to restore dynamic stability. The purpose of this study was to examine how expectation affected fall outcome and recovery response kinematics following a large, treadmill-delivered perturbation simulating a trip and requiring at least one recovery step to avoid a fall. Following the perturbation, 47% of subjects fell when they were not expecting the perturbation, whereas 12% fell when they were aware that the perturbation would occur "sometime in the next minute". The between-group differences were accompanied by slower reaction times in the non-expecting group (p < 0.01). Slower reaction times were associated with kinematics that have previously been shown to increase the likelihood of falling following a laboratory-induced trip. The results demonstrate the importance of considering the context under which recovery responses are assessed and, further, give insight into the context in which task-specific perturbation training is administered. Copyright © 2014 Elsevier B.V. All rights reserved.
Solution procedure of dynamical contact problems with friction
NASA Astrophysics Data System (ADS)
Abdelhakim, Lotfi
2017-07-01
Dynamical contact is a common research topic because of its wide applications in the engineering field. The main goal of this work is to develop a time-stepping algorithm for dynamic contact problems. We propose a finite element approach for elastodynamic contact problems [1]. Sticking, sliding and frictional contact can be taken into account. Lagrange multipliers are used to enforce the non-penetration condition. For the time discretization, we propose a scheme equivalent to the explicit Newmark scheme. Each time step requires solving a nonlinear problem similar to a static friction problem. The nonlinearity of the system of equations requires an iterative solution procedure based on Uzawa's algorithm [2,3]. The applicability of the algorithm is illustrated by selected sample numerical solutions to static and dynamic contact problems. Results obtained with the model have been compared and verified against results from an independent numerical method.
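The Uzawa loop for enforcing non-penetration can be sketched on a toy linear system. The fragment below alternates an equilibrium solve with the projected multiplier update λ ← max(0, λ + ρ(Gu − d)); the stiffness matrix, constraint, and step ρ are invented for illustration, and none of it is taken from the paper.

```python
import numpy as np

def uzawa_contact(K, F, G, d, rho=0.5, tol=1e-10, max_iter=500):
    """Solve K u = F - G.T @ lam subject to G u <= d by Uzawa iteration.

    lam >= 0 are the contact pressures (Lagrange multipliers); the projected
    update lam <- max(0, lam + rho*(G u - d)) enforces non-penetration.
    """
    lam = np.zeros(G.shape[0])
    for _ in range(max_iter):
        u = np.linalg.solve(K, F - G.T @ lam)          # equilibrium solve
        lam_new = np.maximum(0.0, lam + rho * (G @ u - d))
        if np.linalg.norm(lam_new - lam) < tol:
            break
        lam = lam_new
    return u, lam

# Toy problem: two linear springs pushed toward a rigid wall at x = 0.1.
K = np.array([[2.0, -1.0], [-1.0, 1.0]])
F = np.array([0.0, 1.0])
G = np.array([[0.0, 1.0]])          # constrain the displacement of node 2
d = np.array([0.1])
u, lam = uzawa_contact(K, F, G, d)
print(u, lam)                       # node 2 settles on the wall, lam > 0
```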
A Neural Dynamic Model Generates Descriptions of Object-Oriented Actions.
Richter, Mathis; Lins, Jonas; Schöner, Gregor
2017-01-01
Describing actions entails that relations between objects are discovered. A pervasively neural account of this process requires that fundamental problems are solved: the neural pointer problem, the binding problem, and the problem of generating discrete processing steps from time-continuous neural processes. We present a prototypical solution to these problems in a neural dynamic model that comprises dynamic neural fields holding representations close to sensorimotor surfaces, as well as dynamic neural nodes holding discrete, language-like representations. Making the connection between these two types of representations enables the model to describe actions as well as to perceptually ground movement phrases, all based on real visual input. We demonstrate how the dynamic neural processes autonomously generate the processing steps required to describe or ground object-oriented actions. By solving the fundamental problems of neural pointing, binding, and emergent discrete processing, the model may be a first but critical step toward a systematic neural processing account of higher cognition. Copyright © 2017 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
Dvorak, Jiri; Kramer, Efraim B; Schmied, Christian M; Drezner, Jonathan A; Zideman, David; Patricios, Jon; Correia, Luis; Pedrinelli, André; Mandelbaum, Bert
2013-12-01
Life-threatening medical emergencies are an infrequent but regular occurrence on the football field. Proper prevention strategies, emergency medical planning and timely access to emergency equipment are required to prevent catastrophic outcomes. In a continuing commitment to player safety during football, this paper presents the FIFA Medical Emergency Bag and FIFA 11 Steps to prevent sudden cardiac death. These recommendations are intended to create a global standard for emergency preparedness and the medical response to serious or catastrophic on-field injuries in football.
Investigation of the Dynamics of Low-Tension Cables
1992-06-01
An implicit time domain routine is necessary, as the high propagation speed of elastic waves would require prohibitively small time steps ... singularities by ensuring smooth curvature. However, sustained boundary layers are found to develop, demonstrating the importance of the underlying physical ... [figure captions: ... chain and elastic chain, EA* = 4.0 x 10^3; mode shape for tension variation due to elastic waves, using EA* = 4.0 x 10^3]
Time Step Considerations when Simulating Dynamic Behavior of High Performance Homes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabares-Velasco, Paulo Cesar
2016-09-01
Building energy simulations, especially those concerning pre-cooling strategies and cooling/heating peak demand management, require careful analysis and a detailed understanding of building characteristics. Accurate modeling of the building thermal response and of material properties for thermally massive walls or advanced materials like phase change materials (PCMs) is critically important.
Comparing an annual and daily time-step model for predicting field-scale P loss
USDA-ARS?s Scientific Manuscript database
Several models with varying degrees of complexity are available for describing P movement through the landscape. The complexity of these models is dependent on the amount of data required by the model, the number of model parameters needed to be estimated, the theoretical rigor of the governing equa...
Estimating allowable-cut by area-scheduling
William B. Leak
2011-01-01
Estimation of the regulated allowable-cut is an important step in placing a forest property under management and ensuring a continued supply of timber over time. Regular harvests also provide for the maintenance of needed wildlife habitat. There are two basic approaches: (1) volume, and (2) area/volume regulation, with many variations of each. Some require...
Rep. Stivers, Steve [R-OH-15
2011-08-01
Senate - 03/28/2012 Read the second time. Placed on Senate Legislative Calendar under General Orders. Calendar No. 343.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-06
... demonstrate that its response program includes, or is taking reasonable steps to include, the four elements of ... Achievement of the four elements should be viewed as a priority. Section 128(a) authorizes funding for ... record requirement. The four elements of a response program are described below: 1. Timely survey and ...
Identifying Promising Items: The Use of Crowdsourcing in the Development of Assessment Instruments
ERIC Educational Resources Information Center
Sadler, Philip M.; Sonnert, Gerhard; Coyle, Harold P.; Miller, Kelly A.
2016-01-01
The psychometrically sound development of assessment instruments requires pilot testing of candidate items as a first step in gauging their quality, typically a time-consuming and costly effort. Crowdsourcing offers the opportunity for gathering data much more quickly and inexpensively than from most targeted populations. In a simulation of a…
Stepping around the Brick Wall: Overcoming Student Obstacles in Methods Courses
ERIC Educational Resources Information Center
Bos, Angela L.; Schneider, Monica C.
2009-01-01
Many political science departments offer, and increasing numbers of them require, undergraduate research methods courses. At the same time, studies cite high levels of student anxiety about such courses. Utilizing survey data from both students who take and faculty who teach methods, we conduct an analysis that compares the barriers students and…
43 CFR 45.40 - What are the requirements for prehearing conferences?
Code of Federal Regulations, 2010 CFR
2010-10-01
... and the schedule of remaining steps in the hearing process. (e) Failure to attend. Unless the ALJ... an initial prehearing conference with the parties at the time specified in the docketing notice under... material fact and exclude issues that do not qualify for review as factual, material, and disputed; (ii) To...
Strategy Execution in Cognitive Skill Learning: An Item-Level Test of Candidate Models
ERIC Educational Resources Information Center
Rickard, Timothy C.
2004-01-01
This article investigates the transition to memory-based performance that commonly occurs with practice on tasks that initially require use of a multistep algorithm. In an alphabet arithmetic task, item response times exhibited pronounced step-function decreases after moderate practice that were uniquely predicted by T. C. Rickard's (1997)…
The critical steps required to evaluate the feasibility of establishing a water quality trading market in a testbed watershed are described. Focus is given to describing the problem of thin markets as a specific barrier to successful trading. Economic theory for considering an...
NASA Astrophysics Data System (ADS)
Vafadar, Bahareh; Bones, Philip J.
2012-10-01
There is a strong motivation to reduce the amount of acquired data necessary to reconstruct clinically useful MR images, since less data means faster acquisition sequences, less time for the patient to remain motionless in the scanner and better time resolution for observing temporal changes within the body. We recently introduced an improvement in image quality for reconstructing parallel MR images by incorporating a data ordering step with compressed sensing (CS) in an algorithm named `PECS'. That method requires a prior estimate of the image to be available. We are extending the algorithm to explore ways of utilizing the data ordering step without requiring a prior estimate. The method presented here first reconstructs an initial image x1 by compressed sensing (with sparsity enhanced by SVD), then derives a data ordering R'1 from x1, which ranks the voxels of x1 according to their value. A second reconstruction is then performed which incorporates minimization of the first norm of the estimate after ordering by R'1, resulting in a new reconstruction x2. Preliminary results are encouraging.
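For readers unfamiliar with the building blocks, the sketch below shows the two-pass idea in a heavily simplified form: a basic ISTA loop stands in for the compressed sensing solve, and the ordering is a permutation obtained by sorting the first estimate. This is a schematic of the described pipeline under synthetic data, not the PECS algorithm itself.

```python
import numpy as np

def ista(A, b, lam=0.01, n_iter=200):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2                       # safe step size
    for _ in range(n_iter):
        g = x - t * A.T @ (A @ x - b)                         # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam * t, 0.0) # shrinkage
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 120)) / np.sqrt(80)              # undersampled system
x_true = np.zeros(120)
x_true[rng.choice(120, 10, replace=False)] = rng.standard_normal(10)
b = A @ x_true

x1 = ista(A, b)                                               # first-pass estimate
order = np.argsort(np.abs(x1))                                # data ordering from x1
# A second pass would penalize the l1 norm of the estimate re-ordered by
# `order`; the re-ordered signal is near-monotone when x1 is close to the truth.
print(np.linalg.norm(x1 - x_true))
```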
Users Manual for the Geospatial Stream Flow Model (GeoSFM)
Artan, Guleid A.; Asante, Kwabena; Smith, Jodie; Pervez, Md Shahriar; Entenmann, Debbie; Verdin, James P.; Rowland, James
2008-01-01
The monitoring of wide-area hydrologic events requires the manipulation of large amounts of geospatial and time series data into concise information products that characterize the location and magnitude of the event. To perform these manipulations, scientists at the U.S. Geological Survey Center for Earth Resources Observation and Science (EROS), with the cooperation of the U.S. Agency for International Development, Office of Foreign Disaster Assistance (USAID/OFDA), have implemented a hydrologic modeling system. The system includes a data assimilation component to generate data for a Geospatial Stream Flow Model (GeoSFM) that can be run operationally to identify and map wide-area streamflow anomalies. GeoSFM integrates a geographical information system (GIS) for geospatial preprocessing and postprocessing tasks, and hydrologic modeling routines implemented as dynamically linked libraries (DLLs) for time series manipulations. Model results include maps depicting the status of streamflow and soil water conditions. This Users Manual provides step-by-step instructions for running the model and for downloading and processing the input data required for initial model parameterization and daily operation.
Decreasing the temporal complexity for nonlinear, implicit reduced-order models by forecasting
Carlberg, Kevin; Ray, Jaideep; van Bloemen Waanders, Bart
2015-02-14
Implicit numerical integration of nonlinear ODEs requires solving a system of nonlinear algebraic equations at each time step. Each of these systems is often solved by a Newton-like method, which incurs a sequence of linear-system solves. Most model-reduction techniques for nonlinear ODEs exploit knowledge of the system's spatial behavior to reduce the computational complexity of each linear-system solve. However, the number of linear-system solves for the reduced-order simulation often remains roughly the same as that for the full-order simulation. We propose exploiting knowledge of the model's temporal behavior to (1) forecast the unknown variable of the reduced-order system of nonlinear equations at future time steps, and (2) use this forecast as an initial guess for the Newton-like solver during the reduced-order-model simulation. To compute the forecast, we propose using the Gappy POD technique. As a result, the goal is to generate an accurate initial guess so that the Newton solver requires many fewer iterations to converge, thereby decreasing the number of linear-system solves in the reduced-order-model simulation.
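The essence of the proposed warm start can be conveyed without Gappy POD: extrapolate the recent solution history to forecast the next unknown, and hand that forecast to the Newton solver as its initial guess. The sketch below substitutes simple polynomial extrapolation for the paper's Gappy POD forecast; all names and the toy ODE are illustrative.

```python
import numpy as np

def newton_scalar(residual, derivative, x0, tol=1e-12, max_iter=50):
    """Scalar Newton iteration; returns the solution and the iteration count."""
    x = x0
    for k in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            return x, k
        x -= r / derivative(x)
    return x, max_iter

def forecast(history, order=2):
    """Polynomial extrapolation of past solutions (stand-in for Gappy POD)."""
    steps = np.arange(len(history))
    return np.polyval(np.polyfit(steps, history, order), len(history))

# Implicit Euler for u' = -u^3 requires one nonlinear solve per time step.
dt, u, history = 0.1, 1.0, []
for n in range(20):
    res = lambda v: v - u + dt * v**3            # implicit Euler residual
    dres = lambda v: 1.0 + 3.0 * dt * v**2
    guess = forecast(history) if len(history) >= 3 else u
    u, iters = newton_scalar(res, dres, guess)
    history.append(u)
    print(n, iters)  # warm-started steps typically converge in fewer iterations
```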
Roos, Margaret A; Reisman, Darcy S; Hicks, Gregory; Rose, William; Rudolph, Katherine S
2016-01-01
Adults with stroke have difficulty avoiding obstacles when walking, especially when a time constraint is imposed. The Four Square Step Test (FSST) evaluates dynamic balance by requiring individuals to step over canes in multiple directions while being timed, but many people with stroke are unable to complete it. The purposes of this study were to (1) modify the FSST by replacing the canes with tape so that more persons with stroke could successfully complete the test and (2) examine the reliability and validity of the modified version. Fifty-five subjects completed the Modified FSST (mFSST) by stepping over tape in all four directions while being timed. The mFSST resulted in significantly greater numbers of subjects completing the test than the FSST (39/55 [71%] and 33/55 [60%], respectively) (p < 0.04). The test-retest, intrarater, and interrater reliability of the mFSST were excellent (intraclass correlation coefficient ranges: 0.81-0.99). Construct and concurrent validity of the mFSST were also established. The minimal detectable change was 6.73 s. The mFSST, an ideal measure of dynamic balance, can identify progress in people with stroke in varied settings and can be completed by a wide range of people with stroke in approximately 5 min with the use of minimal equipment (tape, stop watch).
Soltani, Maryam; Kerachian, Reza
2018-04-15
In this paper, a new methodology is proposed for the real-time trading of water withdrawal and waste load discharge permits in agricultural areas along rivers. Total Dissolved Solids (TDS) is chosen as an indicator of river water quality, and the TDS load that agricultural water users discharge to the river is controlled by storing a part of the return flows in evaporation ponds. Available surface water withdrawal and waste load discharge permits are determined using a non-linear multi-objective optimization model. Total available permits are then fairly reallocated among agricultural water users, proportional to their arable lands. Water users can trade their water withdrawal and waste load discharge permits simultaneously, in a bilateral, step-by-step framework, which takes advantage of differences in their water use efficiencies and agricultural return flow rates. A trade that takes place at a given time step results in either more benefit or less diverted return flow. The Nucleolus cooperative game is used to redistribute the benefits generated through trades in different time steps. The proposed methodology is applied to the PayePol region in the Karkheh River catchment, southwest Iran. Predicting that 1922.7 Million Cubic Meters (MCM) of annual flow is available to agricultural lands at the beginning of the cultivation year, the real-time optimization model estimates the total annual benefit to reach 46.07 million US Dollars (USD), which requires 6.31 MCM of return flow to be diverted to the evaporation ponds. Fair reallocation of the permits changes these values to 35.38 million USD and 13.69 MCM, respectively. Results illustrate the effectiveness of the proposed methodology in real-time water and waste load allocation and simultaneous trading of permits. Copyright © 2018 Elsevier Ltd. All rights reserved.
Imaging the eye fundus with real-time en-face spectral domain optical coherence tomography
Bradu, Adrian; Podoleanu, Adrian Gh.
2014-01-01
Real-time display of processed en-face spectral domain optical coherence tomography (SD-OCT) images is important for diagnosis. However, due to the many data processing steps required, such as Fast Fourier transformation (FFT), data re-sampling, spectral shaping, apodization and zero padding, followed by a software cut of the acquired 3D volume to produce an en-face slice, conventional high-speed SD-OCT cannot render an en-face OCT image in real time. Recently we demonstrated a Master/Slave (MS)-OCT method that is highly parallelizable, as it provides reflectivity values of points at depth within an A-scan in parallel. This allows direct production of en-face images. In addition, the MS-OCT method does not require data linearization, which further simplifies the processing. The computation in our previous paper was, however, time-consuming. In this paper we present an optimized algorithm that can be used to provide en-face MS-OCT images much more quickly. Using such an algorithm we demonstrate around 10 times faster production of sets of en-face OCT images than previously obtained, as well as simultaneous real-time display of up to 4 en-face OCT images of 200 × 200 pixels from the fovea and the optic nerve of a volunteer. We also demonstrate 3D and B-scan OCT images obtained from sets of MS-OCT C-scans, i.e. with no FFT and no intermediate step of generating A-scans. PMID:24761303
NASA Astrophysics Data System (ADS)
Guthrey, Pierson Tyler
The relativistic Vlasov-Maxwell system (RVM) models the behavior of collisionless plasma, where electrons and ions interact via the electromagnetic fields they generate. In the RVM system, electrons can accelerate to significant fractions of the speed of light. An idea that is actively being pursued by several research groups around the globe is to accelerate electrons to relativistic speeds by hitting a plasma with an intense laser beam. As the laser beam passes through the plasma it creates plasma wakes, much like a ship passing through water, which can trap electrons and push them to relativistic speeds. Such setups are known as laser wakefield accelerators, and have the potential to yield particle accelerators that are significantly smaller than those currently in use. Ultimately, the goal of such research is to harness the resulting electron beams to generate electromagnetic waves that can be used in medical imaging applications. High-order accurate numerical discretizations of kinetic Vlasov plasma models are very effective at yielding low-noise plasma simulations, but are computationally expensive to solve because of the high dimensionality. In addition to the general difficulties inherent in numerically simulating Vlasov models, the relativistic Vlasov-Maxwell system has unique challenges not present in the non-relativistic case. One such issue is that operator splitting of the phase gradient leads to potential instabilities, thus we require an alternative to operator splitting of the phase. The goal of the current work is to develop a new class of high-order accurate numerical methods for solving kinetic Vlasov models of plasma. The main discretization in configuration space is handled via a high-order finite element method called the discontinuous Galerkin (DG) method. One difficulty is that standard explicit time-stepping methods for DG suffer from time-step restrictions that are significantly worse than what a simple Courant-Friedrichs-Lewy (CFL) argument requires: the maximum stable time-step scales inversely with the highest degree in the DG polynomial approximation space and becomes progressively smaller with each added spatial dimension. In this work, we overcome this difficulty by introducing a novel time-stepping strategy: the regionally-implicit discontinuous Galerkin (RIDG) method. The RIDG method is based on an extension of the Lax-Wendroff DG (LxW-DG) method, which previously had been shown to be equivalent (for linear constant coefficient problems) to a predictor-corrector approach, where the prediction is computed by a space-time DG (STDG) method. The corrector is an explicit method that uses the space-time reconstructed solution from the predictor step. In this work, we modify the predictor to include not just local information, but also neighboring information. With this modification, we show that the stability is greatly enhanced: we can remove the polynomial degree dependence of the maximum time-step, and we obtain vastly improved time-steps in multiple spatial dimensions. Upon the development of the general RIDG method, we apply it to the non-relativistic 1D1V Vlasov-Poisson equations and the relativistic 1D2V Vlasov-Maxwell equations. For each we validate the high-order method on several test cases. In the final test case, we demonstrate the ability of the method to simulate the acceleration of electrons to relativistic speeds in a simplified setting.
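The time-step restriction referred to above is commonly quoted in the DG literature in the following form for degree-k polynomials and wave speed |a| in one dimension (the constant depends on the particular scheme); it is the standard textbook bound, stated here for orientation rather than taken from the dissertation.

```latex
\Delta t \;\le\; C \,\frac{\Delta x}{(2k+1)\,\lvert a\rvert},
\qquad C = \mathcal{O}(1)
```

Raising the polynomial degree therefore tightens the explicit stability limit roughly linearly in k, and the restriction compounds further with each added spatial dimension, which is what the RIDG predictor is designed to avoid.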
Assessing performance of an Electronic Health Record (EHR) using Cognitive Task Analysis.
Saitwal, Himali; Feng, Xuan; Walji, Muhammad; Patel, Vimla; Zhang, Jiajie
2010-07-01
Many Electronic Health Record (EHR) systems fail to provide user-friendly interfaces due to the lack of systematic consideration of human-centered computing issues. Such interfaces can be improved to provide easy-to-use, easy-to-learn, and error-resistant EHR systems to users. The objective was to evaluate the usability of an EHR system and suggest areas of improvement in the user interface. The user interface of the AHLTA (Armed Forces Health Longitudinal Technology Application) was analyzed using the Cognitive Task Analysis (CTA) method called GOMS (Goals, Operators, Methods, and Selection rules) and an associated technique called KLM (Keystroke Level Model). The GOMS method was used to evaluate the AHLTA user interface by classifying each step of a given task as a mental (internal) or physical (external) operator. This analysis was performed by two analysts independently, and the inter-rater reliability was computed to verify the reliability of the GOMS method. Further evaluation was performed using KLM to estimate the execution time required to perform the given task through application of its standard set of operators. The results are based on the analysis of 14 prototypical tasks performed by AHLTA users. The results show that on average a user needs to go through 106 steps to complete a task. To perform all 14 tasks, a user would spend about 22 min (independent of system response time) on data entry, of which 11 min are spent on the more effortful mental operators. The inter-rater reliability analysis performed for all 14 tasks was 0.8 (kappa), indicating good reliability of the method. This paper empirically reveals the following findings related to the performance of AHLTA: (1) a large average number of total steps to complete common tasks, (2) high average execution time and (3) a large percentage of mental operators. The user interface can be improved by reducing (a) the total number of steps and (b) the percentage of mental effort required for the tasks. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
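The KLM half of such an analysis is mechanical enough to fit in a few lines. The sketch below sums textbook keystroke-level operator durations (values from the classic Card, Moran and Newell model; K keystroke, P point, H home hands, M mental preparation, B button press) over an operator string. It is a generic illustration, not the authors' AHLTA task models.

```python
# Textbook KLM operator durations in seconds (Card, Moran & Newell).
KLM_TIMES = {
    "K": 0.28,   # keystroke (average typist)
    "P": 1.10,   # point with mouse
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation
    "B": 0.10,   # mouse button press or release
}

def klm_execution_time(op_sequence):
    """Estimated execution time for a string of KLM operators, e.g. 'MHPBBMK'."""
    return sum(KLM_TIMES[op] for op in op_sequence)

# Hypothetical "select patient and open note" fragment:
# think, reach for mouse, point, click, think, type 4 characters.
ops = "MHPBB" + "M" + "K" * 4
print(f"{klm_execution_time(ops):.2f} s")
```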
Comparing the efficacy of metronome beeps and stepping stones to adjust gait: steps to follow!
Bank, Paulina J M; Roerdink, Melvyn; Peper, C E
2011-03-01
Acoustic metronomes and visual targets have been used in rehabilitation practice to improve pathological gait. In addition, they may be instrumental in evaluating and training instantaneous gait adjustments. The aim of this study was to compare the efficacy of two cue types in inducing gait adjustments, viz. acoustic temporal cues in the form of metronome beeps and visual spatial cues in the form of projected stepping stones. Twenty healthy elderly (aged 63.2 ± 3.6 years) were recruited to walk on an instrumented treadmill at preferred speed and cadence, paced by either metronome beeps or projected stepping stones. Gait adaptations were induced using two manipulations: by perturbing the sequence of cues and by imposing switches from one cueing type to the other. Responses to these manipulations were quantified in terms of step-length and step-time adjustments, the percentage correction achieved over subsequent steps, and the number of steps required to restore the relation between gait and the beeps or stepping stones. The results showed that perturbations in a sequence of stepping stones were overcome faster than those in a sequence of metronome beeps. In switching trials, switching from metronome beeps to stepping stones was achieved faster than vice versa, indicating that gait was influenced more strongly by the stepping stones than the metronome beeps. Together these results revealed that, in healthy elderly, the stepping stones induced gait adjustments more effectively than did the metronome beeps. Potential implications for the use of metronome beeps and stepping stones in gait rehabilitation practice are discussed.
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
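A minimal unconstrained variant conveys the prediction-correction template: predict the next optimizer by extrapolating the recent solution trajectory, then correct with a few gradient steps on the cost at the new time. The sketch below assumes a time-varying quadratic cost for illustration; it is not the constrained first-order method developed in the article.

```python
import numpy as np

def track_minimizer(grad, x0, times, n_correction=3, alpha=0.2):
    """Track argmin_x f(x, t) over sampled times via prediction-correction."""
    xs = [x0]
    for k in range(1, len(times)):
        # Prediction: linear extrapolation along the recent solution trajectory.
        x_pred = 2 * xs[-1] - xs[-2] if len(xs) >= 2 else xs[-1]
        # Correction: a few gradient steps on the cost at the new time.
        x = x_pred
        for _ in range(n_correction):
            x = x - alpha * grad(x, times[k])
        xs.append(x)
    return np.array(xs)

# f(x, t) = 0.5*||x - c(t)||^2 with a moving target c(t).
c = lambda t: np.array([np.cos(t), np.sin(t)])
grad = lambda x, t: x - c(t)
times = np.linspace(0.0, 6.0, 61)
xs = track_minimizer(grad, np.zeros(2), times)
print(np.linalg.norm(xs[-1] - c(times[-1])))   # small tracking error
```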
Observing in space and time the ephemeral nucleation of liquid-to-crystal phase transitions.
Yoo, Byung-Kuk; Kwon, Oh-Hoon; Liu, Haihua; Tang, Jau; Zewail, Ahmed H
2015-10-19
The phase transition of crystalline ordering is a general phenomenon, but its evolution in space and time requires microscopic probes for visualization. Here we report direct imaging of the transformation of amorphous titanium dioxide nanofilm, from the liquid state, passing through the nucleation step and finally to the ordered crystal phase. Single-pulse transient diffraction profiles at different times provide the structural transformation and the specific degree of crystallinity (η) in the evolution process. It is found that the temporal behaviour of η exhibits unique 'two-step' dynamics, with a robust 'plateau' that extends over a microsecond; the rate constants vary by two orders of magnitude. Such behaviour reflects the presence of intermediate structure(s) that are the precursor of the ordered crystal state. Theoretically, we extend the well-known Johnson-Mehl-Avrami-Kolmogorov equation, which describes the isothermal process with a stretched-exponential function, but here over the range of times covering the melt-to-crystal transformation.
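For reference, the isothermal Johnson-Mehl-Avrami-Kolmogorov description that the authors extend has the standard textbook form below, where η is the crystallized fraction, k a rate constant, and n the Avrami exponent; the stretched-exponential variant mentioned replaces the fixed exponent with an empirical β ∈ (0, 1]. The formula is quoted for orientation, not taken from the paper's fits.

```latex
\eta(t) \;=\; 1 - \exp\!\bigl[-(k\,t)^{n}\bigr]
```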
Real-time digital signal processing in multiphoton and time-resolved microscopy
NASA Astrophysics Data System (ADS)
Wilson, Jesse W.; Warren, Warren S.; Fischer, Martin C.
2016-03-01
The use of multiphoton interactions in biological tissue for imaging contrast requires highly sensitive optical measurements. These often involve signal processing and filtering steps between the photodetector and the data acquisition device, such as photon counting and lock-in amplification. These steps can be implemented as real-time digital signal processing (DSP) elements on field-programmable gate array (FPGA) devices, an approach that affords much greater flexibility than commercial photon counting or lock-in devices. We will present progress toward developing two new FPGA-based DSP devices for multiphoton and time-resolved microscopy applications. The first is a high-speed multiharmonic lock-in amplifier for transient absorption microscopy, which is being developed for real-time analysis of the intensity dependence of melanin, with applications in vivo and ex vivo (noninvasive histopathology of melanoma and pigmented lesions). The second device is a kHz lock-in amplifier running on a low-cost (USD 50-200) development platform. It is our hope that these FPGA-based DSP devices will enable new, high-speed, low-cost applications in multiphoton and time-resolved microscopy.
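The lock-in element itself reduces to mixing the input with quadrature references at the modulation frequency and low-pass filtering the products. The numpy sketch below performs that demodulation offline; a real FPGA implementation would pipeline the same arithmetic in fixed point, and none of the parameters are taken from the devices described.

```python
import numpy as np

def lock_in(signal, f_ref, fs, tau=0.01):
    """Recover amplitude and phase at f_ref using quadrature demodulation."""
    t = np.arange(len(signal)) / fs
    i = signal * np.cos(2 * np.pi * f_ref * t)    # in-phase mixing
    q = signal * np.sin(2 * np.pi * f_ref * t)    # quadrature mixing
    # One-pole low-pass filter as the integrator (time constant tau).
    a = np.exp(-1.0 / (fs * tau))
    for arr in (i, q):
        acc = 0.0
        for n in range(len(arr)):
            acc = a * acc + (1 - a) * arr[n]
            arr[n] = acc
    amp = 2 * np.hypot(i[-1], q[-1])
    phase = np.arctan2(q[-1], i[-1])
    return amp, phase

fs, f_mod = 1_000_000, 10_000                     # 1 MS/s, 10 kHz modulation
t = np.arange(50_000) / fs
sig = 0.5 * np.cos(2 * np.pi * f_mod * t + 0.3) + 0.1 * np.random.randn(len(t))
print(lock_in(sig, f_mod, fs))                    # roughly (0.5, -0.3)
```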
NASA Astrophysics Data System (ADS)
Wan, Hui; Zhang, Kai; Rasch, Philip J.; Singh, Balwinder; Chen, Xingyuan; Edwards, Jim
2017-02-01
A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can be used to distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
Tendency for interlaboratory precision in the GMO analysis method based on real-time PCR.
Kodama, Takashi; Kurosawa, Yasunori; Kitta, Kazumi; Naito, Shigehiro
2010-01-01
The Horwitz curve estimates interlaboratory precision as a function only of concentration, and is frequently used as a method performance criterion in food analysis with chemical methods. Quantitative biochemical methods based on real-time PCR require an analogous criterion to progressively promote method validation. We analyzed the tendency of precision using a simplex real-time PCR technique in 53 collaborative studies of seven genetically modified (GM) crops. The reproducibility standard deviation (S_R) and repeatability standard deviation (S_r) of the genetically modified organism (GMO) amount (%) were more or less independent of the GM crop (i.e., maize, soybean, cotton, oilseed rape, potato, sugar beet, and rice) and of the evaluation procedure steps. Some studies evaluated the whole procedure, consisting of DNA extraction and PCR quantitation, whereas others focused only on the PCR quantitation step by using DNA extraction solutions. Therefore, S_R and S_r for the GMO amount (%) are functions only of concentration, similar to the Horwitz curve. We proposed S_R = 0.1971 C^0.8685 and S_r = 0.1478 C^0.8424, where C is the GMO amount (%). We also proposed a method performance index for GMO quantitative methods that is analogous to the Horwitz Ratio.
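The two fitted curves are straightforward to evaluate. The snippet below encodes them with the coefficients exactly as given in the abstract and prints the predicted precision at a few GMO contents.

```python
def s_reproducibility(c):
    """Fitted reproducibility standard deviation vs GMO amount c (%)."""
    return 0.1971 * c ** 0.8685

def s_repeatability(c):
    """Fitted repeatability standard deviation vs GMO amount c (%)."""
    return 0.1478 * c ** 0.8424

for c in (0.1, 0.5, 1.0, 5.0):
    print(f"C = {c:4.1f}%  S_R = {s_reproducibility(c):.4f}  "
          f"S_r = {s_repeatability(c):.4f}")
```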
An adaptive grid algorithm for one-dimensional nonlinear equations
NASA Technical Reports Server (NTRS)
Gutierrez, William E.; Hills, Richard G.
1990-01-01
Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements for solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems is studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low and high frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusive and convective-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and less computation time than required by the tridiagonal method. The performance of the adaptive grid method tends to degrade as the solution process proceeds in time, but still remains faster than the tridiagonal scheme.
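Picard iteration for an implicit step of a nonlinear diffusion problem freezes the nonlinear coefficient at the previous iterate and re-solves a linear system until the iterates settle. The sketch below applies that idea to a 1D nonlinear diffusion model problem; it is a generic illustration of the iteration, not the study's code.

```python
import numpy as np

def picard_implicit_step(u, dt, dx, D, tol=1e-10, max_iter=100):
    """One implicit Euler step of u_t = (D(u) u_x)_x via Picard iteration.

    The diffusivity is lagged: solve the linear system with D evaluated
    at the previous iterate, repeat until convergence. Dirichlet values
    u[0] and u[-1] are held fixed.
    """
    n = len(u)
    v = u.copy()                                   # Picard iterate
    for _ in range(max_iter):
        Dh = D(0.5 * (v[:-1] + v[1:]))             # face diffusivities, lagged
        A = np.zeros((n, n))
        A[0, 0] = A[-1, -1] = 1.0                  # Dirichlet boundary rows
        rhs = u.copy()
        r = dt / dx**2
        for i in range(1, n - 1):
            A[i, i - 1] = -r * Dh[i - 1]
            A[i, i] = 1.0 + r * (Dh[i - 1] + Dh[i])
            A[i, i + 1] = -r * Dh[i]
        v_new = np.linalg.solve(A, rhs)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

x = np.linspace(0.0, 1.0, 41)
u = np.where(x < 0.5, 1.0, 0.0)                    # sharp front initial condition
D = lambda s: 0.1 + s**2                           # nonlinear diffusivity
u = picard_implicit_step(u, dt=0.01, dx=x[1] - x[0], D=D)
print(u.round(3))
```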
Using performance measurement to drive improvement: a road map for change.
Galvin, Robert S; McGlynn, Elizabeth A
2003-01-01
Performance measures and reporting have not been adopted throughout the US health care system despite their central role in encouraging increased participation by consumers in decision-making. Understanding whether the failure of measurement and reporting to diffuse throughout the health system can be overcome is critical for determining future policy in this area. To create a conceptual framework for analyzing the current rate of adoption and evaluating alternatives for accelerating adoption, and to recommend a set of concrete steps that can be taken to increase the use of performance measurement and reporting. Review of three theoretic models (Rogers, Prochaska/DiClemente, Gladwell), examination of the literature on previous experiences with quality measurement and reporting, and interviews with select stakeholders. The three theoretic models provide a valuable framework for understanding why the use of performance measures is stalled ("the circle of unaccountability") and for generating ideas about concrete steps that could be taken to accelerate adoption. Six steps are recommended: (1) raise public awareness, (2) redesign measures and reports, (3) make the delivery of information timely, (4) require public reporting, (5) develop and implement systems to reward quality, and (6) actively court leaders. The recommended six steps are interconnected; action on all will be required to drive significant acceleration in rates of adoption of performance measurement and reporting. Leadership and coordination are necessary to ensure these steps are taken and that they work in concert with one another.
Simulation trainer for practicing emergent open thoracotomy procedures.
Hamilton, Allan J; Prescher, Hannes; Biffar, David E; Poston, Robert S
2015-07-01
An emergent open thoracotomy (OT) is a high-risk, low-frequency procedure uniquely suited for simulation training. We developed a cost-effective Cardiothoracic (CT) Surgery trainer and assessed its potential for improving technical and interprofessional skills during an emergent simulated OT. We modified a commercially available mannequin torso with artificial tissue models to create a custom CT Surgery trainer. The trainer's feasibility for simulating emergent OT was tested using a multidisciplinary CT team in three consecutive in situ simulations. Five discretely observable milestones were identified as requisite steps in carrying out an emergent OT; namely (1) diagnosis and declaration of a code situation, (2) arrival of the code cart, (3) arrival of the thoracotomy tray, (4) initiation of the thoracotomy incision, and (5) defibrillation of a simulated heart. The time required for a team to achieve each discrete step was measured by an independent observer over the course of each OT simulation trial and compared. Over the course of the three OT simulation trials conducted in the coronary care unit, there was an average reduction of 29.5% (P < 0.05) in the times required to achieve the five critical milestones. The time required to complete the whole OT procedure improved by 7 min and 31 s from the initial to the final trial-an overall improvement of 40%. In our preliminary evaluation, the CT Surgery trainer appears to be useful for improving team performance during a simulated emergent bedside OT in the coronary care unit. Copyright © 2015 Elsevier Inc. All rights reserved.
Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity
Gordiz, Kiarash; Singh, David J.; Henry, Asegun
2015-01-29
In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times and exhibits similar overall computational effort.
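A toy contrast between the two sampling strategies (not the paper's actual MD workflow) can be made with a Green-Kubo-style integral of an autocorrelation function, using an Ornstein-Uhlenbeck process as a stand-in for a heat-flux signal:

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_trajectory(n_steps, dt=0.01, tau=1.0, sigma=1.0):
    """Euler-Maruyama for an Ornstein-Uhlenbeck process (stand-in for a flux signal)."""
    x = np.empty(n_steps)
    x[0] = rng.normal(0.0, sigma)               # start from the stationary distribution
    for i in range(1, n_steps):
        x[i] = x[i-1] - (x[i-1] / tau) * dt + sigma * np.sqrt(2 * dt / tau) * rng.normal()
    return x

def gk_integral(x, dt=0.01, n_lags=400):
    """Green-Kubo-style integral of the autocorrelation function of x."""
    x = x - x.mean()
    acf = np.array([np.mean(x[:len(x) - k] * x[k:]) for k in range(n_lags)])
    return np.trapz(acf, dx=dt)

# Time sampling: one long trajectory, necessarily generated sequentially.
long_run = gk_integral(ou_trajectory(200_000))

# Ensemble sampling: many short runs from independent initial conditions;
# in a real code these could be executed in parallel, unlike the long run.
short_runs = [gk_integral(ou_trajectory(20_000)) for _ in range(10)]
print(long_run, np.mean(short_runs))
```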
NASA Astrophysics Data System (ADS)
Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca
2017-12-01
An important ingredient in numerical modelling of high temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating the parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms considerably reduces the simulation time step, since stability requires a step that scales with the square of the grid resolution (Δx²). Although an implicit scheme relaxes the stability constraint, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in the literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tuneable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over the first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated up to more than 10⁴ processors.
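A minimal sketch of the RKL2 update applied to 1-D diffusion follows, with stage coefficients as published by Meyer, Balsara & Aslam (2012); the grid, diffusivity, and stage count are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rkl2_step(u, L, dt, s):
    """One RKL2 super-step of du/dt = L(u) using s Legendre stages
    (coefficients follow Meyer, Balsara & Aslam 2012)."""
    b = np.empty(s + 1)
    b[:3] = 1.0 / 3.0
    j = np.arange(2, s + 1)
    b[2:] = (j**2 + j - 2.0) / (2.0 * j * (j + 1.0))
    a = 1.0 - b
    w1 = 4.0 / (s**2 + s - 2.0)

    y_prev2 = u.copy()                       # Y_0
    Ly0 = L(u)
    y_prev = u + b[1] * w1 * dt * Ly0        # Y_1
    for k in range(2, s + 1):
        mu = (2.0 * k - 1.0) / k * b[k] / b[k - 1]
        nu = -(k - 1.0) / k * b[k] / b[k - 2]
        mu_t = mu * w1
        gam_t = -a[k - 1] * mu_t
        y = (mu * y_prev + nu * y_prev2 + (1.0 - mu - nu) * u
             + mu_t * dt * L(y_prev) + gam_t * dt * Ly0)
        y_prev2, y_prev = y_prev, y
    return y_prev

# 1-D diffusion operator on a periodic grid (kappa, dx are hypothetical constants).
kappa, dx = 1.0, 0.01
Lop = lambda v: kappa * (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx**2

u = np.exp(-((np.linspace(0, 1, 100) - 0.5) / 0.05)**2)
dt_expl = 0.5 * dx**2 / kappa            # explicit parabolic stability limit
s = 9                                    # s stages permit dt up to (s^2+s-2)/4 times dt_expl
u = rkl2_step(u, Lop, 0.9 * dt_expl * (s**2 + s - 2) / 4.0, s)
```

The payoff is visible in the last two lines: the super-step covers roughly s²/4 explicit steps with only s evaluations of the operator.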
Magnetically Enhanced Solid-Liquid Separation
NASA Astrophysics Data System (ADS)
Rey, C. M.; Keller, K.; Fuchs, B.
2005-07-01
DuPont is developing an entirely new method of solid-liquid filtration involving the use of magnetic fields and magnetic field gradients. The new hybrid process, entitled Magnetically Enhanced Solid-Liquid Separation (MESLS), is designed to improve the de-watering kinetics and reduce the residual moisture content of solid particulates mechanically separated from liquid slurries. Gravitation, pressure, temperature, centrifugation, and fluid dynamics have dictated traditional solid-liquid separation for the past 50 years. The introduction of an external field (i.e. the magnetic field) offers the promise to manipulate particle behavior in an entirely new manner, which leads to increased process efficiency. Traditional solid-liquid separation typically consists of two primary steps. The first is a mechanical step in which the solid particulate is separated from the liquid using e.g. gas pressure through a filter membrane, centrifugation, etc. The second step is a thermal drying process, which is required due to imperfect mechanical separation. The thermal drying process is 100-200 times less energy efficient than the mechanical step. Since enormous volumes of materials are processed each year, more efficient mechanical solid-liquid separations can be leveraged into dramatic reductions in overall energy consumption by reducing downstream drying requirements. Using DuPont's MESLS process, initial test results showed four very important effects of the magnetic field on the solid-liquid filtration process: 1) reduction of the time to reach gas breakthrough, 2) less loss of solid into the filtrate, 3) reduction of the (solids) residual moisture content, and 4) acceleration of the de-watering kinetics. These test results and their potential impact on future commercial solid-liquid filtration are discussed. New applications can be found in mining, chemical and bioprocesses.
Double-Vacuum-Bag Process for Making Resin-Matrix Composites
NASA Technical Reports Server (NTRS)
Bradford, Larry J.
2007-01-01
A double-vacuum-bag process has been devised as a superior alternative to a single-vacuum-bag process used heretofore in making laminated fiber-reinforced resin-matrix composite-material structural components. This process is applicable to broad classes of high-performance matrix resins including polyimides and phenolics that emit volatile compounds (solvents and volatile by-products of resin-curing chemical reactions) during processing. The superiority of the double-vacuum-bag process lies in enhanced management of the volatile compounds. Proper management of volatiles is necessary for making composite-material components of high quality: if not removed and otherwise properly managed, volatiles can accumulate in interior pockets as resins cure, thereby forming undesired voids in the finished products. The curing cycle for manufacturing a composite laminate containing a reactive resin matrix usually consists of a two-step ramp-and-hold temperature profile and an associated single-step pressure profile as shown in Figure 1. The lower-temperature ramp-and-hold step is known in the art as the B stage. During the B stage, prepregs are heated and volatiles are generated. Because pressure is not applied at this stage, volatiles are free to escape. Pressure is applied during the higher-temperature ramp-and-hold step to consolidate the laminate and impart desired physical properties to the resin matrix. The residual volatile content and fluidity of the resin at the beginning of application of consolidation pressure are determined by the temperature and time parameters of the B stage. Once the consolidation pressure is applied, residual volatiles are locked in. In order to produce a void-free, high-quality laminate, it is necessary to design the curing cycle to obtain the required residual fluidity and the required temperature at the time of application of the consolidation pressure.
Winkler, Cornelia; Duma, M N; Popp, W; Sack, H; Budach, V; Molls, M; Kampfer, S
2014-10-01
The technical progress in radiotherapy in recent years has been tremendous. This also implies a change in human and time resources. However, there is a lack of data on this topic. Therefore, the DEGRO initiated several studies in the QUIRO project on this subject. The present publication focuses on results for tomotherapy systems and compares them with other IMRT techniques. Over a period of several months, time allocation was documented using a standard form at two university hospitals. The required time for individual steps in the treatment planning process was recorded by the involved professional groups (physicist, technician, and physician) themselves. Time monitoring at the treatment machines was performed by auxiliary employees (student research assistants). Evaluation of the data was performed for all recorded data as well as by tumor site. A comparison was made between the two involved institutions. A total of 1,691 records were analyzed: 148 from head and neck (H&N) tumors, 460 from prostate cancer, 136 from breast cancer, and 947 from other tumor entities. Averaged over both centers, definition of the target volumes for H&N tumors took a radiation oncology specialist 75 min, while physical treatment planning took a physicist 214 min. For prostate carcinomas, the times were 60 and 147 min, respectively, and for the group of other entities 63 and 192 min, respectively. For the first radiation treatment, the occupancy time of the linear accelerator room was 31, 26, and 30 min for each entity (H&N, prostate, other entities, respectively). For routine treatments, 22, 18, and 21 min were needed for the respective entities. Major differences in the time required for the individual steps were observed between the two centers. This study gives an overview of the time and personnel requirements in radiation therapy using a tomotherapy system. The most representative analysis could be done for the room occupancy times during treatment in both centers. Due to the partly small amount of data and differing planning workflows between the two centers, it is problematic to draw a firm conclusion with regard to planning times. Overall, the time required for tomotherapy treatment and planning is slightly higher compared with other IMRT techniques.
Method of Simulating Flow-Through Area of a Pressure Regulator
NASA Technical Reports Server (NTRS)
Hass, Neal E. (Inventor); Schallhorn, Paul A. (Inventor)
2011-01-01
The flow-through area of a pressure regulator positioned in a branch of a simulated fluid flow network is generated. A target pressure is defined downstream of the pressure regulator. A projected flow-through area is generated as a non-linear function of (i) target pressure, (ii) flow-through area of the pressure regulator for a current time step and a previous time step, and (iii) pressure at the downstream location for the current time step and previous time step. A simulated flow-through area for the next time step is generated as a sum of (i) flow-through area for the current time step, and (ii) a difference between the projected flow-through area and the flow-through area for the current time step multiplied by a user-defined rate control parameter. These steps are repeated for a sequence of time steps until the pressure at the downstream location is approximately equal to the target pressure.
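The patent does not spell out the non-linear projection, so the sketch below assumes a secant-style form built from the current and previous (area, pressure) pairs, with the user-defined rate control parameter damping each update; all names and the toy pressure model are hypothetical.

```python
def regulator_area_iteration(simulate_pressure, A0, A_prev, P_target,
                             k=0.5, tol=1e-4, max_steps=200):
    """Iterate the regulator flow-through area until downstream pressure
    approaches the target. The secant-style projection is an assumed form of
    the patent's 'non-linear function'; k is the rate control parameter.
    Assumes successive pressures differ (no division-by-zero guard)."""
    A_cur = A0
    P_prev = simulate_pressure(A_prev)
    for _ in range(max_steps):
        P_cur = simulate_pressure(A_cur)
        if abs(P_cur - P_target) < tol:
            return A_cur
        # Projected area from the last two (area, pressure) pairs.
        A_proj = A_prev + (A_cur - A_prev) * (P_target - P_prev) / (P_cur - P_prev)
        A_next = A_cur + k * (A_proj - A_cur)   # damped step toward the projection
        A_prev, P_prev, A_cur = A_cur, P_cur, A_next
    return A_cur

# Toy usage: downstream pressure falls as the valve opens (hypothetical model).
A = regulator_area_iteration(lambda A: 100.0 / (1.0 + A),
                             A0=0.5, A_prev=0.4, P_target=50.0)
```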
Text-based Analytics for Biosurveillance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charles, Lauren E.; Smith, William P.; Rounds, Jeremiah
The ability to prevent, mitigate, or control a biological threat depends on how quickly the threat is identified and characterized. Ensuring the timely delivery of data and analytics is an essential aspect of providing adequate situational awareness in the face of a disease outbreak. This chapter outlines an analytic pipeline for supporting an advanced early warning system that can integrate multiple data sources and provide situational awareness of potential and occurring disease situations. The pipeline includes real-time automated data analysis founded on natural language processing (NLP), semantic concept matching, and machine learning techniques, to enrich content with metadata related to biosurveillance. Online news articles are presented as an example use case for the pipeline, but the processes can be generalized to any textual data. In this chapter, the mechanics of a streaming pipeline are briefly discussed as well as the major steps required to provide targeted situational awareness. The text-based analytic pipeline includes various processing steps as well as identifying article relevance to biosurveillance (e.g., relevance algorithm) and article feature extraction (who, what, where, why, how, and when).
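As an illustration of the relevance step, a minimal text classifier could be trained along the following lines; the articles, labels, and model choice are hypothetical stand-ins for the chapter's NLP and machine learning components, not its actual algorithm.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = relevant to biosurveillance, 0 = not.
articles = ["officials confirm avian influenza outbreak at poultry farm",
            "city council approves new stadium funding",
            "hospital reports cluster of unexplained respiratory illness",
            "tech company announces quarterly earnings"]
labels = [1, 0, 1, 0]

# TF-IDF features plus a linear classifier as a simple relevance scorer.
relevance = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
relevance.fit(articles, labels)

# Score a new article streaming through the pipeline.
print(relevance.predict_proba(["novel virus detected in travelers"])[0, 1])
```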
On reliable control system designs. Ph.D. Thesis; [actuators
NASA Technical Reports Server (NTRS)
Birdwell, J. D.
1978-01-01
A mathematical model for use in the design of reliable multivariable control systems is discussed with special emphasis on actuator failures and necessary actuator redundancy levels. The model consists of a linear time invariant discrete time dynamical system. Configuration changes in the system dynamics are governed by a Markov chain that includes transition probabilities from one configuration state to another. The performance index is a standard quadratic cost functional, over an infinite time interval. The actual system configuration can be deduced with a one step delay. The calculation of the optimal control law requires the solution of a set of highly coupled Riccati-like matrix difference equations. Results can be used for off-line studies relating the open loop dynamics, required performance, actuator mean time to failure, and functional or identical actuator redundancy, with and without feedback gain reconfiguration strategies.
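The "highly coupled Riccati-like matrix difference equations" can be sketched as a value iteration for a Markov jump linear-quadratic problem. The matrices below are hypothetical, and the sketch assumes the current configuration is known exactly, simplifying away the paper's one-step observation delay.

```python
import numpy as np

def coupled_riccati(A, B, Q, R, Pmat, n_iter=500):
    """Value-iterate the coupled Riccati-like difference equations of a
    Markov jump LQ problem: one cost-to-go matrix P_i per configuration i,
    coupled through the transition probabilities Pmat[i, j]."""
    m = len(A)                       # number of Markov configurations
    n = A[0].shape[0]
    P = [np.zeros((n, n)) for _ in range(m)]
    for _ in range(n_iter):
        P_new = []
        for i in range(m):
            # Expected cost-to-go over next configurations (the coupling).
            Pe = sum(Pmat[i, j] * P[j] for j in range(m))
            K = np.linalg.solve(R + B[i].T @ Pe @ B[i], B[i].T @ Pe @ A[i])
            P_new.append(Q + A[i].T @ Pe @ (A[i] - B[i] @ K))
        P = P_new
    return P

# Two configurations: nominal actuator vs. degraded actuator (hypothetical).
A = [np.array([[1.0, 0.1], [0.0, 1.0]])] * 2
B = [np.array([[0.0], [1.0]]), np.array([[0.0], [0.2]])]
Q, R = np.eye(2), np.array([[1.0]])
Pmat = np.array([[0.99, 0.01], [0.0, 1.0]])   # actuator failure is absorbing
P = coupled_riccati(A, B, Q, R, Pmat)
```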
Lee, Byoung-Hee
2016-04-01
[Purpose] This study investigated the effects of real-time feedback using infrared camera recognition technology-based augmented reality in gait training for children with cerebral palsy. [Subjects] Two subjects with cerebral palsy were recruited. [Methods] In this study, augmented reality based real-time feedback training was conducted for the subjects in two 30-minute sessions per week for four weeks. Spatiotemporal gait parameters were used to measure the effect of augmented reality-based real-time feedback training. [Results] Velocity, cadence, bilateral step and stride length, and functional ambulation improved after the intervention in both cases. [Conclusion] Although additional follow-up studies of the augmented reality based real-time feedback training are required, the results of this study demonstrate that it improved the gait ability of two children with cerebral palsy. These findings suggest a variety of applications of conservative therapeutic methods which require future clinical trials.
Tormo Calandín, C; Manrique Martínez, I
2002-06-01
Children who require cardiopulmonary resuscitation present high mortality and morbidity. The few studies that have been published on this subject use different terminology and methodology in data collection, which makes comparisons, evaluation of efficacy, the performance of meta-analyses, etc., difficult. Consequently, standardized data collection, both in clinical studies on cardiorespiratory arrest and in cardiopulmonary resuscitation, is required in the pediatric age group. The Spanish Group of Pediatric Cardiopulmonary Resuscitation emphasizes that recommendations must be simple and easy to understand. The first step in the elaboration of guidelines on data collection is to develop uniform definitions (glossary of terms). The second step comprises the so-called time intervals, that is, time periods between two events. To describe the intervals of cardiorespiratory arrest, different clocks are used: the patient's watch, that of the ambulance, the interval between call and response, etc. Thirdly, a series of clinical results are gathered to determine whether the efforts of cardiopulmonary resuscitation have a positive effect on the patient, the patient's family and society. With the information gathered, a registry of data is made that includes the patient's personal details, general data of the cardiopulmonary resuscitation, treatment, times of performance and definitive patient outcome.
Toward Modeling the Intrinsic Complexity of Test Problems
ERIC Educational Resources Information Center
Shoufan, Abdulhadi
2017-01-01
The concept of intrinsic complexity explains why different problems of the same type, tackled by the same problem solver, can require different times to solve and yield solutions of different quality. This paper proposes a general four-step approach that can be used to establish a model for the intrinsic complexity of a problem class in terms of…
Rep. Ross, Dennis A. [R-FL-12
2011-06-01
House - 06/20/2011 Referred to the Subcommittee on Federal Workforce, U.S. Postal Service, and Labor Policy.
Sen. Coburn, Tom [R-OK
2013-07-23
Senate - 07/23/2013 Read twice and referred to the Committee on Homeland Security and Governmental Affairs.
Civil Service Training in Kazakhstan: The Implementation of New Approaches
ERIC Educational Resources Information Center
Suleimenova, Gulimzhan
2016-01-01
Kazakhstan is one of the few countries in Central Asia that, in a historically short period of time, has managed to take a strong position in the international arena. However, in a rapidly changing world, the country has to face challenges driven by new requirements for the professional level of civil servants. Therefore, the 100 Steps Government…
7 CFR 1.640 - What are the requirements for prehearing conferences?
Code of Federal Regulations, 2010 CFR
2010-01-01
... and the schedule of remaining steps in the hearing process. (e) Failure to attend. Unless the ALJ... prehearing conference with the parties at the time specified in the docketing notice under § 1.630, on or... exclude issues that do not qualify for review as factual, material, and disputed; (ii) To consider the...
MaRIE: A facility for time-dependent materials science at the mesoscale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, Cris William; Kippen, Karen Elizabeth
To meet new and emerging national security issues the Laboratory is stepping up to meet another grand challenge—transitioning from observing to controlling a material’s performance. This challenge requires the best of experiment, modeling, simulation, and computational tools. MaRIE is the Laboratory’s proposed flagship experimental facility intended to meet the challenge.
One-time pad, complexity of verification of keys, and practical security of quantum cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N., E-mail: sergei.molotkov@gmail.com
2016-11-15
A direct relation between the complexity of the complete verification of keys, which is one of the main criteria of security in classical systems, and a trace distance used in quantum cryptography is demonstrated. Bounds for the minimum and maximum numbers of verification steps required to determine the actual key are obtained.
Recruiting High Quality Students through a Fifth Year Program: It Can Be Done.
ERIC Educational Resources Information Center
James, Terry L.; And Others
This paper describes the process of recruiting high quality liberal arts graduates into teacher preparation programs at Memphis State University, Tennessee. The two graduate programs, the Master of Arts in Teaching and the Lyndhurst, both require full-time attendance, have lock-step delivery systems and extended internships, and are intensive. In…
Sen. Coburn, Tom [R-OK
2010-03-23
Senate - 03/24/2010 Read the second time. Placed on Senate Legislative Calendar under General Orders. Calendar No. 334.
Developing Symbolic Capacity One Step at a Time
ERIC Educational Resources Information Center
Huttenlocher, Janellen; Vasilyeva, Marina; Newcombe, Nora; Duffy, Sean
2008-01-01
The present research examines the ability of children as young as 4 years to use models in tasks that require scaling of distance along a single dimension. In Experiment 1, we found that tasks involving models are similar in difficulty to those involving maps that we studied earlier (Huttenlocher, J., Newcombe, N., & Vasilyeva, M. (1999). Spatial…
Reticles, write time, and the need for speed
NASA Astrophysics Data System (ADS)
Ackmann, Paul W.; Litt, Lloyd C.; Ning, Guo Xiang
2014-10-01
Historical data indicates reticle write times are increasing node-to-node. The cost of mask sets is increasing driven by the tighter requirements and more levels. The regular introduction of new generations of mask patterning tools with improved performance is unable to fully compensate for the increased data and complexity required. Write time is a primary metric that drives mask fabrication speed. Design (Raw data) is only the first step in the process and many interactions between mask and wafer technology such as OPC used, OPC efficiency for writers, fracture engines, and actual field size used drive total write time. Yield, technology, and inspection rules drive the remaining raw cycle time. Yield can be even more critical for speed of delivery as it drives re-writes and wasted time. While intrinsic process yield is important, repair capability is the reason mask delivery is still able to deliver 100% good reticles to the fab. Advanced nodes utilizing several layers of multiple patterning may require mask writer tool dedication to meet image placement specifications. This will increase the effective mask cycle time for a layer mask set and drive the need for additional mask write capability in order to deliver masks at the rate required by the wafer fab production schedules.
Pedersen, E S L; Danquah, I H; Petersen, C B; Tolstrup, J S
2016-12-03
Accelerometers can obtain precise measurements of movements during the day. However, the individual activity pattern varies from day to day, and there is limited evidence on the number of measurement days needed to obtain sufficient reliability. The aim of this study was to examine variability in accelerometer-derived data on sedentary behaviour and physical activity at work and in leisure time on weekdays among Danish office employees. We included control participants (n = 135) from the Take a Stand! Intervention; a cluster randomized controlled trial conducted in 19 offices. Sitting time and physical activity were measured using an ActiGraph GT3X+ fixed on the thigh and data were processed using Acti4 software. Variability was examined for sitting time, standing time, steps and time spent in moderate-to-vigorous physical activity (MVPA) per day by multilevel mixed linear regression modelling. Results of this study showed that the number of days needed to obtain a reliability of 80% when measuring sitting time was 4.7 days for work and 5.5 days for leisure time. For physical activity at work, 4.0 days and 4.2 days were required to measure steps and MVPA, respectively. During leisure time, more monitoring time was needed to reliably estimate physical activity (6.8 days for steps and 5.8 days for MVPA). The number of measurement days needed to reliably estimate activity patterns was greater for leisure time than for work time. The domain-specific variability is of great importance to researchers and health promotion workers planning to use objective measures of sedentary behaviour and physical activity. Clinical trials: NCT01996176.
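Day counts of this kind follow from single-day reliability via the Spearman-Brown prophecy formula. The abstract does not state its exact computation, so the sketch below is a plausible reconstruction with an assumed single-day ICC.

```python
def days_needed(icc_single_day, target_reliability=0.8):
    """Spearman-Brown prophecy: number of measurement days k such that the
    k-day average reaches the target reliability, given a single-day ICC:
    k = [R/(1-R)] * [(1-r)/r]."""
    r, t = icc_single_day, target_reliability
    return t * (1.0 - r) / (r * (1.0 - t))

# If the single-day ICC for work sitting time were about 0.46, roughly
# 4.7 days would be needed for 80% reliability, matching the reported value.
print(days_needed(0.46))
```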
Kielar, Ania Z; El-Maraghi, Robert H; Schweitzer, Mark E
2010-08-01
In Canada, equal access to health care is the goal, but this is associated with wait times. Wait times should be fair rather than uniform, taking into account the urgency of the problem as well as the time an individual has already waited. In November 2004, the Ontario government began addressing this issue. One of the first steps was to institute benchmarks reflecting "acceptable" wait times for CT and MRI. A public Web site was developed indicating wait times at each Local Health Integration Network. Since starting the Wait Time Information Program, there has been a sustained reduction in wait times for Ontarians requiring CT and MRI. The average wait time for a CT scan went from 81 days in September 2005 to 47 days in September 2009. For MRI, the resulting wait time was reduced from 120 to 105 days. Increased patient scans have been achieved by purchasing new CT and MRI scanners, expanding hours of operation, and improving patient throughput using strategies learned from the Lean initiative, based on Toyota's manufacturing philosophy for car production. Institution-specific changes in booking procedures have been implemented. Concurrently, government guidelines have been developed to ensure accountability for monies received. The Ontario Wait Time Information Program is an innovative first step in improving fair and equitable access to publicly funded imaging services. There have been reductions in wait times for both CT and MRI. As various new processes are implemented, further review will be necessary for each step to determine their individual efficacy. Copyright 2010 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Geostationary Operational Environmental Satellite (GOES-N report)
NASA Technical Reports Server (NTRS)
1991-01-01
The Advanced Missions Analysis Office (AMAO) of GSFC has completed a study of the Geostationary Operational Environmental Satellites (GOES-N) series. The feasibility, risks, schedules, and associated costs of advanced space and ground system concepts responsive to National Oceanic and Atmospheric Administration (NOAA) requirements were evaluated. The study is the first step in a multi-phased procurement effort that is expected to result in launch ready hardware in the post 2000 time frame. This represents the latest activity of GSFC in translating meteorological requirements of NOAA into viable space systems in geosynchronous earth orbits (GEO). GOES-N represents application of the latest spacecraft, sensor, and instrument technologies to enhance NOAA meteorological capabilities via remote and in-situ sensing from GEO. The GOES-N series, if successfully developed, could become another significant step in NOAA weather forecasting space systems, meeting increasingly complex emerging national needs for that agency's services.
Methods, systems and devices for detecting and locating ferromagnetic objects
Roybal, Lyle Gene [Idaho Falls, ID; Kotter, Dale Kent [Shelley, ID; Rohrbaugh, David Thomas [Idaho Falls, ID; Spencer, David Frazer [Idaho Falls, ID
2010-01-26
Methods for detecting and locating ferromagnetic objects in a security screening system. One method includes a step of acquiring magnetic data that includes magnetic field gradients detected during a period of time. Another step includes representing the magnetic data as a function of the period of time. Another step includes converting the magnetic data to being represented as a function of frequency. Another method includes a step of sensing a magnetic field for a period of time. Another step includes detecting a gradient within the magnetic field during the period of time. Another step includes identifying a peak value of the gradient detected during the period of time. Another step includes identifying a portion of time within the period of time that represents when the peak value occurs. Another step includes configuring the portion of time over the period of time to represent a ratio.
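A compact numerical reading of the claimed steps might look like the following; the signal, sampling rate, and feature choices are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def analyze_gradient(field, dt):
    """Given magnetometer samples over a screening window, compute the
    quantities named in the claims: the gradient over time, its
    frequency-domain representation, the peak value, and the ratio of the
    peak's time to the window length."""
    t_total = len(field) * dt
    grad = np.gradient(field, dt)              # field gradient vs. time
    spectrum = np.abs(np.fft.rfft(grad))       # gradient as a function of frequency
    freqs = np.fft.rfftfreq(len(grad), dt)
    i_peak = np.argmax(np.abs(grad))
    return {"peak_value": grad[i_peak],
            "peak_time_ratio": (i_peak * dt) / t_total,   # portion of the window
            "dominant_freq": freqs[np.argmax(spectrum[1:]) + 1]}

# Toy signal: a ferromagnetic object passing the sensor late in the window.
t = np.arange(0.0, 2.0, 0.01)
sig = np.exp(-((t - 1.2) / 0.05)**2) \
      + 0.01 * np.random.default_rng(1).normal(size=len(t))
print(analyze_gradient(sig, 0.01))
```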
Earthquake models using rate and state friction and fast multipoles
NASA Astrophysics Data System (ADS)
Tullis, T.
2003-04-01
The most realistic current earthquake models employ laboratory-derived non-linear constitutive laws. These are the rate and state friction laws having both a non-linear viscous or direct effect and an evolution effect in which frictional resistance depends on time of stationary contact and has a memory of past slip velocity that fades with slip. The frictional resistance depends on the log of the slip velocity as well as the log of stationary hold time, and the fading memory involves an approximately exponential decay with slip. Due to the nonlinearity of these laws, analytical earthquake models are not attainable and numerical models are needed. The situation is even more difficult if true dynamic models are sought that deal with inertial forces and slip velocities on the order of 1 m/s as are observed during dynamic earthquake slip. Additional difficulties that exist if the dynamic slip phase of earthquakes is modeled arise from two sources. First, many physical processes might operate during dynamic slip, but they are only poorly understood, the relative importance of the processes is unknown, and the processes are even more nonlinear than those described by the current rate and state laws. Constitutive laws describing such behaviors are still being developed. Second, treatment of inertial forces and the influence that dynamic stresses from elastic waves may have on slip on the fault requires keeping track of the history of slip on remote parts of the fault as far into the past as it takes waves to travel from there. This places even more stringent requirements on computer time. Challenges for numerical modeling of complete earthquake cycles are that both time steps and mesh sizes must be small. Time steps must be milliseconds during dynamic slip, and yet models must represent earthquake cycles 100 years or more in length; methods using adaptive step sizes are essential. Element dimensions need to be on the order of meters, both to approximate continuum behavior adequately and to model microseismicity as well as large earthquakes. In order to model significant sized earthquakes this requires millions of elements. Modeling methods like the boundary element method that involve Green's functions normally require computation times that increase with the number N of elements squared, so using large N becomes impossible. We have adapted the Fast Multipole method to this problem, in which the influences of sufficiently remote elements are grouped together and the elements are indexed such that the computations are more efficient when run on parallel computers. Compute time varies with N log N rather than N squared. Computer programs are available that use this approach (http://www.servogrid.org/slide/GEM/PARK). Whether the multipole approach can be adapted to dynamic modeling is unclear.
Large time-step stability of explicit one-dimensional advection schemes
NASA Technical Reports Server (NTRS)
Leonard, B. P.
1993-01-01
There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = uΔt/Δx, must be less than or equal to one, for stability in the von Neumann sense. This puts severe limitations on the time-step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability, but are more expensive per time-step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c < 1, with a complex amplitude ratio G(c), can be easily extended to arbitrarily large c. The complex amplitude ratio is then given by exp(-iNθ) G(Δc), where N is the integer part of c, and Δc = c - N (less than 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all, but, rather, a 'range restriction' on the 'pieces' in a piece-wise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of (large Δt) total-variation-diminishing (TVD) constraints.
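The extension is easy to demonstrate in code: transport by the integer part of the Courant number is an exact shift, and the base scheme only ever handles the fractional remainder. A sketch with first-order upwind on a periodic grid (grid and profile are illustrative):

```python
import numpy as np

def upwind_large_dt(u, c):
    """Advect u one step at Courant number c (u > 0 flow, periodic grid):
    shift by N = int(c) cells exactly, then apply first-order upwind with
    the fractional Courant number dc = c - N. Because the base scheme only
    sees dc < 1, any c is von Neumann stable."""
    N = int(np.floor(c))
    dc = c - N
    u_shifted = np.roll(u, N)                  # exact transport by N cells
    # Upwind for positive velocity: u_i <- (1 - dc) u_i + dc u_{i-1}.
    return (1.0 - dc) * u_shifted + dc * np.roll(u_shifted, 1)

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.exp(-((x - 0.3) / 0.05)**2)
u_new = upwind_large_dt(u, c=3.7)              # Courant number well above 1
```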
Gynecologic oncology group strategies to improve timeliness of publication.
Bialy, Sally; Blessing, John A; Stehman, Frederick B; Reardon, Anne M; Blaser, Kim M
2013-08-01
The Gynecologic Oncology Group (GOG) is a multi-institution cooperative group funded by the National Cancer Institute to conduct clinical trials encompassing clinical and basic scientific research in gynecologic malignancies. These results are disseminated via publication in peer-reviewed journals. This process requires collaboration of numerous investigators located in diverse cancer research centers. Coordination of manuscript development is positioned within the Statistical and Data Center (SDC), thus allowing the SDC personnel to manage the process and refine strategies to promote earlier dissemination of results. A major initiative to improve timeliness utilizing the assignment, monitoring, and enforcement of deadlines for each phase of manuscript development is the focus of this investigation. Document improvement in timeliness via comparison of deadline compliance and time to journal submission due to expanded administrative and technologic initiatives implemented in 2006. Major steps in the publication process include generation of first draft by the First Author and submission to SDC, Co-author review, editorial review by Publications Subcommittee, response to journal critique, and revision. Associated with each step are responsibilities of First Author to write or revise, collaborating Biostatistician to perform analysis and interpretation, and assigned SDC Clinical Trials Editorial Associate to format/revise according to journal requirements. Upon the initiation of each step, a deadline for completion is assigned. In order to improve efficiency, a publications database was developed to track potential steps in manuscript development that enables the SDC Director of Administration and the Publications Subcommittee Chair to assign, monitor, and enforce deadlines. They, in turn, report progress to Group Leadership through the Operations Committee. The success of the strategies utilized to improve the GOG publication process was assessed by comparing the timeliness of each potential step in the development of primary Phase II manuscripts during 2003-2006 versus 2007-2010. Improvement was noted in 10 of 11 identified steps resulting in a cumulative average improvement of 240 days from notification of data maturity to First Author through first submission to a journal. Moreover, the average time to journal acceptance has improved by an average of 346 days. The investigation is based on only Phase II trials to ensure comparability of manuscript complexity. Nonetheless, the procedures employed are applicable to the development of any clinical trials manuscript. The assignment, monitoring, and enforcement of deadlines for all stages of manuscript development have resulted in increased efficiency and timeliness. The positioning and support of manuscript development within the SDC provide a valuable resource to authors in meeting assigned deadlines, accomplishing peer review, and complying with journal requirements.
Two-dimensional Euler and Navier-Stokes Time accurate simulations of fan rotor flows
NASA Technical Reports Server (NTRS)
Boretti, A. A.
1990-01-01
Two numerical methods are presented which describe the unsteady flow field in the blade-to-blade plane of an axial fan rotor. These methods solve the compressible, time-dependent, Euler and the compressible, turbulent, time-dependent, Navier-Stokes conservation equations for mass, momentum, and energy. The Navier-Stokes equations are written in Favre-averaged form and are closed with an approximate two-equation turbulence model with low Reynolds number and compressibility effects included. The unsteady aerodynamic component is obtained by superposing inflow or outflow unsteadiness to the steady conditions through time-dependent boundary conditions. The integration in space is performed by using a finite volume scheme, and the integration in time is performed by using k-stage Runge-Kutta schemes, k = 2, 5. The numerical integration algorithm allows the reduction of the computational cost of an unsteady simulation involving high frequency disturbances in both CPU time and memory requirements. Less than 200 sec of CPU time are required to advance the Euler equations on a computational grid made up of about 2,000 grid points during 10,000 time steps on a CRAY Y-MP computer, with a required memory of less than 0.3 megawords.
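The abstract does not give the Runge-Kutta coefficients; a common CFD choice for such k-stage schemes is the Jameson-style recursion sketched below (an illustrative assumption, not necessarily the paper's exact scheme).

```python
import numpy as np

def rk_kstage(u, residual, dt, k=5):
    """One k-stage Runge-Kutta step of du/dt = R(u), Jameson-style:
    u^(j) = u^n + alpha_j * dt * R(u^(j-1)), with alpha_j = 1/(k - j + 1),
    a low-storage smoother widely used with finite-volume CFD codes."""
    u0 = u.copy()
    for j in range(1, k + 1):
        u = u0 + (dt / (k - j + 1)) * residual(u)
    return u

# Toy residual: linear advection on a periodic grid with central differences.
dx = 0.01
R = lambda v: -(np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)
u = np.sin(2 * np.pi * np.linspace(0, 1, 100, endpoint=False))
u = rk_kstage(u, R, dt=0.004, k=5)
```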
NASA Astrophysics Data System (ADS)
Steingroewer, Juliane; Bley, Thomas; Bergemann, Christian; Boschke, Elke
2007-04-01
Analyses of food-borne pathogens are of great importance in order to minimize the health risk for customers. Thus, very sensitive and rapid detection methods are required. Current conventional culture techniques are very time consuming. Modern immunoassays and biochemical analyses also require pre-enrichment steps, resulting in a turnaround time of at least 24 h. Biomagnetic separation (BMS) is a promising, more rapid method. In this study we describe the isolation of highly affine and specific peptides from a phage-peptide library, which, combined with BMS, allows the detection of Salmonella spp. with a sensitivity similar to that of immunomagnetic separation using antibodies.
Aban, Inmaculada B.; Wolfe, Gil I.; Cutter, Gary R.; Kaminski, Henry J.; Jaretzki, Alfred; Minisman, Greg; Conwit, Robin; Newsom-Davis, John
2008-01-01
We present our experience planning and launching a multinational, NIH/NINDS-funded study of thymectomy in myasthenia gravis. We highlight the additional steps required for international sites and analyze and contrast the time investment required to bring U.S. and non-U.S. sites into full regulatory compliance. Results show the mean time for non-U.S. centers to achieve regulatory approval was significantly longer (mean 13.4 ± 0.96 months) than for U.S. sites (9.67 ± 0.74 months; p = 0.003, t-test). The delay for non-U.S. sites was mainly attributable to Federalwide Assurance certification and State Department clearance. PMID:18675464
Bezodis, Ian N; Kerwin, David G; Cooper, Stephen-Mark; Salo, Aki I T
2017-11-15
To understand how training periodization influences sprint performance and key step characteristics over an extended training period in an elite sprint training group. Four sprinters were studied during five months of training. Step velocities, step lengths and step frequencies were measured from video of the maximum velocity phase of training sprints. Bootstrapped mean values were calculated for each athlete for each session and 139 within-athlete, between-session comparisons were made with a repeated measures ANOVA. As training progressed, a link in the changes in velocity and step frequency was maintained. There were 71 between-session comparisons with a change in step velocity yielding at least a large effect size (>1.2), of which 73% had a correspondingly large change in step frequency in the same direction. Within-athlete mean session step length remained relatively constant throughout. Reductions in step velocity and frequency occurred during training phases of high volume lifting and running, with subsequent increases in step velocity and frequency happening during phases of low volume lifting and high intensity sprint work. The importance of step frequency over step length to the changes in performance within a training year was clearly evident for the sprinters studied. Understanding the magnitudes and timings of these changes in relation to the training program is important for coaches and athletes. The underpinning neuro-muscular mechanisms require further investigation, but are likely explained by an increase in force producing capability followed by an increase in the ability to produce that force rapidly.
Khedr, Maan; El-Sheimy, Nasser
2017-01-01
The growing market of smart devices makes them appealing for various applications. Motion tracking can be achieved using such devices, and is important for various applications such as navigation, search and rescue, health monitoring, and quality of life-style assessment. Step detection is a crucial task that affects the accuracy and quality of such applications. In this paper, a new step detection technique is proposed, which can be used for step counting and activity monitoring for health applications as well as part of a Pedestrian Dead Reckoning (PDR) system. Inertial and magnetic sensor measurements are analyzed and fused for detecting steps under varying step modes and device pose combinations using a free-moving handheld device (smartphone). Unlike most of the state of the art research in the field, the proposed technique does not require a classifier, and adaptively tunes the filters and thresholds used without the need for presets while accomplishing the task in a real-time operation manner. Testing shows that the proposed technique successfully detects steps under varying motion speeds and device use cases with an average performance of 99.6%, and outperforms some of the state of the art techniques that rely on classifiers and commercial wristband products. PMID:29117143
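A heavily simplified stand-in for such a classifier-free detector is adaptive-threshold peak picking on the accelerometer magnitude; the window sizes and threshold rule below are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def detect_steps(acc_xyz, fs, min_interval=0.3):
    """Count steps from 3-axis accelerometer data: work on the magnitude so
    the result is independent of device pose, smooth, then pick peaks above
    an adaptive (mean plus a fraction of std) threshold."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    mag = mag - mag.mean()                          # remove gravity baseline
    win = max(1, int(0.1 * fs))                     # ~100 ms moving average
    smooth = np.convolve(mag, np.ones(win) / win, mode="same")
    thresh = smooth.mean() + 0.5 * smooth.std()     # adaptive threshold
    gap = int(min_interval * fs)                    # refractory period between steps
    steps, last = [], -gap
    for i in range(1, len(smooth) - 1):
        if (smooth[i] > thresh and smooth[i] >= smooth[i - 1]
                and smooth[i] > smooth[i + 1] and i - last >= gap):
            steps.append(i)
            last = i
    return steps
```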
Paquette, Maxime R; Fuller, Jason R; Adkin, Allan L; Vallis, Lori Ann
2008-09-01
This study investigated the effects of altering the base of support (BOS) at the turn point on anticipatory locomotor adjustments during voluntary changes in travel direction in healthy young and older adults. Participants were required to walk at their preferred pace along a 3-m straight travel path and continue to walk straight ahead or turn 40 degrees to the left or right for an additional 2-m. The starting foot and occasionally the gait starting point were adjusted so that participants had to execute the turn using a cross-over step with a narrow BOS or a lead-out step with a wide BOS. Spatial and temporal gait variables, magnitudes of angular segmental movement, and timing and sequencing of body segment reorientation were similar despite executing the turn with a narrow or wide BOS. A narrow BOS during turning generated an increased step width in the step prior to the turn for both young and older adults. Age-related changes when turning included reduced step velocity and step length for older compared to young adults. Age-related changes in the timing and sequencing of body segment reorientation prior to the turn point were also observed. A reduction in walking speed and an increase in step width just prior to the turn, combined with a delay in motion of the center of mass suggests that older adults used a more cautious combined foot placement and hip strategy to execute changes in travel direction compared to young adults. The results of this study provide insight into mobility constraints during a common locomotor task in older adults.
Electric and hybrid electric vehicle study utilizing a time-stepping simulation
NASA Technical Reports Server (NTRS)
Schreiber, Jeffrey G.; Shaltens, Richard K.; Beremand, Donald G.
1992-01-01
The applicability of NASA's advanced power technologies to electric and hybrid vehicles was assessed using a time-stepping computer simulation to model electric and hybrid vehicles operating over the Federal Urban Driving Schedule (FUDS). Both the energy and power demands of the FUDS were taken into account and vehicle economy, range, and performance were addressed simultaneously. Results indicate that a hybrid electric vehicle (HEV) configured with a flywheel buffer energy storage device and a free-piston Stirling convertor fulfills the emissions, fuel economy, range, and performance requirements that would make it acceptable to the consumer. It is noted that an assessment to determine which of the candidate technologies are suited for the HEV application has yet to be made. A proper assessment should take into account the fuel economy and range, along with the driveability and total emissions produced.
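As a rough illustration of what a time-stepping vehicle simulation does with both the energy and power demands of a drive cycle, here is a minimal sketch; the speed trace and all vehicle parameters are hypothetical, and it is far simpler than the NASA tool described.

```python
import numpy as np

def drive_cycle_energy(speed_mps, dt=1.0, mass=1500.0, cd_a=0.6,
                       crr=0.009, rho=1.2, g=9.81, eta=0.85):
    """Step through a drive cycle and integrate traction energy:
    P = (m*a + 0.5*rho*CdA*v^2 + m*g*Crr) * v, divided by a drivetrain
    efficiency when motoring (regeneration ignored for simplicity)."""
    e = 0.0
    for i in range(1, len(speed_mps)):
        v = speed_mps[i]
        a = (speed_mps[i] - speed_mps[i - 1]) / dt
        force = mass * a + 0.5 * rho * cd_a * v**2 + mass * g * crr * (v > 0)
        power = force * v
        if power > 0:
            e += power / eta * dt              # battery energy drawn this step
    return e / 3.6e6                           # joules -> kWh

# Hypothetical saw-tooth stand-in for an urban cycle (not the actual FUDS trace).
v = np.concatenate([np.linspace(0, 15, 30), np.linspace(15, 0, 30)] * 10)
print(drive_cycle_energy(v))
```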
Classical and special relativity in four steps
NASA Astrophysics Data System (ADS)
Browne, K. M.
2018-03-01
The most fundamental and pedagogically useful path to the space-time transformations of both classical and special relativity is to postulate the principle of relativity, derive the generalised or Ignatowsky transformation which contains both, then apply two different second postulates that give either the Galilean or Lorentz transformation. What is new here is (a) a simple two-step derivation of the Ignatowsky transformation, (b) a second postulate of universal time which yields the Galilean transformation, and (c) a different second postulate of finite universal lightspeed to give the Lorentz transformation using a simple Ignatowsky transformation of a light wave. This method demonstrates that the fundamental difference between Galilean and Lorentz transformations is not that lightspeed is universal (which is true for both) but whether the model requires lightspeed to be infinite or finite (as once mentioned by Einstein).
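The four-step structure can be summarized compactly; the display below is an illustrative restatement (notation mine, not the paper's) of the generalized transformation and the two alternative second postulates.

```latex
% Generalised (Ignatowsky-type) transformation with one undetermined
% constant k, obtained from the relativity principle alone; the second
% postulate then fixes k.
\begin{align*}
  x' = \gamma\,(x - vt), \qquad
  t' = \gamma\,(t - kvx), \qquad
  \gamma = \frac{1}{\sqrt{1 - kv^{2}}}.
\end{align*}
% Second postulate of universal time: k = 0, hence gamma = 1 and
%   x' = x - vt,  t' = t            (the Galilean transformation).
% Second postulate of finite universal lightspeed: k = 1/c^2, giving
%   x' = gamma (x - vt),  t' = gamma (t - vx/c^2)   (the Lorentz transformation).
```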
NASA Astrophysics Data System (ADS)
Goma, Sergio R.
2015-03-01
In current times, mobile technologies are ubiquitous and the complexity of problems is continuously increasing. In the context of the advancement of engineering, we explore in this paper possible reasons that could cause a saturation in technology evolution - namely the ability to solve problems based on previous results and the ability to express solutions in a more efficient way - concluding that 'thinking outside of brain' - as in solving engineering problems that are expressed in a virtual medium due to their complexity - would benefit from mobile technology augmentation. This could be the necessary evolutionary step that would provide the efficiency required to solve new complex problems (addressing the 'running out of time' issue) and remove the communication-of-results barrier (addressing the human 'perception/expression imbalance' issue). Some consequences are discussed, as in this context artificial intelligence becomes an automation tool aid instead of a necessary next evolutionary step. The paper concludes that research in modeling as a problem-solving aid and in data visualization as a perception aid, augmented with mobile technologies, could be the path to an evolutionary step in advancing engineering.
NASA Astrophysics Data System (ADS)
Milroy, Daniel J.; Baker, Allison H.; Hammerling, Dorit M.; Jessup, Elizabeth R.
2018-02-01
The Community Earth System Model Ensemble Consistency Test (CESM-ECT) suite was developed as an alternative to requiring bitwise identical output for quality assurance. This objective test provides a statistical measurement of consistency between an accepted ensemble created by small initial temperature perturbations and a test set of CESM simulations. In this work, we extend the CESM-ECT suite with an inexpensive and robust test for ensemble consistency that is applied to Community Atmospheric Model (CAM) output after only nine model time steps. We demonstrate that adequate ensemble variability is achieved with instantaneous variable values at the ninth step, despite rapid perturbation growth and heterogeneous variable spread. We refer to this new test as the Ultra-Fast CAM Ensemble Consistency Test (UF-CAM-ECT) and demonstrate its effectiveness in practice, including its ability to detect small-scale events and its applicability to the Community Land Model (CLM). The new ultra-fast test facilitates CESM development, porting, and optimization efforts, particularly when used to complement information from the original CESM-ECT suite of tools.
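For orientation, an ensemble consistency check of this flavor can be illustrated with a few lines of code; the real CESM-ECT statistic is PCA-based, so the per-variable z-score test below is a simplified stand-in, and all array shapes are hypothetical.

```python
import numpy as np

def ultra_fast_consistency(ensemble, test_run, z_max=3.0):
    """Illustrative step-9 consistency check: flag variables whose test-run
    global mean falls too many ensemble standard deviations from the
    ensemble mean.
    ensemble: (n_members, n_vars) global means at the chosen time step
    test_run: (n_vars,) global means from the run under test"""
    mu = ensemble.mean(axis=0)
    sd = ensemble.std(axis=0, ddof=1)
    z = (test_run - mu) / sd
    failing = np.where(np.abs(z) > z_max)[0]
    return len(failing) == 0, failing

rng = np.random.default_rng(0)
ens = rng.normal(size=(100, 50))        # hypothetical 100-member ensemble
ok, bad_vars = ultra_fast_consistency(ens, rng.normal(size=50))
```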
Study of the SCC Behavior of 7075 Aluminum Alloy After One-Step Aging at 163 °C
NASA Astrophysics Data System (ADS)
Silva, G.; Rivolta, B.; Gerosa, R.; Derudi, U.
2013-01-01
For many years, 7075 aluminum alloys have been widely used, especially in applications for which high mechanical performance is required. It is well known that the alloy in the T6 condition is characterized by the highest ultimate and yield strengths but, at the same time, by poor stress corrosion cracking (SCC) resistance. For this reason, in aeronautic applications, new heat treatments have been introduced to produce T7X conditions, which are characterized by lower mechanical strength but very good SCC behavior when compared with the T6 condition. The aim of this work is to investigate the tensile properties and the SCC behavior of 7075 thick plates when submitted to a single-step aging at varying aging times. The tests were carried out according to the standards, and the data obtained from the SCC tests were analyzed quantitatively using image analysis software. The results show that, when compared with the T7X conditions, the single-step aging performed in the laboratory can produce acceptable tensile and SCC properties.
Pradhan, Madhumita A.; Blackford, John A.; Devaiah, Ballachanda N.; Thompson, Petria S.; Chow, Carson C.; Singer, Dinah S.; Simons, S. Stoney
2016-01-01
Most of the steps in, and many of the factors contributing to, glucocorticoid receptor (GR)-regulated gene induction are currently unknown. A competition assay, based on a validated chemical kinetic model of steroid hormone action, is now used to identify two new factors (BRD4 and negative elongation factor (NELF)-E) and to define their sites and mechanisms of action. BRD4 is a kinase involved in numerous initial steps of gene induction. Consistent with its complicated biochemistry, BRD4 is shown to alter both the maximal activity (Amax) and the steroid concentration required for half-maximal induction (EC50) of GR-mediated gene expression by acting at a minimum of three different kinetically defined steps. The action at two of these steps is dependent on BRD4 concentration, whereas the third step requires the association of BRD4 with P-TEFb. BRD4 is also found to bind to NELF-E, a component of the NELF complex. Unexpectedly, NELF-E modifies GR induction in a manner that is independent of the NELF complex. Several of the kinetically defined steps of BRD4 in this study are proposed to be related to its known biochemical actions. However, novel actions of BRD4 and of NELF-E in GR-controlled gene induction have been uncovered. The model-based competition assay is also unique in being able to order, for the first time, the sites of action of the various reaction components: GR < Cdk9 < BRD4 ≤ induced gene < NELF-E. This ability to order factor actions will assist efforts to reduce the side effects of steroid treatments. PMID:26504077
Burnstein, Bryan D; Steele, Russell J; Shrier, Ian
2011-01-01
Fitness testing is used frequently in many areas of physical activity, but the reliability of these measurements under real-world, practical conditions is unknown. To evaluate the reliability of specific fitness tests using the methods and time periods used in the context of real-world sport and occupational management. Cohort study. Eighteen different Cirque du Soleil shows. Cirque du Soleil physical performers who completed 4 consecutive tests (6-month intervals) and were free of injury or illness at each session (n = 238 of 701 physical performers). Performers completed 6 fitness tests on each assessment date: dynamic balance, Harvard step test, handgrip, vertical jump, pull-ups, and 60-second jump test. We calculated the intraclass correlation coefficient (ICC) and limits of agreement between baseline and each time point and the ICC over all 4 time points combined. Reliability was acceptable (ICC > 0.6) over an 18-month time period for all pairwise comparisons and all time points together for the handgrip, vertical jump, and pull-up assessments. The Harvard step test and 60-second jump test had poor reliability (ICC < 0.6) between baseline and other time points. When we excluded the baseline data and calculated the ICC for 6-month, 12-month, and 18-month time points, both the Harvard step test and 60-second jump test demonstrated acceptable reliability. Dynamic balance was unreliable in all contexts. Limit-of-agreement analysis demonstrated considerable intraindividual variability for some tests and a learning effect by administrators on others. Five of the 6 tests in this battery had acceptable reliability over an 18-month time frame, but the values for certain individuals may vary considerably from time to time for some tests. Specific tests may require a learning period for administrators.
Gaussian process regression for geometry optimization
NASA Astrophysics Data System (ADS)
Denzel, Alexander; Kästner, Johannes
2018-03-01
We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a twice-differentiable form of the Matérn kernel and the squared exponential kernel. The Matérn kernel performs much better. We give a detailed description of the optimization procedures. These include overshooting the step resulting from GPR in order to obtain a higher degree of interpolation vs. extrapolation. In a benchmark against the Limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.
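A toy one-dimensional version of such an optimizer conveys the idea; the kernel is the twice-differentiable Matérn ν = 5/2 form, and the overshoot factor mimics the paper's device of stepping past the GPR minimizer. All constants are assumptions, and real geometry optimization works in many dimensions, typically with gradients.

```python
import numpy as np

def matern52(x1, x2, ell=0.5, sig2=1.0):
    """Matern nu=5/2 kernel: k(r) = sig2 (1 + s + s^2/3) exp(-s), s = sqrt(5) r / ell."""
    s = np.sqrt(5.0) * np.abs(x1[:, None] - x2[None, :]) / ell
    return sig2 * (1.0 + s + s**2 / 3.0) * np.exp(-s)

def gpr_minimize(f, x0, bounds, n_iter=15, noise=1e-6, overshoot=1.3):
    """Surrogate minimization: fit a GPR to all evaluated points, step toward
    the minimizer of the posterior mean on a grid, and overshoot that step
    slightly (an interpolation-vs-extrapolation device, cf. the abstract)."""
    X, y = [x0], [f(x0)]
    grid = np.linspace(*bounds, 400)
    for _ in range(n_iter):
        Xa, ya = np.array(X), np.array(y)
        K = matern52(Xa, Xa) + noise * np.eye(len(Xa))   # jitter for conditioning
        alpha = np.linalg.solve(K, ya - ya.mean())
        mean = matern52(grid, Xa) @ alpha + ya.mean()    # GPR posterior mean
        x_best = X[int(np.argmin(y))]
        x_sugg = grid[np.argmin(mean)]
        x_next = np.clip(x_best + overshoot * (x_sugg - x_best), *bounds)
        X.append(float(x_next)); y.append(f(float(x_next)))
    return X[int(np.argmin(y))]

# Toy potential with a single well near x = 0.7.
print(gpr_minimize(lambda x: (x - 0.7)**2 + 0.1 * np.sin(8 * x), 0.0, (-1.0, 2.0)))
```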
Double-deprotected chemically amplified photoresists (DD-CAMP): higher-order lithography
NASA Astrophysics Data System (ADS)
Earley, William; Soucie, Deanna; Hosoi, Kenji; Takahashi, Arata; Aoki, Takashi; Cardineau, Brian; Miyauchi, Koichi; Chun, Jay; O'Sullivan, Michael; Brainard, Robert
2017-03-01
The synthesis and lithographic evaluation of 193-nm and EUV photoresists that utilize a higher-order reaction mechanism of deprotection is presented. Unique polymers utilize novel blocking groups that require two acid-catalyzed steps to be removed. When these steps occur with comparable reaction rates, the overall reaction can be higher order (up to 1.85). The LWR of these resists is plotted against PEB time for a variety of compounds to acquire insight into the effectiveness of the proposed higher-order mechanisms. Evidence acquired during testing of these novel photoresist materials supports the conclusion that higher-order reaction kinetics leads to improved LWR vs. control resists.
Alternative nuclear technologies
NASA Astrophysics Data System (ADS)
Schubert, E.
1981-10-01
The lead times required to develop a select group of nuclear fission reactor types and fuel cycles to the point of readiness for full commercialization are compared. Along with lead times, fuel material requirements and comparative costs of producing electric power were estimated. A conservative approach and consistent criteria for all systems were used in estimates of the steps required and the times involved in developing each technology. The impact of the inevitable exhaustion of the low- or reasonable-cost uranium reserves in the United States on the desirability of completing the breeder reactor program, with its favorable long-term result on fission fuel supplies, is discussed. The long times projected to bring the most advanced alternative converter reactor technologies, the heavy water reactor and the high-temperature gas-cooled reactor, into commercial deployment, when compared to the time projected to bring the breeder reactor into equivalent status, suggest that the country's best choice is to develop the breeder. The perceived diversion-proliferation problems with the uranium-plutonium fuel cycle have workable solutions that can be developed, which will enable the use of those materials at substantially reduced levels of diversion risk.
Thickness measurement by two-sided step-heating thermal imaging
NASA Astrophysics Data System (ADS)
Li, Xiaoli; Tao, Ning; Sun, J. G.; Zhang, Cunlin; Zhao, Yuejin
2018-01-01
Infrared thermal imaging is a promising nondestructive technique for thickness prediction. However, it is usually thought to be appropriate only for testing the thickness of thin objects or near-surface structures. In this study, we present a new two-sided step-heating thermal imaging method which employs a low-cost portable halogen lamp as the heating source, and we verify it with two stainless steel step wedges with thicknesses ranging from 5 mm to 24 mm. We first derive the one-dimensional step-heating thermography theory, taking into consideration the warm-up time of the lamp, and then apply nonlinear regression to fit the experimental data with the derived function and determine the thickness. After evaluating the reliability and accuracy of the experimental results, we conclude that this method is capable of testing thick objects. In addition, we provide criteria for both the required data length and the applicable thickness range of the tested material. It is evident that this method will broaden the application of thermal imaging to thickness measurement.
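A minimal sketch of the fitting step follows, assuming the classical one-dimensional rear-face response of a slab to a constant heat-flux step; the lamp warm-up correction derived in the paper is omitted, and the material parameters, noise level, and lumped amplitude are illustrative assumptions.

```python
# A minimal sketch of thickness recovery by nonlinear regression, assuming
# the classical 1-D rear-face response to a constant-flux step (the paper's
# lamp warm-up correction is omitted; all parameters are illustrative).
import numpy as np
from scipy.optimize import curve_fit

ALPHA = 4.0e-6  # thermal diffusivity of stainless steel, m^2/s (assumed)

def rear_face_rise(t, L, A, n_terms=50):
    """A = q*L/k lumps flux and conductivity; L is the thickness to fit."""
    fo = ALPHA * t / L**2                         # Fourier number
    n = np.arange(1, n_terms + 1)[:, None]
    series = ((-1.0)**n / n**2) * np.exp(-n**2 * np.pi**2 * fo[None, :])
    return A * (fo - 1.0/6.0 - (2.0/np.pi**2) * series.sum(axis=0))

# synthetic "experiment": a 12 mm step of the wedge, 2% multiplicative noise
t = np.linspace(1.0, 120.0, 200)
rng = np.random.default_rng(1)
data = rear_face_rise(t, 12e-3, 5.0) * (1 + 0.02 * rng.standard_normal(t.size))

(L_fit, A_fit), _ = curve_fit(rear_face_rise, t, data, p0=(8e-3, 1.0))
print(f"recovered thickness: {L_fit*1e3:.2f} mm")
```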
Passivation of Plasmonic Colors on Bulk Silver by Atomic Layer Deposition of Aluminum Oxide.
Guay, Jean-Michel; Killaire, Graham; Gordon, Peter G; Barry, Sean T; Berini, Pierre; Weck, Arnaud
2018-05-01
We report the passivation of angle-independent plasmonic colors on bulk silver by atomic layer deposition (ALD) of thin films of aluminum oxide. The colors are rendered by silver nanoparticles produced by laser ablation and redeposition on silver. We then apply a two-step approach to aluminum oxide conformal film formation via ALD. In the first step, a low-density film is deposited at low temperature to preserve and pin the silver nanoparticles. In the second step, a second denser film is deposited at a higher temperature to provide tarnish protection. This approach successfully protects the silver and plasmonic colors against tarnishing, humidity, and temperature, as demonstrated by aggressive exposure trials. The processing time associated with deposition of the conformal passivation layers meets industry requirements, and the approach is compatible with mass manufacturing.
Multi-off-grid methods in multi-step integration of ordinary differential equations
NASA Technical Reports Server (NTRS)
Beaudet, P. R.
1974-01-01
Methods are described for solving first- and second-order systems of differential equations in which all derivatives are evaluated at off-grid locations, in order to circumvent the Dahlquist stability limitation on the order of on-grid methods. The proposed multi-off-grid methods require off-grid state predictors for the evaluation of the n derivatives at each step. Progressing forward in time, the off-grid states are predicted using a linear combination of previous on-grid state values and off-grid derivative evaluations. A comparison is made between the proposed multi-off-grid methods and the corresponding Adams and Cowell on-grid integration techniques in integrating systems of ordinary differential equations, showing a significant reduction in the error at larger step sizes for the multi-off-grid integrator.
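A toy illustration of the benefit of off-grid derivative evaluation follows (it is not the paper's multi-off-grid predictor scheme): the explicit midpoint rule evaluates the derivative at the off-grid point t + h/2, while two-step Adams-Bashforth uses only on-grid values; both are second order, and the comparison is on y' = -y at a deliberately large step.

```python
# Toy comparison, illustrative only: off-grid derivative evaluation
# (explicit midpoint) vs. an on-grid multistep method (Adams-Bashforth 2).
import numpy as np

def f(t, y):
    return -y

def midpoint(y0, h, n):                 # off-grid derivative evaluation
    y = y0
    for k in range(n):
        t = k * h
        y_half = y + 0.5 * h * f(t, y)        # off-grid state predictor
        y = y + h * f(t + 0.5 * h, y_half)    # derivative at t + h/2
    return y

def ab2(y0, h, n):                      # on-grid two-step Adams-Bashforth
    t, y = 0.0, y0
    f_prev = f(t, y)
    y_next = y + h * f_prev                   # Euler start-up step
    for k in range(1, n):
        f_curr = f(k * h, y_next)
        y_next, f_prev = y_next + h * (1.5 * f_curr - 0.5 * f_prev), f_curr
    return y_next

h, T = 0.5, 5.0
n = int(T / h)
exact = np.exp(-T)
print(f"midpoint error: {abs(midpoint(1.0, h, n) - exact):.2e}")
print(f"AB2 error:      {abs(ab2(1.0, h, n) - exact):.2e}")
```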
Niman, Cassandra S; Zuckermann, Martin J; Balaz, Martina; Tegenfeldt, Jonas O; Curmi, Paul M G; Forde, Nancy R; Linke, Heiner
2014-12-21
Synthetic molecular motors typically take nanometer-scale steps through rectification of thermal motion. Here we propose Inchworm, a DNA-based motor that employs a pronounced power stroke to take micrometer-scale steps on a time scale of seconds, and we design, fabricate, and analyze the nanofluidic device needed to operate the motor. Inchworm is a kbp-long, double-stranded DNA confined inside a nanochannel in a stretched configuration. Motor stepping is achieved through externally controlled changes in salt concentration (changing the DNA's extension), coordinated with ligand-gated binding of the DNA's ends to the functionalized nanochannel surface. Brownian dynamics simulations predict that Inchworm's stall force is determined by its entropic spring constant and is ∼ 0.1 pN. Operation of the motor requires periodic cycling of four different buffers surrounding the DNA inside a nanochannel, while keeping constant the hydrodynamic load force on the DNA. We present a two-layer fluidic device incorporating 100 nm-radius nanochannels that are connected through a few-nm-wide slit to a microfluidic system used for in situ buffer exchanges, either diffusionally (zero flow) or with controlled hydrodynamic flow. Combining experiment with finite-element modeling, we demonstrate the device's key performance features and experimentally establish achievable Inchworm stepping times of the order of seconds or faster.
Differences in care burden of patients undergoing dialysis in different centres in the Netherlands.
de Kleijn, Ria; Uyl-de Groot, Carin; Hagen, Chris; Diepenbroek, Adry; Pasker-de Jong, Pieternel; Ter Wee, Piet
2017-06-01
A classification model was developed to simplify the planning of personnel at dialysis centres. This model predicted the care burden based on dialysis characteristics. However, patient characteristics and different dialysis centre categories might also influence the amount of care time required. To determine whether there is a difference in care burden between different categories of dialysis centres and whether specific patient characteristics predict the nursing time needed for patient treatment. An observational study. Two hundred and forty-two patients from 12 dialysis centres. In the 12 dialysis centres, nurses filled out the classification list per patient and completed a form with patient characteristics. Nephrologists filled out the Charlson Comorbidity Index. Independent observers clocked the time nurses spent on separate steps of the dialysis for each patient. Dialysis centres were categorised into four types. Data were analysed using regression models. Compared with other dialysis centres, academic centres needed 14 minutes more care time per patient per dialysis treatment than predicted by the classification model. No patient characteristics were found that explained this difference. The only patient characteristic that predicted the time required was gender, with more time required to treat women. Gender did not affect the difference between measured and predicted care time. Differences in care burden were observed between academic and other centres, with more time required for treatment in academic centres. The contribution of patient characteristics to the time difference was minimal. The only patient characteristics that predicted care time were previous transplantation, which reduced the time required, and gender, with women requiring more care time. © 2017 European Dialysis and Transplant Nurses Association/European Renal Care Association.
Local anesthesia for inguinal hernia repair: step-by-step procedure.
Amid, P K; Shulman, A G; Lichtenstein, I L
1994-01-01
OBJECTIVE. The authors introduce a simple six-step infiltration technique that results in satisfactory local anesthesia and prolonged postoperative analgesia, requiring a maximum of 30 to 40 mL of local anesthetic solution. SUMMARY BACKGROUND DATA. For the last 20 years, more than 12,000 groin hernia repairs have been performed under local anesthesia at the Lichtenstein Hernia Institute. Initially, field block was the means of achieving local anesthesia. During the last 5 years, a simple infiltration technique has been used instead, because the field block was more time consuming and required a larger volume of local anesthetic solution. Furthermore, because of the blind nature of the procedure, it did not always result in satisfactory anesthesia and, at times, accidental needle puncture of the ilioinguinal nerve resulted in prolonged postoperative pain, burning, or an electric shock sensation within the field of ilioinguinal nerve innervation. METHODS. More than 12,000 patients underwent operations in a private practice setting in general hospitals. RESULTS. Over 2 decades, more than 12,000 adult patients with reducible groin hernias satisfactorily underwent operations under local anesthesia without complications. CONCLUSIONS. The preferred choice of anesthesia for all reducible adult inguinal hernia repairs is local. It is safe, simple, effective, and economical, without postanesthesia side effects. Furthermore, local anesthesia administered before the incision produces longer postoperative analgesia because local infiltration, theoretically, inhibits the build-up of local nociceptive molecules, yielding better pain control in the postoperative period. PMID:7986138
Schedler, Kathrin; Assadian, Ojan; Brautferger, Uta; Müller, Gerald; Koburger, Torsten; Classen, Simon; Kramer, Axel
2017-02-13
Currently, there is no agreed standard for exploring the antimicrobial activity of wound antiseptics in a phase 2/step 2 test protocol. In the present study, a standardised in-vitro test is proposed which allows potential antiseptics to be tested under a more realistic simulation of the conditions found in wounds than in a suspension test. Furthermore, factors potentially influencing test results, such as the type of material used as the test carrier or various compositions of the organic soil challenge, were investigated in detail. The proposed phase 2/step 2 test method was modified on the basis of EN 14561: drying the microbial test suspension on a metal carrier for 1 h, overlaying the test wound antiseptic, washing off, neutralising, and dispersing at serial dilutions at the end of the required exposure time yielded reproducible, consistent test results. The difference between the rapid onset of the antiseptic effect of PVP-I and the delayed onset, especially of polihexanide, was apparent. Among surface-active antimicrobial compounds, octenidine was more effective than chlorhexidine digluconate and polihexanide, with some differences depending on the test organisms. However, octenidine and PVP-I were approximately equivalent in efficiency and microbial spectrum, while polihexanide required longer exposure times or higher concentrations for comparable antimicrobial efficacy. Overall, this method allowed testing and comparing different liquid- and gel-based antimicrobial compounds in a standardised setting.
Robust double gain unscented Kalman filter for small satellite attitude estimation
NASA Astrophysics Data System (ADS)
Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun
2017-08-01
Limited by the low precision of small satellite sensors, high-performance estimation theory remains a popular research topic for attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation and have produced many results. However, most existing methods use only the current time step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time step's a posteriori measurement residuals. In addition, uncertain model errors always exist in the attitude dynamic system, which places higher performance demands on the classical KF in the attitude estimation problem. Therefore, a novel robust double gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy the above requirements for small satellite attitude estimation with low-precision sensors. It is assumed that the system state estimation errors can be exhibited in the measurement residual; the new method therefore derives a second Kalman gain Kk2 so as to make full use of the previous time step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and the unscented transform (UT) strategy are introduced to enhance the robustness and performance of the novel Kalman filter and to reduce the influence of the uncertain model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust than the classical unscented Kalman filter (UKF) in dealing with model errors and low-precision sensors for small satellite attitude estimation.
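A schematic sketch of the double-gain idea on a scalar linear system follows: the update uses both the current and the previous measurement residual. The second gain here is a simple damped copy of the classical gain, purely for illustration; the paper derives Kk2 rigorously within the unscented framework.

```python
# Schematic illustration only: a scalar Kalman update augmented with the
# previous residual. The second gain (beta * k1) is an assumption for the
# sketch, not the paper's derived Kk2.
import numpy as np

rng = np.random.default_rng(2)
a, h_m, q, r = 0.99, 1.0, 1e-4, 0.05        # model and noise parameters
x_true, x_est, p = 1.0, 0.0, 1.0
res_prev, beta = 0.0, 0.3                   # stored residual, second-gain weight

errs = []
for k in range(500):
    x_true = a * x_true + 0.02 * np.sin(0.05 * k) + rng.normal(0, np.sqrt(q))
    z = h_m * x_true + rng.normal(0, np.sqrt(r))
    # prediction
    x_pred = a * x_est
    p = a * p * a + q
    # double-gain update: current residual plus re-used previous residual
    res = z - h_m * x_pred
    k1 = p * h_m / (h_m * p * h_m + r)      # classical Kalman gain
    k2 = beta * k1                          # illustrative second gain
    x_est = x_pred + k1 * res + k2 * res_prev
    p = (1 - k1 * h_m) * p
    res_prev = res
    errs.append((x_est - x_true) ** 2)
print(f"RMSE: {np.sqrt(np.mean(errs)):.4f}")
```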
FASTPM: a new scheme for fast simulations of dark matter and haloes
NASA Astrophysics Data System (ADS)
Feng, Yu; Chu, Man-Yat; Seljak, Uroš; McDonald, Patrick
2016-12-01
We introduce FASTPM, a highly scalable approximated particle mesh (PM) N-body solver, which implements the PM scheme while enforcing correct linear displacement (1LPT) evolution via modified kick and drift factors. Employing a two-dimensional domain decomposition scheme, FASTPM scales extremely well to a very large number of CPUs. In contrast to the COmoving Lagrangian Acceleration (COLA) approach, we do not need to split the force or separately track the 2LPT solution, reducing code complexity and memory requirements. We compare FASTPM for different numbers of steps (Ns) and force resolution factors (B) against three benchmarks: the halo mass function from a friends-of-friends halo finder; the halo and dark matter power spectra; and the cross-correlation coefficient (or stochasticity), relative to a high-resolution TREEPM simulation. We show that the modified time stepping scheme reduces the halo stochasticity when compared to COLA with the same number of steps and force resolution. While increasing Ns and B improves the transfer function and cross-correlation coefficient, for many applications FASTPM achieves sufficient accuracy at low Ns and B. For example, an Ns = 10, B = 2 simulation provides a substantial saving (a factor of 10) of computing time relative to an Ns = 40, B = 3 simulation, yet the halo benchmarks are very similar at z = 0. We find that for abundance-matched haloes the stochasticity remains low even for Ns = 5. FASTPM compares well against less expensive schemes, being only 7 (4) times more expensive than a 2LPT initial-conditions generator for Ns = 10 (Ns = 5). Some of the applications where FASTPM can be useful are generating a large number of mocks, producing non-linear statistics where one varies a large number of nuisance or cosmological parameters, or serving as part of an initial conditions solver.
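A toy one-dimensional particle-mesh step with standard leapfrog kick-drift factors follows; FASTPM's modified factors, which enforce exact 1LPT growth in an expanding background, are not reproduced here, and all constants are illustrative.

```python
# Toy 1-D PM step: NGP deposit, FFT Poisson solve, leapfrog kick-drift.
# Illustrates the kick-drift structure only; not FASTPM's modified factors.
import numpy as np

NG, NP, L = 64, 256, 1.0                  # grid cells, particles, box size
dx = L / NG
rng = np.random.default_rng(3)
pos = (np.arange(NP) / NP + 1e-3 * rng.standard_normal(NP)) * L % L
vel = np.zeros(NP)

def pm_force(pos):
    """Nearest-grid-point deposit, FFT Poisson solve, interpolate back."""
    cells = (pos / dx).astype(int) % NG
    rho = np.bincount(cells, minlength=NG) * (NG / NP) - 1.0  # overdensity
    k = 2 * np.pi * np.fft.fftfreq(NG, d=dx)
    k[0] = 1.0                                # avoid division by zero
    phi_k = -np.fft.fft(rho) / k**2           # Poisson: -k^2 phi = rho
    phi_k[0] = 0.0                            # zero-mean potential
    force = np.real(np.fft.ifft(-1j * k * phi_k))
    return force[cells]

dt = 0.05
for step in range(10):
    vel += pm_force(pos) * dt                 # kick
    pos = (pos + vel * dt) % L                # drift
print(f"rms velocity after 10 steps: {vel.std():.3e}")
```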
Soft-Bake Purification of SWCNTs Produced by Pulsed Laser Vaporization
NASA Technical Reports Server (NTRS)
Yowell, Leonard; Nikolaev, Pavel; Gorelik, Olga; Allada, Rama Kumar; Sosa, Edward; Arepalli, Sivaram
2013-01-01
The "soft-bake" method is a simple and reliable initial purification step first proposed by researchers at Rice University for single-walled carbon nanotubes (SWCNT) produced by high-pressure carbon mon oxide disproportionation (HiPco). Soft-baking consists of annealing as-produced (raw) SWCNT, at low temperatures in humid air, in order to degrade the heavy graphitic shells that surround metal particle impurities. Once these shells are cracked open by the expansion and slow oxidation of the metal particles, the metal impurities can be digested through treatment with hydrochloric acid. The soft-baking of SWCNT produced by pulsed-laser vaporization (PLV) is not straightforward, because the larger average SWCNT diameters (.1.4 nm) and heavier graphitic shells surrounding metal particles call for increased temperatures during soft-bake. A part of the technology development focused on optimizing the temperature so that effective cracking of the graphitic shells is balanced with maintaining a reasonable yield, which was a critical aspect of this study. Once the ideal temperature was determined, a number of samples of raw SWCNT were purified using the soft-bake method. An important benefit to this process is the reduced time and effort required for soft-bake versus the standard purification route for SWCNT. The total time spent purifying samples by soft-bake is one week per batch, which equates to a factor of three reduction in the time required for purification as compared to the standard acid purification method. Reduction of the number of steps also appears to be an important factor in improving reproducibility of yield and purity of SWCNT, as small deviations are likely to get amplified over the course of a complicated multi-step purification process.
A nursing-specific model of EPR documentation: organizational and professional requirements.
von Krogh, Gunn; Nåden, Dagfinn
2008-01-01
To present the Norwegian KPO documentation model (quality assurance, problem solving, and caring) and the requirements and multiple electronic patient record (EPR) functions the model is designed to address. The model's professional substance, a conceptual framework for nursing practice, was developed by examining, reorganizing, and completing existing frameworks. The model's methodology, an information management system, was developed using an expert group. Both model elements were clinically tested over a period of 1 year. The model is designed for nursing documentation in step with statutory, organizational, and professional requirements. Complete documentation is arranged for by incorporating the Nursing Minimum Data Set. Systematic and comprehensive documentation is arranged for by establishing categories as provided in the model's framework domains. Consistent documentation is arranged for by incorporating NANDA-I Nursing Diagnoses, the Nursing Interventions Classification, and the Nursing Outcomes Classification. The model can be used as a tool in cooperation with vendors to ensure that the interests of the nursing profession are met when developing EPR solutions in healthcare. The model can provide clinicians with a framework for documentation in step with legal and organizational requirements while retaining the ability to record all aspects of clinical nursing.
Data-Based Predictive Control with Multirate Prediction Step
NASA Technical Reports Server (NTRS)
Barlow, Jonathan S.
2010-01-01
Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes the current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or of a physical system for which no explicit model exists. If the output data happen to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is that computational requirements increase with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with a multirate prediction step. One result is a reduced influence of the prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
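A minimal sketch in the spirit of data-based predictive control follows: Markov (impulse-response) parameters are identified from input-output data by least squares, and a receding-horizon quadratic cost is then minimized at each step. The plant, horizon, and weights are illustrative assumptions, and the paper's multirate output-feedback formulation is more general.

```python
# Sketch only: identify impulse-response parameters from data, then apply
# receding-horizon control. Plant and tuning values are illustrative.
import numpy as np

rng = np.random.default_rng(4)
a1, a2, b1 = 1.6, -0.64, 0.5            # "unknown" plant, used only to simulate

# 1) identification data
N, m = 400, 30
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1]
rows = np.array([u[k-m:k][::-1] for k in range(m, N)])
h = np.linalg.lstsq(rows, y[m:], rcond=None)[0]   # h[i] ~ response at lag i+1

# 2) receding-horizon control toward a set point r
Np, lam, r = 10, 0.01, 1.0
G = np.zeros((Np, Np))                  # dynamic (Toeplitz) matrix
for j in range(Np):
    G[j, :j+1] = h[:j+1][::-1]
u_hist = np.zeros(m)                    # most recent input last
y_sim = [0.0, 0.0]
for k in range(40):
    # free response of past inputs over the horizon
    f = np.array([h[j+1:] @ u_hist[::-1][:m-j-1] for j in range(Np)])
    du = np.linalg.solve(G.T @ G + lam*np.eye(Np), G.T @ (r - f))
    u_now = du[0]                       # apply only the first move
    u_hist = np.roll(u_hist, -1); u_hist[-1] = u_now
    y_sim.append(a1*y_sim[-1] + a2*y_sim[-2] + b1*u_now)
print(f"output after 40 steps: {y_sim[-1]:.3f} (set point {r})")
```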
Performance Limiting Flow Processes in High-State Loading High-Mach Number Compressors
2008-03-13
... A strong incentive exists to reduce airfoil count in aircraft engines ... (Advanced Turbine Engine). A basic constraint on blade reduction is seen from the Euler turbine equation, which shows that, although a design can be carried ... on the vane to rotor blade ratio of 8:11). Within the MSU Turbo code, specifying a small number of time steps requires more iteration at each time step ...
Synchronization of Chaotic Systems without Direct Connections Using Reinforcement Learning
NASA Astrophysics Data System (ADS)
Sato, Norihisa; Adachi, Masaharu
In this paper, we propose a control method for the synchronization of chaotic systems that does not require the systems to be directly connected, unlike existing methods such as that proposed by Pecora and Carroll in 1990. The method is based on a reinforcement learning algorithm. We apply our method to two discrete-time chaotic systems with mismatched parameters and achieve M-step delay synchronization. Moreover, we extend the proposed method to the synchronization of continuous-time chaotic systems.
2006-11-30
... except in the simplest of circumstances. This belief has driven the computational research community to devise clever kinetic Monte Carlo (KMC) ... KMC routine is very slow; cutting the error in half requires four times the number of simulations. Since a single simulation may contain huge numbers ... subintervals [9-14]. Both approximation types, system partitioning and τ-leaping, have been very successful in increasing the scope of problems to which KMC ...
Larsen, T; Doll, J C; Loizeau, F; Hosseini, N; Peng, A W; Fantner, G; Ricci, A J; Pruitt, B L
2017-01-01
Electrothermal actuators have many advantages compared to other actuators used in Micro-Electro-Mechanical Systems (MEMS). They are simple to design, easy to fabricate, and provide large displacements at low voltages. Low voltages enable less stringent passivation requirements for operation in liquid. Despite these advantages, thermal actuation is typically limited to a few kHz of bandwidth when using step inputs, due to its intrinsic thermal time constant. However, the use of pre-shaped input signals offers a route to reducing the rise time of these actuators by orders of magnitude. We started with an electrothermally actuated cantilever having an initial 10-90% rise time of 85 μs in air and 234 μs in water for a standard open-loop step input. We experimentally characterized the linearity and frequency response of the cantilever when operated in air and water, allowing us to obtain transfer functions for the two cases. We used these transfer functions, along with functions describing the desired reduced-rise-time system responses, to numerically compute the required input signals. Using these pre-shaped input signals, we improved the open-loop 10-90% rise time from 85 μs to 3 μs in air and from 234 μs to 5 μs in water, improvements by factors of 28 and 47, respectively. This simple control strategy makes MEMS electrothermal actuators an attractive alternative to other high-speed micromechanical actuators, such as piezoelectric stacks or electrostatic comb structures, which are more complex to design, fabricate, or operate.
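A minimal sketch of rise-time reduction by input pre-shaping follows, assuming a first-order thermal plant G(s) = 1/(tau s + 1) with illustrative time constants chosen only to mimic the rise times quoted above; the paper instead derives its shaped inputs from measured transfer functions.

```python
# Sketch only: a first-order plant driven by a plain step vs. a pre-shaped
# input with initial overdrive; time constants are illustrative assumptions.
import numpy as np
from scipy import signal

tau = 38.7e-6        # plant time constant -> 10-90% step rise of ~85 us
tau_f = 1.37e-6      # target time constant -> ~3 us rise
t = np.linspace(0, 300e-6, 30001)
plant = signal.lti([1.0], [tau, 1.0])

u_step = np.ones_like(t)
# shaped input: initial overdrive decaying to the steady-state level
u_shaped = 1.0 + (tau / tau_f - 1.0) * np.exp(-t / tau_f)

def rise_10_90(t, y):
    i10 = np.searchsorted(y, 0.1 * y[-1])
    i90 = np.searchsorted(y, 0.9 * y[-1])
    return t[i90] - t[i10]

for name, u in [("plain step", u_step), ("pre-shaped", u_shaped)]:
    _, y, _ = signal.lsim(plant, u, t)
    print(f"{name:>10}: 10-90% rise time = {rise_10_90(t, y)*1e6:5.1f} us")
```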
Environmental Benign Process for Production of Molybdenum Metal from Sulphide Based Minerals
NASA Astrophysics Data System (ADS)
Rajput, Priyanka; Janakiram, Vangada; Jayasankar, Kalidoss; Angadi, Shivakumar; Bhoi, Bhagyadhar; Mukherjee, Partha Sarathi
2017-10-01
Molybdenum is a strategic, high-temperature refractory metal that is not found free in nature; it occurs in the earth's crust predominantly as MoO3/MoS2. The main disadvantage of the industrial treatment of Mo concentrate is that the process contains many stages and requires very high temperatures. In almost every step, gaseous, liquid, and solid chemical substances are formed which require further treatment. To overcome these drawbacks, a new alternative one-step process is developed for the treatment of sulphide and trioxide molybdenum concentrates. This paper presents the results of investigations on molybdenite (MoS2) dissociation using a microwave-assisted plasma unit as well as a transferred-arc thermal plasma torch. It is a single-step process for the preparation of pure molybdenum metal from MoS2 by hydrogen reduction in thermal plasma. Process variables such as H2 gas, Ar gas, input current, voltage, and time were examined to prepare molybdenum metal. Molybdenum recovery of the order of 95% was achieved. The XRD results confirm the phases of molybdenum metal, and the chemical analysis of the end product indicates the formation of metallic molybdenum (Mo 98%).
Systematic development of technical textiles
NASA Astrophysics Data System (ADS)
Beer, M.; Schrank, V.; Gloy, Y.-S.; Gries, T.
2016-07-01
Technical textiles are used in various fields of application, ranging from small-scale (e.g. medical) to large-scale products (e.g. aerospace). The development of new products is often complex and time consuming due to multiple interacting parameters. These interacting parameters are related to the production process and are also a result of the textile structure and the material used. A huge number of iteration steps is usually necessary to adjust the process parameters and finalize a new fabric structure. A design method is developed to support the systematic development of technical textiles and to reduce iteration steps. The design method is subdivided into six steps, starting from the identification of the requirements. The fabric characteristics vary depending on the field of application. If possible, benchmarks are tested. A suitable fabric production technology then needs to be selected; the aim of the method is to support a development team in this technology selection without restricting the textile developer. After a suitable technology is selected, the transformation and correlation between input and output parameters follows. This generates the information needed for the production of the structure. Afterwards, the first prototype can be produced and tested, and the resulting characteristics are compared with the initial product requirements.
Using the critical incident technique in community-based participatory research: a case study.
Belkora, Jeffrey; Stupar, Lauren; O'Donnell, Sara
2011-01-01
Successful community-based participatory research involves the community partner in every step of the research process. The primary study for this paper took place in rural Northern California. Collaborative partners included an academic researcher and two community-based resource centers that provide supportive services to people diagnosed with cancer. This paper describes our use of the Critical Incident Technique (CIT) to conduct community-based participatory research. We ask: did the CIT facilitate or impede the active engagement of the community in all steps of the study process? We identified factors of the Critical Incident Technique that were either barriers or facilitators to involving the community partner in every step of the research process. Facilitators included the CIT's ability to accommodate involvement from a large spectrum of the community, its flexible design, and its personal approach. Barriers to community engagement included the training required to conduct interviews, the depth of interview probes, and the time required. Overall, our academic-community partners felt that our use of the CIT facilitated community involvement in our community-based participatory research project, where we used it to formally document the forces promoting and inhibiting successful achievement of community aims.
Supercomputing Aspects for Simulating Incompressible Flow
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kris, Cetin C.
2000-01-01
The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since space launch systems in the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in the liquid rocket engine is to understand the fluid dynamics of fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbo-pump geometry through numerical simulation will be of significant value toward design. One of the milestones of this effort is to develop, apply, and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high performance computer platforms. The development of the Message Passing Interface (MPI) and Multi Level Parallel (MLP) versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on explicit message passing across processors and is primarily suited for distributed-memory systems. INS3D-MLP is based on the multi-level parallel method and is suitable for distributed-shared-memory systems. For the entire turbo-pump simulations, moving boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving boundary problems, an overset grid scheme is incorporated with the solver so that new connectivity data are obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other and provides great flexibility when the boundary movement creates large displacements. Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow, which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition; this was obtained by using a GMRES-ILU(0) solver in the present computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
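A minimal sketch of the pressure projection idea follows (Chorin splitting on a 2-D periodic grid with an FFT Poisson solve); the INS3D production solver is far more elaborate (3-D, overset Chimera grids, moving boundaries), so this only illustrates why the projection step restores a divergence-free field without subiteration.

```python
# Sketch only: Chorin projection for 2-D periodic incompressible flow,
# starting from a Taylor-Green vortex; all constants are illustrative.
import numpy as np

N, dt, nu = 64, 0.01, 1e-3
k = 2 * np.pi * np.fft.fftfreq(N, d=2*np.pi/N)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2_div = K2.copy(); K2_div[0, 0] = 1.0        # safe divisor at the mean mode

x = np.linspace(0, 2*np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = np.sin(X)*np.cos(Y), -np.cos(X)*np.sin(Y)   # divergence-free start

d = lambda f, K: np.real(np.fft.ifft2(1j*K*np.fft.fft2(f)))   # derivative
lap = lambda f: np.real(np.fft.ifft2(-K2*np.fft.fft2(f)))

for step in range(100):
    # 1) provisional velocity from advection and diffusion (explicit Euler)
    us = u + dt*(-u*d(u, KX) - v*d(u, KY) + nu*lap(u))
    vs = v + dt*(-u*d(v, KX) - v*d(v, KY) + nu*lap(v))
    # 2) pressure from the Poisson equation  lap(p) = div(u*)/dt
    div_hat = 1j*KX*np.fft.fft2(us) + 1j*KY*np.fft.fft2(vs)
    p_hat = -div_hat/(dt*K2_div); p_hat[0, 0] = 0.0
    # 3) projection: subtract the pressure gradient to restore div(u) = 0
    u = us - dt*np.real(np.fft.ifft2(1j*KX*p_hat))
    v = vs - dt*np.real(np.fft.ifft2(1j*KY*p_hat))

print(f"max |div u| after projection: {np.abs(d(u, KX) + d(v, KY)).max():.2e}")
```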
A model to teach concomitant patient communication during psychomotor skill development.
Nicholls, Delwyn; Sweet, Linda; Muller, Amanda; Hyett, Jon
2018-01-01
Many health professionals use psychomotor or task-based skills in clinical practice that require concomitant communication with a conscious patient. Verbally engaging with the patient requires highly developed verbal communication skills, enabling the delivery of patient-centred care. Historically, priority has been given to learning the psychomotor skills essential to clinical practice. However, there has been a shift towards also ensuring competent communication with the patient during skill performance. While there is literature outlining the steps to teach and learn verbal communication skills, little is known about the most appropriate instructional approach for teaching how to verbally engage with a patient while also learning to perform a task. A literature review was performed and identified no model or proven approach which could be used to integrate the learning of both psychomotor and communication skills. This paper reviews the steps to teach a communication skill and provides a suggested model to guide the acquisition and development of the concomitant communication skills required with a patient at the time a psychomotor skill is performed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Contingency Management Requirements Document: Preliminary Version. Revision F
NASA Technical Reports Server (NTRS)
2005-01-01
This is the High Altitude, Long Endurance (HALE) Remotely Operated Aircraft (ROA) Contingency Management (CM) Functional Requirements document. This document applies to HALE ROA operating within the National Airspace System (NAS) limited at this time to enroute operations above 43,000 feet (defined as Step 1 of the Access 5 project, sponsored by the National Aeronautics and Space Administration). A contingency is an unforeseen event requiring a response. The unforeseen event may be an emergency, an incident, a deviation, or an observation. Contingency Management (CM) is the process of evaluating the event, deciding on the proper course of action (a plan), and successfully executing the plan.
Prince, Linda M
2015-01-01
Inter-simple sequence repeat PCR (ISSR-PCR) is a fast, inexpensive genotyping technique based on length variation in the regions between microsatellites. The method requires no species-specific prior knowledge of microsatellite location or composition. Very small amounts of DNA are required, making this method ideal for organisms of conservation concern, or where the quantity of DNA is extremely limited due to organism size. ISSR-PCR can be highly reproducible but requires careful attention to detail. Optimization of DNA extraction, fragment amplification, and normalization of fragment peak heights during fluorescent detection are critical steps to minimizing the downstream time spent verifying and scoring the data.
Dengue Virus Genome Uncoating Requires Ubiquitination
Byk, Laura A.; Iglesias, Néstor G.; De Maio, Federico A.; Gebhard, Leopoldo G.; Rossi, Mario
2016-01-01
The process of genome release or uncoating after viral entry is one of the least-studied steps in the flavivirus life cycle. Flaviviruses are mainly arthropod-borne viruses, including emerging and reemerging pathogens such as dengue, Zika, and West Nile viruses. Currently, dengue virus is one of the most significant human viral pathogens transmitted by mosquitoes and is responsible for about 390 million infections every year around the world. Here, we examined for the first time molecular aspects of dengue virus genome uncoating. We followed the fate of the capsid protein and RNA genome early during infection and found that capsid is degraded after viral internalization by the host ubiquitin-proteasome system. However, proteasome activity and capsid degradation were not necessary to free the genome for initial viral translation. Unexpectedly, genome uncoating was blocked by inhibiting ubiquitination. Using different assays to bypass entry and evaluate the first rounds of viral translation, a narrow window of time during infection that requires ubiquitination but not proteasome activity was identified. In this regard, ubiquitin E1-activating enzyme inhibition was sufficient to stabilize the incoming viral genome in the cytoplasm of infected cells, causing its retention in either endosomes or nucleocapsids. Our data support a model in which dengue virus genome uncoating requires a nondegradative ubiquitination step, providing new insights into this crucial but understudied viral process. PMID:27353759
Okubo, Yoshiro; Menant, Jasmine; Udyavar, Manasa; Brodie, Matthew A; Barry, Benjamin K; Lord, Stephen R; L Sturnieks, Daina
2017-05-01
Although step training improves the ability to step quickly, some home-based step training systems train limited stepping directions and may cause harm by reducing stepping performance in untrained directions. This study examines the possible transfer effects of step training on stepping performance in untrained directions in older people. Fifty-four older adults were randomized into forward step training (FT), lateral plus forward step training (FLT), and no training (NT) groups. FT and FLT participants undertook a 15-min training session involving 200 step repetitions. Prior to and post training, choice stepping reaction time and stepping kinematics in untrained diagonal and lateral directions were assessed. Significant interactions of group and time (pre/post-assessment) were evident for the first step after training, indicating negative (delayed response time) and positive (faster peak stepping speed) transfer effects in the diagonal direction in the FT group. However, when the second to fifth steps after training were included in the analysis, there were no significant interactions of group and time for measures in the diagonal stepping direction. Step training only in the forward direction improved stepping speed but may acutely slow response times in the untrained diagonal direction. However, this acute effect appears to dissipate after a few repeated step trials. Step training in both forward and lateral directions appears to induce no negative transfer effects in diagonal stepping. These findings suggest home-based step training systems present a low risk of harm through negative transfer effects in untrained stepping directions. ANZCTR 369066. Copyright © 2017 Elsevier B.V. All rights reserved.
Nonparametric method for failures diagnosis in the actuating subsystem of aircraft control system
NASA Astrophysics Data System (ADS)
Terentev, M. N.; Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.
2018-02-01
In this paper we design a nonparametric method for failure diagnosis in the aircraft control system that uses measurements of the control signals and the aircraft states only. It does not require a priori information about the aircraft model parameters, training, or statistical calculations, and is based on an analytical nonparametric one-step-ahead state prediction approach. This makes it possible to predict the behavior of unidentified and failed dynamic systems, to weaken the requirements on control signals, and to reduce the diagnostic time and problem complexity.
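A minimal sketch of the residual-based idea under stated assumptions follows: a one-step-ahead predictor is fitted by least squares on a sliding window of measured states and controls (no model parameters supplied), and a failure is flagged when the prediction residual jumps. The dynamics, threshold, and actuator fault are illustrative; the paper's analytical nonparametric method differs in detail.

```python
# Sketch only: sliding-window least-squares one-step-ahead prediction with
# residual-based failure detection. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(5)
A = np.array([[0.95, 0.10], [-0.10, 0.90]])
B = np.array([[0.0], [0.5]])

# simulate "flight data"; actuator effectiveness halves at k = 150
X, U = [np.zeros(2)], []
for k in range(300):
    u = np.array([np.sin(0.1 * k)])
    b_eff = B * (0.5 if k >= 150 else 1.0)
    X.append(A @ X[-1] + b_eff @ u + rng.normal(0, 1e-3, 2))
    U.append(u)
X, U = np.array(X), np.array(U)

W = 40                                           # sliding-window length
for k in range(W + 1, 300):
    Z = np.hstack([X[k-W-1:k-1], U[k-W-1:k-1]])  # regressors: state, control
    Y = X[k-W:k]                                 # next states
    theta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    resid = np.linalg.norm(X[k] - np.hstack([X[k-1], U[k-1]]) @ theta)
    if resid > 0.01:
        print(f"failure flagged at step {k}, residual {resid:.3f}")
        break
```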
NASA Astrophysics Data System (ADS)
Jothiprakash, V.; Magar, R. B.
2012-07-01
In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), the adaptive neuro-fuzzy inference system (ANFIS), and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India, is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect, and combined models are developed with lumped and distributed input data, and model performance is evaluated using various performance criteria. From the results, it is found that the LGP models are superior to the ANN and ANFIS models, especially in predicting peak inflows at both daily and hourly time steps. A detailed comparison of the overall performance indicated that the combined input model (combining rainfall and inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better because, apart from reducing the noise in the data, they benefited from better techniques and training approaches, appropriate selection of the network architecture and required inputs, and suitable training-testing ratios of the data set. The slightly poorer performance of the distributed data is due to large variations and a smaller number of observed values.
Operating system for a real-time multiprocessor propulsion system simulator. User's manual
NASA Technical Reports Server (NTRS)
Cole, G. L.
1985-01-01
The NASA Lewis Research Center is developing and evaluating experimental hardware and software systems to help meet future needs for real-time, high-fidelity simulations of air-breathing propulsion systems. Specifically, the real-time multiprocessor simulator project focuses on the use of multiple microprocessors to achieve the required computing speed and accuracy at relatively low cost. Operating systems for such hardware configurations are generally not available, so a real-time multiprocessor operating system (RTMPOS) that supports a variety of multiprocessor configurations was developed at Lewis. With some modification, RTMPOS can also support various microprocessors. RTMPOS, by means of menus and prompts, provides the user with a versatile, user-friendly environment for interactively loading, running, and obtaining results from a multiprocessor-based simulator. The menu functions are described and an example simulation session is included to demonstrate the steps required to go from the simulation loading phase to the execution phase.
Simplified installation of thrust bearings
NASA Technical Reports Server (NTRS)
Sensenbaugh, N. D.
1980-01-01
A special handling sleeve, the key to a method of installing thrust bearings, was developed for assembling bearings on the shaft of a low-pressure oxygen turbo-pump. The method eliminates the cooling and vacuum-drying steps, which saves time while also eliminating the possibility of corrosion formation. The procedure saves energy because it requires no liquid nitrogen for cooling the shaft and no natural gas or electric power for operating a vacuum oven.
Sen. Bunning, Jim [R-KY
2009-10-08
Senate - 10/13/2009 Read the second time. Placed on Senate Legislative Calendar under General Orders. Calendar No. 176.
Schneidereit, Dominik; Kraus, Larissa; Meier, Jochen C; Friedrich, Oliver; Gilbert, Daniel F
2017-06-15
High-content screening microscopy relies on automation infrastructure that is typically proprietary, non-customizable, costly, and requires a high level of skill to use and maintain. The increasing availability of rapid prototyping technology makes it possible to quickly engineer alternatives to conventional automation infrastructure that are low-cost and user-friendly. Here, we describe a 3D-printed, inexpensive, open source, and scalable motorized positioning stage for automated high-content screening microscopy and provide detailed step-by-step instructions for re-building the device, including a comprehensive parts list, 3D design files in STEP (Standard for the Exchange of Product model data) and STL (Standard Tessellation Language) format, electronic circuits and wiring diagrams, as well as software code. System assembly, including 3D printing, requires approx. 30 h. The fully assembled device is light-weight (1.1 kg), small (33 × 20 × 8 cm), and extremely low-cost (approx. EUR 250). We describe the positioning characteristics of the stage, including spatial resolution, accuracy, and repeatability; compare imaging data generated with our device to data obtained using a commercially available microplate reader; demonstrate its suitability for high-content microscopy in 96-well high-throughput screening format; and validate its applicability to automated functional Cl- and Ca2+ imaging with recombinant HEK293 cells as a model system. A time-lapse video of the stage during operation and as part of a custom assembled screening robot can be found at https://vimeo.com/158813199. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sadowski, T.; Kneć, M.
2016-04-01
Fatigue tests have been conducted for more than two hundred years. Despite this long history, fatigue phenomena are so complex that assessing the fatigue response of standard materials or composites still requires a long time. A fairly precise way to estimate fatigue parameters is to test at least 30 standardized specimens of the analysed material, with further statistical post-processing required. In the analysis of structural elements such as hybrid joints (Figure 1), the situation is much more complex, as more factors influence the fatigue load capacity owing to the much more complicated structure of the joint compared with standard material specimens, i.e. the occurrence of welded hot spots or rivets, adhesive layers, local notches creating stress concentrations, etc. In order to shorten testing time, some rapid methods are known: Locati's method [1] - step-by-step load increments up to failure; Prot's method [2] - constant increase of the load amplitude up to failure; Lehr's method [2] - seeking the point during regular fatigue loading at which an increase of temperature or strain becomes non-linear. The present article proposes a new method of fatigue response assessment - a combination of the Locati and Lehr methods.
Minimal-Approximation-Based Decentralized Backstepping Control of Interconnected Time-Delay Systems.
Choi, Yun Ho; Yoo, Sung Jin
2016-12-01
A decentralized adaptive backstepping control design using minimal function approximators is proposed for nonlinear large-scale systems with unknown unmatched time-varying delayed interactions and unknown backlash-like hysteresis nonlinearities. Compared with existing decentralized backstepping methods, the contribution of this paper is to design a simple local control law for each subsystem, consisting of an actual control with one adaptive function approximator, without requiring the use of multiple function approximators and regardless of the order of each subsystem. The virtual controllers for each subsystem are used as intermediate signals for designing a local actual control at the last step. For each subsystem, a lumped unknown function including the unknown nonlinear terms and the hysteresis nonlinearities is derived at the last step and is estimated by one function approximator. Thus, the proposed approach only uses one function approximator to implement each local controller, while existing decentralized backstepping control methods require the number of function approximators equal to the order of each subsystem and a calculation of virtual controllers to implement each local actual controller. The stability of the total controlled closed-loop system is analyzed using the Lyapunov stability theorem.
Rapid non-enzymatic extraction method for isolating PCR-quality camelpox virus DNA from skin.
Yousif, A Ausama; Al-Naeem, A Abdelmohsen; Al-Ali, M Ahmad
2010-10-01
Molecular diagnostic investigations of orthopoxvirus (OPV) infections are performed using a variety of clinical samples, including skin lesions, tissues from internal organs, blood, and secretions. Skin samples are particularly convenient for rapid diagnosis and molecular epidemiological investigations of camelpox virus (CMLV). Classical extraction procedures and commercial spin-column-based kits are time consuming, relatively expensive, and require multiple extraction and purification steps in addition to proteinase K digestion. A rapid non-enzymatic procedure for extracting CMLV DNA from dried scabs or pox lesions was developed to overcome some of the limitations of the available DNA extraction techniques. The procedure requires as little as 10 mg of tissue and produces highly purified DNA [OD(260)/OD(280) ratios between 1.47 and 1.79] with concentrations ranging from 6.5 to 16 microg/ml. The extracted CMLV DNA was proven suitable for virus-specific qualitative and semi-quantitative PCR applications. Compared to spin-column and conventional viral DNA extraction techniques, the two-step extraction procedure saves money and time, and retains the potential for automation without compromising CMLV PCR sensitivity. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Stuebner, Michael; Haider, Mansoor A
2010-06-18
A new and efficient method for numerical solution of the continuous spectrum biphasic poroviscoelastic (BPVE) model of articular cartilage is presented. Development of the method is based on a composite Gauss-Legendre quadrature approximation of the continuous spectrum relaxation function that leads to an exponential series representation. The separability property of the exponential terms in the series is exploited to develop a numerical scheme that can be reduced to an update rule requiring retention of the strain history at only the previous time step. The cost of the resulting temporal discretization scheme is O(N) for N time steps. Application and calibration of the method is illustrated in the context of a finite difference solution of the one-dimensional confined compression BPVE stress-relaxation problem. Accuracy of the numerical method is demonstrated by comparison to a theoretical Laplace transform solution for a range of viscoelastic relaxation times that are representative of articular cartilage. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
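A minimal sketch of the O(N) update rule enabled by an exponential-series (Prony) representation of the relaxation function follows: each exponential term carries one internal variable that is updated recursively, so only the previous time step's history is retained. The coefficients are illustrative, not the paper's continuous-spectrum quadrature values.

```python
# Sketch only: recursive O(N) update for a Prony-series relaxation function,
# compared against the direct O(N^2) hereditary integral. Both
# discretizations are first order and agree to O(dt).
import numpy as np

g_inf, g, tau = 0.5, np.array([0.3, 0.2]), np.array([0.05, 1.0])
dt, n_steps = 1e-3, 5000
t = np.arange(n_steps) * dt
eps = np.minimum(t / 0.01, 1.0)              # ramp-and-hold strain history

# O(N) recursion (assumes strain varies linearly within each step)
h = np.zeros_like(g)
sigma = np.zeros(n_steps)
for n in range(1, n_steps):
    d_eps = eps[n] - eps[n - 1]
    e = np.exp(-dt / tau)
    h = e * h + g * (1 - e) * (tau / dt) * d_eps
    sigma[n] = g_inf * eps[n] + h.sum()

# direct O(N^2) hereditary integral, for comparison
G = g_inf + (g[:, None] * np.exp(-t[None, :] / tau[:, None])).sum(axis=0)
d_eps_all = np.diff(eps, prepend=0.0)
sigma_direct = np.array(
    [(G[n::-1] * d_eps_all[:n + 1]).sum() for n in range(n_steps)])
print(f"max |O(N) - direct|: {np.abs(sigma - sigma_direct).max():.2e}")
```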
Microfluidic Remote Loading for Rapid Single-Step Liposomal Drug Preparation
Hood, R.R.; Vreeland, W. N.; DeVoe, D.L.
2014-01-01
Microfluidic-directed formation of liposomes is combined with in-line sample purification and remote drug loading for single step, continuous-flow synthesis of nanoscale vesicles containing high concentrations of stably loaded drug compounds. Using an on-chip microdialysis element, the system enables rapid formation of large transmembrane pH and ion gradients, followed by immediate introduction of amphipathic drug for real-time remote loading into the liposomes. The microfluidic process enables in-line formation of drug-laden liposomes with drug:lipid molar ratios of up to 1.3, and a total on-chip residence time of approximately 3 min, representing a significant improvement over conventional bulk-scale methods which require hours to days for combined liposome synthesis and remote drug loading. The microfluidic platform may be further optimized to support real-time generation of purified liposomal drug formulations with high concentrations of drugs and minimal reagent waste for effective liposomal drug preparation at or near the point of care. PMID:25003823
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonetto, Andrea; Dall'Anese, Emiliano
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
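A minimal sketch of prediction-correction tracking for a time-varying constrained quadratic program min over x >= 0 of 0.5 x'Qx - b(t)'x follows; the prediction step uses the Hessian only through matrix-vector products (a few inner gradient iterations), echoing the inverse-free first-order prediction described above. All problem data are illustrative.

```python
# Sketch only: prediction-correction tracking of a time-varying constrained
# QP, with an inverse-free (matrix-vector-product) prediction step.
import numpy as np

rng = np.random.default_rng(6)
n = 5
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)                     # well-conditioned Hessian
b = lambda t: np.abs(np.sin(t) + 0.3 * np.cos(3 * t + np.arange(n)))
alpha = 1.0 / np.linalg.norm(Q, 2)              # gradient step size
dt, T = 0.05, 200

x, err = np.zeros(n), []
for k in range(T):
    t = k * dt
    # prediction: d ~ Q^{-1} db/dt via a few gradient steps (no inverse)
    db = (b(t + dt) - b(t)) / dt
    d = np.zeros(n)
    for _ in range(5):
        d -= alpha * (Q @ d - db)
    x = np.maximum(x + dt * d, 0.0)             # predict, then project
    # correction: a few projected-gradient steps on the new cost
    for _ in range(3):
        x = np.maximum(x - alpha * (Q @ x - b(t + dt)), 0.0)
    # reference solution at t+dt (well converged, for error reporting only)
    x_star = x.copy()
    for _ in range(500):
        x_star = np.maximum(x_star - alpha * (Q @ x_star - b(t + dt)), 0.0)
    err.append(np.linalg.norm(x - x_star))
print(f"mean tracking error: {np.mean(err):.2e}")
```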
ERIC Educational Resources Information Center
Stille, J. K.
1981-01-01
Following a comparison of chain-growth and step-growth polymerization, focuses on the latter process by describing requirements for high molecular weight, step-growth polymerization kinetics, synthesis and molecular weight distribution of some linear step-growth polymers, and three-dimensional network step-growth polymers. (JN)
Tu, Xijuan; Ma, Shuangqin; Gao, Zhaosheng; Wang, Jing; Huang, Shaokang; Chen, Wenbin
2017-11-01
Flavonoids are frequently found as glycosylated derivatives in plant materials. To determine the contents of flavonoid aglycones in these matrices, procedures for the extraction and hydrolysis of flavonoid glycosides are required, and the current sample preparation methods are both labour and time consuming. To develop a modified matrix solid-phase dispersion (MSPD) procedure as an alternative methodology for the one-step extraction and hydrolysis of flavonoid glycosides. HPLC-DAD was applied to demonstrate the one-step extraction and hydrolysis of flavonoids in rape bee pollen, and the obtained contents of flavonoid aglycones (quercetin, kaempferol, isorhamnetin) were used for the optimisation and validation of the method. The extraction and hydrolysis were accomplished in one step. The procedure completes in 2 h with silica gel as the dispersant, a 1:2 ratio of sample to dispersant, and 60% aqueous ethanol with 0.3 M hydrochloric acid as the extraction solution. The relative standard deviations (RSDs) of repeatability were less than 5%, and the recoveries at two fortification levels were between 88.3 and 104.8%. The proposed methodology is simple and highly efficient, with good repeatability and recovery. Compared with currently available methods, the present work has the advantages of using less time and labour, higher extraction efficiency, and lower consumption of the acid catalyst. This method may have applications for the one-step extraction and hydrolysis of bioactive compounds from plant materials. Copyright © 2017 John Wiley & Sons, Ltd.
Gâteau, Jérôme; Marsac, Laurent; Pernot, Mathieu; Aubry, Jean-Francois; Tanter, Mickaël; Fink, Mathias
2010-01-01
Brain treatment through the skull with High Intensity Focused Ultrasound (HIFU) can be achieved with multichannel arrays and adaptive focusing techniques such as time reversal. This method requires a reference signal to be either emitted by a real source embedded in brain tissues or computed from a virtual source, using the acoustic properties of the skull derived from CT images. This non-invasive computational method focuses with precision, but suffers from modeling and repositioning errors that reduce the accessible acoustic pressure at the focus in comparison with fully experimental time reversal using an implanted hydrophone. In this paper, simulation-based targeting has been used experimentally as a first step for focusing through an ex vivo human skull at a single location. It enabled the creation of a cavitation bubble at the focus, which spontaneously emitted an ultrasonic wave received by the array. This active source signal allowed 97% ± 1.1% of the reference (hydrophone-based) pressure to be restored at the geometrical focus. To target points around the focus with an optimal pressure level, conventional electronic steering from the initial focus was combined with bubble generation. Thanks to step-by-step bubble generation, the electronic steering capabilities of the array through the skull were improved. PMID:19770084
Ivancevic, Marko K; Kwee, Thomas C; Takahara, Taro; Ogino, Tetsuo; Hussain, Hero K; Liu, Peter S; Chenevert, Thomas L
2009-11-01
To assess the feasibility of TRacking Only Navigator echo (TRON) for diffusion-weighted magnetic resonance imaging (DWI) of the liver at 3.0 T. Ten volunteers underwent TRON, respiratory-triggered, and free-breathing DWI of the liver at 3.0 Tesla (T), and scan times were measured. Image sharpness and the degree of stair-step and stripe artifacts for the three methods were assessed by two observers. Mean scan times of TRON and respiratory-triggered DWI relative to free-breathing DWI were 34% and 145% longer, respectively. In four of eight comparisons (two observers, two b-values, two slice orientations), TRON DWI image sharpness was significantly better than free-breathing DWI, but inferior to respiratory-triggered DWI. In two of four comparisons (two observers, two b-values), the degree of stair-step artifacts in TRON DWI was significantly lower than in respiratory-triggered DWI. The degree of stripe artifacts did not differ significantly between the three methods. DWI of the liver at 3.0 T using TRON is feasible. Image sharpness in TRON DWI is superior to that in free-breathing DWI. Although the image sharpness of respiratory-triggered DWI is still better, TRON DWI requires less scan time and reduces stair-step artifacts.
Wan, Hui; Zhang, Kai; Rasch, Philip J.; ...
2017-02-03
A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can be used to distinguish rounding-level solution changes from the impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
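A minimal sketch of the test's logic on a toy "model" (a forced scalar ODE under explicit Euler) follows: control ensembles run at dt and dt/2 define the time-step-sensitivity threshold, and a modified configuration fails when its departure from the control exceeds that threshold. CAM's multivariate state and full ensemble design are not reproduced.

```python
# Sketch only: time-step-convergence pass/fail test on a toy forced ODE.
import numpy as np

def run(dt, ic, forcing=1.0, T=1.0):
    y = ic
    for k in range(int(T / dt)):
        y += dt * (-2.0 * y + forcing * np.sin(4.0 * k * dt))
    return y

rng = np.random.default_rng(7)
ics = 1.0 + 1e-6 * rng.standard_normal(20)      # perturbed initial conditions

ctrl = np.array([run(1e-3, ic) for ic in ics])
half = np.array([run(5e-4, ic) for ic in ics])
threshold = np.abs(ctrl - half).max()           # known time-step sensitivity

def check(label, **kwargs):
    trial = np.array([run(1e-3, ic, **kwargs) for ic in ics])
    diff = np.abs(trial - ctrl).max()
    verdict = "PASS" if diff <= threshold else "FAIL"
    print(f"{label}: {verdict} (diff {diff:.2e} vs threshold {threshold:.2e})")

check("identical configuration")                # expect PASS
check("perturbed parameter", forcing=1.05)      # expect FAIL
```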
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melius, C
2007-12-05
The epidemiological and economic modeling of poultry diseases requires knowing the size, location, and operational type of each poultry operation within the US. At present, the only national database of poultry operations that is available to the general public is the USDA's 2002 Agricultural Census data, published by the National Agricultural Statistics Service, herein referred to as the 'NASS data'. The NASS data provides census data at the county level on poultry operations for various operation types (i.e., layers, broilers, turkeys, ducks, geese). However, the number of farms and sizes of farms for the various types are not independent, since some facilities have more than one type of operation. Furthermore, some data on the number of birds represent the number sold, which does not represent the number of birds present at any given time. In addition, any data tabulated by NASS that could identify numbers of birds or other data reported by an individual respondent is suppressed by NASS and coded with a 'D'. To be useful for epidemiological and economic modeling, the NASS data must be converted into a unique set of facility types (farms having similar operational characteristics). The unique set must not double count facilities or birds. At the same time, it must account for all the birds, including those for which the data have been suppressed. Therefore, several data processing steps are required to work back from the published NASS data to obtain a consistent database for individual poultry operations. This technical report documents the data processing steps that were used to convert the NASS data into a national poultry facility database with twenty-six facility types (7 egg-laying, 6 broiler, 1 backyard, 3 turkey, and 9 others, representing ducks, geese, ostriches, emus, pigeons, pheasants, quail, game fowl breeders and 'other'). The process involves two major steps. The first step defines the rules used to estimate the data that are suppressed within the NASS database; it is similar to the first step used to estimate suppressed data for livestock [Melius et al (2006)]. The second step converts the NASS poultry types into the operational facility types used by the epidemiological and economic model. We also define two additional facility types for high- and low-risk poultry backyards, and an additional two facility types for live bird markets and swap meets. The distribution of these additional facility types among counties is based on US population census data. The algorithm defining the number of premises and the corresponding distribution among counties, and the resulting premises density plots for the continental US, are provided.
Adaptive multi-time-domain subcycling for crystal plasticity FE modeling of discrete twin evolution
NASA Astrophysics Data System (ADS)
Ghosh, Somnath; Cheng, Jiahao
2018-02-01
Crystal plasticity finite element (CPFE) models that account for discrete micro-twin nucleation and propagation have recently been developed for studying the complex deformation behavior of hexagonal close-packed (HCP) materials (Cheng and Ghosh in Int J Plast 67:148-170, 2015, J Mech Phys Solids 99:512-538, 2016). A major difficulty with conducting high-fidelity, image-based CPFE simulations of polycrystalline microstructures with explicit twin formation is the prohibitively high demand on computing time. High strain localization within fast-propagating twin bands requires very fine simulation time steps and leads to enormous computational cost. To mitigate this shortcoming and improve simulation efficiency, this paper proposes a multi-time-domain subcycling algorithm. It is based on adaptive partitioning of the evolving computational domain into twinned and untwinned domains. Based on the local deformation rate, the algorithm accelerates simulations by adopting different time steps for each sub-domain. The sub-domains are coupled back after coarse time increments using a predictor-corrector algorithm at the interface. The subcycling-augmented CPFEM is validated with a comprehensive set of numerical tests. Significant speed-up is observed with this novel algorithm, without any loss of accuracy, which is advantageous for predicting twinning in polycrystalline microstructures.
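The subcycling idea can be illustrated on a toy problem: a "fast" variable (standing in for the twinned domain) is advanced with a fine step inside each coarse step of a "slow" variable, and the two are coupled back with a simple predictor-corrector at the interface. All names and the coupling rule are illustrative assumptions, not the CPFE implementation:

```python
def subcycled_step(y_slow, y_fast, dt_coarse, n_sub, f_slow, f_fast):
    """One coarse step, with n_sub fine sub-steps for the fast sub-domain."""
    # Predictor: advance the slow sub-domain using the current fast value.
    y_slow_pred = y_slow + dt_coarse * f_slow(y_slow, y_fast)
    # Subcycle the fast sub-domain, interpolating the slow interface value
    # linearly across the coarse increment.
    dt_fine = dt_coarse / n_sub
    y = y_fast
    for k in range(n_sub):
        frac = (k + 0.5) / n_sub
        y_slow_mid = (1 - frac) * y_slow + frac * y_slow_pred
        y = y + dt_fine * f_fast(y, y_slow_mid)
    # Corrector: redo the slow step with an averaged fast value.
    y_slow_new = y_slow + dt_coarse * f_slow(y_slow, 0.5 * (y_fast + y))
    return y_slow_new, y
```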
Use of electronic games by young children and fundamental movement skills?
Barnett, Lisa M; Hinkley, Trina; Okely, Anthony D; Hesketh, Kylie; Salmon, Jo
2012-06-01
This study investigated associations between pre-school children's time spent playing electronic games and their fundamental movement skills. In 2009, 53 children had physical activity (Actigraph accelerometer counts per minute), parent proxy-report of child's time in interactive and non-interactive electronic games (min./week), and movement skill (Test of Gross Motor Development-2) assessed. Hierarchical linear regression, adjusting for age (range = 3-6 years), sex (Step 1), and physical activity (cpm; M=687, SD=175.42; Step 2), examined the relationship between time in (a) non-interactive and (b) interactive electronic games and locomotor and object control skill. More than half (59%, n=31) of the children were female. Adjusted time in interactive game use was associated with object control but not locomotor skill. Adjusted time in non-interactive game use had no association with object control or locomotor skill. Greater time spent playing interactive electronic games is associated with higher object control skill proficiency in these young children. Longitudinal and experimental research is required to determine if playing these games improves object control skills or if children with greater object control skill proficiency prefer and play these games.
Apollo 15 time and motion study
NASA Technical Reports Server (NTRS)
Kubis, J. F.; Elrod, J. T.; Rusnak, R.; Barnes, J. E.
1972-01-01
A time and motion study of Apollo 15 lunar surface activity led to examination of four distinct areas of crewmen activity. These areas are: an analysis of lunar mobility, a comparative analysis of tasks performed in 1-g training and lunar EVA, an analysis of the metabolic cost of two activities that are performed in several EVAs, and a fall/near-fall analysis. An analysis of mobility showed that the crewmen used three basic mobility patterns (modified walk, hop, side step) while on the lunar surface. These mobility patterns were utilized as adaptive modes to compensate for the uneven terrain and varied soil conditions that the crewmen encountered. A comparison of the time required to perform tasks at the final 1-g lunar EVA training sessions and the time required to perform the same task on the lunar surface indicates that, in almost all cases, it took significantly more time (on the order of 40%) to perform tasks on the moon. This increased time was observed even after extraneous factors (e.g., hardware difficulties) were factored out.
Brown, Matthew L; Seyler, Thorsten M; Allen, John; Plate, Johannes F; Henshaw, Daryl S; Lang, Jason E
2015-03-01
Unicondylar knee arthroplasty (UKA) offers decreased morbidity, faster recovery, better functional outcomes, and equivalent survivorship compared to total knee arthroplasty (TKA) for certain patients. To fully capture these benefits, regional anesthesia techniques must facilitate rather than compromise patients' ability for early postoperative mobilization and safe discharge following UKA. The purpose of this study was to determine whether the predominantly sensory adductor canal blockade (ACB) shortens hospital stay after medial UKA (mUKA). Secondary endpoints were narcotic consumption, steps walked during PT sessions, and total PT sessions required prior to discharge. Twelve patients scheduled for elective mUKA received spinal anesthesia and a single-shot ACB. ACB patients were matched by age, gender, body mass index (BMI), and Charlson Comorbidity Index in a 1:2 ratio to 24 lumbar plexus block (LPB) patients. Time to hospital discharge, number of physical therapy (PT) sessions required for safe discharge, and steps taken during PT sessions were retrospectively abstracted from each patient's medical record. Patients who received ACB had a significantly shorter hospital stay (27.8 ± 3.9 hours) compared with patients who received LPB (39.7 ± 18.5 hours, p = 0.025). Patients treated with ACB required significantly fewer PT sessions (1.3 ± 0.6 sessions) compared to patients who received LPB (2.4 ± 1.5 sessions, p = 0.007). Patients treated with ACB walked significantly more steps during their first PT session (225.0 ± 156.6 steps) compared with patients treated with LPB (107.4 ± 170.0 steps, p = 0.045). There was a trend towards decreased narcotic requirements in the ACB group. Data from our study suggest that ACB may permit earlier hospital discharge and better participation in PT without compromising the quality of perioperative analgesia. Thus, ACB may represent a promising option for patients undergoing mUKA in terms of improved clinical outcomes, decreased postoperative morbidity, and cost-effectiveness.
hp-Adaptive time integration based on the BDF for viscous flows
NASA Astrophysics Data System (ADS)
Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.
2015-06-01
This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control, respectively, the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low-order time integrators, while accurate solutions require high-order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the chosen method of integration operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
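The h-adaptive part can be summarized by an elementary error controller with step rejection. A minimal sketch for a method of order p follows; the safety factor and growth limits are common defaults, and the paper's p-adaptive stability test is not reproduced here:

```python
def select_stepsize(err, dt, p, tol, safety=0.9, grow=2.0, shrink=0.1):
    """Accept/reject a step and propose the next dt for a method of order p."""
    if err == 0.0:
        return True, grow * dt
    factor = safety * (tol / err) ** (1.0 / (p + 1))
    factor = min(grow, max(shrink, factor))   # limit stepsize changes
    return err <= tol, factor * dt            # reject when tolerance exceeded
```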
NASA Astrophysics Data System (ADS)
Harkrider, Curtis Jason
2000-08-01
The incorporation of gradient-index (GRIN) material into optical systems offers novel and practical solutions to lens design problems. However, widespread use of gradient-index optics has been limited by poor correlation between gradient-index designs and the refractive index profiles produced by ion exchange between glass and molten salt. Previously, a design-for-manufacture model was introduced that connected the design and fabrication processes through use of diffusion modeling linked with lens design software. This project extends the design-for-manufacture model into a time-varying boundary condition (TVBC) diffusion model. TVBC incorporates the time-dependent phenomenon of melt poisoning and introduces a new index profile control method, multiple-step diffusion. The ions displaced from the glass during the ion exchange fabrication process can reduce the total change in refractive index (Δn). Chemical equilibrium is used to model this melt poisoning process. Equilibrium experiments are performed in a titania silicate glass and chemically analyzed. The equilibrium model is fit to ion concentration data that are used to calculate ion exchange boundary conditions. The boundary conditions are changed purposely to control the refractive index profile in multiple-step TVBC diffusion. The glass sample is alternated between ion exchange with a molten salt bath and annealing. The time of each diffusion step can be used to exert control on the index profile. The TVBC computer model is experimentally verified and incorporated into the design-for-manufacture subroutine that runs in lens design software. The TVBC design-for-manufacture model is useful for fabrication-based tolerance analysis of gradient-index lenses and for the design of manufacturable GRIN lenses. Several optical elements are designed and fabricated using multiple-step diffusion, verifying the accuracy of the model. The strength of the multiple-step diffusion process lies in its versatility. An axicon, imaging lens, and curved radial lens, all with different index profile requirements, are designed out of a single glass composition.
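A toy one-dimensional sketch of the multiple-step idea: explicit finite differences alternating an ion-exchange phase (fixed surface concentration) with an anneal phase (no-flux surface). All parameter values are illustrative, not fitted to the titania silicate glass of the study:

```python
import numpy as np

def diffuse(c, D, dx, dt, n_steps, surface=None):
    """March the 1D diffusion equation; `surface` fixes c[0] when given."""
    r = D * dt / dx**2              # keep r <= 0.5 for explicit stability
    for _ in range(n_steps):
        c[1:-1] += r * (c[2:] - 2 * c[1:-1] + c[:-2])
        c[0] = surface if surface is not None else c[1]  # surface BC
        c[-1] = c[-2]               # zero-flux at the far boundary
    return c

c = np.zeros(200)                                   # normalized ion concentration
c = diffuse(c, D=1.0, dx=1.0, dt=0.25, n_steps=4000, surface=1.0)  # exchange step
c = diffuse(c, D=1.0, dx=1.0, dt=0.25, n_steps=4000)               # anneal step
```

Alternating the two calls, and varying the duration of each, is the lever the multiple-step technique uses to shape the final index profile.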
Gaze shifts and fixations dominate gaze behavior of walking cats
Rivers, Trevor J.; Sirota, Mikhail G.; Guttentag, Andrew I.; Ogorodnikov, Dmitri A.; Shah, Neet A.; Beloozerova, Irina N.
2014-01-01
Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5 m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body’s speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats’ gaze behavior during all locomotor tasks, jointly occupying 62–84% of the time when the gaze was directed at the surface. As visual complexity of the surface and demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to them, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of forward gaze shift followed by fixation. We call this behavior “gaze stepping”. Each gaze shift took gaze to a site approximately 75–80 cm in front of the cat, which the cat reached in 0.7–1.2 s and 1.1–1.6 strides. Constant gaze occupied only 5–21% of the time cats spent looking at the walking surface. PMID:24973656
NASA Astrophysics Data System (ADS)
Dannberg, J.; Heister, T.; Grove, R. R.; Gassmoeller, R.; Spiegelman, M. W.; Bangerth, W.
2017-12-01
Earth's surface shows many features whose genesis can only be understood through the interplay of geodynamic and thermodynamic models. This is particularly important in the context of melt generation and transport: mantle convection determines the distribution of temperature and chemical composition, while the melting process itself is controlled by thermodynamic relations and in turn influences the properties and the transport of melt. Here, we present our extension of the community geodynamics code ASPECT, which solves the equations of coupled magma/mantle dynamics and allows the integration of different parametrizations of reactions and phase transitions: they may alternatively be implemented as simple analytical expressions, look-up tables, or computed by a thermodynamics software. As ASPECT uses a variety of numerical methods and solvers, this also gives us the opportunity to compare different approaches to modelling the melting process. In particular, we will elaborate on the spatial and temporal resolution that is required to accurately model phase transitions, and show the potential of adaptive mesh refinement when applied to melt generation and transport. We will assess the advantages and disadvantages of iterating between fluid dynamics and chemical reactions derived from thermodynamic models within each time step, or decoupling them, allowing for different time step sizes. Beyond that, we will expand on the functionality required for an interface between computational thermodynamics and fluid dynamics models from the geodynamics side. Finally, using a simple example of melting of a two-phase, two-component system, we compare different time-stepping and solver schemes in terms of accuracy and efficiency, depending on the relative time scales of fluid flow and chemical reactions. Our software provides a framework to integrate thermodynamic models into high-resolution, 3D simulations of coupled magma/mantle dynamics, and can be used as a tool to study links between physical processes and geochemical signals in the Earth.
Time for actions in lucid dreams: effects of task modality, length, and complexity
Erlacher, Daniel; Schädlich, Melanie; Stumbrys, Tadas; Schredl, Michael
2014-01-01
The relationship between time in dreams and real time has intrigued scientists for centuries. The question if actions in dreams take the same time as in wakefulness can be tested by using lucid dreams where the dreamer is able to mark time intervals with prearranged eye movements that can be objectively identified in EOG recordings. Previous research showed an equivalence of time for counting in lucid dreams and in wakefulness (LaBerge, 1985; Erlacher and Schredl, 2004), but Erlacher and Schredl (2004) found that performing squats required about 40% more time in lucid dreams than in the waking state. To find out if the task modality, the task length, or the task complexity results in prolonged times in lucid dreams, an experiment with three different conditions was conducted. In the first condition, five proficient lucid dreamers spent one to three non-consecutive nights in the sleep laboratory. Participants counted to 10, 20, and 30 in wakefulness and in their lucid dreams. Lucidity and task intervals were time stamped with left-right-left-right eye movements. The same procedure was used for the second condition where eight lucid dreamers had to walk 10, 20, or 30 steps. In the third condition, eight lucid dreamers performed a gymnastics routine, which in the waking state lasted the same time as walking 10 steps. Again, we found that performing a motor task in a lucid dream requires more time than in wakefulness. Longer durations in the dream state were present for all three tasks, but significant differences were found only for the tasks with motor activity (walking and gymnastics). However, no difference was found for relative times (no disproportional time effects) and a more complex motor task did not result in more prolonged times. Longer durations in lucid dreams might be related to the lack of muscular feedback or slower neural processing during REM sleep. Future studies should explore factors that might be associated with prolonged durations. PMID:24474942
Airspace Operations Demo Functional Requirements Matrix
NASA Technical Reports Server (NTRS)
2005-01-01
The Flight IPT assessed the reasonableness of demonstrating each of the Access 5 Step 1 functional requirements. The functional requirements listed in this matrix are from the September 2005 release of the Access 5 Functional Requirements Document. The demonstration mission considered was a notional Western US mission (WUS). The conclusion of the assessment is that 90% of the Access 5 Step 1 functional requirements can be demonstrated using the notional Western US mission.
NASA Astrophysics Data System (ADS)
Parand, K.; Latifi, S.; Moayeri, M. M.; Delkhosh, M.
2018-05-01
In this study, we have constructed a new numerical approach for solving the time-dependent linear and nonlinear Fokker-Planck equations. We discretize the time variable with the Crank-Nicolson method and, for the space variable, apply a numerical method based on Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) collocation. This leads to solving the equation in a series of time steps; at each time step, the problem is reduced to a system of algebraic equations, which greatly simplifies the problem. One can observe that the proposed method is simple and accurate. Indeed, one of its merits is that it is derivative-free: by proposing a formula for the derivative matrices, the difficulty arising in their calculation is overcome; moreover, it does not need to calculate the Generalized Lagrange basis and matrices, as they have the Kronecker property. Linear and nonlinear Fokker-Planck equations are given as examples, and the results amply demonstrate that the presented method is valid, effective, and reliable, and does not require any restrictive assumptions for nonlinear terms.
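The time discretization can be sketched for a linear Fokker-Planck equation of Ornstein-Uhlenbeck type, with a plain finite-difference space operator standing in for the GLJGL collocation used in the paper; grid sizes and coefficients are illustrative:

```python
import numpy as np

N, L_dom = 201, 5.0
x = np.linspace(-L_dom, L_dom, N)
dx = x[1] - x[0]
D, gamma, dt = 1.0, 1.0, 0.01

# A u = d/dx(gamma*x*u) + D*d2u/dx2, centered differences at interior points.
A = np.zeros((N, N))
for i in range(1, N - 1):
    A[i, i - 1] = -gamma * x[i - 1] / (2 * dx) + D / dx**2
    A[i, i] = -2 * D / dx**2
    A[i, i + 1] = gamma * x[i + 1] / (2 * dx) + D / dx**2

I = np.eye(N)
lhs = I - 0.5 * dt * A    # Crank-Nicolson: (I - dt/2 A) u_new = (I + dt/2 A) u
rhs = I + 0.5 * dt * A
u = np.exp(-(x - 2.0) ** 2)   # initial density, relaxes toward a Gaussian
u /= u.sum() * dx             # normalize
for _ in range(500):          # one linear solve per time step
    u = np.linalg.solve(lhs, rhs @ u)
```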
Method for Reducing the Refresh Rate of Fiber Bragg Grating Sensors
NASA Technical Reports Server (NTRS)
Parker, Allen R., Jr. (Inventor)
2014-01-01
The invention provides a method of obtaining the FBG data in final form (transforming the raw data into frequency and location data) by taking the raw FBG sensor data and dividing the data into a plurality of segments over time. By transforming the raw data into a plurality of smaller segments, processing time is significantly decreased. Also, by defining the segments over time, only one processing step is required. By employing this method, the refresh rate of FBG sensor systems can be improved from about 1 scan per second to over 20 scans per second.
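A rough sketch of the segmentation idea: one long raw record is split into short time segments, and a dominant frequency is extracted per segment, so the effective refresh rate is set by the segment length rather than the record length. Signal and rates are invented for illustration:

```python
import numpy as np

fs, n_seg = 20000.0, 20                 # sample rate (Hz), segments per second
raw = np.sin(2 * np.pi * 1200.0 * np.arange(int(fs)) / fs)   # 1 s of raw data
seg_len = int(fs / n_seg)               # samples per segment
for seg in raw[:5 * seg_len].reshape(5, seg_len):   # first 5 segments
    spec = np.abs(np.fft.rfft(seg))
    peak = np.argmax(spec[1:]) + 1      # skip the DC bin
    print(f"dominant frequency: {peak * fs / seg_len:.0f} Hz")
```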
1990-02-02
[Fragmentary record; only a nomenclature list and scattered sentences survive the extraction.] Nomenclature: NASP, National Aero-Space Plane; NTC, no time counter; TSS-2, Tethered Satellite System-2; VHS, variable hard sphere; VSL, viscous shock-layer. The surviving text notes that a linear system must be solved at each time step to evaluate the mass fractions Y_i, that the matrix of this linear system is shown in [21] to be an M-matrix (see e.g. [42]), and that system (4.7)-(4.8) is first rewritten in a form separating the time-dependent, convective, diffusive and reactive terms; the equation itself is truncated in the source.
Numerical Inverse Scattering for the Toda Lattice
NASA Astrophysics Data System (ADS)
Bilman, Deniz; Trogdon, Thomas
2017-06-01
We present a method to compute the inverse scattering transform (IST) for the famed Toda lattice by solving the associated Riemann-Hilbert (RH) problem numerically. Deformations for the RH problem are incorporated so that the IST can be evaluated in O(1) operations for arbitrary points in the (n, t)-domain, including short- and long-time regimes. No time-stepping is required to compute the solution because (n, t) appear as parameters in the associated RH problem. The solution of the Toda lattice is computed in long-time asymptotic regions where the asymptotics are not known rigorously.
Observational study of treatment space in individual neonatal cot spaces.
Hignett, Sue; Lu, Jun; Fray, Mike
2010-01-01
Technology developments in neonatal intensive care units have increased the spatial requirements for clinical activities. Because the effectiveness of healthcare delivery is determined in part by the design of the physical environment and the spatial organization of work, it is appropriate to apply an evidence-based approach to architectural design. This study aimed to provide empirical evidence of the spatial requirements for an individual cot or incubator space. Observational data from 2 simulation exercises were combined with an expert review to produce a final recommendation. A validated 5-step protocol was used to collect data. Step 1 defined the clinical specialty and space. In step 2, data were collected with 28 staff members and 15 neonates to produce a simulation scenario representing the frequent and safety-critical activities. In step 3, 21 staff members participated in functional space experiments to determine the average spatial requirements. Step 4 incorporated additional data (eg, storage and circulation) to produce a spatial recommendation. Finally, the recommendation was reviewed in step 5 by a national expert clinical panel to consider alternative layouts and technology. The average space requirement for an individual neonatal intensive care unit cot (incubator) space was 13.5 m2 (or 145.3 ft2). The circulation and storage space requirements added in step 4 increased this to 18.46 m2 (or 198.7 ft2). The expert panel reviewed the recommendation and agreed that the average individual cot space (13.5 m2 [145.3 ft2]) would accommodate variance in working practices. Care needs to be taken when extrapolating this recommendation to multiple cot areas to maintain the minimum spatial requirement.
Cakar, N; Tuŏrul, M; Demirarslan, A; Nahum, A; Adams, A; Akýncý, O; Esen, F; Telci, L
2001-04-01
To determine the time required for the partial pressure of arterial oxygen (PaO2) to reach equilibrium after a 0.20 increment or decrement in fractional inspired oxygen concentration (FIO2) during mechanical ventilation. A multi-disciplinary ICU in a university hospital. Twenty-five adult, non-COPD patients with stable blood gas values (PaO2/FIO2 > or = 180 on the day of the study) on pressure-controlled ventilation (PCV). Following a baseline PaO2 (PaO2b) measurement at FIO2 = 0.35, the FIO2 was increased to 0.55 for 30 min and then decreased to 0.35 without any other change in ventilatory parameters. Sequential blood gas measurements were performed at 3, 5, 7, 9, 11, 15, 20, 25 and 30 min in both periods. The PaO2 values measured at the 30th min after a step change in FIO2 (FIO2 = 0.55, PaO2[55] and FIO2 = 0.35, PaO2[35]) were accepted as representative of the equilibrium values for PaO2. Each patient's rise and fall in PaO2 over time, PaO2(t), were fitted to the following respective exponential equations: PaO2b + (PaO2[55] - PaO2b)(1 - e^(-kt)) and PaO2[55] + (PaO2[35] - PaO2[55])e^(-kt), where "t" refers to time, and PaO2[55] and PaO2[35] are the final PaO2 values obtained at a new FIO2 of 0.55 and 0.35, after a 0.20 increment and decrement in FIO2, respectively. The time constant "k" was determined by a non-linear fitting curve, and 90% oxygenation times were defined as the time required to reach 90% of the final equilibrated PaO2, calculated using the non-linear fitting curves. Time constant values for the rise and fall periods were 1.01 +/- 0.71 min-1 and 0.69 +/- 0.42 min-1, respectively, and 90% oxygenation times for the rise and fall periods were 4.2 +/- 4.1 min and 5.5 +/- 4.8 min, respectively. There was no significant difference between the rise and fall periods for the two parameters (p > 0.05). We conclude that in stable patients ventilated with PCV, after a step change in FIO2 of 0.20, 5-10 min will be adequate for obtaining a blood gas sample to measure a PaO2 that will be representative of the equilibrium PaO2 value.
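The fitting procedure can be sketched with a standard nonlinear least-squares routine; the sample points below are invented to be consistent with a rate constant near the reported values:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([3, 5, 7, 9, 11, 15, 20, 25, 30], dtype=float)       # min
pao2 = np.array([144, 154, 158, 159, 160, 160, 160, 160, 160],    # mmHg,
                dtype=float)                                       # invented
pao2_b, pao2_55 = 88.0, 160.0   # baseline and equilibrium values (invented)

def rise(t, k):
    """PaO2(t) = PaO2b + (PaO2[55] - PaO2b) * (1 - exp(-k*t))."""
    return pao2_b + (pao2_55 - pao2_b) * (1.0 - np.exp(-k * t))

(k,), _ = curve_fit(rise, t, pao2, p0=[0.5])
t90 = np.log(10.0) / k          # time to reach 90% of the final change
print(f"k = {k:.2f} min^-1, 90% oxygenation time = {t90:.1f} min")
```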
Web processing service for landslide hazard assessment
NASA Astrophysics Data System (ADS)
Sandric, I.; Ursaru, P.; Chitu, D.; Mihai, B.; Savulescu, I.
2012-04-01
Hazard analysis requires heavy computation and specialized software. Web processing services can offer complex solutions that can be accessed through a light client (web or desktop). This paper presents a web processing service (both WPS and Esri Geoprocessing Service) for landslide hazard assessment. The web processing service was built with the Esri ArcGIS Server solution and Python, developed using ArcPy, GDAL Python and NumPy. A complex model for landslide hazard analysis, using both predisposing and triggering factors combined into a Bayesian temporal network with uncertainty propagation, was built and published as a WPS and Geoprocessing service using ArcGIS Standard Enterprise 10.1. The model uses as predisposing factors the first and second derivatives from the DEM, the effective precipitation, runoff, lithology and land use. All these parameters can be served by the client from other WFS services or by uploading and processing the data on the server. The user can select the option of creating the first and second derivatives from the DEM automatically on the server or of uploading data already calculated. One of the main dynamic factors in the landslide analysis model is the leaf area index (LAI). The LAI offers the advantage of modelling not just the changes between different time periods expressed in years, but also the seasonal changes in land use throughout a year. The LAI can be derived from various satellite images or downloaded as a product. The upload of such data (time series) is possible using the NetCDF file format. The model is run at a monthly time step, and for each time step all the parameter values and the a priori, conditional and posterior probabilities are obtained and stored in a log file. The validation process uses landslides that have occurred during the period up to the active time step and checks the records of the probabilities and parameter values for those time steps against the values of the active time step. Each time a landslide has been positively identified, new a priori probabilities are recorded for each parameter. A complete log for the entire model is saved and used for statistical analysis, and a NetCDF file is created that can be downloaded from the server with the log file.
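The monthly updating of probabilities can be caricatured by a sequential Bayesian update in which each time step's posterior becomes the next step's prior; all numbers are invented, and the actual model is a full Bayesian temporal network rather than this single-factor toy:

```python
prior = 0.05                              # a priori landslide probability
p_obs_slide, p_obs_none = 0.8, 0.2        # conditional probabilities (invented)
for month, observed in enumerate([False, True, True], start=1):
    like = p_obs_slide if observed else 1 - p_obs_slide
    like_n = p_obs_none if observed else 1 - p_obs_none
    posterior = like * prior / (like * prior + like_n * (1 - prior))
    prior = posterior                     # posterior feeds the next time step
    print(f"month {month}: posterior = {posterior:.3f}")
```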
Barreto, Goncalo; Soininen, Antti; Sillat, Tarvo; Konttinen, Yrjö T; Kaivosoja, Emilia
2014-01-01
Time-of-flight secondary ion mass spectrometry (ToF-SIMS) is increasingly being used in the analysis of biological samples. For example, it has been applied to distinguish healthy and osteoarthritic human cartilage. This chapter discusses the ToF-SIMS principle and instrumentation, including the three modes of analysis in ToF-SIMS. ToF-SIMS sets certain requirements for the samples to be analyzed; for example, the samples have to be vacuum compatible. Accordingly, sample processing steps for different biological samples, i.e., proteins, cells, frozen and paraffin-embedded tissues, and extracellular matrix, are presented. Multivariate analysis of ToF-SIMS data and the necessary data preprocessing steps (peak selection, data normalization, mean-centering, and scaling and transformation) are discussed in this chapter.
An efficient quantum algorithm for spectral estimation
NASA Astrophysics Data System (ADS)
Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth
2017-03-01
We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
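For contrast with the quantum version, the classical matrix pencil method itself fits in a few lines; the sketch below recovers frequencies and damping factors from noiseless complex samples (the pencil parameter and the largest-magnitude pole selection rule are common choices, not the paper's quantum algorithm):

```python
import numpy as np

def matrix_pencil(y, K, dt, P=None):
    """Estimate frequencies and dampings of K damped sinusoids from
    uniform samples y[n] = sum_k a_k * z_k**n (noiseless sketch)."""
    N = len(y)
    P = P or N // 2                                     # pencil parameter
    Y1 = np.array([y[i:i + P] for i in range(N - P)])          # Y1[i,j] = y[i+j]
    Y2 = np.array([y[i + 1:i + P + 1] for i in range(N - P)])  # shifted by one
    vals = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    z = vals[np.argsort(-np.abs(vals))][:K]             # keep the K signal poles
    s = np.log(z) / dt                                  # continuous-time poles
    return s.imag / (2 * np.pi), -s.real                # frequencies, dampings

# Two damped sinusoids recovered from 64 samples.
dt = 0.01
n = np.arange(64)
y = (np.exp((-0.5 + 2j * np.pi * 5.0) * n * dt)
     + 0.7 * np.exp((-1.0 + 2j * np.pi * 12.0) * n * dt))
freqs, damps = matrix_pencil(y, K=2, dt=dt)
print(freqs, damps)   # approximately [5, 12] Hz and [0.5, 1.0]
```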
On coupling fluid plasma and kinetic neutral physics models
Joseph, I.; Rensink, M. E.; Stotler, D. P.; ...
2017-03-01
The coupled fluid plasma and kinetic neutral physics equations are analyzed through theory and simulation of benchmark cases. It is shown that coupling methods that do not treat the coupling rates implicitly are restricted to short time steps for stability. Fast charge exchange, ionization and recombination coupling rates exist, even after constraining the solution by requiring that the neutrals are at equilibrium. For explicit coupling, the present implementation of Monte Carlo correlated sampling techniques does not allow for complete convergence in slab geometry. For the benchmark case, residuals decay with particle number and increase with grid size, indicating that they scale in a manner that is similar to the theoretical prediction for nonlinear bias error. Progress is reported on implementation of a fully implicit Jacobian-free Newton–Krylov coupling scheme. The present block Jacobi preconditioning method is still sensitive to time step, and methods that better precondition the coupled system are under investigation.
Method of detecting system function by measuring frequency response
Morrison, John L.; Morrison, William H.; Christophersen, Jon P.; Motloch, Chester G.
2013-01-08
Methods of rapidly measuring an impedance spectrum of an energy storage device in-situ over a limited number of logarithmically distributed frequencies are described. An energy storage device is excited with a known input signal, and a response is measured to ascertain the impedance spectrum. The excitation signal is a sum-of-sines of limited time duration consisting of a select number of frequencies. In one embodiment, the magnitude and phase of each frequency of interest within the sum-of-sines is identified when the selected frequencies and sample rate are logarithmic integer steps greater than two. This technique requires a measurement with a duration of one period of the lowest frequency. In another embodiment, where the selected frequencies are distributed in octave steps, the impedance spectrum can be determined using a captured time record that is reduced to a half-period of the lowest frequency.
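A sketch of the octave-spaced sum-of-sines idea: the excitation is captured over one period of the lowest frequency, and magnitude and phase at each excitation frequency are recovered by single-bin correlation. Frequencies and rates are illustrative, not those of the patented system:

```python
import numpy as np

f_lo, n_freqs = 0.1, 6
freqs = f_lo * 2.0 ** np.arange(n_freqs)    # 0.1 ... 3.2 Hz in octave steps
fs = 64 * freqs[-1]                         # sample rate well above the top
t = np.arange(0.0, 1.0 / f_lo, 1.0 / fs)    # one period of the lowest frequency
excitation = np.sum([np.sin(2 * np.pi * f * t) for f in freqs], axis=0)

# Magnitude and phase at each excitation frequency by single-bin correlation;
# every frequency completes an integer number of periods over the record.
for f in freqs:
    z = 2 * np.mean(excitation * np.exp(-2j * np.pi * f * t))
    print(f"{f:5.2f} Hz: magnitude {abs(z):.3f}, phase {np.angle(z):+.3f} rad")
```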
Scalable asynchronous execution of cellular automata
NASA Astrophysics Data System (ADS)
Folino, Gianluigi; Giordano, Andrea; Mastroianni, Carlo
2016-10-01
The performance and scalability of cellular automata, when executed on parallel/distributed machines, are limited by the necessity of synchronizing all the nodes at each time step, i.e., a node can execute a step only after the previous step has completed at all the other nodes. However, these synchronization requirements can be relaxed: a node can execute one step after synchronizing only with the adjacent nodes. In this fashion, different nodes can execute different time steps. This can be notably advantageous in many novel and increasingly popular applications of cellular automata, such as smart city applications and the simulation of natural phenomena, in which the execution times can be different and variable, due to the heterogeneity of machines and/or data and/or executed functions. Indeed, a longer execution time at a node does not slow down the execution at all the other nodes but only at the neighboring nodes. This is particularly advantageous when the nodes that act as bottlenecks vary during the application execution. The goal of the paper is to analyze the benefits that can be achieved with the described asynchronous implementation of cellular automata, when compared to the classical all-to-all synchronization pattern. The performance and scalability have been evaluated through a Petri net model, as this model is very useful to represent the synchronization barrier among nodes. We examined the usual case in which the territory is partitioned into a number of regions, and the computation associated with a region is assigned to a computing node. We considered both mono-dimensional and two-dimensional partitionings. The results show that the advantage obtained through the asynchronous execution, compared to the all-to-all synchronous approach, is notable and can be as large as 90% in terms of speedup.
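The relaxed synchronization rule can be sketched as follows: a region may advance to step t+1 as soon as no neighbor is still behind step t, so different regions can sit at different time steps. This serial emulation is purely conceptual, not the Petri net model of the paper:

```python
def can_advance(steps, i, neighbors):
    """A region may advance when no neighbor lags behind its current step."""
    return all(steps[j] >= steps[i] for j in neighbors(i))

def run_async(n_regions, n_steps, execute_step, neighbors):
    steps = [0] * n_regions
    while min(steps) < n_steps:
        for i in range(n_regions):
            if steps[i] < n_steps and can_advance(steps, i, neighbors):
                execute_step(i, steps[i])   # local update of region i only
                steps[i] += 1               # neighbors alone constrain region i

# 1D partitioning on a ring: each region syncs with left/right regions only.
ring = lambda i: [(i - 1) % 8, (i + 1) % 8]
run_async(8, 100, lambda i, t: None, ring)
```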
Real-time automated failure analysis for on-orbit operations
NASA Technical Reports Server (NTRS)
Kirby, Sarah; Lauritsen, Janet; Pack, Ginger; Ha, Anhhoang; Jowers, Steven; Mcnenny, Robert; Truong, The; Dell, James
1993-01-01
A system which is to provide real-time failure analysis support to controllers at the NASA Johnson Space Center Control Center Complex (CCC) for both Space Station and Space Shuttle on-orbit operations is described. The system employs monitored systems' models of failure behavior and model evaluation algorithms which are domain-independent. These failure models are viewed as a stepping stone to more robust algorithms operating over models of intended function. The described system is designed to meet two sets of requirements. It must provide a useful failure analysis capability enhancement to the mission controller. It must satisfy CCC operational environment constraints such as cost, computer resource requirements, verification, and validation. The underlying technology and how it may be used to support operations is also discussed.
Watanabe, Tatsunori; Tsutou, Kotaro; Saito, Kotaro; Ishida, Kazuto; Tanabe, Shigeo; Nojima, Ippei
2016-11-01
Choice reaction requires response conflict resolution, and the resolution processes that occur during a choice stepping reaction task undertaken in a standing position, which requires maintenance of balance, may be different to those processes occurring during a choice reaction task performed in a seated position. The study purpose was to investigate the resolution processes during a choice stepping reaction task at the cortical level using electroencephalography and compare the results with a control task involving ankle dorsiflexion responses. Twelve young adults either stepped forward or dorsiflexed the ankle in response to a visual imperative stimulus presented on a computer screen. We used the Simon task and examined the error-related negativity (ERN) that follows an incorrect response and the correct-response negativity (CRN) that follows a correct response. Error was defined as an incorrect initial weight transfer for the stepping task and as an incorrect initial tibialis anterior activation for the control task. Results revealed that ERN and CRN amplitudes were similar in size for the stepping task, whereas the amplitude of ERN was larger than that of CRN for the control task. The ERN amplitude was also larger in the stepping task than the control task. These observations suggest that a choice stepping reaction task involves a strategy emphasizing post-response conflict and general performance monitoring of actual and required responses and also requires greater cognitive load than a choice dorsiflexion reaction. The response conflict resolution processes appear to be different for stepping tasks and reaction tasks performed in a seated position.
NASA Astrophysics Data System (ADS)
Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken
2016-07-01
Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.
van Erp, Nicole H J; van Vugt, Maaike; Verhoeven, Dorien; Kroon, Hans
2009-01-01
This brief report addresses the systematic implementation of skills training modules for persons with schizophrenia or related disorders in three Dutch mental health agencies. Information on barriers, strategies and integration into routine daily practice was gathered at 0, 12 and 24 months through interviews with managers, program leaders, trainers, practitioners and clients. Overall, implementation of the skills training modules for 74% of the persons with schizophrenia or related disorders was not feasible. Implementation was impeded by an incapable program leader, organizational changes, disappointing referrals and loss of trainers. The agencies made important steps forward in integrating the modules into routine daily practice. A reach percentage of 74% in two years' time is too ambitious and needs to be adjusted. Systematic integration of the modules into routine daily practice is feasible, but requires solid program management and continuous effort to involve clients and practitioners.
DNA strand displacement system running logic programs.
Rodríguez-Patón, Alfonso; Sainz de Murieta, Iñaki; Sosík, Petr
2014-01-01
The paper presents a DNA-based computing model which is enzyme-free and autonomous, not requiring human intervention during the computation. The model is able to perform iterated resolution steps with logical formulae in conjunctive normal form. The implementation is based on the technique of DNA strand displacement, with each clause encoded in a separate DNA molecule. Propositions are encoded by assigning a strand to each proposition p, and its complementary strand to the proposition ¬p; clauses are encoded by combining different propositions in the same strand. The model allows logic programs composed of Horn clauses to be run by cascading resolution steps. The potential of the model is also demonstrated by its theoretical capability of solving SAT. The resulting SAT algorithm has a linear time complexity in the number of resolution steps, whereas its spatial complexity is exponential in the number of variables of the formula. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Antibody-Mediated Small Molecule Detection Using Programmable DNA-Switches.
Rossetti, Marianna; Ippodrino, Rudy; Marini, Bruna; Palleschi, Giuseppe; Porchetta, Alessandro
2018-06-13
The development of rapid, cost-effective, and single-step methods for the detection of small molecules is crucial for improving the quality and efficiency of many applications ranging from life science to environmental analysis. Unfortunately, current methodologies still require multiple complex, time-consuming washing and incubation steps, which limit their applicability. In this work we present a competitive DNA-based platform that makes use of both programmable DNA-switches and antibodies to detect small target molecules. The strategy exploits both the advantages of proximity-based methods and structure-switching DNA-probes. The platform is modular and versatile and it can potentially be applied for the detection of any small target molecule that can be conjugated to a nucleic acid sequence. Here the rational design of programmable DNA-switches is discussed, and the sensitive, rapid, and single-step detection of different environmentally relevant small target molecules is demonstrated.
Farmery, A D; Hahn, C E
2000-08-01
Tidal ventilation gas-exchange models in respiratory physiology and medicine not only require solution of mass balance equations breath-by-breath but also may require within-breath measurements, which are instantaneous functions of time. This demands a degree of temporal resolution and fidelity of integration of gas flow and concentration signals that cannot be provided by most clinical gas analyzers because of their slow response times. We have characterized the step responses of the Datex Ultima (Datex Instrumentation, Helsinki, Finland) gas analyzer to oxygen, carbon dioxide, and nitrous oxide in terms of a Gompertz four-parameter sigmoidal function. By inversion of this function, we were able to reduce the rise times for all these gases almost fivefold, and, by its application to real on-line respiratory gas signals, it is possible to achieve a performance comparable to the fastest mass spectrometers. With the use of this technique, measurements required for non-steady-state and tidal gas-exchange models can be made easily and reliably in the clinical setting.
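A sketch of the Gompertz step-response model and its analytic inverse, with invented parameter values; inverting the fitted sigmoid is what allows a measured response level to be mapped back to a sharper effective time:

```python
import numpy as np

y0, a, t0, b = 0.0, 1.0, 0.30, 0.08   # baseline, span, delay, time scale (s)

def gompertz(t):
    """Four-parameter Gompertz sigmoid modeling the analyzer step response."""
    return y0 + a * np.exp(-np.exp(-(t - t0) / b))

def gompertz_inverse(y):
    """Analytic inverse: map a measured response level back to a time."""
    return t0 - b * np.log(-np.log((y - y0) / a))

t10 = gompertz_inverse(y0 + 0.1 * a)   # time at 10% of the span
t90 = gompertz_inverse(y0 + 0.9 * a)   # time at 90% of the span
print(f"modeled 10-90% rise time: {t90 - t10:.3f} s")
```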
The 3D dynamics of the Cosserat rod as applied to continuum robotics
NASA Astrophysics Data System (ADS)
Jones, Charles Rees
2011-12-01
In the effort to simulate the biologically inspired continuum robot's dynamic capabilities, researchers have been faced with the daunting task of simulating, in real time, the complete three-dimensional dynamics of the "beam-like" structure, which includes the three "stiff" degrees of freedom: transverse and dilational shear. Therefore, researchers have traditionally limited the difficulty of the problem with simplifying assumptions. This study, however, puts forward a solution which makes no simplifying assumptions and trades off only the real-time requirement of the desired solution. The solution is a Finite Difference Time Domain method employing an explicit single-step method with cheap right-hand sides. The cheap right-hand sides are the result of a rather ingenious formulation of the classical beam, called the Cosserat rod by, first, the Cosserat brothers and, later, Stuart S. Antman, which results in five nonlinear but uncoupled equations that require only multiplication and addition. The method is therefore suitable for hardware implementation, thus moving the real-time requirement from a software solution to a hardware solution.
Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Ash, Robert L.
1992-01-01
The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employed a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the k-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration were investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes, and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes were modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C0 continuity of the grid across the block interface. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of the code and assessment of performance, as well as demonstration of flexibility.
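A minimal sketch of a multistage time-stepping update of the general form u^(k) = u^n - alpha_k*dt*R(u^(k-1)); the coefficients shown are a common four-stage set, not the optimized coefficients developed in the study:

```python
def multistage_step(u, dt, residual, alphas=(0.25, 1/3, 0.5, 1.0)):
    """One m-stage update for du/dt = -R(u); u is a numpy array."""
    u0 = u.copy()
    for a in alphas:
        u = u0 - a * dt * residual(u)   # residual evaluated at previous stage
    return u
```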
NASA Astrophysics Data System (ADS)
Hansen, Rebecca L.; Lee, Young Jin
2017-09-01
Metabolomics experiments require chemical identifications, often through MS/MS analysis. In mass spectrometry imaging (MSI), this necessitates running several serial tissue sections or using a multiplex data acquisition method. We have previously developed a multiplex MSI method to obtain MS and MS/MS data in a single experiment to acquire more chemical information in less data acquisition time. In this method, each raster step is composed of several spiral steps and each spiral step is used for a separate scan event (e.g., MS or MS/MS). One main limitation of this method is the loss of spatial resolution as the number of spiral steps increases, limiting its applicability for high-spatial resolution MSI. In this work, we demonstrate multiplex MS imaging is possible without sacrificing spatial resolution by the use of overlapping spiral steps, instead of spatially separated spiral steps as used in the previous work. Significant amounts of matrix and analytes are still left after multiple spectral acquisitions, especially with nanoparticle matrices, so that high quality MS and MS/MS data can be obtained on virtually the same tissue spot. This method was then applied to visualize metabolites and acquire their MS/MS spectra in maize leaf cross-sections at 10 μm spatial resolution.
Effect of water hardness on cardiovascular mortality: an ecological time series approach.
Lake, I R; Swift, L; Catling, L A; Abubakar, I; Sabel, C E; Hunter, P R
2010-12-01
Numerous studies have suggested an inverse relationship between drinking water hardness and cardiovascular disease. However, the weight of evidence is insufficient for the WHO to implement a health-based guideline for water hardness. This study followed WHO recommendations to assess the feasibility of using ecological time series data from areas exposed to step changes in water hardness to investigate this issue. Monthly time series of cardiovascular mortality data, subdivided by age and sex, were systematically collected from areas reported to have undergone step changes in water hardness, calcium and magnesium in England and Wales between 1981 and 2005. Time series methods were used to investigate the effect of water hardness changes on mortality. No evidence was found of an association between step changes in drinking water hardness or drinking water calcium and cardiovascular mortality. The lack of areas with large populations and a reasonable change in magnesium levels precludes a definitive conclusion about the impact of this cation. We use our results on the variability of the series to consider the data requirements (size of population, time of water hardness change) for such a study to have sufficient power. Only data from areas with large populations (>500,000) are likely to be able to detect a change of the size suggested by previous studies (rate ratio of 1.06). Ecological time series studies of populations exposed to changes in drinking water hardness may not be able to provide conclusive evidence on the links between water hardness and cardiovascular mortality unless very large populations are studied. Investigations of individuals may be more informative.
Head movement compensation in real-time magnetoencephalographic recordings.
Little, Graham; Boe, Shaun; Bardouille, Timothy
2014-01-01
Neurofeedback- and brain-computer interface (BCI)-based interventions can be implemented using real-time analysis of magnetoencephalographic (MEG) recordings. Head movement during MEG recordings, however, can lead to inaccurate estimates of brain activity, reducing the efficacy of the intervention. Most real-time applications in MEG have utilized analyses that do not correct for head movement. Effective means of correcting for head movement are needed to optimize the use of MEG in such applications. Here we provide preliminary validation of a novel analysis technique, real-time source estimation (rtSE), that measures head movement and generates corrected current source time course estimates in real-time. rtSE was applied while recording a calibrated phantom to determine phantom position localization accuracy and source amplitude estimation accuracy under stationary and moving conditions. Results were compared to off-line analysis methods to assess validity of the rtSE technique. The rtSE method allowed for accurate estimation of current source activity at the source-level in real-time, and accounted for movement of the source due to changes in phantom position. The rtSE technique requires modifications and specialized analysis of the following MEG workflow steps:
• Data acquisition
• Head position estimation
• Source localization
• Real-time source estimation
This work explains the technical details and validates each of these steps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haut, T. S.; Babb, T.; Martinsson, P. G.
2015-06-16
Our manuscript demonstrates a technique for efficiently solving the classical wave equation, the shallow water equations, and, more generally, equations of the form ∂u/∂t = Lu, where L is a skew-Hermitian differential operator. The idea is to explicitly construct an approximation to the time-evolution operator exp(τL) for a relatively large time-step τ. Recently developed techniques for approximating oscillatory scalar functions by rational functions, and accelerated algorithms for computing functions of discretized differential operators, are exploited. Principal advantages of the proposed method include: stability even for large time-steps, the possibility to parallelize in time over many characteristic wavelengths, and large speed-ups over existing methods in situations where simulation over long times is required. Numerical examples involving the 2D rotating shallow water equations and the 2D wave equation in an inhomogeneous medium are presented, and the method is compared to the 4th order Runge–Kutta (RK4) method and to the use of Chebyshev polynomials. The new method achieved high accuracy over long-time intervals, and with speeds that are orders of magnitude faster than both RK4 and the use of Chebyshev polynomials.
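The large-time-step idea can be illustrated directly with a dense matrix exponential standing in for the paper's rational approximation: for a skew-Hermitian L (here spectral advection), exp(τL) is built once and applied as a single stable step, however large τ is:

```python
import numpy as np
from scipy.linalg import expm

N, c, tau = 128, 1.0, 1.0
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])   # integer wavenumbers
F = np.fft.fft(np.eye(N), axis=0)                  # DFT matrix
L = np.linalg.inv(F) @ np.diag(-1j * c * k) @ F    # L u = -c du/dx, skew-Hermitian

P = expm(tau * L)                    # propagator for one large step, built once
u = np.exp(-10 * (x - np.pi) ** 2)   # initial pulse
u_new = (P @ u).real                 # advances the pulse by c*tau in one step
```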
A versatile valving toolkit for automating fluidic operations in paper microfluidic devices.
Toley, Bhushan J; Wang, Jessica A; Gupta, Mayuri; Buser, Joshua R; Lafleur, Lisa K; Lutz, Barry R; Fu, Elain; Yager, Paul
2015-03-21
Failure to utilize valving and automation techniques has restricted the complexity of fluidic operations that can be performed in paper microfluidic devices. We developed a toolkit of paper microfluidic valves and methods for automatic valve actuation using movable paper strips and fluid-triggered expanding elements. To the best of our knowledge, this is the first functional demonstration of this valving strategy in paper microfluidics. After introduction of fluids on devices, valves can actuate automatically after a) a certain period of time, or b) the passage of a certain volume of fluid. Timing of valve actuation can be tuned with greater than 8.5% accuracy by changing lengths of timing wicks, and we present timed on-valves, off-valves, and diversion (channel-switching) valves. The actuators require ~30 μl fluid to actuate and the time required to switch from one state to another ranges from ~5 s for short to ~50 s for longer wicks. For volume-metered actuation, the size of a metering pad can be adjusted to tune actuation volume, and we present two methods - both methods can achieve greater than 9% accuracy. Finally, we demonstrate the use of these valves in a device that conducts a multi-step assay for the detection of the malaria protein PfHRP2. Although slightly more complex than devices that do not have moving parts, this valving and automation toolkit considerably expands the capabilities of paper microfluidic devices. Components of this toolkit can be used to conduct arbitrarily complex, multi-step fluidic operations on paper-based devices, as demonstrated in the malaria assay device.
Practice quality improvement during residency: where do we stand and where can we improve?
Choudhery, Sadia; Richter, Michael; Anene, Alvin; Xi, Yin; Browning, Travis; Chason, David; Morriss, Michael Craig
2014-07-01
Completing a systems-based practice project, equivalent to a practice quality improvement project (PQI), is a residency requirement by the Accreditation Council for Graduate Medical Education and an American Board of Radiology milestone. The aim of this study was to assess the residents' perspectives on quality improvement projects in radiology. Survey data were collected from 154 trainee members of the Association of University Radiologists to evaluate the residents' views on PQI. Most residents were aware of the requirement of completing a PQI project and had faculty mentors for their projects. Residents who thought it was difficult to find a mentor were more likely to start their project later in residency (P < .0001). Publication rates were low overall, and lack of time was considered the greatest obstacle. Having dedicated time for a PQI project was associated with increased likelihood of publishing or presenting the data (P = .0091). Residents who rated the five surveyed PQI steps (coming up with an idea, finding a mentor, designing a project, finding resources, and finding time) as difficult steps were more likely to not have initiated a PQI project (P < .0001 for the first four and P = .0046 for time). We present five practical areas of improvement to make PQI a valuable learning experience: 1) Increasing awareness of PQI and providing ideas for projects, 2) encouraging faculty mentorship and publication, 3) educating residents about project design and implementation, 4) providing resources such as books and funds, and 5) allowing dedicated time. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
BioNSi: A Discrete Biological Network Simulator Tool.
Rubinstein, Amir; Bracha, Noga; Rudner, Liat; Zucker, Noga; Sloin, Hadas E; Chor, Benny
2016-08-05
Modeling and simulation of biological networks is an effective and widely used research methodology. The Biological Network Simulator (BioNSi) is a tool for modeling biological networks and simulating their discrete-time dynamics, implemented as a Cytoscape App. BioNSi includes a visual representation of the network that enables researchers to construct a network, set its parameters, and observe its behavior under various conditions. To construct a network instance in BioNSi, only partial, qualitative biological data suffice. The tool is aimed at experimental biologists and requires no prior computational or mathematical expertise. BioNSi is freely available at http://bionsi.wix.com/bionsi, where a complete user guide and a step-by-step manual can also be found.
Ikegami, Toru; Shirabe, Ken; Yoshiya, Shohei; Soejima, Yuji; Yoshizumi, Tomoharu; Uchiyama, Hideaki; Toshima, Takeo; Motomura, Takashi; Maehara, Yoshihiko
2013-07-01
Reconstruction of the right inferior hepatic vein (RIHV) presents a major technical challenge in living donor liver transplantation (LDLT) using right lobe grafts. We studied 47 right lobe LDLT grafts with RIHV revascularization, comparing one-step reconstruction, performed post-May 2007 (n = 16), with direct anastomosis, performed pre-May 2007 (n = 31). In the one-step reconstruction technique, the internal jugular vein (n = 6), explanted portal vein (n = 5), inferior vena cava (n = 3), and shunt vessels (n = 2) were used as venous patch grafts for unifying the right hepatic vein, RIHVs, and middle hepatic vein tributaries. By 6 months after LDLT, there was no case of occlusion of the reconstructed RIHVs in the one-step reconstruction group, compared with a cumulative occlusion rate of 18.2% in the direct anastomosis group. One-step reconstruction required a longer cold ischemic time (182 ± 40 vs. 115 ± 63, p < 0.001), and these patients had higher alanine transaminase values (142 ± 79 vs. 96 ± 46 IU/L, p = 0.024) on postoperative day (POD) 7. However, the 6-month short-term graft survival rates were 100% with one-step reconstruction and 83.9% with direct anastomosis. One-step reconstruction of the RIHVs using auto-venous grafts is an easy and feasible technique promoting successful right lobe LDLT.
Adaptation of catch-up saccades during the initiation of smooth pursuit eye movements.
Schütz, Alexander C; Souto, David
2011-04-01
Reduction of retinal speed and alignment of the line of sight are believed to be the respective primary functions of smooth pursuit and saccadic eye movements. As eye muscle strength can change in the short term, continuous adjustments of motor signals are required to achieve constant accuracy. While adaptation of saccade amplitude to systematic position errors has been extensively studied, we know less about the adaptive response to position errors during smooth pursuit initiation, when target motion has to be taken into account to program saccades, and when position errors at the saccade endpoint could also be corrected by increasing pursuit velocity. To study short-term adaptation (250 adaptation trials) of tracking eye movements, we introduced a position error during the first catch-up saccade made during the initiation of smooth pursuit, in a ramp-step-ramp paradigm. The target position was either shifted in the direction of the horizontally moving target (forward step), against it (backward step) or orthogonally to it (vertical step). Results indicate adaptation of catch-up saccade amplitude to backward and forward steps. With vertical steps, saccades became oblique, through an inflexion of the early or late saccade trajectory. With a similar time course, post-saccadic pursuit velocity was increased in the step direction, adding further evidence that under some conditions pursuit and saccades can act synergistically to reduce position errors.
The Relaxation of Vicinal (001) with ZigZag [110] Steps
NASA Astrophysics Data System (ADS)
Hawkins, Micah; Hamouda, Ajmi Bh; González-Cabrera, Diego Luis; Einstein, Theodore L.
2012-02-01
This talk presents a kinetic Monte Carlo study of the relaxation dynamics of [110] steps on a vicinal (001) simple cubic surface. This system is interesting because [110] steps have different elementary-excitation energetics and favor step diffusion more than close-packed [100] steps do. In this talk we show how this leads to relaxation dynamics with greater fluctuations on a shorter time scale for [110] steps, and to 2-bond-breaking processes being rate-determining, in contrast to the 3-bond-breaking processes for [100] steps. The existence of a steady state is shown via the convergence of terrace width distributions at times much longer than the relaxation time. In this time regime, excellent fits to the modified generalized Wigner distribution (as well as to the Berry-Robnik model when steps can overlap) were obtained. Also, step-position correlation function data show a diffusion-limited increase for small distances along the step, as well as greater average step displacement for zigzag steps compared to straight steps at somewhat longer distances along the step. Work supported by NSF-MRSEC Grant DMR 05-20471 as well as a DOE-CMCSN Grant.
A new algorithm for automatic Outlier Detection in GPS Time Series
NASA Astrophysics Data System (ADS)
Cannavo', Flavio; Mattia, Mario; Rossi, Massimo; Palano, Mimmo; Bruno, Valentina
2010-05-01
Nowadays, continuous GPS time series are considered a crucial product of GPS permanent networks, useful in many geoscience fields, such as active tectonics, seismology, crustal deformation and volcano monitoring (Altamimi et al. 2002, Elósegui et al. 2006, Aloisi et al. 2009). Although GPS data elaboration software has increased in reliability, the time series are still affected by different kinds of noise, from intrinsic noise (e.g., tropospheric delay) to un-modeled noise (e.g., cycle slips, satellite faults, parameter changes). Typically, GPS time series present characteristic noise that is a linear combination of white noise and correlated colored noise, and this characteristic is fractal in the sense that it is evident at every considered time scale or sampling rate. The un-modeled noise sources result in spikes, outliers and steps. These kinds of errors can appreciably influence the estimation of the velocities of the monitored sites. Outlier detection in generic time series is a widely treated problem in the literature (Wei, 2005), whereas it is not fully developed for this specific kind of GPS series. We propose a robust automatic procedure for cleaning GPS time series of outliers and, especially for long daily series, of steps due to strong seismic or volcanic events or merely instrumentation changes such as antenna and receiver upgrades. The procedure is divided into two steps: a first step for colored noise reduction and a second step for outlier detection through adaptive series segmentation. Both algorithms present novel ideas and are nearly unsupervised. In particular, we propose an algorithm to estimate an autoregressive model for the colored noise in GPS time series in order to subtract the effect of non-Gaussian noise from the series. This step is useful for the subsequent step (i.e., adaptive segmentation), which requires the hypothesis of Gaussian noise. The proposed algorithms are tested in a benchmark case study and the results confirm that the algorithms are effective and reasonable. Bibliography - Aloisi M., A. Bonaccorso, F. Cannavò, S. Gambino, M. Mattia, G. Puglisi, E. Boschi, A new dyke intrusion style for the Mount Etna May 2008 eruption modelled through continuous tilt and GPS data, Terra Nova, Volume 21, Issue 4, Pages 316-321, doi: 10.1111/j.1365-3121.2009.00889.x (August 2009) - Altamimi Z., Sillard P., Boucher C., ITRF2000: A new release of the International Terrestrial Reference Frame for earth science applications, J Geophys Res-Solid Earth, 107 (B10), art. no. 2214 (Oct 2002) - Elósegui, P., J. L. Davis, D. Oberlander, R. Baena, and G. Ekström, Accuracy of high-rate GPS for seismology, Geophys. Res. Lett., 33, L11308, doi:10.1029/2006GL026065 (2006) - Wei W. S., Time Series Analysis: Univariate and Multivariate Methods, Addison Wesley (2nd edition), ISBN-10: 0321322169 (July 2005)
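A minimal sketch of the two-step idea (colored-noise reduction first, outlier flagging second) on synthetic data follows; the AR(1) model order, the 5-sigma median/MAD rule and all constants are assumptions made for this illustration and do not reproduce the adaptive segmentation of the proposed procedure:

```python
# Illustrative two-step cleaning of a GPS-like daily series:
# (1) whiten colored noise with an autoregressive model,
# (2) flag outliers in the whitened residuals with a robust threshold.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
t = np.arange(n)

# Synthetic series: linear trend + AR(1) colored noise, plus injected spikes.
series = 0.01 * t + rng.normal(0.0, 1.0, n)
for i in range(1, n):
    series[i] += 0.8 * (series[i - 1] - 0.01 * (i - 1))
series[[200, 500, 830]] += 15.0

# Step 1: detrend, estimate the AR(1) coefficient by lag-1 regression, whiten.
resid = series - np.polyval(np.polyfit(t, series, 1), t)
phi = resid[1:] @ resid[:-1] / (resid[:-1] @ resid[:-1])
white = resid[1:] - phi * resid[:-1]

# Step 2: robust flagging; 1.4826*MAD estimates sigma for Gaussian residuals.
med = np.median(white)
mad = np.median(np.abs(white - med))
flags = np.where(np.abs(white - med) > 5.0 * 1.4826 * mad)[0] + 1
print("flagged epochs:", flags)
```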
NASA Astrophysics Data System (ADS)
Wissmeier, L. C.; Barry, D. A.
2009-12-01
Computer simulations of water availability and quality play an important role in state-of-the-art water resources management. However, many of the most utilized software programs focus either on physical flow and transport phenomena (e.g., MODFLOW, MT3DMS, FEFLOW, HYDRUS) or on geochemical reactions (e.g., MINTEQ, PHREEQC, CHESS, ORCHESTRA). In recent years, several couplings between the two genres of programs have evolved in order to consider interactions between flow and biogeochemical reactivity (e.g., HP1, PHWAT). Software coupling procedures can be categorized as ‘close couplings’, where programs pass information via the memory stack at runtime, and ‘remote couplings’, where the information is exchanged at each time step via input/output files. The former generally involves modifications of the software codes, and therefore expert programming skills are required. We present a generic recipe for remotely coupling the PHREEQC geochemical modeling framework and flow and solute transport (FST) simulators. The iterative scheme relies on operator splitting with continuous re-initialization of PHREEQC and the FST of choice at each time step. Since PHREEQC calculates the geochemistry of aqueous solutions in contact with soil minerals, the procedure is primarily designed for couplings to FSTs for liquid-phase flow in natural environments. It requires access to the initial conditions and numerical parameters, such as the time and space discretization, in the input text file for the FST, and control of the FST via commands to the operating system (batch on Windows; bash/shell on Unix/Linux). The coupling procedure is based on PHREEQC’s capability to save the state of a simulation, with all solid, liquid and gaseous species, as a PHREEQC input file by making use of the dump file option in the TRANSPORT keyword. The output from one reaction calculation step is thereby reused as input for the following reaction step, where changes in element amounts due to advection/dispersion are introduced as irreversible reactions. An example of the coupling of PHREEQC and MATLAB for the solution of unsaturated flow and transport is provided.
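The recipe reduces to a schematic coupling loop. Below is a minimal sketch assuming a PHREEQC executable on the system path; the helper functions, file names and data layout are hypothetical stand-ins, and a real coupling must emit valid PHREEQC input (including the dump option of the TRANSPORT keyword) tailored to the chosen FST:

```python
# Schematic remote-coupling loop: operator splitting with PHREEQC
# re-initialized from its own dump file at every time step.
import subprocess

def transport_step(conc, dt):
    # Hypothetical stand-in for the flow and solute transport simulator (FST).
    return conc

def write_phreeqc_input(path, dump_path, conc):
    # Hypothetical stand-in: restore the state saved in dump_path, then apply
    # the transport-induced element-amount changes as irreversible reactions.
    with open(path, "w") as f:
        f.write("# generated PHREEQC input would go here\n")

conc = {"Ca": 1e-3, "Mg": 5e-4}     # illustrative element amounts per cell
dt, n_steps = 3600.0, 10
for step in range(n_steps):
    conc = transport_step(conc, dt)                     # physics half-step
    write_phreeqc_input("step.pqi", "state.dmp", conc)  # chemistry input
    # PHREEQC runs as an external process; its dump file stores the full
    # solid/liquid/gas state that re-initializes the next chemistry step.
    subprocess.run(["phreeqc", "step.pqi", "step.out"], check=True)
```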
Getting here: five steps forwards and four back
NASA Astrophysics Data System (ADS)
Griffin, R. Elizabeth
The concept of libraries of stellar spectra is by no means new, though access to on-line ones is a relatively recent achievement. The road to the present state has been rocky, and we are still far short of what is needed and what can easily be attained. Spectra gathered as by-products of individual research projects are inhomogeneous, biased, and can be dangerously inadequate for modelling complex stellar systems. Archival products are eclectic, but unique in the time domain. Getting telescope time for the required level of homogeneity, inclusivity and completeness for new libraries requires strong scientific arguments that must be competitive. Using synthetic spectra builds misconceptions into the modelling. Attempts to set up the initial requirements (archives of observed spectra) encountered dogged resistance, much of which has never been resolved. Those struggles, and the indelible effects they have upon our science, will be reviewed, and the basics of a promotional programme outlined.
Jahanshahi-Anbuhi, Sana; Henry, Aleah; Leung, Vincent; Sicard, Clémence; Pennings, Kevin; Pelton, Robert; Brennan, John D; Filipe, Carlos D M
2014-01-07
Water soluble pullulan films were formatted into paper-based microfluidic devices, serving as a controlled time shutoff valve. The utility of the valve was demonstrated by a one-step, fully automatic implementation of a complex pesticide assay requiring timed, sequential exposure of an immobilized enzyme layer to separate liquid streams. Pullulan film dissolution and the capillary wicking of aqueous solutions through the device were measured and modeled providing valve design criteria. The films dissolve mainly by surface erosion, meaning the film thickness mainly controls the shutoff time. This method can also provide time-dependent sequential release of reagents without compromising the simplicity and low cost of paper-based devices.
Code of Federal Regulations, 2011 CFR
2011-07-01
... accordance with movement requirements of high-voltage power centers and portable transformers (§ 75.812) and... transformer. A step-up transformer is a transformer that steps up the low or medium voltage to high voltage... supplying low or medium voltage to the step-up transformer must meet the applicable requirements of 30 CFR...
Code of Federal Regulations, 2013 CFR
2013-07-01
... accordance with movement requirements of high-voltage power centers and portable transformers (§ 75.812) and... transformer. A step-up transformer is a transformer that steps up the low or medium voltage to high voltage... supplying low or medium voltage to the step-up transformer must meet the applicable requirements of 30 CFR...
Code of Federal Regulations, 2012 CFR
2012-07-01
... accordance with movement requirements of high-voltage power centers and portable transformers (§ 75.812) and... transformer. A step-up transformer is a transformer that steps up the low or medium voltage to high voltage... supplying low or medium voltage to the step-up transformer must meet the applicable requirements of 30 CFR...
Code of Federal Regulations, 2010 CFR
2010-07-01
... accordance with movement requirements of high-voltage power centers and portable transformers (§ 75.812) and... transformer. A step-up transformer is a transformer that steps up the low or medium voltage to high voltage... supplying low or medium voltage to the step-up transformer must meet the applicable requirements of 30 CFR...
Code of Federal Regulations, 2014 CFR
2014-07-01
... accordance with movement requirements of high-voltage power centers and portable transformers (§ 75.812) and... transformer. A step-up transformer is a transformer that steps up the low or medium voltage to high voltage... supplying low or medium voltage to the step-up transformer must meet the applicable requirements of 30 CFR...
Application of symbolic/numeric matrix solution techniques to the NASTRAN program
NASA Technical Reports Server (NTRS)
Buturla, E. M.; Burroughs, S. H.
1977-01-01
The matrix-solving algorithm of any finite element program is extremely important, since the solution of the matrix equations requires a large amount of elapsed time due to null calculations and excessive input/output operations. An alternate method of solving the matrix equations is presented. A symbolic processing step followed by a numeric solution yields the solution very rapidly and is especially useful for nonlinear problems.
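The same symbolic/numeric split survives in modern sparse direct solvers: the sparsity pattern is analyzed and factorized once, and subsequent numeric solves reuse the factors so that no work is spent on null entries. The sketch below uses SciPy's sparse LU as a present-day stand-in, not the NASTRAN algorithm itself, and the test matrix is arbitrary:

```python
# Factor once, solve many times: the pay-off of separating the (symbolic)
# pattern analysis from the (numeric) solution phase.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
# Sparse 1-D Laplacian: almost all entries are zero, exactly the case in
# which skipping null calculations matters.
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

lu = spla.splu(A)              # ordering, fill-in analysis and factorization
rng = np.random.default_rng(3)
for k in range(5):             # repeated right-hand sides reuse the factors
    b = rng.random(n)
    x = lu.solve(b)
    print(k, np.allclose(A @ x, b))
```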
1985-06-01
[Fragment of a report on fetal alcohol syndrome; only partial content is recoverable: a numbered list of associated features (... and ptosis; 7. epicanthal folds; 8. cleft lip or cleft palate; 9. hirsutism); an appendix table headed "APPENDIX 2 PROCESSION PLAN" with columns Stage / Activity / Time Required (Phase I, step 1: Select...); and a truncated definition: "...thin upper lip, and/or flattening of the maxillary area. II. FETAL ALCOHOL EFFECTS: Any congenital abnormality seen in children as a result of maternal..."]
Community Sediment Transport Model
2007-01-01
The model is intended to be used as both a research tool and for practical applications. An accurate and useful model will require coupling sediment transport with... and time steps range from seconds to minutes. We include higher-resolution sediment-transport calculation modules for research problems but...
Sen. Cruz, Ted [R-TX
2013-11-07
Senate - 11/12/2013 Read the second time. Placed on Senate Legislative Calendar under General Orders. Calendar No. 242. (All Actions) Tracker: This bill has the status Introduced.
Hardouin Duparc, V; Schaper, F
2017-10-14
Sulfonato-imine copper complexes with either chloride or triflate counteranions were prepared in a one-step reaction followed by anion-exchange. They are highly active in Chan-Evans-Lam couplings under mild conditions with a variety of amines or anilines, in particular with sterically hindered substrates. No optimization of reaction conditions other than time and/or temperature is required.
Time to take action to meet GS1 mandate.
Doyle, Chris
2014-10-01
A recent paper, NHS eProcurement Strategy,(1) published earlier this year by the Department of Health, mandated the use of GS1 standards throughout the NHS in England. The Strategy requires all NHS Trusts in England to produce a Board-approved adoption plan. Chris Doyle, head of Healthcare at independent global supply chain standards organization, GS1 UK, explains the next steps to becoming compliant.
A.R. Weiskittel; S.M. Garber; G.P. Johnson; D.A. Maguire; R.A. Monserud
2007-01-01
Simulating the influence of intensive management and annual weather fluctuations on tree growth requires a shorter time step than currently employed by most regional growth models. High-quality data sets are available for several plantation species in the Pacific Northwest region of the United States, but the growth periods ranged from 2 to 12 years. Measurement...
Sen. Toomey, Pat [R-PA
2011-07-26
Senate - 07/27/2011 Read the second time. Placed on Senate Legislative Calendar under General Orders. Calendar No. 112. (All Actions) Tracker: This bill has the status Introduced.
Sen. Cruz, Ted [R-TX
2014-07-09
Senate - 07/10/2014 Read the second time. Placed on Senate Legislative Calendar under General Orders. Calendar No. 460. (All Actions) Tracker: This bill has the status Introduced.
Consistency of internal fluxes in a hydrological model running at multiple time steps
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2016-04-01
Improving hydrological models remains a difficult task and many ways can be explored, among which are the improvement of spatial representation, the search for more robust parametrizations, the better formulation of some processes, or the modification of model structures by trial-and-error procedures. Several past works indicate that model parameters and structure can depend on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps to find solutions for improvement. Here we analyse the impact of the data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, using a large data set of 240 catchments. To this end, fine-time-step hydro-climatic information at sub-hourly resolution is used as input to a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e., the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and of the consistency of internal fluxes at different time steps provides guidance for identifying the model components that should be improved. Our analysis indicates that the baseline model structure has to be modified at sub-daily time steps to ensure the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception component, whose output flux showed the strongest sensitivity to the modelling time step. The dependency of the optimal model complexity on the time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7
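As a toy illustration of why internal fluxes can drift with the modelling time step, the sketch below runs a single explicit linear reservoir (a deliberately crude stand-in, not the GR4J structure) on the same rainfall series at several time steps and prints the simulated outflow volume at each resolution; all parameter values are arbitrary:

```python
# Flux-consistency toy check: one linear reservoir dS/dt = P - S/k,
# integrated with explicit Euler at coarser and coarser time steps.
import numpy as np

def run_reservoir(rain_fine, dt_fine, dt, k=50.0):
    """Aggregate fine-resolution rainfall to step dt and integrate."""
    ratio = round(dt / dt_fine)
    rain = rain_fine.reshape(-1, ratio).mean(axis=1)   # mean intensity per dt
    S, total_q = 0.0, 0.0
    for p in rain:
        q = S / k
        S += dt * (p - q)
        total_q += q * dt
    return total_q

rng = np.random.default_rng(1)
dt_fine = 0.1                                          # e.g. 6-minute data
rain = rng.exponential(1.0, 24000) * (rng.random(24000) < 0.05)

for dt in (0.1, 0.5, 1.0, 4.0, 24.0):                  # sub-hourly ... daily
    vol = run_reservoir(rain, dt_fine, dt)
    print(f"dt = {dt:5.1f}   simulated outflow volume = {vol:.2f}")
```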
How many steps/day are enough? for children and adolescents
2011-01-01
Worldwide, public health physical activity guidelines include special emphasis on populations of children (typically 6-11 years) and adolescents (typically 12-19 years). Existing guidelines are commonly expressed in terms of frequency, time, and intensity of behaviour. However, the simple step output from both accelerometers and pedometers is gaining increased credibility in research and practice as a reasonable approximation of daily ambulatory physical activity volume. Therefore, the purpose of this article is to review the existing child and adolescent objectively monitored step-defined physical activity literature to provide researchers, practitioners, and lay people who use accelerometers and pedometers with evidence-based translations of these public health guidelines in terms of steps/day. In terms of normative data (i.e., expected values), the updated international literature indicates that we can expect 1) among children, boys to average 12,000 to 16,000 steps/day and girls to average 10,000 to 13,000 steps/day; and 2) adolescents to steadily decrease steps/day until approximately 8,000-9,000 steps/day are observed in 18-year-olds. Controlled studies of cadence show that continuous MVPA walking produces 3,300-3,500 steps in 30 minutes or 6,600-7,000 steps in 60 minutes in 10-15 year olds. Limited evidence suggests that a total daily physical activity volume of 10,000-14,000 steps/day is associated with 60-100 minutes of MVPA in preschool children (approximately 4-6 years of age). Across studies, 60 minutes of MVPA in primary/elementary school children appears to be achieved, on average, within a total volume of 13,000 to 15,000 steps/day in boys and 11,000 to 12,000 steps/day in girls. For adolescents (both boys and girls), 10,000 to 11,700 steps/day may be associated with 60 minutes of MVPA. Translations of time- and intensity-based guidelines may be higher than existing normative data (e.g., in adolescents) and therefore will be more difficult to achieve (but not impossible nor contraindicated). Recommendations are preliminary, and further research is needed to confirm and extend values for measured cadences, associated speeds, and MET values in young people; to continue to accumulate normative data (expected values) for both steps/day and MVPA across ages and populations; and to conduct the longitudinal and intervention studies in children and adolescents required to inform the shape of step-defined physical activity dose-response curves associated with various health parameters. PMID:21798014
Lee, Byoung-Hee
2016-01-01
[Purpose] This study investigated the effects of real-time feedback using infrared camera recognition technology-based augmented reality in gait training for children with cerebral palsy. [Subjects] Two subjects with cerebral palsy were recruited. [Methods] In this study, augmented reality based real-time feedback training was conducted for the subjects in two 30-minute sessions per week for four weeks. Spatiotemporal gait parameters were used to measure the effect of augmented reality-based real-time feedback training. [Results] Velocity, cadence, bilateral step and stride length, and functional ambulation improved after the intervention in both cases. [Conclusion] Although additional follow-up studies of the augmented reality based real-time feedback training are required, the results of this study demonstrate that it improved the gait ability of two children with cerebral palsy. These findings suggest a variety of applications of conservative therapeutic methods which require future clinical trials. PMID:27190489
NASA Astrophysics Data System (ADS)
Lee, Ji-Seok; Song, Ki-Won
2015-11-01
The objective of the present study is to systematically elucidate the time-dependent rheological behavior of concentrated xanthan gum systems in complicated step-shear flow fields. Using a strain-controlled rheometer (ARES), the step-shear flow behavior of a concentrated xanthan gum model solution has been experimentally investigated in interrupted shear flow fields with various combinations of different shear rates, shearing times and rest times, and in step-incremental and step-reductional shear flow fields with various shearing times. The main findings obtained from this study are summarized as follows. (i) In interrupted shear flow fields, the shear stress increases sharply to a maximum at an early stage of shearing and then decays towards a steady state as the shearing time increases, in both start-up shear flow fields. The shear stress drops suddenly when the imposed shear rate is stopped and then decays slowly during the rest time. (ii) As the rest time is increased, the difference in the maximum stress values between the two start-up shear flow fields decreases, whereas the shearing time exerts only a slight influence on this behavior. (iii) In step-incremental shear flow fields, after passing through the maximum stress, structural destruction causes a stress decay towards a steady state as the shearing time increases in each step-shear flow region. The time needed to reach the maximum stress value is shortened as the step-increased shear rate becomes larger. (iv) In step-reductional shear flow fields, after passing through the minimum stress, structural recovery induces stress growth towards an equilibrium state as the shearing time increases in each step-shear flow region. The time needed to reach the minimum stress value is lengthened as the step-decreased shear rate becomes smaller.
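The qualitative picture reported above (stress decay toward a steady state after each step-up, stress recovery after a step-down) can be mimicked by a generic structural-kinetics (thixotropy) model; the sketch below is such a stand-in, not the constitutive behavior of xanthan gum itself, and every parameter value is illustrative:

```python
# Generic structure-parameter model: lam in [0, 1] builds at rest and breaks
# under shear; apparent stress = (eta_inf + eta_str * lam) * shear_rate.
import numpy as np

def stress_response(rates, dt=0.01, t_build=20.0, k_break=0.5,
                    eta_inf=0.1, eta_str=2.0):
    lam, out = 1.0, []
    for gdot in rates:
        dlam = (1.0 - lam) / t_build - k_break * lam * abs(gdot)
        lam = float(np.clip(lam + dt * dlam, 0.0, 1.0))
        out.append((eta_inf + eta_str * lam) * gdot)
    return np.array(out)

# Step-incremental protocol: 1 -> 5 -> 20 s^-1, 30 s at each shear rate.
rates = np.concatenate([np.full(3000, 1.0),
                        np.full(3000, 5.0),
                        np.full(3000, 20.0)])
sigma = stress_response(rates)
print(f"stress just after the last step-up: {sigma[6000]:.2f}")
print(f"stress at steady state:            {sigma[-1]:.2f}")
```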
5 CFR 531.508 - Evaluation of quality step increase authority.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Evaluation of quality step increase... REGULATIONS PAY UNDER THE GENERAL SCHEDULE Quality Step Increases § 531.508 Evaluation of quality step... grant quality step increases. The agency shall take any corrective action required by the Office. [60 FR...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Step one. 14.503-1 Section... AND CONTRACT TYPES SEALED BIDDING Two-Step Sealed Bidding 14.503-1 Step one. (a) Requests for... use the two step method. (3) The requirements of the technical proposal. (4) The evaluation criteria...
NASA Technical Reports Server (NTRS)
Boyalakuntla, Kishore; Soni, Bharat K.; Thornburg, Hugh J.; Yu, Robert
1996-01-01
During the past decade, computational simulation of fluid flow around complex configurations has progressed significantly and many notable successes have been reported; however, unsteady time-dependent solutions are not easily obtainable. The present effort involves unsteady time-dependent simulation of temporally deforming geometries. Grid generation for a complex configuration can be a time-consuming process, and temporally varying geometries necessitate the regeneration of such grids for every time step. Traditional grid generation techniques have been tried and demonstrated to be inadequate for such simulations. Non-Uniform Rational B-spline (NURBS) based techniques provide a compact and accurate representation of the geometry. This definition can be coupled with a distribution mesh for a user-defined spacing. The present method greatly reduces CPU requirements for time-dependent remeshing, facilitating the simulation of more complex unsteady problems. A thrust-vectoring nozzle has been chosen to demonstrate this capability, as it is of current interest in the aerospace industry for better maneuverability of fighter aircraft in close combat and in post-stall regimes. This current effort is the first step towards multidisciplinary design optimization, which involves coupling aerodynamic, heat transfer and structural analysis techniques. Applications include simulation of temporally deforming bodies and aeroelastic problems.
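The remeshing benefit rests on the fact that a B-spline/NURBS geometry is only a small set of control points: a new grid with any user-defined spacing is obtained by re-evaluating the curve, and a deforming boundary merely updates the control points. A minimal Cox-de Boor evaluation sketch follows (non-rational B-spline with a uniform clamped knot vector; all data are illustrative, not an aircraft or nozzle geometry):

```python
# Evaluate a clamped cubic B-spline boundary and regenerate grid points
# after the control polygon deforms in time.
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] > knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def eval_curve(ctrl, p, u):
    m = len(ctrl)
    knots = np.concatenate([np.zeros(p), np.linspace(0.0, 1.0, m - p + 1),
                            np.ones(p)])
    u = min(u, 1.0 - 1e-12)          # keep u inside the last half-open span
    return sum(bspline_basis(i, p, u, knots) * ctrl[i] for i in range(m))

ctrl = np.array([[0, 0], [1, 2], [3, 3], [5, 1], [6, 0]], float)
grid = np.array([eval_curve(ctrl, 3, u) for u in np.linspace(0, 1, 21)])
ctrl[2, 1] += 0.5                    # the boundary deforms in time ...
grid_new = np.array([eval_curve(ctrl, 3, u) for u in np.linspace(0, 1, 21)])
print("max grid-point displacement:", np.abs(grid_new - grid).max())
```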
Molded, wafer level optics for long wave infra-red applications
NASA Astrophysics Data System (ADS)
Franks, John
2016-05-01
For many years, the thermal imaging market has been driven by the high-volume consumer market. The first signs of this came with the launch of night vision systems for cars, first by Cadillac and Honda and then, more successfully, by BMW, Daimler and Audi. For the first time, simple thermal imaging systems were being manufactured at the rate of more than 10,000 units a year. This step change in volumes enabled a step change in system costs, with thermal imaging moving into the consumer's price range. Today we see that consumer awareness and the consumer market continue to increase with the launch of a number of consumer-focused smart phone add-ons. This has brought a further step change in system costs, with the possibility to turn your mobile phone into a thermal imager for under $250. As the detector technology has matured, pixel pitches have dropped from 50 μm in 2002 to 12 μm or even 10 μm in today's detectors. This dramatic shrinkage in size has had an equally dramatic effect on the optics required to produce the image on the detector. A moderate field of view that would have required a focal length of 40 mm in 2002 now requires a focal length of 8 mm. For wide field of view applications and small detector formats, focal lengths in the range 1 mm to 5 mm are becoming common. For lenses, the quantities manufactured, the quality and the costs will require a new approach to high-volume infra-red (IR) manufacturing to meet customer expectations. This, taken with the SWaP-C requirements and the emerging requirement for very small lenses driven by the new detectors, suggests that wafer-scale optics are part of the solution. Umicore can now present initial results from an intensive research and development program to mold and coat wafer-level optics, using its chalcogenide glass, GASIR®.
Jalal, Hawre; O'Dell, James R; Bridges, S Louis; Cofield, Stacey; Curtis, Jeffrey R; Mikuls, Ted R; Moreland, Larry W; Michaud, Kaleb
2016-12-01
To evaluate the cost-effectiveness of all 4 interventions in the Treatment of Early Aggressive Rheumatoid Arthritis (TEAR) clinical trial: immediate triple (IT), immediate etanercept (IE), step-up triple (ST), and step-up etanercept (SE). Step-up interventions started with methotrexate and added either etanercept or sulfasalazine plus hydroxychloroquine to patients with persistent disease activity. We built a Markov cohort model that uses individual-level data from the TEAR trial, published literature, and supplemental clinical data. Costs were in US dollars, benefits in quality-adjusted life years (QALYs), perspective was societal, and the time horizon was 5 years. The immediate strategies were more efficacious than step-up strategies. SE and IE were more costly than ST and IT, primarily due to treatment cost differences. In addition, IT was the least expensive and most effective strategy when the time horizon was 1 and 2 years. When the time horizon was 5 years, IE was marginally more effective than IT (3.483 versus 3.476 QALYs), but IE was substantially more expensive than IT ($148,800 versus $52,600), producing an incremental cost-effectiveness ratio of $12.5 million per QALY. These results were robust to both one-way deterministic and joint probabilistic sensitivity analyses. IT was highly cost-effective in the majority of scenarios. Although IE was more effective in 5 years, a substantial reduction in the cost of biologic agents was required in order for IE to become cost-effective in early aggressive RA under willingness-to-pay thresholds that most health care settings may find acceptable. © 2016, American College of Rheumatology.
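For readers checking the arithmetic, the incremental cost-effectiveness ratio follows directly from the quoted totals; with the rounded values above it evaluates to roughly $1.4 × 10^7 per QALY, and the reported $12.5 million per QALY presumably reflects unrounded QALY differences:

$$\mathrm{ICER} = \frac{\Delta C}{\Delta E} = \frac{\$148{,}800 - \$52{,}600}{3.483 - 3.476\ \mathrm{QALYs}} \approx \frac{\$96{,}200}{0.007\ \mathrm{QALYs}} \approx \$1.4\times 10^{7}\ \mathrm{per\ QALY}.$$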
Towards Flange-to-Flange Turbopump Simulations for Liquid Rocket Engines
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Williams, Robert
2000-01-01
The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since the space launch systems of the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in a liquid rocket engine is understanding the fluid dynamics of the fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbopump geometry through numerical simulation will be of significant value for design and will help to improve the safety of future space missions. One of the milestones of this effort is to develop, apply and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high-performance computer platforms. The development of the MPI and MLP versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on the explicit message-passing interface across processors and is primarily suited for distributed-memory systems. INS3D-MLP is based on the multi-level parallel method and is suitable for distributed shared-memory systems. For full turbopump simulations, a moving-boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving-boundary problems, an overset grid scheme is incorporated into the solver so that new connectivity data are obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other and provides great flexibility when the boundary movement creates large displacements. The performance of two time integration schemes for time-accurate computations is investigated. For an unsteady flow that requires a small physical time step, the pressure projection method was found to be computationally efficient, since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast-converging scheme at each physical time step in order to satisfy the incompressibility condition; this was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive. The current geometry for the LOX boost turbopump has various rotating and stationary components, such as the inducer, stators, kicker and hydraulic turbine, where the flow is extremely unsteady. Figure 1 shows the geometry and computed surface pressure of the inducer. The inducer and the hydraulic turbine rotate at different rotational speeds.
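For context, the projection step mentioned above amounts to one Poisson solve plus a gradient correction per physical time step, with no subiteration. The sketch below is a generic pressure-projection kernel on a doubly periodic grid with an FFT Poisson solve; it is not the INS3D implementation, and the grid size, time step and random provisional velocity are arbitrary:

```python
# Generic incompressible projection: solve lap(p) = (rho/dt) div(u*), then
# subtract (dt/rho) grad(p) so the corrected velocity is divergence-free.
import numpy as np

n, dx, dt, rho = 64, 1.0 / 64, 1e-3, 1.0
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                        # guard the zero (mean) mode

rng = np.random.default_rng(2)
u_star = rng.normal(size=(n, n))      # provisional velocity after the
v_star = rng.normal(size=(n, n))      # advection/diffusion step, not div-free

div_hat = 1j * kx * np.fft.fft2(u_star) + 1j * ky * np.fft.fft2(v_star)
p_hat = -(rho / dt) * div_hat / k2
p_hat[0, 0] = 0.0                     # pressure defined up to a constant

u = u_star - (dt / rho) * np.real(np.fft.ifft2(1j * kx * p_hat))
v = v_star - (dt / rho) * np.real(np.fft.ifft2(1j * ky * p_hat))

div = np.real(np.fft.ifft2(1j * kx * np.fft.fft2(u) + 1j * ky * np.fft.fft2(v)))
print("max |div u| after projection:", np.abs(div).max())  # ~ machine zero
```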
Halek, Margareta; Holle, Daniela; Bartholomeyczik, Sabine
2017-08-14
One of the most difficult issues for care staff is the manifestation of challenging behaviour among residents with dementia. The first step in managing this type of behaviour is analysing its triggers. A structured assessment instrument can facilitate this process and may improve carers' management of the situation. This paper describes the development of an instrument designed for this purpose and an evaluation of its content validity and its feasibility and practicability in nursing homes. The development process and evaluation of the content validity were based on Lynn's methodology (1998). A literature review (steps 1 + 2) provided the theoretical framework for the instrument and for item formation. Ten experts (step 3) evaluated the first version of the instrument (the Innovative dementia-oriented Assessment (IdA®)) regarding its relevance, clarity, meaningfulness and completeness; content validity indices at the scale-level (S-CVI) and item-level (I-CVI) were calculated. Health care workers (step 4) evaluated the second version in a workshop. Finally, the instrument was introduced to 17 units in 11 nursing homes in a field study (step 5), and 60 care staff members assessed its practicability and feasibility. The IdA® used the need-driven dementia-compromised behaviour (NDB) model as a theoretical framework. The literature review and expert-based panel supported the content validity of the IdA®. At the item level, 77% of the ratings had a CVI greater than or equal to 0.78. The majority of the question-ratings (84%, n = 154) and answer-ratings (69%, n = 122) showed valid results, with none below 0.50. The health care workers confirmed the understandability, completeness and plausibility of the IdA®. Steps 3 and 4 led to further item clarification. The carers in the study considered the instrument helpful for reflecting challenging behaviour and beneficial for the care of residents with dementia. Negative ratings referred to the time required and the lack of effect on residents' behaviour. There was strong evidence supporting the content validity of the IdA®. Despite the substantial length and time requirement, the instrument was considered helpful for analysing challenging behaviour. Thus, further research on the psychometric qualities, implementation aspects and effectiveness of the IdA® in understanding challenging behaviour is needed.
Short bowel mucosal morphology, proliferation and inflammation at first and repeat STEP procedures.
Mutanen, Annika; Barrett, Meredith; Feng, Yongjia; Lohi, Jouko; Rabah, Raja; Teitelbaum, Daniel H; Pakarinen, Mikko P
2018-04-17
Although serial transverse enteroplasty (STEP) improves the function of dilated short bowel, a significant proportion of patients require repeat surgery. To address the underlying reasons for unsuccessful STEP, we compared small intestinal mucosal characteristics between initial and repeat STEP procedures in children with short bowel syndrome (SBS). Fifteen SBS children, who underwent 13 first and 7 repeat STEP procedures with full-thickness small bowel samples at a median age of 1.5 years (IQR 0.7-3.7), were included. The specimens were analyzed histologically for mucosal morphology, inflammation and muscular thickness. Mucosal proliferation and apoptosis were analyzed with MIB1 and TUNEL immunohistochemistry. Median small bowel length increased 42% at the initial STEP and 13% at the repeat STEP (p=0.05), while enteral caloric intake increased from 6% to 36% (p=0.07) during the 14 (12-42) months between the procedures. Abnormal mucosal inflammation was frequently observed both at the initial (69%) and the additional STEP (86%, p=0.52) surgery. Villus height, crypt depth, enterocyte proliferation and apoptosis, as well as muscular thickness, were comparable at the first and repeat STEP (p>0.05 for all). Patients who required repeat STEP tended to be younger (p=0.057), with fewer apoptotic crypt cells (p=0.031) at the first STEP. Absence of the ileocecal valve was associated with an increased intraepithelial leukocyte count and a reduced crypt cell proliferation index (p<0.05 for both). No adaptive mucosal hyperplasia or muscular alterations occurred between the first and repeat STEP. Persistent inflammation and lacking mucosal growth may contribute to continuing bowel dysfunction in SBS children who require a repeat STEP procedure, especially after removal of the ileocecal valve. Level IV, retrospective study. Copyright © 2018 Elsevier Inc. All rights reserved.
Roch, Samuel; Brinker, Alexander
2017-04-18
The rising evidence of microplastic pollution impacts on aquatic organisms in both marine and freshwater ecosystems highlights a pressing need for adequate and comparable detection methods. Available tissue digestion protocols are time-consuming (>10 h) and/or require several procedural steps, during which materials can be lost and contaminants introduced. This novel approach comprises an accelerated digestion step using sodium hydroxide and nitric acid in combination to digest all organic material within 1 h plus an additional separation step using sodium iodide which can be used to reduce mineral residues in samples where necessary. This method yielded a microplastic recovery rate of ≥95%, and all tested polymer types were recovered with only minor changes in weight, size, and color with the exception of polyamide. The method was also shown to be effective on field samples from two benthic freshwater fish species, revealing a microplastic burden comparable to that indicated in the literature. As a consequence, the present method saves time, minimizes the loss of material and the risk of contamination, and facilitates the identification of plastic particles and fibers, thus providing an efficient method to detect and quantify microplastics in the gastrointestinal tract of fishes.
An adaptive time-stepping strategy for solving the phase field crystal model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zhengru, E-mail: zrzhang@bnu.edu.cn; Ma, Yuan, E-mail: yuner1022@gmail.com; Qiao, Zhonghua, E-mail: zqiao@polyu.edu.hk
2013-09-15
In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. Numerical simulation of the PFC model needs a long time to reach steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the use of the proposed time step adaptivity can resolve not only the steady state solution, but also the dynamical development of the solution, efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly saved for long time simulations.
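A toy version of such an energy-based step controller, applied to a scalar gradient flow rather than the PFC equation, is sketched below; the double-well energy, the step bounds and the specific rule dt = max(dt_min, dt_max / sqrt(1 + alpha * |dE/dt|^2)) are illustrative assumptions for this sketch:

```python
# Energy-based adaptive time stepping for du/dt = -E'(u): small steps while
# the energy changes rapidly, steps growing toward dt_max near steady state.
import numpy as np

def E(u):  return 0.25 * (u**2 - 1.0)**2     # double-well energy
def dE(u): return u**3 - u                   # its derivative

dt_min, dt_max, alpha = 1e-3, 0.5, 1e5
u, t, dt, nsteps = 2.0, 0.0, 1e-3, 0
while t < 50.0:
    u_new = u - dt * dE(u)                   # explicit gradient-flow step
    dEdt = (E(u_new) - E(u)) / dt            # discrete energy derivative
    t, u, nsteps = t + dt, u_new, nsteps + 1
    dt = max(dt_min, dt_max / np.sqrt(1.0 + alpha * dEdt**2))
print(f"final u = {u:.6f} (steady states are +-1), final dt = {dt:.3f}, "
      f"steps taken = {nsteps}")
```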
NASA Technical Reports Server (NTRS)
Mullikin, Richard L.
1987-01-01
Control of the on-orbit operation of a spacecraft requires the retention and application of special-purpose, often unique, knowledge of equipment and procedures. Real-time distributed expert systems (RTDES) permit a modular approach to a complex application such as on-orbit spacecraft support. One aspect of a human-machine system that lends itself to the application of RTDES is the function of satellite/mission controllers, the next logical step toward the creation of truly autonomous spacecraft systems. This system application is described.
Systems engineering principles for the design of biomedical signal processing systems.
Faust, Oliver; Acharya U, Rajendra; Sputh, Bernhard H C; Min, Lim Choo
2011-06-01
Systems engineering aims to produce reliable systems which function according to specification. In this paper we follow a systems engineering approach to design a biomedical signal processing system. We discuss requirements capturing, specification definition, implementation and testing of a classification system. These steps are executed as formally as possible. The requirements, which motivate the system design, are based on diabetes research. The main requirement for the classification system is to be a reliable component of a machine which controls diabetes. Reliability is very important, because uncontrolled diabetes may lead to hyperglycaemia (raised blood sugar) and over a period of time may cause serious damage to many of the body systems, especially the nerves and blood vessels. In a second step, these requirements are refined into a formal CSP‖B model. The formal model expresses the system functionality in a clear and semantically strong way. Subsequently, the proven system model was translated into an implementation. This implementation was tested with use cases and failure cases. Formal modeling and automated model checking gave us deep insight into the system functionality. This insight enabled us to create a reliable and trustworthy implementation. With extensive tests we established trust in the reliability of the implementation. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Advanced Platform Systems Technology study. Volume 4: Technology advancement program plan
NASA Technical Reports Server (NTRS)
1983-01-01
An overview study of the major technology definition tasks and subtasks, along with their interfaces and interrelationships, is presented. Although not specifically indicated in the diagram, iterations were required at many steps to finalize the results. The development of the integrated technology advancement plan was initiated by using the results of the previous two tasks, i.e., the trade studies and the preliminary cost and schedule estimates for the selected technologies. Descriptions of the development of each viable technology advancement were drawn from the trade studies. Additionally, a logic flow diagram depicting the steps in developing each technology element was developed, along with descriptions of each of the major elements. Next, the major elements of the logic flow diagrams were time-phased, which allowed the definition of a technology development schedule consistent with the space station program schedule where possible. The schedules show the major milestones, including the tests required, as described in the logic flow diagrams.
Testing the Stability of 2-D Recursive QP, NSHP and General Digital Filters of Second Order
NASA Astrophysics Data System (ADS)
Rathinam, Ananthanarayanan; Ramesh, Rengaswamy; Reddy, P. Subbarami; Ramaswami, Ramaswamy
Several methods for testing the stability of first-quadrant quarter-plane two-dimensional (2-D) recursive digital filters were suggested in the 1970s and 80s. Though Jury's row and column algorithms and the row and column concatenation stability tests have been considered highly efficient mapping methods, they still fall short on accuracy, as they need an infinite number of steps to decide the exact stability of a filter, and the computational time required is enormous. In this paper, we present a procedurally very simple algebraic method requiring only two steps when applied to the second-order 2-D quarter-plane filter. We extend the same method to second-order Non-Symmetric Half-Plane (NSHP) filters. Enough examples are given for both these types of filters, as well as for some lower-order general recursive 2-D digital filters. We applied our method to barely stable or barely unstable filter examples available in the literature and obtained the same decisions, showing that our method is accurate enough.
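For comparison, a brute-force mapping-style check of the classical stability condition (the denominator must be nonzero on the closed unit bidisk) can be coded in a few lines; this grid-limited numeric sweep is exactly the kind of test the two-step algebraic method avoids, and it is not the authors' method. Huang's decomposition is standard, while the grid density and tolerance below are arbitrary choices:

```python
# Numeric check that B(w1, w2) != 0 for |w1| <= 1, |w2| <= 1, where
# b[m, n] multiplies w1^m * w2^n in the denominator of 1/B(z1^-1, z2^-1).
# Huang: it suffices that (1) B(w1, 0) != 0 on the closed unit disk and
# (2) B(w1, w2) != 0 for |w1| = 1, |w2| <= 1.
import numpy as np

def is_stable(b, n_grid=720, tol=1e-9):
    # Condition 1: all w1-roots of B(w1, 0) strictly outside the unit disk.
    if np.any(np.abs(np.roots(b[:, 0][::-1])) <= 1.0 + tol):
        return False
    # Condition 2: sweep w1 around the unit circle; for each point the
    # w2-roots of the resulting 1-D polynomial must lie outside the disk.
    for theta in np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False):
        w1 = np.exp(1j * theta)
        c = b.T @ (w1 ** np.arange(b.shape[0]))   # coefficients in w2
        if np.any(np.abs(np.roots(c[::-1])) <= 1.0 + tol):
            return False
    return True

# Separable second-order example (zeros at w1 = 2, w2 = 2): stable.
b_stable = np.array([[1.0, -0.5], [-0.5, 0.25]])
# B = 1 - 0.9 w1 - 0.9 w2 vanishes at w1 = w2 = 5/9 inside the bidisk.
b_unstable = np.array([[1.0, -0.9], [-0.9, 0.0]])
print(is_stable(b_stable), is_stable(b_unstable))   # True False
```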
Endoscopic ultrasound description of liver segmentation and anatomy.
Bhatia, Vikram; Hijioka, Susumu; Hara, Kazuo; Mizuno, Nobumasa; Imaoka, Hiroshi; Yamao, Kenji
2014-05-01
Endoscopic ultrasound (EUS) can demonstrate the detailed anatomy of the liver from the transgastric and transduodenal routes. Most of the liver segments can be imaged with EUS, except the right posterior segments. The intrahepatic vascular landmarks include the major hepatic veins, portal vein radicles, hepatic arterial branches, and the inferior vena cava; the ligamentum venosum and ligamentum teres are other important intrahepatic landmarks. The liver hilum and gallbladder serve as useful surface landmarks. Deciphering liver segmentation and anatomy by EUS requires orienting the scan planes with these landmark structures, and differs from reading static cross-sectional radiological images. Orientation during EUS requires appreciation of the numerous scan planes possible in real-time, and of the direction of scanning from the stomach and duodenal bulb. We describe EUS imaging of the liver with a curved linear probe in a step-by-step approach, with the relevant anatomical details, potential applications, and pitfalls of this novel EUS application. © 2013 The Authors. Digestive Endoscopy © 2013 Japan Gastroenterological Endoscopy Society.
Enriching step-based product information models to support product life-cycle activities
NASA Astrophysics Data System (ADS)
Sarigecili, Mehmet Ilteris
The representation and management of product information across its life-cycle require standardized data exchange protocols. The Standard for the Exchange of Product Model Data (STEP) is such a standard and has been used widely by industry. Even though STEP-based product models are well defined and syntactically correct, populating product data according to these models is not easy because the models are large and disorganized. Data exchange specifications (DEXs) and templates provide the re-organized information models required for the data exchange of specific activities in various businesses. DEXs show that it is possible to organize STEP-based product models to support different engineering activities at various stages of the product life-cycle. In this study, STEP-based models are enriched and organized to support two engineering activities: materials information declaration and tolerance analysis. Due to new environmental regulations, the substance and materials information in products has to be screened closely by manufacturing industries. This requires fast, unambiguous and complete product information exchange between the members of a supply chain. The tolerance analysis activity, on the other hand, is used to verify the functional requirements of an assembly considering the worst-case (i.e., maximum and minimum) conditions for the part/assembly dimensions. Another issue with STEP-based product models is that the semantics of product data are represented implicitly. Hence, it is difficult to interpret the semantics of data for different product life-cycle phases and application domains. OntoSTEP, developed at NIST, provides semantically enriched product models in OWL. In this thesis, we present how to interpret the GD&T specifications in STEP for tolerance analysis by utilizing OntoSTEP.