Control Circuit For Two Stepping Motors
NASA Technical Reports Server (NTRS)
Ratliff, Roger; Rehmann, Kenneth; Backus, Charles
1990-01-01
Control circuit operates two independent stepping motors, one at a time. Provides following operating features: After selected motor stepped to chosen position, power turned off to reduce dissipation; Includes two up/down counters that remember at which one of eight steps each motor set. For selected motor, step indicated by illumination of one of eight light-emitting diodes (LED's) in ring; Selected motor advanced one step at time or repeatedly at rate controlled; Motor current - 30 mA at 90 degree positions, 60 mA at 45 degree positions - indicated by high or low intensity of LED that serves as motor-current monitor; Power-on reset feature provides trouble-free starts; To maintain synchronism between control circuit and motors, stepping of counters inhibited when motor power turned off.
Time-Delayed Two-Step Selective Laser Photodamage of Dye-Biomolecule Complexes
NASA Astrophysics Data System (ADS)
Andreoni, A.; Cubeddu, R.; de Silvestri, S.; Laporta, P.; Svelto, O.
1980-08-01
A scheme is proposed for laser-selective photodamage of biological molecules, based on time-delayed two-step photoionization of a dye molecule bound to the biomolecule. The validity of the scheme is experimentally demonstrated in the case of the dye Proflavine, bound to synthetic polynucleotides.
Optimal subinterval selection approach for power system transient stability simulation
Kim, Soobae; Overbye, Thomas J.
2015-10-21
Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modalmore » analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.« less
Index Fund Selections with GAs and Classifications Based on Turnover
NASA Astrophysics Data System (ADS)
Orito, Yukiko; Motoyama, Takaaki; Yamazaki, Genji
It is well known that index fund selections are important for the risk hedge of investment in a stock market. The`selection’means that for`stock index futures’, n companies of all ones in the market are selected. For index fund selections, Orito et al.(6) proposed a method consisting of the following two steps : Step 1 is to select N companies in the market with a heuristic rule based on the coefficient of determination between the return rate of each company in the market and the increasing rate of the stock price index. Step 2 is to construct a group of n companies by applying genetic algorithms to the set of N companies. We note that the rule of Step 1 is not unique. The accuracy of the results using their method depends on the length of time data (price data) in the experiments. The main purpose of this paper is to introduce a more`effective rule’for Step 1. The rule is based on turnover. The method consisting of Step 1 based on turnover and Step 2 is examined with numerical experiments for the 1st Section of Tokyo Stock Exchange. The results show that with our method, it is possible to construct the more effective index fund than the results of Orito et al.(6). The accuracy of the results using our method depends little on the length of time data (turnover data). The method especially works well when the increasing rate of the stock price index over a period can be viewed as a linear time series data.
Impaired Response Selection During Stepping Predicts Falls in Older People-A Cohort Study.
Schoene, Daniel; Delbaere, Kim; Lord, Stephen R
2017-08-01
Response inhibition, an important executive function, has been identified as a risk factor for falls in older people. This study investigated whether step tests that include different levels of response inhibition differ in their ability to predict falls and whether such associations are mediated by measures of attention, speed, and/or balance. A cohort study with a 12-month follow-up was conducted in community-dwelling older people without major cognitive and mobility impairments. Participants underwent 3 step tests: (1) choice stepping reaction time (CSRT) requiring rapid decision making and step initiation; (2) inhibitory choice stepping reaction time (iCSRT) requiring additional response inhibition and response-selection (go/no-go); and (3) a Stroop Stepping Test (SST) under congruent and incongruent conditions requiring conflict resolution. Participants also completed tests of processing speed, balance, and attention as potential mediators. Ninety-three of the 212 participants (44%) fell in the follow-up period. Of the step tests, only components of the iCSRT task predicted falls in this time with the relative risk per standard deviation for the reaction time (iCSRT-RT) = 1.23 (95%CI = 1.10-1.37). Multiple mediation analysis indicated that the iCSRT-RT was independently associated with falls and not mediated through slow processing speed, poor balance, or inattention. Combined stepping and response inhibition as measured in a go/no-go test stepping paradigm predicted falls in older people. This suggests that integrity of the response-selection component of a voluntary stepping response is crucial for minimizing fall risk. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Iwamura, Koji; Kuwahara, Shinya; Tanimizu, Yoshitaka; Sugimura, Nobuhiro
Recently, new distributed architectures of manufacturing systems are proposed, aiming at realizing more flexible control structures of the manufacturing systems. Many researches have been carried out to deal with the distributed architectures for planning and control of the manufacturing systems. However, the human operators have not yet been discussed for the autonomous components of the distributed manufacturing systems. A real-time scheduling method is proposed, in this research, to select suitable combinations of the human operators, the resources and the jobs for the manufacturing processes. The proposed scheduling method consists of following three steps. In the first step, the human operators select their favorite manufacturing processes which they will carry out in the next time period, based on their preferences. In the second step, the machine tools and the jobs select suitable combinations for the next machining processes. In the third step, the automated guided vehicles and the jobs select suitable combinations for the next transportation processes. The second and third steps are carried out by using the utility value based method and the dispatching rule-based method proposed in the previous researches. Some case studies have been carried out to verify the effectiveness of the proposed method.
Morisse Pradier, H; Sénéchal, A; Philit, F; Tronc, F; Maury, J-M; Grima, R; Flamens, C; Paulus, S; Neidecker, J; Mornex, J-F
2016-02-01
Lung transplantation (LT) is now considered as an excellent treatment option for selected patients with end-stage pulmonary diseases, such as COPD, cystic fibrosis, idiopathic pulmonary fibrosis, and pulmonary arterial hypertension. The 2 goals of LT are to provide a survival benefit and to improve quality of life. The 3-step decision process leading to LT is discussed in this review. The first step is the selection of candidates, which requires a careful examination in order to check absolute and relative contraindications. The second step is the timing of listing for LT; it requires the knowledge of disease-specific prognostic factors available in international guidelines, and discussed in this paper. The third step is the choice of procedure: indications of heart-lung, single-lung, and bilateral-lung transplantation are described. In conclusion, this document provides guidelines to help pulmonologists in the referral and selection processes of candidates for transplantation in order to optimize the outcome of LT. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Menu driven heat treatment control of thin walled bodies
Kothmann, Richard E.; Booth, Jr., Russell R.; Grimm, Noel P.; Batenburg, Abram; Thomas, Vaughn M.
1992-01-01
A process for controlling the heating of a thin-walled body according to a predetermined temperature program by means of electrically controllable heaters, comprising: disposing the heaters adjacent one surface of the body such that each heater is in facing relation with a respective zone of the surface; supplying heat-generating power to each heater and monitoring the temperature at each surface zone; and for each zone: deriving (16,18,20), on the basis of the temperature values obtained in the monitoring step, estimated temperature values of the surface at successive time intervals each having a first selected duration; generating (28), on the basis of the estimated temperature values derived in each time interval, representations of the temperature, THSIFUT, which each surface zone will have, based on the level of power presently supplied to each heater, at a future time which is separated from the present time interval by a second selected duration; determining (30) the difference between THSIFUT and the desired temperature, FUTREFTVZL, at the future time which is separated from the present time interval by the second selected duration; providing (52) a representation indicating the power level which sould be supplied to each heater in order to reduce the difference obtained in the determining step; and adjusting the power level supplied to each heater by the supplying step in response to the value of the representation provided in the providing step.
Real-time color image processing for forensic fiber investigations
NASA Astrophysics Data System (ADS)
Paulsson, Nils
1995-09-01
This paper describes a system for automatic fiber debris detection based on color identification. The properties of the system are fast analysis and high selectivity, a necessity when analyzing forensic fiber samples. An ordinary investigation separates the material into well above 100,000 video images to analyze. The system is based on standard techniques such as CCD-camera, motorized sample table, and IBM-compatible PC/AT with add-on-boards for video frame digitalization and stepping motor control as the main parts. It is possible to operate the instrument at full video rate (25 image/s) with aid of the HSI-color system (hue- saturation-intensity) and software optimization. High selectivity is achieved by separating the analysis into several steps. The first step is fast direct color identification of objects in the analyzed video images and the second step analyzes detected objects with a more complex and time consuming stage of the investigation to identify single fiber fragments for subsequent analysis with more selective techniques.
Laine, R.M.; Hirschon, A.S.; Wilson, R.B. Jr.
1987-12-29
A process is described for the preparation of a multimetallic catalyst for the hydrodenitrogenation of an organic feedstock, which process comprises: (a) forming a precatalyst itself comprising: (1) a first metal compound selected from compounds of nickel, cobalt or mixtures thereof; (2) a second metal compound selected from compounds of chromium, molybdenum, tungsten, or mixtures thereof; and (3) an inorganic support; (b) heating the precatalyst of step (a) with a source of sulfide in a first non-oxidizing gas at a temperature and for a time effective to presulfide the precatalyst; (c) adding in a second non-oxidizing gas to the sulfided precatalyst of step (b) an organometallic transition metal moiety selected from compounds of iridium, rhodium, iron, ruthenium, tungsten or mixtures thereof for a time and at a temperature effective to chemically combine the metal components; and (d) optionally heating the chemically combined catalyst of step (b) in vacuum at a temperature and for a time effective to remove residual volatile organic materials. 12 figs.
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large N limit coalescents structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ε[0, 1)), or to a family of continuous-time Beta (2 - α, α - β)Λ-coalescents (α ε[1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward in time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson Point Process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
Aldridge Whitehead, Jennifer M; Wolf, Erik J; Scoville, Charles R; Wilken, Jason M
2014-10-01
Stair ascent can be difficult for individuals with transfemoral amputation because of the loss of knee function. Most individuals with transfemoral amputation use either a step-to-step (nonreciprocal, advancing one stair at a time) or skip-step strategy (nonreciprocal, advancing two stairs at a time), rather than a step-over-step (reciprocal) strategy, because step-to-step and skip-step allow the leading intact limb to do the majority of work. A new microprocessor-controlled knee (Ottobock X2(®)) uses flexion/extension resistance to allow step-over-step stair ascent. We compared self-selected stair ascent strategies between conventional and X2(®) prosthetic knees, examined between-limb differences, and differentiated stair ascent mechanics between X2(®) users and individuals without amputation. We also determined which factors are associated with differences in knee position during initial contact and swing within X2(®) users. Fourteen individuals with transfemoral amputation participated in stair ascent sessions while using conventional and X2(®) knees. Ten individuals without amputation also completed a stair ascent session. Lower-extremity stair ascent joint angles, moment, and powers and ground reaction forces were calculated using inverse dynamics during self-selected strategy and cadence and controlled cadence using a step-over-step strategy. One individual with amputation self-selected a step-over-step strategy while using a conventional knee, while 10 individuals self-selected a step-over-step strategy while using X2(®) knees. Individuals with amputation used greater prosthetic knee flexion during initial contact (32.5°, p = 0.003) and swing (68.2°, p = 0.001) with higher intersubject variability while using X2(®) knees compared to conventional knees (initial contact: 1.6°, swing: 6.2°). The increased prosthetic knee flexion while using X2(®) knees normalized knee kinematics to individuals without amputation during swing (88.4°, p = 0.179) but not during initial contact (65.7°, p = 0.002). Prosthetic knee flexion during initial contact and swing were positively correlated with prosthetic limb hip power during pull-up (r = 0.641, p = 0.046) and push-up/early swing (r = 0.993, p < 0.001), respectively. Participants with transfemoral amputation were more likely to self-select a step-over-step strategy similar to individuals without amputation while using X2(®) knees than conventional prostheses. Additionally, the increased prosthetic knee flexion used with X2(®) knees placed large power demands on the hip during pull-up and push-up/early swing. A modified strategy that uses less knee flexion can be used to allow step-over-step ascent in individuals with less hip strength.
NASA Astrophysics Data System (ADS)
Suzuki, Tomoya; Ohkura, Yuushi
2016-01-01
In order to examine the predictability and profitability of financial markets, we introduce three ideas to improve the traditional technical analysis to detect investment timings more quickly. Firstly, a nonlinear prediction model is considered as an effective way to enhance this detection power by learning complex behavioral patterns hidden in financial markets. Secondly, the bagging algorithm can be applied to quantify the confidence in predictions and compose new technical indicators. Thirdly, we also introduce how to select more profitable stocks to improve investment performance by the two-step selection: the first step selects more predictable stocks during the learning period, and then the second step adaptively and dynamically selects the most confident stock showing the most significant technical signal in each investment. Finally, some investment simulations based on real financial data show that these ideas are successful in overcoming complex financial markets.
NASA Astrophysics Data System (ADS)
Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken
2016-07-01
Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.
Yentes, Jennifer M; Rennard, Stephen I; Schmid, Kendra K; Blanke, Daniel; Stergiou, Nicholas
2017-06-01
Compared with control subjects, patients with chronic obstructive pulmonary disease (COPD) have an increased incidence of falls and demonstrate balance deficits and alterations in mediolateral trunk acceleration while walking. Measures of gait variability have been implicated as indicators of fall risk, fear of falling, and future falls. To investigate whether alterations in gait variability are found in patients with COPD as compared with healthy control subjects. Twenty patients with COPD (16 males; mean age, 63.6 ± 9.7 yr; FEV 1 /FVC, 0.52 ± 0.12) and 20 control subjects (9 males; mean age, 62.5 ± 8.2 yr) walked for 3 minutes on a treadmill while their gait was recorded. The amount (SD and coefficient of variation) and structure of variability (sample entropy, a measure of regularity) were quantified for step length, time, and width at three walking speeds (self-selected and ±20% of self-selected speed). Generalized linear mixed models were used to compare dependent variables. Patients with COPD demonstrated increased mean and SD step time across all speed conditions as compared with control subjects. They also walked with a narrower step width that increased with increasing speed, whereas the healthy control subjects walked with a wider step width that decreased as speed increased. Further, patients with COPD demonstrated less variability in step width, with decreased SD, compared with control subjects at all three speed conditions. No differences in regularity of gait patterns were found between groups. Patients with COPD walk with increased duration of time between steps, and this timing is more variable than that of control subjects. They also walk with a narrower step width in which the variability of the step widths from step to step is decreased. Changes in these parameters have been related to increased risk of falling in aging research. This provides a mechanism that could explain the increased prevalence of falls in patients with COPD.
hp-Adaptive time integration based on the BDF for viscous flows
NASA Astrophysics Data System (ADS)
Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.
2015-06-01
This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control respectively the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low order time integrators while accurate solutions require high order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the used method of integration operates inside of its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features a time-step rejection and quarantine mechanisms, a modified Newton method with a predictor and dense output techniques to compute solution at off-step points.
Murthy, Aditya; Ray, Supriya; Shorter, Stephanie M; Schall, Jeffrey D; Thompson, Kirk G
2009-05-01
The dynamics of visual selection and saccade preparation by the frontal eye field was investigated in macaque monkeys performing a search-step task combining the classic double-step saccade task with visual search. Reward was earned for producing a saccade to a color singleton. On random trials the target and one distractor swapped locations before the saccade and monkeys were rewarded for shifting gaze to the new singleton location. A race model accounts for the probabilities and latencies of saccades to the initial and final singleton locations and provides a measure of the duration of a covert compensation process-target-step reaction time. When the target stepped out of a movement field, noncompensated saccades to the original location were produced when movement-related activity grew rapidly to a threshold. Compensated saccades to the final location were produced when the growth of the original movement-related activity was interrupted within target-step reaction time and was replaced by activation of other neurons producing the compensated saccade. When the target stepped into a receptive field, visual neurons selected the new target location regardless of the monkeys' response. When the target stepped out of a receptive field most visual neurons maintained the representation of the original target location, but a minority of visual neurons showed reduced activity. Chronometric analyses of the neural responses to the target step revealed that the modulation of visually responsive neurons and movement-related neurons occurred early enough to shift attention and saccade preparation from the old to the new target location. These findings indicate that visual activity in the frontal eye field signals the location of targets for orienting, whereas movement-related activity instantiates saccade preparation.
Oudenhoven, Laura M; Boes, Judith M; Hak, Laura; Faber, Gert S; Houdijk, Han
2017-01-25
Running specific prostheses (RSP) are designed to replicate the spring-like behaviour of the human leg during running, by incorporating a real physical spring in the prosthesis. Leg stiffness is an important parameter in running as it is strongly related to step frequency and running economy. To be able to select a prosthesis that contributes to the required leg stiffness of the athlete, it needs to be known to what extent the behaviour of the prosthetic leg during running is dominated by the stiffness of the prosthesis or whether it can be regulated by adaptations of the residual joints. The aim of this study was to investigate whether and how athletes with an RSP could regulate leg stiffness during distance running at different step frequencies. Seven endurance runners with an unilateral transtibial amputation performed five running trials on a treadmill at a fixed speed, while different step frequencies were imposed (preferred step frequency (PSF) and -15%, -7.5%, +7.5% and +15% of PSF). Among others, step time, ground contact time, flight time, leg stiffness and joint kinetics were measured for both legs. In the intact leg, increasing step frequency was accompanied by a decrease in both contact and flight time, while in the prosthetic leg contact time remained constant and only flight time decreased. In accordance, leg stiffness increased in the intact leg, but not in the prosthetic leg. Although a substantial contribution of the residual leg to total leg stiffness was observed, this contribution did not change considerably with changing step frequency. Amputee athletes do not seem to be able to alter prosthetic leg stiffness to regulate step frequency during running. This invariant behaviour indicates that RSP stiffness has a large effect on total leg stiffness and therefore can have an important influence on running performance. Nevertheless, since prosthetic leg stiffness was considerably lower than stiffness of the RSP, compliance of the residual leg should not be ignored when selecting RSP stiffness. Copyright © 2016 Elsevier Ltd. All rights reserved.
NMR diffusion simulation based on conditional random walk.
Gudbjartsson, H; Patz, S
1995-01-01
The authors introduce here a new, very fast, simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p.10). In earlier NMR-diffusion simulation methods, such as the finite difference method (FD), the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, although, in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step the computation time can therefore be reduced. Finally the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
One-step selection of Vaccinia virus-binding DNA aptamers by MonoLEX
Nitsche, Andreas; Kurth, Andreas; Dunkhorst, Anna; Pänke, Oliver; Sielaff, Hendrik; Junge, Wolfgang; Muth, Doreen; Scheller, Frieder; Stöcklein, Walter; Dahmen, Claudia; Pauli, Georg; Kage, Andreas
2007-01-01
Background As a new class of therapeutic and diagnostic reagents, more than fifteen years ago RNA and DNA aptamers were identified as binding molecules to numerous small compounds, proteins and rarely even to complete pathogen particles. Most aptamers were isolated from complex libraries of synthetic nucleic acids by a process termed SELEX based on several selection and amplification steps. Here we report the application of a new one-step selection method (MonoLEX) to acquire high-affinity DNA aptamers binding Vaccinia virus used as a model organism for complex target structures. Results The selection against complete Vaccinia virus particles resulted in a 64-base DNA aptamer specifically binding to orthopoxviruses as validated by dot blot analysis, Surface Plasmon Resonance, Fluorescence Correlation Spectroscopy and real-time PCR, following an aptamer blotting assay. The same oligonucleotide showed the ability to inhibit in vitro infection of Vaccinia virus and other orthopoxviruses in a concentration-dependent manner. Conclusion The MonoLEX method is a straightforward procedure as demonstrated here for the identification of a high-affinity DNA aptamer binding Vaccinia virus. MonoLEX comprises a single affinity chromatography step, followed by subsequent physical segmentation of the affinity resin and a single final PCR amplification step of bound aptamers. Therefore, this procedure improves the selection of high affinity aptamers by reducing the competition between aptamers of different affinities during the PCR step, indicating an advantage for the single-round MonoLEX method. PMID:17697378
Step Detection Robust against the Dynamics of Smartphones
Lee, Hwan-hee; Choi, Suji; Lee, Myeong-jin
2015-01-01
A novel algorithm is proposed for robust step detection irrespective of step mode and device pose in smartphone usage environments. The dynamics of smartphones are decoupled into a peak-valley relationship with adaptive magnitude and temporal thresholds. For extracted peaks and valleys in the magnitude of acceleration, a step is defined as consisting of a peak and its adjacent valley. Adaptive magnitude thresholds consisting of step average and step deviation are applied to suppress pseudo peaks or valleys that mostly occur during the transition among step modes or device poses. Adaptive temporal thresholds are applied to time intervals between peaks or valleys to consider the time-varying pace of human walking or running for the correct selection of peaks or valleys. From the experimental results, it can be seen that the proposed step detection algorithm shows more than 98.6% average accuracy for any combination of step mode and device pose and outperforms state-of-the-art algorithms. PMID:26516857
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vandewouw, Marlee M., E-mail: marleev@mie.utoronto
Purpose: Continuous dose delivery in radiation therapy treatments has been shown to decrease total treatment time while improving the dose conformity and distribution homogeneity over the conventional step-and-shoot approach. The authors develop an inverse treatment planning method for Gamma Knife® Perfexion™ that continuously delivers dose along a path in the target. Methods: The authors’ method is comprised of two steps: find a path within the target, then solve a mixed integer optimization model to find the optimal collimator configurations and durations along the selected path. Robotic path-finding techniques, specifically, simultaneous localization and mapping (SLAM) using an extended Kalman filter, aremore » used to obtain a path that travels sufficiently close to selected isocentre locations. SLAM is novelly extended to explore a 3D, discrete environment, which is the target discretized into voxels. Further novel extensions are incorporated into the steering mechanism to account for target geometry. Results: The SLAM method was tested on seven clinical cases and compared to clinical, Hamiltonian path continuous delivery, and inverse step-and-shoot treatment plans. The SLAM approach improved dose metrics compared to the clinical plans and Hamiltonian path continuous delivery plans. Beam-on times improved over clinical plans, and had mixed performance compared to Hamiltonian path continuous plans. The SLAM method is also shown to be robust to path selection inaccuracies, isocentre selection, and dose distribution. Conclusions: The SLAM method for continuous delivery provides decreased total treatment time and increased treatment quality compared to both clinical and inverse step-and-shoot plans, and outperforms existing path methods in treatment quality. It also accounts for uncertainty in treatment planning by accommodating inaccuracies.« less
NASA Astrophysics Data System (ADS)
Kar, Soumen; Rao, V. V.
2018-07-01
In our first attempt to design a single phase R-SFCL in India, we have chosen the typical rating of a medium voltage level (3.3 kVrms, 200 Arms, 1Φ) R-SFCL. The step-by-step design procedure for the R-SFCL involves conductor selection, time dependent electro-thermal simulations and recovery time optimization after fault removal. In the numerical analysis, effective fault limitation for a fault current of 5 kA for the medium voltage level R-SFCL are simulated. Maximum normal state resistance and maximum temperature rise in the SFCL coil during current limitation are estimated using one-dimensional energy balance equation. Further, a cryogenic system is conceptually designed for aforesaid MV level R-SFCL by considering inner and outer vessel materials, wall-thickness and thermal insulation which can be used for R-SFCL system. Finally, the total thermal load is calculated for the designed R-SFCL cryostat to select a suitable cryo-refrigerator for LN2 re-condensation.
Knee point search using cascading top-k sorting with minimized time complexity.
Wang, Zheng; Tseng, Shian-Shyong
2013-01-01
Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using the cascading top-k sorting when a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps. And in each step an optimization problem of the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step is dependent on that of the afterwards steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
Adaptive time stepping for fluid-structure interaction solvers
Mayr, M.; Wall, W. A.; Gee, M. W.
2017-12-22
In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additionalmore » cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.« less
Adaptive time stepping for fluid-structure interaction solvers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayr, M.; Wall, W. A.; Gee, M. W.
In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additionalmore » cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.« less
Implementation of Competency-Based Pharmacy Education (CBPE)
Koster, Andries; Schalekamp, Tom; Meijerman, Irma
2017-01-01
Implementation of competency-based pharmacy education (CBPE) is a time-consuming, complicated process, which requires agreement on the tasks of a pharmacist, commitment, institutional stability, and a goal-directed developmental perspective of all stakeholders involved. In this article the main steps in the development of a fully-developed competency-based pharmacy curriculum (bachelor, master) are described and tips are given for a successful implementation. After the choice for entering into CBPE is made and a competency framework is adopted (step 1), intended learning outcomes are defined (step 2), followed by analyzing the required developmental trajectory (step 3) and the selection of appropriate assessment methods (step 4). Designing the teaching-learning environment involves the selection of learning activities, student experiences, and instructional methods (step 5). Finally, an iterative process of evaluation and adjustment of individual courses, and the curriculum as a whole, is entered (step 6). Successful implementation of CBPE requires a system of effective quality management and continuous professional development as a teacher. In this article suggestions for the organization of CBPE and references to more detailed literature are given, hoping to facilitate the implementation of CBPE. PMID:28970422
Extraction of Qualitative Features from Sensor Data Using Windowed Fourier Transform
NASA Technical Reports Server (NTRS)
Amini, Abolfazl M.; Figueroa, Fenando
2003-01-01
In this paper, we use Matlab to model the health monitoring of a system through the information gathered from sensors. This implies assessment of the condition of the system components. Once a normal mode of operation is established any deviation from the normal behavior indicates a change. This change may be due to a malfunction of an element, a qualitative change, or a change due to a problem with another element in the network. For example, if one sensor indicates that the temperature in the tank has experienced a step change then a pressure sensor associated with the process in the tank should also experience a step change. The step up and step down as well as sensor disturbances are assumed to be exponential. An RC network is used to model the main process, which is step-up (charging), drift, and step-down (discharging). The sensor disturbances and spike are added while the system is in drift. The system is allowed to run for a period equal to three time constant of the main process before changes occur. Then each point of the signal is selected with a trailing data collected previously. Two trailing lengths of data are selected, one equal to two time constants of the main process and the other equal to two time constants of the sensor disturbance. Next, the DC is removed from each set of data and then the data are passed through a window followed by calculation of spectra for each set. In order to extract features the signal power, peak, and spectrum are plotted vs time. The results indicate distinct shapes corresponding to each process. The study is also carried out for a number of Gaussian distributed noisy cases.
Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems
NASA Technical Reports Server (NTRS)
Cerro, J. A.; Scotti, S. J.
1991-01-01
Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.
LENMODEL: A forward model for calculating length distributions and fission-track ages in apatite
NASA Astrophysics Data System (ADS)
Crowley, Kevin D.
1993-05-01
The program LENMODEL is a forward model for annealing of fission tracks in apatite. It provides estimates of the track-length distribution, fission-track age, and areal track density for any user-supplied thermal history. The program approximates the thermal history, in which temperature is represented as a continuous function of time, by a series of isothermal steps of various durations. Equations describing the production of tracks as a function of time and annealing of tracks as a function of time and temperature are solved for each step. The step calculations are summed to obtain estimates for the entire thermal history. Computational efficiency is maximized by performing the step calculations backwards in model time. The program incorporates an intuitive and easy-to-use graphical interface. Thermal history is input to the program using a mouse. Model options are specified by selecting context-sensitive commands from a bar menu. The program allows for considerable selection of equations and parameters used in the calculations. The program was written for PC-compatible computers running DOS TM 3.0 and above (and Windows TM 3.0 or above) with VGA or SVGA graphics and a Microsoft TM-compatible mouse. Single copies of a runtime version of the program are available from the author by written request as explained in the last section of this paper.
Method of detecting system function by measuring frequency response
Morrison, John L.; Morrison, William H.; Christophersen, Jon P.; Motloch, Chester G.
2013-01-08
Methods of rapidly measuring an impedance spectrum of an energy storage device in-situ over a limited number of logarithmically distributed frequencies are described. An energy storage device is excited with a known input signal, and a response is measured to ascertain the impedance spectrum. An excitation signal is a limited time duration sum-of-sines consisting of a select number of frequencies. In one embodiment, magnitude and phase of each frequency of interest within the sum-of-sines is identified when the selected frequencies and sample rate are logarithmic integer steps greater than two. This technique requires a measurement with a duration of one period of the lowest frequency. In another embodiment, where selected frequencies are distributed in octave steps, the impedance spectrum can be determined using a captured time record that is reduced to a half-period of the lowest frequency.
Finley, James M.; Long, Andrew; Bastian, Amy J.; Torres-Oviedo, Gelsy
2014-01-01
Background Step length asymmetry (SLA) is a common hallmark of gait post-stroke. Though conventionally viewed as a spatial deficit, SLA can result from differences in where the feet are placed relative to the body (spatial strategy), the timing between foot-strikes (step time strategy), or the velocity of the body relative to the feet (step velocity strategy). Objective The goal of this study was to characterize the relative contributions of each of these strategies to SLA. Methods We developed an analytical model that parses SLA into independent step position, step time, and step velocity contributions. This model was validated by reproducing SLA values for twenty-five healthy participants when their natural symmetric gait was perturbed on a split-belt treadmill moving at either a 2:1 or 3:1 belt-speed ratio. We then applied the validated model to quantify step position, step time, and step velocity contributions to SLA in fifteen stroke survivors while walking at their self-selected speed. Results SLA was predicted precisely by summing the derived contributions, regardless of the belt-speed ratio. Although the contributions to SLA varied considerably across our sample of stroke survivors, the step position contribution tended to oppose the other two – possibly as an attempt to minimize the overall SLA. Conclusions Our results suggest that changes in where the feet are placed or changes in interlimb timing could be used as compensatory strategies to reduce overall SLA in stroke survivors. These results may allow clinicians and researchers to identify patient-specific gait abnormalities and personalize their therapeutic approaches accordingly. PMID:25589580
A Step-by-Step Framework on Discrete Events Simulation in Emergency Department; A Systematic Review.
Dehghani, Mahsa; Moftian, Nazila; Rezaei-Hachesu, Peyman; Samad-Soltani, Taha
2017-04-01
To systematically review the current literature of simulation in healthcare including the structured steps in the emergency healthcare sector by proposing a framework for simulation in the emergency department. For the purpose of collecting the data, PubMed and ACM databases were used between the years 2003 and 2013. The inclusion criteria were to select English-written articles available in full text with the closest objectives from among a total of 54 articles retrieved from the databases. Subsequently, 11 articles were selected for further analysis. The studies focused on the reduction of waiting time and patient stay, optimization of resources allocation, creation of crisis and maximum demand scenarios, identification of overcrowding bottlenecks, investigation of the impact of other systems on the existing system, and improvement of the system operations and functions. Subsequently, 10 simulation steps were derived from the relevant studies after an expert's evaluation. The 10-steps approach proposed on the basis of the selected studies provides simulation and planning specialists with a structured method for both analyzing problems and choosing best-case scenarios. Moreover, following this framework systematically enables the development of design processes as well as software implementation of simulation problems.
Asynchronous machine rotor speed estimation using a tabulated numerical approach
NASA Astrophysics Data System (ADS)
Nguyen, Huu Phuc; De Miras, Jérôme; Charara, Ali; Eltabach, Mario; Bonnet, Stéphane
2017-12-01
This paper proposes a new method to estimate the rotor speed of the asynchronous machine by looking at the estimation problem as a nonlinear optimal control problem. The behavior of the nonlinear plant model is approximated off-line as a prediction map using a numerical one-step time discretization obtained from simulations. At each time-step, the speed of the induction machine is selected satisfying the dynamic fitting problem between the plant output and the predicted output, leading the system to adopt its dynamical behavior. Thanks to the limitation of the prediction horizon to a single time-step, the execution time of the algorithm can be completely bounded. It can thus easily be implemented and embedded into a real-time system to observe the speed of the real induction motor. Simulation results show the performance and robustness of the proposed estimator.
Houdijk, Han; van Ooijen, Mariëlle W; Kraal, Jos J; Wiggerts, Henri O; Polomski, Wojtek; Janssen, Thomas W J; Roerdink, Melvyn
2012-11-01
Gait adaptability, including the ability to avoid obstacles and to take visually guided steps, is essential for safe movement through a cluttered world. This aspect of walking ability is important for regaining independent mobility but is difficult to assess in clinical practice. The objective of this study was to investigate the validity of an instrumented treadmill with obstacles and stepping targets projected on the belt's surface for assessing prosthetic gait adaptability. This was an observational study. A control group of people who were able bodied (n=12) and groups of people with transtibial (n=12) and transfemoral (n=12) amputations participated. Participants walked at a self-selected speed on an instrumented treadmill with projected visual obstacles and stepping targets. Gait adaptability was evaluated in terms of anticipatory and reactive obstacle avoidance performance (for obstacles presented 4 steps and 1 step ahead, respectively) and accuracy of stepping on regular and irregular patterns of stepping targets. In addition, several clinical tests were administered, including timed walking tests and reports of incidence of falls and fear of falling. Obstacle avoidance performance and stepping accuracy were significantly lower in the groups with amputations than in the control group. Anticipatory obstacle avoidance performance was moderately correlated with timed walking test scores. Reactive obstacle avoidance performance and stepping accuracy performance were not related to timed walking tests. Gait adaptability scores did not differ in groups stratified by incidence of falls or fear of falling. Because gait adaptability was affected by walking speed, differences in self-selected walking speed may have diminished differences in gait adaptability between groups. Gait adaptability can be validly assessed by use of an instrumented treadmill with a projected visual context. When walking speed is taken into account, this assessment provides unique, quantitative information about walking ability in people with a lower-limb amputation.
Engineering more stable, selectable marker-free autoluminescent mycobacteria by one step.
Yang, Feng; Njire, Moses M; Liu, Jia; Wu, Tian; Wang, Bangxing; Liu, Tianzhou; Cao, Yuanyuan; Liu, Zhiyong; Wan, Junting; Tu, Zhengchao; Tan, Yaoju; Tan, Shouyong; Zhang, Tianyu
2015-01-01
In our previous study, we demonstrated that the use of the autoluminescent Mycobacterium tuberculosis as a reporter strain had the potential to drastically reduce the time, effort, animals and costs consumed in evaluation of the activities of drugs and vaccines in live mice. However, the strains were relatively unstable and lost reporter with time without selection. The kanamycin selection marker used wasn't the best choice as it provides resistance to amino glycosides which are an important class of second line drugs used in tuberculosis treatment. In addition, the marker could limit utility of the strains for screening of new potential drugs or evaluating drug combinations for tuberculosis treatment. Limited selection marker genes for mycobacterial genetic manipulation is a major drawback for such a marker-containing strain in many research fields. Therefore, selectable marker-free, more stable autoluminescent mycobacteria are highly needed. After trying several strategies, we created such mycobacterial strains successfully by using an integrative vector and removing both the resistance maker and integrase genes by Xer site-specific recombination in one step. The corresponding plasmid vectors developed in this study could be very convenient in constructing other selectable marker-free, more stable reporter mycobacteria with diverse applications.
Mihiretu, Gezahegn T; Brodin, Malin; Chimphango, Annie F; Øyaas, Karin; Hoff, Bård H; Görgens, Johann F
2017-10-01
The viability of single-step microwave-induced pressurized hot water conditions for co-production of xylan-based biopolymers and bioethanol from aspenwood sawdust and sugarcane trash was investigated. Extraction of hemicelluloses was conducted using microwave-assisted pressurized hot water system. The effects of temperature and time on extraction yield and enzymatic digestibility of resulting solids were determined. Temperatures between 170-200°C for aspenwood and 165-195°C for sugarcane trash; retention times between 8-22min for both feedstocks, were selected for optimization purpose. Maximum xylan extraction yields of 66 and 50%, and highest cellulose digestibilities of 78 and 74%, were attained for aspenwood and sugarcane trash respectively. Monomeric xylose yields for both feedstocks were below 7%, showing that the xylan extracts were predominantly in non-monomeric form. Thus, single-step microwave-assisted hot water method is viable biorefinery approach to extract xylan from lignocelluloses while rendering the solid residues sufficiently digestible for ethanol production. Copyright © 2017 Elsevier Ltd. All rights reserved.
Novel embryo selection techniques to increase embryo implantation in IVF attempts.
Sigalos, George Α; Triantafyllidou, Olga; Vlahos, Nikos F
2016-11-01
The final success of an IVF attempt depends on several steps and decisions taken during the ovarian stimulation, the oocyte retrieval, the embryo culture and the embryo transfer. The final selection of the embryos most likely to implant is the final step in this process and the responsibility of the lab. Apart from strict morphologic criteria that historically have been used in embryo selection, additional information on genetic, metabolomic and morphokinetic characteristics of the embryo is recently combined to morphology to select the embryo most likely to produce a pregnancy. In this manuscript, we review the most recent information on the current methods used for embryo selection presenting the predictive capability of each one. A literature search was performed on Pubmed, Medline and Cochrane Database of Systematic Reviews for published studies using appropriate key words and phrases with no limits placed on time. It seems that the combination of morphologic criteria in conjunction to embryo kinetics as documented by time-lapse technology provides the most reliable information on embryo quality. Blastocyst biopsy with subsequent comprehensive chromosome analysis allows the selection of the euploid embryos with the higher implantation potential. Embryo time-lapse imaging and blastocyst biopsy combined to comprehensive chromosome analysis are the most promising technologies to increase pregnancy rates and reduce the possibility of multiple pregnancies. However, further studies will demonstrate the capability of routinely using these technologies to significantly improve IVF outcomes.
Adaptive Finite Element Methods for Continuum Damage Modeling
NASA Technical Reports Server (NTRS)
Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.
1995-01-01
The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in an usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators, based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time selection is controlled by required time accuracy. In order to take into account strong temperature dependency of material parameters, the nonlinear structural solution a coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.
Kostanyan, Artak E; Erastov, Andrey A
2015-08-07
In the steady state (SS) multiple dual mode (MDM) counter-current chromatography (CCC), at the beginning of the first step of every cycle the sample dissolved in one of the phases is continuously fed into a CCC device over a constant time, not exceeding the run time of the first step. After a certain number of cycles, the steady state regime is achieved, where concentrations vary over time during each cycle, however, the concentration profiles of solutes eluted with both phases remain constant in all subsequent cycles. The objective of this work was to develop analytical expressions to describe the SS MDM CCC separation processes, which can be helpful to simulate and design these processes and select a suitable compromise between the productivity and the selectivity in the preparative and production CCC separations. Experiments carried out using model mixtures of compounds from the GUESSmix with solvent system hexane/ethyl acetate/methanol/water demonstrated a reasonable agreement between the predictions of the theory and the experimental results. Copyright © 2015 Elsevier B.V. All rights reserved.
Wang, Fen; Yu, Junxia; Xiong, Wanli; Xu, Yuanlai; Chi, Ru-An
2018-01-01
For selective leaching and highly effective recovery of heavy metals from a metallurgical sludge, a two-step leaching method was designed based on the distribution analysis of the chemical fractions of the loaded heavy metal. Hydrochloric acid (HCl) was used as a leaching agent in the first step to leach the relatively labile heavy metals and then ethylenediamine tetraacetic acid (EDTA) was applied to leach the residual metals according to their different fractional distribution. Using the two-step leaching method, 82.89% of Cd, 55.73% of Zn, 10.85% of Cu, and 0.25% of Pb were leached in the first step by 0.7 M HCl at a contact time of 240 min, and the leaching efficiencies for Cd, Zn, Cu, and Pb were elevated up to 99.76, 91.41, 71.85, and 94.06%, by subsequent treatment with 0.2 M EDTA at 480 min, respectively. Furthermore, HCl leaching induced fractional redistribution, which might increase the mobility of the remaining metals and then facilitate the following metal removal by EDTA. The facilitation was further confirmed by the comparison to the one-step leaching method with single HCl or single EDTA, respectively. These results suggested that the designed two-step leaching method by HCl and EDTA could be used for selective leaching and effective recovery of heavy metals from the metallurgical sludge or heavy metal-contaminated solid media.
Feature Selection Using Information Gain for Improved Structural-Based Alert Correlation
Siraj, Maheyzah Md; Zainal, Anazida; Elshoush, Huwaida Tagelsir; Elhaj, Fatin
2016-01-01
Grouping and clustering alerts for intrusion detection based on the similarity of features is referred to as structurally base alert correlation and can discover a list of attack steps. Previous researchers selected different features and data sources manually based on their knowledge and experience, which lead to the less accurate identification of attack steps and inconsistent performance of clustering accuracy. Furthermore, the existing alert correlation systems deal with a huge amount of data that contains null values, incomplete information, and irrelevant features causing the analysis of the alerts to be tedious, time-consuming and error-prone. Therefore, this paper focuses on selecting accurate and significant features of alerts that are appropriate to represent the attack steps, thus, enhancing the structural-based alert correlation model. A two-tier feature selection method is proposed to obtain the significant features. The first tier aims at ranking the subset of features based on high information gain entropy in decreasing order. The second tier extends additional features with a better discriminative ability than the initially ranked features. Performance analysis results show the significance of the selected features in terms of the clustering accuracy using 2000 DARPA intrusion detection scenario-specific dataset. PMID:27893821
Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach
Tian, Yuan; Guan, Tao; Wang, Cheng
2010-01-01
To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278
High-throughput screening of chromatographic separations: IV. Ion-exchange.
Kelley, Brian D; Switzer, Mary; Bastek, Patrick; Kramarczyk, Jack F; Molnar, Kathleen; Yu, Tianning; Coffman, Jon
2008-08-01
Ion-exchange (IEX) chromatography steps are widely applied in protein purification processes because of their high capacity, selectivity, robust operation, and well-understood principles. Optimization of IEX steps typically involves resin screening and selection of the pH and counterion concentrations of the load, wash, and elution steps. Time and material constraints associated with operating laboratory columns often preclude evaluating more than 20-50 conditions during early stages of process development. To overcome this limitation, a high-throughput screening (HTS) system employing a robotic liquid handling system and 96-well filterplates was used to evaluate various operating conditions for IEX steps for monoclonal antibody (mAb) purification. A screening study for an adsorptive cation-exchange step evaluated eight different resins. Sodium chloride concentrations defining the operating boundaries of product binding and elution were established at four different pH levels for each resin. Adsorption isotherms were measured for 24 different pH and salt combinations for a single resin. An anion-exchange flowthrough step was then examined, generating data on mAb adsorption for 48 different combinations of pH and counterion concentration for three different resins. The mAb partition coefficients were calculated and used to estimate the characteristic charge of the resin-protein interaction. Host cell protein and residual Protein A impurity levels were also measured, providing information on selectivity within this operating window. The HTS system shows promise for accelerating process development of IEX steps, enabling rapid acquisition of large datasets addressing the performance of the chromatography step under many different operating conditions. (c) 2008 Wiley Periodicals, Inc.
Genetic parameters and prediction of breeding values in switchgrass bred for bioenergy
USDA-ARS?s Scientific Manuscript database
Estimating genetic parameters is an essential step in breeding by recurrent selection to maximize genetic gains over time. This study evaluated the effects of selection on genetic variation across two successive cycles (C1 and C2) of a ‘Summer’x‘Kanlow’ switchgrass (Panicum virgatum L.) population. ...
Genetic demixing and evolution in linear stepping stone models
NASA Astrophysics Data System (ADS)
Korolev, K. S.; Avlund, Mikkel; Hallatschek, Oskar; Nelson, David R.
2010-04-01
Results for mutation, selection, genetic drift, and migration in a one-dimensional continuous population are reviewed and extended. The population is described by a continuous limit of the stepping stone model, which leads to the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation with additional terms describing mutations. Although the stepping stone model was first proposed for population genetics, it is closely related to “voter models” of interest in nonequilibrium statistical mechanics. The stepping stone model can also be regarded as an approximation to the dynamics of a thin layer of actively growing pioneers at the frontier of a colony of micro-organisms undergoing a range expansion on a Petri dish. The population tends to segregate into monoallelic domains. This segregation slows down genetic drift and selection because these two evolutionary forces can only act at the boundaries between the domains; the effects of mutation, however, are not significantly affected by the segregation. Although fixation in the neutral well-mixed (or “zero-dimensional”) model occurs exponentially in time, it occurs only algebraically fast in the one-dimensional model. An unusual sublinear increase is also found in the variance of the spatially averaged allele frequency with time. If selection is weak, selective sweeps occur exponentially fast in both well-mixed and one-dimensional populations, but the time constants are different. The relatively unexplored problem of evolutionary dynamics at the edge of an expanding circular colony is studied as well. Also reviewed are how the observed patterns of genetic diversity can be used for statistical inference and the differences are highlighted between the well-mixed and one-dimensional models. Although the focus is on two alleles or variants, q -allele Potts-like models of gene segregation are considered as well. Most of the analytical results are checked with simulations and could be tested against recent spatial experiments on range expansions of inoculations of Escherichia coli and Saccharomyces cerevisiae.
A Step-by-Step Framework on Discrete Events Simulation in Emergency Department; A Systematic Review
Dehghani, Mahsa; Moftian, Nazila; Rezaei-Hachesu, Peyman; Samad-Soltani, Taha
2017-01-01
Objective: To systematically review the current literature of simulation in healthcare including the structured steps in the emergency healthcare sector by proposing a framework for simulation in the emergency department. Methods: For the purpose of collecting the data, PubMed and ACM databases were used between the years 2003 and 2013. The inclusion criteria were to select English-written articles available in full text with the closest objectives from among a total of 54 articles retrieved from the databases. Subsequently, 11 articles were selected for further analysis. Results: The studies focused on the reduction of waiting time and patient stay, optimization of resources allocation, creation of crisis and maximum demand scenarios, identification of overcrowding bottlenecks, investigation of the impact of other systems on the existing system, and improvement of the system operations and functions. Subsequently, 10 simulation steps were derived from the relevant studies after an expert’s evaluation. Conclusion: The 10-steps approach proposed on the basis of the selected studies provides simulation and planning specialists with a structured method for both analyzing problems and choosing best-case scenarios. Moreover, following this framework systematically enables the development of design processes as well as software implementation of simulation problems. PMID:28507994
ERIC Educational Resources Information Center
Ohio Board of Regents, 2013
2013-01-01
This Sixth Condition Report represents a snapshot of a moment in time. It focuses on critical enabling conditions and initial implementation steps for a strategically chosen subset of the action steps embedded in the Task Force's full slate of recommendations. As such, this report serves four essential purposes: (1) It identifies a selective set…
NASA Astrophysics Data System (ADS)
Collery, Véronique
2014-05-01
Using different kind of activities is a good way to motivate students. Depending on the class i suggest them to carry out experiments and write a report of them, to prepare a video (about 3 minutes long) presenting some experiments on a theme they choose among selected topics (working in groups of 2 or 3), to do an oral presentation, to debate on a topic, to participate in scientific breakfast/tea-time. The scientific breakfast or tea-time is an opportunity for students to meet great researchers and to exchange with them friendly sharing a breakfast or a tea-time. For example, to prepare the video, the lesson consists of three steps for a total length of three or four hours.The first step is the selection of the theme and the selection of 2 or 3 impressive, funny, original, visual experiments. The second step is trying out the experiments and the writing of the script. The third step is the making of the video. During the last step the students are supposed to watch and to grade the video. For example, to impulse a debate in a class of 16-year-old students I use a part of the movie 'Appolo 13' (chapter 28). This activity is a new approach of the theme 'the gravitational force' the students learnt in their Physic's curriculum. It's a quite difficult phenomenon to visualize. The lesson consists of three steps for a total length of two hours. During the pre-task phase, students are supposed to do a matching activity to introduce scientific words and their definitions. In the task phase, an extract of the movie APOLLO 13 is shown in order to stimulate students' listening and comprehension. To help them exchange around gravity, and space flights, students are engaged in a CLIL game. In the post-task phase, the pairs join together to form 2 groups. These groups correspond the two options the controllers in Houston make to get the astronauts back home safely. A debate is requested to argue each point of view.
NASA Technical Reports Server (NTRS)
Janus, J. Mark; Whitfield, David L.
1990-01-01
Improvements are presented of a computer algorithm developed for the time-accurate flow analysis of rotating machines. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The numerical scheme is a block LU implicit iterative-refinement method which possesses apparent unconditional stability. Multiblock composite gridding is used to orderly partition the field into a specified arrangement of blocks exhibiting varying degrees of similarity. Block-block relative motion is achieved using local grid distortion to reduce grid skewness and accommodate arbitrary time step selection. A general high-order numerical scheme is applied to satisfy the geometric conservation law. An even-blade-count counterrotating unducted fan configuration is chosen for a computational study comparing solutions resulting from altering parameters such as time step size and iteration count. The solutions are compared with measured data.
A modular approach to intensity-modulated arc therapy optimization with noncoplanar trajectories
NASA Astrophysics Data System (ADS)
Papp, Dávid; Bortfeld, Thomas; Unkelbach, Jan
2015-07-01
Utilizing noncoplanar beam angles in volumetric modulated arc therapy (VMAT) has the potential to combine the benefits of arc therapy, such as short treatment times, with the benefits of noncoplanar intensity modulated radiotherapy (IMRT) plans, such as improved organ sparing. Recently, vendors introduced treatment machines that allow for simultaneous couch and gantry motion during beam delivery to make noncoplanar VMAT treatments possible. Our aim is to provide a reliable optimization method for noncoplanar isocentric arc therapy plan optimization. The proposed solution is modular in the sense that it can incorporate different existing beam angle selection and coplanar arc therapy optimization methods. Treatment planning is performed in three steps. First, a number of promising noncoplanar beam directions are selected using an iterative beam selection heuristic; these beams serve as anchor points of the arc therapy trajectory. In the second step, continuous gantry/couch angle trajectories are optimized using a simple combinatorial optimization model to define a beam trajectory that efficiently visits each of the anchor points. Treatment time is controlled by limiting the time the beam needs to trace the prescribed trajectory. In the third and final step, an optimal arc therapy plan is found along the prescribed beam trajectory. In principle any existing arc therapy optimization method could be incorporated into this step; for this work we use a sliding window VMAT algorithm. The approach is demonstrated using two particularly challenging cases. The first one is a lung SBRT patient whose planning goals could not be satisfied with fewer than nine noncoplanar IMRT fields when the patient was treated in the clinic. The second one is a brain tumor patient, where the target volume overlaps with the optic nerves and the chiasm and it is directly adjacent to the brainstem. Both cases illustrate that the large number of angles utilized by isocentric noncoplanar VMAT plans can help improve dose conformity, homogeneity, and organ sparing simultaneously using the same beam trajectory length and delivery time as a coplanar VMAT plan.
Convergent Validity of the Arab Teens Lifestyle Study (ATLS) Physical Activity Questionnaire
Al-Hazzaa, Hazzaa M.; Al-Sobayel, Hana I.; Musaiger, Abdulrahman O.
2011-01-01
The Arab Teens Lifestyle Study (ATLS) is a multicenter project for assessing the lifestyle habits of Arab adolescents. This study reports on the convergent validity of the physical activity questionnaire used in ATLS against an electronic pedometer. Participants were 39 males and 36 females randomly selected from secondary schools, with a mean age of 16.1 ± 1.1 years. ATLS self-reported questionnaire was validated against the electronic pedometer for three consecutive weekdays. Mean steps counts were 6,866 ± 3,854 steps/day with no significant gender difference observed. Questionnaire results showed no significant gender differences in time spent on total or moderate-intensity activities. However, males spent significantly more time than females on vigorous-intensity activity. The correlation of steps counts with total time spent on all activities by the questionnaire was 0.369. Relationship of steps counts was higher with vigorous-intensity (r = 0.338) than with moderate-intensity activity (r = 0.265). Pedometer steps counts showed higher correlations with time spent on walking (r = 0.350) and jogging (r = 0.383) than with the time spent on other activities. Active participants, based on pedometer assessment, were also most active by the questionnaire. It appears that ATLS questionnaire is a valid instrument for assessing habitual physical activity among Arab adolescents. PMID:22016718
Convergent validity of the Arab Teens Lifestyle Study (ATLS) physical activity questionnaire.
Al-Hazzaa, Hazzaa M; Al-Sobayel, Hana I; Musaiger, Abdulrahman O
2011-09-01
The Arab Teens Lifestyle Study (ATLS) is a multicenter project for assessing the lifestyle habits of Arab adolescents. This study reports on the convergent validity of the physical activity questionnaire used in ATLS against an electronic pedometer. Participants were 39 males and 36 females randomly selected from secondary schools, with a mean age of 16.1 ± 1.1 years. ATLS self-reported questionnaire was validated against the electronic pedometer for three consecutive weekdays. Mean steps counts were 6,866 ± 3,854 steps/day with no significant gender difference observed. Questionnaire results showed no significant gender differences in time spent on total or moderate-intensity activities. However, males spent significantly more time than females on vigorous-intensity activity. The correlation of steps counts with total time spent on all activities by the questionnaire was 0.369. Relationship of steps counts was higher with vigorous-intensity (r = 0.338) than with moderate-intensity activity (r = 0.265). Pedometer steps counts showed higher correlations with time spent on walking (r = 0.350) and jogging (r = 0.383) than with the time spent on other activities. Active participants, based on pedometer assessment, were also most active by the questionnaire. It appears that ATLS questionnaire is a valid instrument for assessing habitual physical activity among Arab adolescents.
Real time detection of ESKAPE pathogens by a nitroreductase-triggered fluorescence turn-on probe.
Xu, Shengnan; Wang, Qinghua; Zhang, Qingyang; Zhang, Leilei; Zuo, Limin; Jiang, Jian-Dong; Hu, Hai-Yu
2017-10-18
The identification of bacterial pathogens is the critical first step in conquering infection diseases. A novel turn-on fluorescent probe for the selective sensing of nitroreductase (NTR) activity and its initial applications in rapid, real-time detection and identification of ESKAPE pathogens have been reported.
Methods for producing silicon carbide architectural preforms
NASA Technical Reports Server (NTRS)
DiCarlo, James A. (Inventor); Yun, Hee (Inventor)
2010-01-01
Methods are disclosed for producing architectural preforms and high-temperature composite structures containing high-strength ceramic fibers with reduced preforming stresses within each fiber, with an in-situ grown coating on each fiber surface, with reduced boron within the bulk of each fiber, and with improved tensile creep and rupture resistance properties for each fiber. The methods include the steps of preparing an original sample of a preform formed from a pre-selected high-strength silicon carbide ceramic fiber type, placing the original sample in a processing furnace under a pre-selected preforming stress state and thermally treating the sample in the processing furnace at a pre-selected processing temperature and hold time in a processing gas having a pre-selected composition, pressure, and flow rate. For the high-temperature composite structures, the method includes additional steps of depositing a thin interphase coating on the surface of each fiber and forming a ceramic or carbon-based matrix within the sample.
Mutational Effects and Population Dynamics During Viral Adaptation Challenge Current Models
Miller, Craig R.; Joyce, Paul; Wichman, Holly A.
2011-01-01
Adaptation in haploid organisms has been extensively modeled but little tested. Using a microvirid bacteriophage (ID11), we conducted serial passage adaptations at two bottleneck sizes (104 and 106), followed by fitness assays and whole-genome sequencing of 631 individual isolates. Extensive genetic variation was observed including 22 beneficial, several nearly neutral, and several deleterious mutations. In the three large bottleneck lines, up to eight different haplotypes were observed in samples of 23 genomes from the final time point. The small bottleneck lines were less diverse. The small bottleneck lines appeared to operate near the transition between isolated selective sweeps and conditions of complex dynamics (e.g., clonal interference). The large bottleneck lines exhibited extensive interference and less stochasticity, with multiple beneficial mutations establishing on a variety of backgrounds. Several leapfrog events occurred. The distribution of first-step adaptive mutations differed significantly from the distribution of second-steps, and a surprisingly large number of second-step beneficial mutations were observed on a highly fit first-step background. Furthermore, few first-step mutations appeared as second-steps and second-steps had substantially smaller selection coefficients. Collectively, the results indicate that the fitness landscape falls between the extremes of smooth and fully uncorrelated, violating the assumptions of many current mutational landscape models. PMID:21041559
Adsorption process to recover hydrogen from feed gas mixtures having low hydrogen concentration
Golden, Timothy Christopher; Weist, Jr., Edward Landis; Hufton, Jeffrey Raymond; Novosat, Paul Anthony
2010-04-13
A process for selectively separating hydrogen from at least one more strongly adsorbable component in a plurality of adsorption beds to produce a hydrogen-rich product gas from a low hydrogen concentration feed with a high recovery rate. Each of the plurality of adsorption beds subjected to a repetitive cycle. The process comprises an adsorption step for producing the hydrogen-rich product from a feed gas mixture comprising 5% to 50% hydrogen, at least two pressure equalization by void space gas withdrawal steps, a provide purge step resulting in a first pressure decrease, a blowdown step resulting in a second pressure decrease, a purge step, at least two pressure equalization by void space gas introduction steps, and a repressurization step. The second pressure decrease is at least 2 times greater than the first pressure decrease.
Two Independent Contributions to Step Variability during Over-Ground Human Walking
Collins, Steven H.; Kuo, Arthur D.
2013-01-01
Human walking exhibits small variations in both step length and step width, some of which may be related to active balance control. Lateral balance is thought to require integrative sensorimotor control through adjustment of step width rather than length, contributing to greater variability in step width. Here we propose that step length variations are largely explained by the typical human preference for step length to increase with walking speed, which itself normally exhibits some slow and spontaneous fluctuation. In contrast, step width variations should have little relation to speed if they are produced more for lateral balance. As a test, we examined hundreds of overground walking steps by healthy young adults (N = 14, age < 40 yrs.). We found that slow fluctuations in self-selected walking speed (2.3% coefficient of variation) could explain most of the variance in step length (59%, P < 0.01). The residual variability not explained by speed was small (1.5% coefficient of variation), suggesting that step length is actually quite precise if not for the slow speed fluctuations. Step width varied over faster time scales and was independent of speed fluctuations, with variance 4.3 times greater than that for step length (P < 0.01) after accounting for the speed effect. That difference was further magnified by walking with eyes closed, which appears detrimental to control of lateral balance. Humans appear to modulate fore-aft foot placement in precise accordance with slow fluctuations in walking speed, whereas the variability of lateral foot placement appears more closely related to balance. Step variability is separable in both direction and time scale into balance- and speed-related components. The separation of factors not related to balance may reveal which aspects of walking are most critical for the nervous system to control. PMID:24015308
Rochester, Lynn; Baker, Katherine; Nieuwboer, Alice; Burn, David
2011-02-15
Independence of certain gait characteristics from dopamine replacement therapies highlights its complex pathophysiology in Parkinson's disease (PD). We explored the effect of two different cue strategies on gait characteristics in relation to their response to dopaminergic medications. Fifty people with PD (age 69.22 ± 6.6 years) were studied. Participants walked with and without cues presented in a randomized order. Cue strategies were: (1) internal cue (attention to increase step length) and (2) external cue (auditory cue with instruction to take large step to the beat). Testing was carried out two times at home (on and off medication). Gait was measured using a Stride Analyzer (B&L Engineering). Gait outcomes were walking speed, stride length, step frequency, and coefficient of variation (CV) of stride time and double limb support duration (DLS). Walking speed, stride length, and stride time CV improved on dopaminergic medications, whereas step frequency and DLS CV did not. Internal and external cues increased stride time and walking speed (on and off dopaminergic medications). Only the external cue significantly improved stride time CV and DLS CV, whereas the internal cue had no effect (on and off dopaminergic medications). Internal and external cues selectively modify gait characteristics in relation to the type of gait disturbance and its dopa-responsiveness. Although internal (attention) and external cues target dopaminergic gait dysfunction (stride length), only external cues target stride to stride fluctuations in gait. Despite an overlap with dopaminergic pathways, external cues may effectively address nondopaminergic gait dysfunction and potentially increase mobility and reduce gait instability and falls. Copyright © 2010 Movement Disorder Society.
Puttini, Stefania; Ouvrard-Pascaud, Antoine; Palais, Gael; Beggah, Ahmed T; Gascard, Philippe; Cohen-Tannoudji, Michel; Babinet, Charles; Blot-Chabaud, Marcel; Jaisser, Frederic
2005-03-16
Functional genomic analysis is a challenging step in the so-called post-genomic field. Identification of potential targets using large-scale gene expression analysis requires functional validation to identify those that are physiologically relevant. Genetically modified cell models are often used for this purpose allowing up- or down-expression of selected targets in a well-defined and if possible highly differentiated cell type. However, the generation of such models remains time-consuming and expensive. In order to alleviate this step, we developed a strategy aimed at the rapid and efficient generation of genetically modified cell lines with conditional, inducible expression of various target genes. Efficient knock-in of various constructs, called targeted transgenesis, in a locus selected for its permissibility to the tet inducible system, was obtained through the stimulation of site-specific homologous recombination by the meganuclease I-SceI. Our results demonstrate that targeted transgenesis in a reference inducible locus greatly facilitated the functional analysis of the selected recombinant cells. The efficient screening strategy we have designed makes possible automation of the transfection and selection steps. Furthermore, this strategy could be applied to a variety of highly differentiated cells.
Ultra-fast consensus of discrete-time multi-agent systems with multi-step predictive output feedback
NASA Astrophysics Data System (ADS)
Zhang, Wenle; Liu, Jianchang
2016-04-01
This article addresses the ultra-fast consensus problem of high-order discrete-time multi-agent systems based on a unified consensus framework. A novel multi-step predictive output mechanism is proposed under a directed communication topology containing a spanning tree. By predicting the outputs of a network several steps ahead and adding this information into the consensus protocol, it is shown that the asymptotic convergence factor is improved by a power of q + 1 compared to the routine consensus. The difficult problem of selecting the optimal control gain is solved well by introducing a variable called convergence step. In addition, the ultra-fast formation achievement is studied on the basis of this new consensus protocol. Finally, the ultra-fast consensus with respect to a reference model and robust consensus is discussed. Some simulations are performed to illustrate the effectiveness of the theoretical results.
Schulze, M; Kuster, C; Schäfer, J; Jung, M; Grossfeld, R
2018-03-01
The processing of ejaculates is a fundamental step for the fertilizing capacity of boar spermatozoa. The aim of the present study was to identify factors that affect quality of boar semen doses. The production process during 1 day of semen processing in 26 European boar studs was monitored. In each boar stud, nine to 19 randomly selected ejaculates from 372 Pietrain boars were analyzed for sperm motility, acrosome and plasma membrane integrity, mitochondrial activity and thermo-resistance (TRT). Each ejaculate was monitored for production time and temperature for each step in semen processing using the special programmed software SEQU (version 1.7, Minitüb, Tiefenbach, Germany). The dilution of ejaculates with a short-term extender was completed in one step in 10 AI centers (n = 135 ejaculates), in two steps in 11 AI centers (n = 158 ejaculates) and in three steps in five AI centers (n = 79 ejaculates). Results indicated there was a greater semen quality with one-step isothermal dilution compared with the multi-step dilution of AI semen doses (total motility TRT d7: 71.1 ± 19.2%, 64.6 ± 20.0%, 47.1 ± 27.1%; one-step compared with two-step compared with the three-step dilution; P < .05). There was a marked advantage when using the one-step isothermal dilution regarding time management, preservation suitability, stability and stress resistance. One-step dilution caused significant lower holding times of raw ejaculates and reduced the possible risk of making mistakes due to a lower number of processing steps. These results lead to refined recommendations for boar semen processing. Copyright © 2018 Elsevier B.V. All rights reserved.
Calculating Time-Integral Quantities in Depletion Calculations
Isotalo, Aarno
2016-06-02
A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as a part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletionmore » algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation without a need for a reference solution.« less
... One Choose A Body Area: Eyes Step Two Select A Symptom: Itchy eyes Step Three Possible Issues: ... conjunctivitis Mold allergy Pet allergy Nose Step Two Select A Symptom: Nasal congestion Step Three Possible Issues: ...
Konik, Anita; Kuklewicz, Stanisław; Rosłoniec, Ewelina; Zając, Marcin; Spannbauer, Anna; Nowobilski, Roman; Mika, Piotr
2016-01-01
The purpose of the study was to evaluate selected temporal and spatial gait parameters in patients with intermittent claudication after completion of 12-week supervised treadmill walking training. The study included 36 patients (26 males and 10 females) aged: mean 64 (SD 7.7) with intermittent claudication. All patients were tested on treadmill (Gait Trainer, Biodex). Before the programme and after its completion, the following gait biomechanical parameters were tested: step length (cm), step cycle (cycle/s), leg support time (%), coefficient of step variation (%) as well as pain-free walking time (PFWT) and maximal walking time (MWT) were measured. Training was conducted in accordance with the current TASC II guidelines. After 12 weeks of training, patients showed significant change in gait biomechanics consisting in decreased frequency of step cycle (p < 0.05) and extended step length (p < 0.05). PFWT increased by 96% (p < 0.05). MWT increased by 100% (p < 0.05). After completing the training, patients' gait was more regular, which was expressed via statistically significant decrease of coefficient of variation (p < 0.05) for both legs. No statistically significant relation between the post-training improvement of PFWT and MWT and step length increase and decreased frequency of step cycle was observed (p > 0.05). Twelve-week treadmill walking training programme may lead to significant improvement of temporal and spatial gait parameters in patients with intermittent claudication. Twelve-week treadmill walking training programme may lead to significant improvement of pain-free walking time and maximum walking time in patients with intermittent claudication.
One step screening of retroviral producer clones by real time quantitative PCR.
Towers, G J; Stockholm, D; Labrousse-Najburg, V; Carlier, F; Danos, O; Pagès, J C
1999-01-01
Recombinant retroviruses are obtained from either stably or transiently transfected retrovirus producer cells. In the case of stably producing lines, a large number of clones must be screened in order to select the one with the highest titre. The multi-step selection of high titre producing clones is time consuming and expensive. We have taken advantage of retroviral endogenous reverse transcription to develop a quantitative PCR assay on crude supernatant from producing clones. We used Taqman PCR technology, which, by using fluorescence measurement at each cycle of amplification, allows PCR product quantification. Fluorescence results from specific degradation of a probe oligonucleotide by the Taq polymerase 3'-5' exonuclease activity. Primers and probe sequences were chosen to anneal to the viral strong stop species, which is the first DNA molecule synthesised during reverse transcription. The protocol consists of a single real time PCR, using as template filtered viral supernatant without any other pre-treatment. We show that the primers and probe described allow quantitation of serially diluted plasmid to as few as 15 plasmid molecules. We then test 200 GFP-expressing retroviral-producing clones either by FACS analysis of infected cells or by using the quantitative PCR. We confirm that the Taqman protocol allows the detection of virus in supernatant and selection of high titre clones. Furthermore, we can determine infectious titre by quantitative PCR on genomic DNA from infected cells, using an additional set of primers and probe to albumin to normalise for the genomic copy number. We demonstrate that real time quantitative PCR can be used as a powerful and reliable single step, high throughput screen for high titre retroviral producer clones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbee, T. W.; Schena, D.
This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and TroyCap LLC, to develop manufacturing steps for commercial production of nano-structure capacitors. The technical objective of this project was to demonstrate high deposition rates of selected dielectric materials which are 2 to 5 times larger than typical using current technology.
Nonenzymatic Wearable Sensor for Electrochemical Analysis of Perspiration Glucose.
Zhu, Xiaofei; Ju, Yinhui; Chen, Jian; Liu, Deye; Liu, Hong
2018-05-25
We report a nonenzymatic wearable sensor for electrochemical analysis of perspiration glucose. Multipotential steps are applied on a Au electrode, including a high negative pretreatment potential step for proton reduction which produces a localized alkaline condition, a moderate potential step for electrocatalytic oxidation of glucose under the alkaline condition, and a positive potential step to clean and reactivate the electrode surface for the next detection. Fluorocarbon-based materials were coated on the Au electrode for improving the selectivity and robustness of the sensor. A fully integrated wristband is developed for continuous real-time monitoring of perspiration glucose during physical activities, and uploading the test result to a smartphone app via Bluetooth.
Radiative transfer and spectroscopic databases: A line-sampling Monte Carlo approach
NASA Astrophysics Data System (ADS)
Galtier, Mathieu; Blanco, Stéphane; Dauchet, Jérémi; El Hafi, Mouna; Eymet, Vincent; Fournier, Richard; Roger, Maxime; Spiesser, Christophe; Terrée, Guillaume
2016-03-01
Dealing with molecular-state transitions for radiative transfer purposes involves two successive steps that both reach the complexity level at which physicists start thinking about statistical approaches: (1) constructing line-shaped absorption spectra as the result of very numerous state-transitions, (2) integrating over optical-path domains. For the first time, we show here how these steps can be addressed simultaneously using the null-collision concept. This opens the door to the design of Monte Carlo codes directly estimating radiative transfer observables from spectroscopic databases. The intermediate step of producing accurate high-resolution absorption spectra is no longer required. A Monte Carlo algorithm is proposed and applied to six one-dimensional test cases. It allows the computation of spectrally integrated intensities (over 25 cm-1 bands or the full IR range) in a few seconds, regardless of the retained database and line model. But free parameters need to be selected and they impact the convergence. A first possible selection is provided in full detail. We observe that this selection is highly satisfactory for quite distinct atmospheric and combustion configurations, but a more systematic exploration is still in progress.
Wang, Xianghong; Jiang, Daiming; Yang, Daichang
2015-01-01
The selection of homozygous lines is a crucial step in the characterization of newly generated transgenic plants. This is particularly time- and labor-consuming when transgenic stacking is required. Here, we report a fast and accurate method based on quantitative real-time PCR with a rice gene RBE4 as a reference gene for selection of homozygous lines when using multiple transgenic stacking in rice. Use of this method allowed can be used to determine the stacking of up to three transgenes within four generations. Selection accuracy reached 100 % for a single locus and 92.3 % for two loci. This method confers distinct advantages over current transgenic research methodologies, as it is more accurate, rapid, and reliable. Therefore, this protocol could be used to efficiently select homozygous plants and to expedite time- and labor-consuming processes normally required for multiple transgene stacking. This protocol was standardized for determination of multiple gene stacking in molecular breeding via marker-assisted selection.
Lee, Sang Cheol
2017-12-01
A cost-effective five-step sugar purification process involving simultaneous removal and recovery of fermentation inhibitors from biomass hydrolysates was first proposed here. Only the three separation steps (PB, PC and PD) in the process were investigated here. Furfural was selectively removed up to 98.4% from a simulated five-component hydrolysate in a cross-current three-stage extraction system with n-hexane. Most of acetic acid in a simulated four-component hydrolysate was selectively removed by emulsion liquid membrane, and it could be concentrated in the stripping solution up to 4.5 times its initial concentration in the feed solution. 5-Hydroxymethylfurfural was selectively removed from a simulated three-component hydrolysate in batch and continuous fixed-bed column adsorption systems with L-493 adsorbent. Also, 5-hydroxymethylfurfural could be concentrated to about 9 times its feed concentration in the continuous adsorption system through a fixed-bed column desorption experiment with aqueous ethanol solution. These results have shown that the proposed purification process was valid. Copyright © 2017 Elsevier Ltd. All rights reserved.
Olivera, André Rodrigues; Roesler, Valter; Iochpe, Cirano; Schmidt, Maria Inês; Vigo, Álvaro; Barreto, Sandhi Maria; Duncan, Bruce Bartholow
2017-01-01
Type 2 diabetes is a chronic disease associated with a wide range of serious health complications that have a major impact on overall health. The aims here were to develop and validate predictive models for detecting undiagnosed diabetes using data from the Longitudinal Study of Adult Health (ELSA-Brasil) and to compare the performance of different machine-learning algorithms in this task. Comparison of machine-learning algorithms to develop predictive models using data from ELSA-Brasil. After selecting a subset of 27 candidate variables from the literature, models were built and validated in four sequential steps: (i) parameter tuning with tenfold cross-validation, repeated three times; (ii) automatic variable selection using forward selection, a wrapper strategy with four different machine-learning algorithms and tenfold cross-validation (repeated three times), to evaluate each subset of variables; (iii) error estimation of model parameters with tenfold cross-validation, repeated ten times; and (iv) generalization testing on an independent dataset. The models were created with the following machine-learning algorithms: logistic regression, artificial neural network, naïve Bayes, K-nearest neighbor and random forest. The best models were created using artificial neural networks and logistic regression. -These achieved mean areas under the curve of, respectively, 75.24% and 74.98% in the error estimation step and 74.17% and 74.41% in the generalization testing step. Most of the predictive models produced similar results, and demonstrated the feasibility of identifying individuals with highest probability of having undiagnosed diabetes, through easily-obtained clinical data.
1985-06-01
and ptosis 7. epicanthal folds 8. cleft lip or cleft palate 9. hirsuitism APPENDIX 2 PROCESSION PLAN Stage Activit Time Required Phase I step 1 Select...thin upper lip , and/or flattening of the maxillary area II. FETAL ALCOHOL EFFECTS: Any congenital abnormality seen in children as a result of maternal
Kimberly F. Wallin; Daniel S. Ott; Alvin D. Yanchuk
2012-01-01
Abiotic and biotic stressors exert selective pressures on plants, and over evolutionary time lead to the development of specialized adaptations and specific responses to stresses (Safranyik and Carroll 2006, Wallin and Raffa 2002). In this way, the environment in which plants evolve shapes their life cycles, range, growth, reproduction, and defenses. Insects and...
Compartmentalized partnered replication for the directed evolution of genetic parts and circuits.
Abil, Zhanar; Ellefson, Jared W; Gollihar, Jimmy D; Watkins, Ella; Ellington, Andrew D
2017-12-01
Compartmentalized partnered replication (CPR) is an emulsion-based directed evolution method based on a robust and modular phenotype-genotype linkage. In contrast to other in vivo directed evolution approaches, CPR largely mitigates host fitness effects due to a relatively short expression time of the gene of interest. CPR is based on gene circuits in which the selection of a 'partner' function from a library leads to the production of a thermostable polymerase. After library preparation, bacteria produce partner proteins that can potentially lead to enhancement of transcription, translation, gene regulation, and other aspects of cellular metabolism that reinforce thermostable polymerase production. Individual cells are then trapped in water-in-oil emulsion droplets in the presence of primers and dNTPs, followed by the recovery of the partner genes via emulsion PCR. In this step, droplets with cells expressing partner proteins that promote polymerase production will produce higher copy numbers of the improved partner gene. The resulting partner genes can subsequently be recloned for the next round of selection. Here, we present a step-by-step guideline for the procedure by providing examples of (i) selection of T7 RNA polymerases that recognize orthogonal promoters and (ii) selection of tRNA for enhanced amber codon suppression. A single round of CPR should take ∼3-5 d, whereas a whole directed evolution can be performed in 3-10 rounds, depending on selection efficiency.
Ivezic, Nenad; Potok, Thomas E.
2003-09-30
A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
NASA Technical Reports Server (NTRS)
Yun, Hee-Mann (Inventor); DiCarlo, James A. (Inventor)
2014-01-01
Methods are disclosed for producing architectural preforms and high-temperature composite structures containing high-strength ceramic fibers with reduced preforming stresses within each fiber, with an in-situ grown coating on each fiber surface, with reduced boron within the bulk of each fiber, and with improved tensile creep and rupture resistance properties tier each fiber. The methods include the steps of preparing an original sample of a preform formed from a pre-selected high-strength silicon carbide ceramic fiber type, placing the original sample in a processing furnace under a pre-selected preforming stress state and thermally treating the sample in the processing furnace at a pre-selected processing temperature and hold time in a processing gas having a pre-selected composition, pressure, and flow rate. For the high-temperature composite structures, the method includes additional steps of depositing a thin interphase coating on the surface of each fiber and forming a ceramic or carbon-based matrix within the sample.
Simplifying Facility and Event Scheduling: Saving Time and Money.
ERIC Educational Resources Information Center
Raasch, Kevin
2003-01-01
Describes a product called the Event Management System (EMS), a computer software program to manage facility and event scheduling. Provides example of the school district and university uses of EMS. Describes steps in selecting a scheduling-management system. (PKP)
Time Dependent Studies of Reactive Shocks in the Gas Phase
1978-11-16
which takes advantsge of time-stop splitting. The fluid dynamics time integration is performed by an explicit two step predictor - corrector technique...Nava Reearh l~oraoryARIA A WORK UNIT NUMBERS NasahRaington MC, raor 2037 NR Problem (1101-16Washngto, !) C , 2i176ONR Project RR024.02.41 Office of... self -consistently on their own characteristic time-scaies using the flux-corrected transport and selected asymptotic meothods, respectively. Results are
Partition-based discrete-time quantum walks
NASA Astrophysics Data System (ADS)
Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo
2018-04-01
We introduce a family of discrete-time quantum walks, called two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of Szegedy model for multigraphs, and the two-tessellable staggered model are unitarily equivalent. Then, selecting one specific model among those families is a matter of taste not generality.
Wester, T; Borg, H; Naji, H; Stenström, P; Westbacke, G; Lilja, H E
2014-09-01
Serial transverse enteroplasty (STEP) was first described in 2003 as a method for lengthening and tapering of the bowel in short bowel syndrome. The aim of this multicentre study was to review the outcome of a Swedish cohort of children who underwent STEP. All children who had a STEP procedure at one of the four centres of paediatric surgery in Sweden between September 2005 and January 2013 were included in this observational cohort study. Demographic details, and data from the time of STEP and at follow-up were collected from the case records and analysed. Twelve patients had a total of 16 STEP procedures; four children underwent a second STEP. The first STEP was performed at a median age of 5·8 (range 0·9-19·0) months. There was no death at a median follow-up of 37·2 (range 3·0-87·5) months and no child had small bowel transplantation. Seven of the 12 children were weaned from parenteral nutrition at a median of 19·5 (range 2·3-42·9) months after STEP. STEP is a useful procedure for selected patients with short bowel syndrome and seems to facilitate weaning from parenteral nutrition. At mid-term follow-up a majority of the children had achieved enteral autonomy. The study is limited by the small sample size and lack of a control group. © 2014 The Authors. BJS published by John Wiley & Sons Ltd on behalf of BJS Society Ltd.
NASA Astrophysics Data System (ADS)
Hanumagowda, B. N.; Raju, B. T.; Santhosh Kumar, J.; Vasanth, K. R.
2018-04-01
In this paper, the effect of PDV on the couple stress squeeze film lubrication between porous circular stepped plates is presented. Keeping the base of Christensen’s stochastic theory modified Reynolds equation is derived. Reynolds equation, fluid film pressure, squeeze film time and load carrying capacity are solved using standard perturbation technique. The results are tabulated and presented graphically for selected physical parameters and found that the squeeze effect is depleted in a porous bearing compared to its nonporous and increasing permeability has an adverse effect on the pressure, load carrying capacity and time of approach.
NASA Astrophysics Data System (ADS)
Mustafa, Mohammad Razif Bin; Dhahi, Th S.; Ehfaed, Nuri. A. K. H.; Adam, Tijjani; Hashim, U.; Azizah, N.; Mohammed, Mohammed; Noriman, N. Z.
2017-09-01
The nano structure based on silicon can be surface modified to be used as label-free biosensors that allow real-time measurements. The silicon nanowire surface was functionalized using 3-aminopropyltrimethoxysilane (APTES), which functions as a facilitator to immobilize biomolecules on the silicon nanowire surface. The process is simple, economical; this will pave the way for point-of-care applications. However, the surface modification and subsequent detection mechanism still not clear. Thus, study proposed step by step process of silicon nano surface modification and its possible in specific and selective target detection of Supra-genome 21 Mers Salmonella. The device captured the molecule with precisely; the approach took the advantages of strong binding chemistry created between APTES and biomolecule. The results indicated how modifications of the nanowires provide sensing capability with strong surface chemistries that can lead to specific and selective target detection.
NASA Technical Reports Server (NTRS)
Humphreys, E. A.
1981-01-01
A computerized, analytical methodology was developed to study damage accumulation during low velocity lateral impact of layered composite plates. The impact event was modeled as perfectly plastic with complete momentum transfer to the plate structure. A transient dynamic finite element approach was selected to predict the displacement time response of the plate structure. Composite ply and interlaminar stresses were computed at selected time intervals and subsequently evaluated to predict layer and interlaminar damage. The effects of damage on elemental stiffness were then incorporated back into the analysis for subsequent time steps. Damage predicted included fiber failure, matrix ply failure and interlaminar delamination.
Li, Huaqing; Chen, Guo; Huang, Tingwen; Dong, Zhaoyang; Zhu, Wei; Gao, Lan
2016-12-01
In this paper, we consider the event-triggered distributed average-consensus of discrete-time first-order multiagent systems with limited communication data rate and general directed network topology. In the framework of digital communication network, each agent has a real-valued state but can only exchange finite-bit binary symbolic data sequence with its neighborhood agents at each time step due to the digital communication channels with energy constraints. Novel event-triggered dynamic encoder and decoder for each agent are designed, based on which a distributed control algorithm is proposed. A scheme that selects the number of channel quantization level (number of bits) at each time step is developed, under which all the quantizers in the network are never saturated. The convergence rate of consensus is explicitly characterized, which is related to the scale of network, the maximum degree of nodes, the network structure, the scaling function, the quantization interval, the initial states of agents, the control gain and the event gain. It is also found that under the designed event-triggered protocol, by selecting suitable parameters, for any directed digital network containing a spanning tree, the distributed average consensus can be always achieved with an exponential convergence rate based on merely one bit information exchange between each pair of adjacent agents at each time step. Two simulation examples are provided to illustrate the feasibility of presented protocol and the correctness of the theoretical results.
Rain volume estimation over areas using satellite and radar data
NASA Technical Reports Server (NTRS)
Doneaud, A. A.; Vonderhaar, T. H.
1985-01-01
The feasibility of rain volume estimation over fixed and floating areas was investigated using rapid scan satellite data following a technique recently developed with radar data, called the Area Time Integral (ATI) technique. The radar and rapid scan GOES satellite data were collected during the Cooperative Convective Precipitation Experiment (CCOPE) and North Dakota Cloud Modification Project (NDCMP). Six multicell clusters and cells were analyzed to the present time. A two-cycle oscillation emphasizing the multicell character of the clusters is demonstrated. Three clusters were selected on each day, 12 June and 2 July. The 12 June clusters occurred during the daytime, while the 2 July clusters during the nighttime. A total of 86 time steps of radar and 79 time steps of satellite images were analyzed. There were approximately 12-min time intervals between radar scans on the average.
ERIC Educational Resources Information Center
Rook, Michael M.
2018-01-01
The author presents a three-step process for selecting participants for any study of a social phenomenon that occurs between people in locations and at times that are difficult to observe. The process is described with illustrative examples from a previous study of help giving in a community of learners. This paper includes a rationale for…
EM61-MK2 Response of Three Munitions Surrogates
2009-03-12
time-domain electromagnetic induction sensors, it produces a pulsed magnetic field (primary field) that induces a secondary field in metallic objects...selected and marked as potential metal targets. This initial list of anomalies is used as input to an analysis step that selects anomalies for digging...response of a metallic object to an Electromagnetic Induction sensor is most simply modeled as an induced dipole moment represented by a magnetic
Perréard, Camille; d'Orlyé, Fanny; Griveau, Sophie; Liu, Baohong; Bedioui, Fethi; Varenne, Anne
2017-10-01
There is a great demand for integrating sample treatment into μTASs. In this context, we developed a new sol-gel phase for extraction of trace compounds in complex matrices. For this purpose, the incorporation of aptamers in silica-based gel within PDMS/glass microfluidic channels was performed for the first time by a one-step sol-gel process. The effective gel attachment onto microchannel walls and aptamer incorporation in the polymerized gel were evaluated using fluorescence microscopy. A good gel stability and aptamer incorporation inside the microchannel was demonstrated upon rinsing and over storage time. The ability of gel-encapsulated aptamers to interact with its specific target (either sulforhodamine B as model fluorescent target, or diclofenac, a pain killer drug) was assessed too. The binding capacity of entrapped aptamers was quantified (in the micromolar range) and the selectivity of the interaction was evidenced. Preservation of aptamers binding affinity to target molecules was therefore demonstrated. Dissociation constant of the aptamer-target complex and interaction selectivity were evaluated similar to those in bulk solution. This opens the way to new selective on-chip SPE techniques for sample pretreatment. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Selecting information technology for physicians' practices: a cross-sectional study.
Eden, Karen Beekman
2002-04-05
Many physicians are transitioning from paper to electronic formats for billing, scheduling, medical charts, communications, etc. The primary objective of this research was to identify the relationship (if any) between the software selection process and the office staff's perceptions of the software's impact on practice activities. A telephone survey was conducted with office representatives of 407 physician practices in Oregon who had purchased information technology. The respondents, usually office managers, answered scripted questions about their selection process and their perceptions of the software after implementation. Multiple logistic regression revealed that software type, selection steps, and certain factors influencing the purchase were related to whether the respondents felt the software improved the scheduling and financial analysis practice activities. Specifically, practices that selected electronic medical record or practice management software, that made software comparisons, or that considered prior user testimony as important were more likely to have perceived improvements in the scheduling process than were other practices. Practices that considered value important, that did not consider compatibility important, that selected managed care software, that spent less than 10,000 dollars, or that provided learning time (most dramatic increase in odds ratio, 8.2) during implementation were more likely to perceive that the software had improved the financial analysis process than were other practices. Perhaps one of the most important predictors of improvement was providing learning time during implementation, particularly when the software involves several practice activities. Despite this importance, less than half of the practices reported performing this step.
Autonomous antenna tracking system for mobile symphonie ground stations
NASA Technical Reports Server (NTRS)
Ernsberger, K.; Lorch, G.; Waffenschmidt, E.
1982-01-01
The implementation of a satellite tracking and antenna control system is described. Due to the loss of inclination control for the symphonie satellites, it became necessary to equip the parabolic antennas of the mobile Symphonie ground station with tracking facilities. For the relatively low required tracking accuracy of 0.5 dB, a low cost, step track system was selected. The step track system developed for this purpose and tested over a long period of time in 7 ground stations is based on a search step method with subsequent parabola interpolation. As compared with the real search step method, the system has the advantage of a higher pointing angle resolution, and thus a higher tracking accuracy. When the pilot signal has been switched off for a long period of time, as for instance after the eclipse, the antenna is repointed towards the satellite by an automatically initiated spiral search scan. The function and design of the tracking system are detailed, while easy handling and tracking results.
Schuler, Friedrich; Schwemmer, Frank; Trotter, Martin; Wadle, Simon; Zengerle, Roland; von Stetten, Felix; Paust, Nils
2015-07-07
Aqueous microdroplets provide miniaturized reaction compartments for numerous chemical, biochemical or pharmaceutical applications. We introduce centrifugal step emulsification for the fast and easy production of monodisperse droplets. Homogenous droplets with pre-selectable diameters in a range from 120 μm to 170 μm were generated with coefficients of variation of 2-4% and zero run-in time or dead volume. The droplet diameter depends on the nozzle geometry (depth, width, and step size) and interfacial tensions only. Droplet size is demonstrated to be independent of the dispersed phase flow rate between 0.01 and 1 μl s(-1), proving the robustness of the centrifugal approach. Centrifugal step emulsification can easily be combined with existing centrifugal microfluidic unit operations, is compatible to scalable manufacturing technologies such as thermoforming or injection moulding and enables fast emulsification (>500 droplets per second and nozzle) with minimal handling effort (2-3 pipetting steps). The centrifugal microfluidic droplet generation was used to perform the first digital droplet recombinase polymerase amplification (ddRPA). It was used for absolute quantification of Listeria monocytogenes DNA concentration standards with a total analysis time below 30 min. Compared to digital droplet polymerase chain reaction (ddPCR), with processing times of about 2 hours, the overall processing time of digital analysis was reduced by more than a factor of 4.
Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm
NASA Astrophysics Data System (ADS)
Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.
2014-08-01
This study introduces a novel identification method for recognition of nuclear power plants (NPPs) transients by combining the autoregressive integrated moving-average (ARIMA) model and the neural network with error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP based identifier is adopted to distinguish the plant normal states from the faulty ones. In the second step, ARIMA models use integrated (I) process to convert non-stationary data of the selected variables into stationary ones. Subsequently, ARIMA processes, including autoregressive (AR), moving-average (MA), or autoregressive moving-average (ARMA) are used to forecast time series of the selected plant variables. In the third step, for identification the type of transients, the forecasted time series are fed to the modular identifier which has been developed using the latest advances of EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are probed to analyze the ability of the proposed identifier. Recognition of transient is based on similarity of its statistical properties to the reference one, rather than the values of input patterns. More robustness against noisy data and improvement balance between memorization and generalization are salient advantages of the proposed identifier. Reduction of false identification, sole dependency of identification on the sign of each output signal, selection of the plant variables for transients training independent of each other, and extendibility for identification of more transients without unfavorable effects are other merits of the proposed identifier.
Optimized Enrichment of Phosphoproteomes by Fe-IMAC Column Chromatography.
Ruprecht, Benjamin; Koch, Heiner; Domasinska, Petra; Frejno, Martin; Kuster, Bernhard; Lemeer, Simone
2017-01-01
Phosphorylation is among the most important post-translational modifications of proteins and has numerous regulatory functions across all domains of life. However, phosphorylation is often substoichiometric, requiring selective and sensitive methods to enrich phosphorylated peptides from complex cellular digests. Various methods have been devised for this purpose and we have recently described a Fe-IMAC HPLC column chromatography setup which is capable of comprehensive, reproducible, and selective enrichment of phosphopeptides out of complex peptide mixtures. In contrast to other formats such as StageTips or batch incubations using TiO 2 or Ti-IMAC beads, Fe-IMAC HPLC columns do not suffer from issues regarding incomplete phosphopeptide binding or elution and enrichment efficiency scales linearly with the amount of starting material. Here, we provide a step-by-step protocol for the entire phosphopeptide enrichment procedure including sample preparation (lysis, digestion, desalting), Fe-IMAC column chromatography (column setup, operation, charging), measurement by LC-MS/MS (nHPLC gradient, MS parameters) and data analysis (MaxQuant). To increase throughput, we have optimized several key steps such as the gradient time of the Fe-IMAC separation (15 min per enrichment), the number of consecutive enrichments possible between two chargings (>20) and the column recharging itself (<1 h). We show that the application of this protocol enables the selective (>90 %) identification of more than 10,000 unique phosphopeptides from 1 mg of HeLa digest within 2 h of measurement time (Q Exactive Plus).
ERIC Educational Resources Information Center
Matsuoka, Rieko; Poole, Gregory
2015-01-01
This paper examines the ways in which healthcare professionals interact with patients' family members, and/or colleagues. The data are from healthcare discourses at difficult times found in the manga series entitled Nurse AOI. As the first step, we selected several communication scenes for analysis in terms of politeness strategies. From these…
Systematic development of technical textiles
NASA Astrophysics Data System (ADS)
Beer, M.; Schrank, V.; Gloy, Y.-S.; Gries, T.
2016-07-01
Technical textiles are used in various fields of applications, ranging from small scale (e.g. medical applications) to large scale products (e.g. aerospace applications). The development of new products is often complex and time consuming, due to multiple interacting parameters. These interacting parameters are production process related and also a result of the textile structure and used material. A huge number of iteration steps are necessary to adjust the process parameter to finalize the new fabric structure. A design method is developed to support the systematic development of technical textiles and to reduce iteration steps. The design method is subdivided into six steps, starting from the identification of the requirements. The fabric characteristics vary depending on the field of application. If possible, benchmarks are tested. A suitable fabric production technology needs to be selected. The aim of the method is to support a development team within the technology selection without restricting the textile developer. After a suitable technology is selected, the transformation and correlation between input and output parameters follows. This generates the information for the production of the structure. Afterwards, the first prototype can be produced and tested. The resulting characteristics are compared with the initial product requirements.
Processes for producing low cost, high efficiency silicon solar cells
Rohatgi, Ajeet; Doshi, Parag; Tate, John Keith; Mejia, Jose; Chen, Zhizhang
1998-06-16
Processes which utilize rapid thermal processing (RTP) are provided for inexpensively producing high efficiency silicon solar cells. The RTP processes preserve minority carrier bulk lifetime .tau. and permit selective adjustment of the depth of the diffused regions, including emitter and back surface field (bsf), within the silicon substrate. In a first RTP process, an RTP step is utilized to simultaneously diffuse phosphorus and aluminum into the front and back surfaces, respectively, of a silicon substrate. Moreover, an in situ controlled cooling procedure preserves the carrier bulk lifetime .tau. and permits selective adjustment of the depth of the diffused regions. In a second RTP process, both simultaneous diffusion of the phosphorus and aluminum as well as annealing of the front and back contacts are accomplished during the RTP step. In a third RTP process, the RTP step accomplishes simultaneous diffusion of the phosphorus and aluminum, annealing of the contacts, and annealing of a double-layer antireflection/passivation coating SiN/SiO.sub.x. In a fourth RTP process, the process of applying front and back contacts is broken up into two separate respective steps, which enhances the efficiency of the cells, at a slight time expense. In a fifth RTP process, a second RTP step is utilized to fire and adhere the screen printed or evaporated contacts to the structure.
Image quality (IQ) guided multispectral image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik
2016-05-01
Image compression is necessary for data transportation, which saves both transferring time and storage space. In this paper, we focus on our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for examples, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of decompressed image will be measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural Similarity (SSIM) Index. Given an image and a specified IQ, we will investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of 3 steps. The first step is to compress a set of interested images by varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ-measurement versus compression-parameter from a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regressed models. The IQ may be specified by a compression ratio (e.g., 100), then we will select the compression method of the highest IQ (SSIM, or PSNR). Or the IQ may be specified by a IQ metric (e.g., SSIM = 0.8, or PSNR = 50), then we will select the compression method of the highest compression ratio. Our experiments tested on thermal (long-wave infrared) images (in gray scales) showed very promising results.
Effects of wide step walking on swing phase hip muscle forces and spatio-temporal gait parameters.
Bajelan, Soheil; Nagano, Hanatsu; Sparrow, Tony; Begg, Rezaul K
2017-07-01
Human walking can be viewed essentially as a continuum of anterior balance loss followed by a step that re-stabilizes balance. To secure balance an extended base of support can be assistive but healthy young adults tend to walk with relatively narrower steps compared to vulnerable populations (e.g. older adults and patients). It was, therefore, hypothesized that wide step walking may enhance dynamic balance at the cost of disturbed optimum coupling of muscle functions, leading to additional muscle work and associated reduction of gait economy. Young healthy adults may select relatively narrow steps for a more efficient gait. The current study focused on the effects of wide step walking on hip abductor and adductor muscles and spatio-temporal gait parameters. To this end, lower body kinematic data and ground reaction forces were obtained using an Optotrak motion capture system and AMTI force plates, respectively, while AnyBody software was employed for muscle force simulation. A single step of four healthy young male adults was captured during preferred walking and wide step walking. Based on preferred walking data, two parallel lines were drawn on the walkway to indicate 50% larger step width and participants targeted the lines with their heels as they walked. In addition to step width that defined walking conditions, other spatio-temporal gait parameters including step length, double support time and single support time were obtained. Average hip muscle forces during swing were modeled. Results showed that in wide step walking step length increased, Gluteus Minimus muscles were more active while Gracilis and Adductor Longus revealed considerably reduced forces. In conclusion, greater use of abductors and loss of adductor forces were found in wide step walking. Further validation is needed in future studies involving older adults and other pathological populations.
Chargemaster maintenance: think 'spring cleaning' all year round.
Barton, Shawn; Lancaster, Dani; Bieker, Mike
2008-11-01
Steps toward maintaining a standardized chargemaster include: Building a corporate chargemaster maintenance team. Developing a core research function. Designating hospital liaisons. Publishing timely reports on facility compliance. Using system codes to identify charges. Selecting chargemaster maintenance software. Developing a standard chargemaster data repository. Educating staff.
An arbitrary-order staggered time integrator for the linear acoustic wave equation
NASA Astrophysics Data System (ADS)
Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo
2018-02-01
We suggest a staggered time integrator whose order of accuracy can arbitrarily be extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed based on the error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost several times, but also allows us to flexibly set the modelling parameters such as the time step length, grid interval and P-wave speed. It is demonstrated that the proposed method can almost eliminate temporal dispersive errors during long term simulations regardless of the heterogeneity of the media and time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in the practical usage for the imaging algorithms or the inverse problems.
Kostanyan, Artak E; Erastov, Andrey A; Shishilov, Oleg N
2014-06-20
The multiple dual mode (MDM) counter-current chromatography separation processes consist of a succession of two isocratic counter-current steps and are characterized by the shuttle (forward and back) transport of the sample in chromatographic columns. In this paper, the improved MDM method based on variable duration of alternating phase elution steps has been developed and validated. The MDM separation processes with variable duration of phase elution steps are analyzed. Basing on the cell model, analytical solutions are developed for impulse and non-impulse sample loading at the beginning of the column. Using the analytical solutions, a calculation program is presented to facilitate the simulation of MDM with variable duration of phase elution steps, which can be used to select optimal process conditions for the separation of a given feed mixture. Two options of the MDM separation are analyzed: 1 - with one-step solute elution: the separation is conducted so, that the sample is transferred forward and back with upper and lower phases inside the column until the desired separation of the components is reached, and then each individual component elutes entirely within one step; 2 - with multi-step solute elution, when the fractions of individual components are collected in over several steps. It is demonstrated that proper selection of the duration of individual cycles (phase flow times) can greatly increase the separation efficiency of CCC columns. Experiments were carried out using model mixtures of compounds from the GUESSmix with solvent systems hexane/ethyl acetate/methanol/water. The experimental results are compared to the predictions of the theory. A good agreement between theory and experiment has been demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
Microcomputers: Software Evaluation. Evaluation Guides. Guide Number 17.
ERIC Educational Resources Information Center
Gray, Peter J.
This guide discusses three critical steps in selecting microcomputer software and hardware: setting the context, software evaluation, and managing microcomputer use. Specific topics addressed include: (1) conducting an informal task analysis to determine how the potential user's time is spent; (2) identifying tasks amenable to computerization and…
Byskov, M V; Nadeau, E; Johansson, B E O; Nørgaard, P
2015-06-01
Individual recording of rumination time (RT) is now possible in commercial dairy herds, through development of a microphone-based sensor, which is able to record RT by the sound of rumination activity. The objectives of this study were to examine the relationship between daily RT and intakes of different dietary fractions, the relationship between RT in minutes per kilogram of dry matter intake (DMI) and milk production, and to examine the variation in RT within and between mid-lactating dairy cows. Data from 3 production trials were used in which a total of 27 different diets were fed. The data contained 761, 290, and 203 daily recordings of RT, milk yield, milk components, DMI, and intake of dietary fractions recorded on 29, 26, and 24 Holstein and Swedish Red cows from trials 1, 2, and 3, respectively. The dietary fractions included forage neutral detergent fiber (NDF), concentrate NDF, crude protein, sugar, starch, and the remaining fraction represented by organic matter--(forage NDF+concentrate NDF+crude protein+sugar+starch). The relationship between the dietary fractions and RT was analyzed in 2 steps. In step 1, the dietary fractions, which were significantly related to RT, were selected and simultaneously checked for multicollinearity between the dietary components; in step 2, a multivariate model, including the effect of repeated measurements, the main effect of the selected dietary fractions from step 1, random effects of cow(trial) and trial, and information on breed, days in milk, and parity was used to analyze the relationship between RT and the selected dietary fractions. Relationships between RT in minutes per kilogram of DMI and milk yield and milk components were analyzed, using the same multivariate model as in step 2. Approximately 32% of the variation in daily RT could be explained by variations in intakes of the dietary fractions, whereas 48% of the total variation in RT was accounted for by individual variations between cows. Intakes of forage NDF and starch were positively related to daily RT, whereas intakes of sugar and the remaining fraction were negatively related to daily RT. Rumination time in minutes per kilogram of DMI was negatively related to milk yield and protein percentage, but positively related to milk fat percentage. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Sabbatini, S.; Fratini, G.; Arriga, N.; Papale, D.
2012-04-01
Eddy Covariance (EC) is the only technologically available direct method to measure carbon and energy fluxes between ecosystems and atmosphere. However, uncertainties related to this method have not been exhaustively assessed yet, including those deriving from post-field data processing. The latter arise because there is no exact processing sequence established for any given situation, and the sequence itself is long and complex, with many processing steps and options available. However, the consistency and inter-comparability of flux estimates may be largely affected by the adoption of different processing sequences. The goal of our work is to quantify the uncertainty introduced in each processing step by the fact that different options are available, and to study how the overall uncertainty propagates throughout the processing sequence. We propose an easy-to-use methodology to assign a confidence level to the calculated fluxes of energy and mass, based on the adopted processing sequence, and on available information such as the EC system type (e.g. open vs. closed path), the climate and the ecosystem type. The proposed methodology synthesizes the results of a massive full-factorial experiment. We use one year of raw data from 15 European flux stations and process them so as to cover all possible combinations of the available options across a selection of the most relevant processing steps. The 15 sites have been selected to be representative of different ecosystems (forests, croplands and grasslands), climates (mediterranean, nordic, arid and humid) and instrumental setup (e.g. open vs. closed path). The software used for this analysis is EddyPro™ 3.0 (www.licor.com/eddypro). The critical processing steps, selected on the basis of the different options commonly used in the FLUXNET community, are: angle of attack correction; coordinate rotation; trend removal; time lag compensation; low- and high- frequency spectral correction; correction for air density fluctuations; and length of the flux averaging interval. We illustrate the results of the full-factorial combination relative to a subset of the selected sites with particular emphasis on the total uncertainty at different time scales and aggregations, as well as a preliminary analysis of the most critical steps for their contribution to the total uncertainties and their potential relation with site set-up characteristics and ecosystem type.
Yamada, Minoru; Aoyama, Tomoki; Nakamura, Masatoshi; Tanaka, Buichi; Nagai, Koutatsu; Tatematsu, Noriatsu; Uemura, Kazuki; Nakamura, Takashi; Tsuboyama, Tadao; Ichihashi, Noriaki
2011-01-01
The purpose of this study was to examine whether the Nintendo Wii Fit program could be used for fall risk assessment in healthy, community-dwelling older adults. Forty-five community-dwelling older women participated in this study. The "Basic Step" and "Ski Slalom" modules were selected from the Wii Fit game program. The following 5 physical performance tests were performed: the 10-m walk test under single- and dual-task conditions, the Timed Up and Go test under single- and dual-task conditions, and the Functional Reach test. Compared with the faller group, the nonfaller group showed a significant difference in the Basic Step (P < .001) and a nonsignificant difference in the Ski Slalom (P = .453). The discriminating criterion between the 2 groups was a score of 111 points on the Basic Step (P < .001). The Basic Step showed statistically significant, moderate correlations between the dual-task lag of walking (r = -.547) and the dual-task lag of the Timed Up and Go test (r = -.688). These results suggest that game-based fall risk assessment using the Basic Step has a high generality and is useful in community-dwelling older adults. Copyright © 2011 Mosby, Inc. All rights reserved.
Lean six sigma methodologies improve clinical laboratory efficiency and reduce turnaround times.
Inal, Tamer C; Goruroglu Ozturk, Ozlem; Kibar, Filiz; Cetiner, Salih; Matyar, Selcuk; Daglioglu, Gulcin; Yaman, Akgun
2018-01-01
Organizing work flow is a major task of laboratory management. Recently, clinical laboratories have started to adopt methodologies such as Lean Six Sigma and some successful implementations have been reported. This study used Lean Six Sigma to simplify the laboratory work process and decrease the turnaround time by eliminating non-value-adding steps. The five-stage Six Sigma system known as define, measure, analyze, improve, and control (DMAIC) is used to identify and solve problems. The laboratory turnaround time for individual tests, total delay time in the sample reception area, and percentage of steps involving risks of medical errors and biological hazards in the overall process are measured. The pre-analytical process in the reception area was improved by eliminating 3 h and 22.5 min of non-value-adding work. Turnaround time also improved for stat samples from 68 to 59 min after applying Lean. Steps prone to medical errors and posing potential biological hazards to receptionists were reduced from 30% to 3%. Successful implementation of Lean Six Sigma significantly improved all of the selected performance metrics. This quality-improvement methodology has the potential to significantly improve clinical laboratories. © 2017 Wiley Periodicals, Inc.
Aging effect on step adjustments and stability control in visually perturbed gait initiation.
Sun, Ruopeng; Cui, Chuyi; Shea, John B
2017-10-01
Gait adaptability is essential for fall avoidance during locomotion. It requires the ability to rapidly inhibit original motor planning, select and execute alternative motor commands, while also maintaining the stability of locomotion. This study investigated the aging effect on gait adaptability and dynamic stability control during a visually perturbed gait initiation task. A novel approach was used such that the anticipatory postural adjustment (APA) during gait initiation were used to trigger the unpredictable relocation of a foot-size stepping target. Participants (10 young adults and 10 older adults) completed visually perturbed gait initiation in three adjustment timing conditions (early, intermediate, late; all extracted from the stereotypical APA pattern) and two adjustment direction conditions (medial, lateral). Stepping accuracy, foot rotation at landing, and Margin of Dynamic Stability (MDS) were analyzed and compared across test conditions and groups using a linear mixed model. Stepping accuracy decreased as a function of adjustment timing as well as stepping direction, with older subjects exhibited a significantly greater undershoot in foot placement to late lateral stepping. Late adjustment also elicited a reaching-like movement (i.e. foot rotation prior to landing in order to step on the target), regardless of stepping direction. MDS measures in the medial-lateral and anterior-posterior direction revealed both young and older adults exhibited reduced stability in the adjustment step and subsequent steps. However, young adults returned to stable gait faster than older adults. These findings could be useful for future study of screening deficits in gait adaptability and preventing falls. Copyright © 2017 Elsevier B.V. All rights reserved.
Antibodies and Selection of Monoclonal Antibodies.
Hanack, Katja; Messerschmidt, Katrin; Listek, Martin
Monoclonal antibodies are universal binding molecules with a high specificity for their target and are indispensable tools in research, diagnostics and therapy. The biotechnological generation of monoclonal antibodies was enabled by the hybridoma technology published in 1975 by Köhler and Milstein. Today monoclonal antibodies are used in a variety of applications as flow cytometry, magnetic cell sorting, immunoassays or therapeutic approaches. First step of the generation process is the immunization of the organism with appropriate antigen. After a positive immune response the spleen cells are isolated and fused with myeloma cells in order to generate stable, long-living antibody-producing cell lines - hybridoma cells. In the subsequent identification step the culture supernatants of all hybridoma cells are screened weekly for the production of the antibody of interest. Hybridoma cells producing the antibody of interest are cloned by limited dilution till a monoclonal hybridoma is found. This is a very time-consuming and laborious process and therefore different selection strategies were developed since 1975 in order to facilitate the generation of monoclonal antibodies. Apart from common automation of pipetting processes and ELISA testing there are some promising approaches to select the right monoclonal antibody very early in the process to reduce time and effort of the generation. In this chapter different selection strategies for antibody-producing hybridoma cells are presented and analysed regarding to their benefits compared to conventional limited dilution technology.
Summarizing health inequalities in a Balanced Scorecard. Methodological considerations.
Auger, Nathalie; Raynault, Marie-France
2006-01-01
The association between social determinants and health inequalities is well recognized. What are now needed are tools to assist in disseminating such information. This article describes how the Balanced Scorecard may be used for summarizing data on health inequalities. The process begins by selecting appropriate social groups and indicators, and is followed by the measurement of differences across person, place, or time. The next step is to decide whether to focus on absolute versus relative inequality. The last step is to determine the scoring method, including whether to address issues of depth of inequality.
ERIC Educational Resources Information Center
Doles, Daniel T.
In the constantly changing world of technology, migration is not only inevitable but many times necessary for survival, especially when the end result is simplicity for both users and IT support staff. This paper describes the migration at Franklin College (Indiana). It discusses the reasons for selecting Windows NT, the steps taken to complete…
Signs of the Times: Signage in the Library.
ERIC Educational Resources Information Center
Johnson, Carolyn
1993-01-01
Discusses the use of signs in libraries and lists 12 steps to create successful signage. Highlights include consistency, location, color, size, lettering, types of material, user needs, signage policy, planning, in-house fabrication versus vendors, and evaluation, A selected bibliography of 24 sources of information on library signage is included.…
Rapid gait termination: effects of age, walking surfaces and footwear characteristics.
Menant, Jasmine C; Steele, Julie R; Menz, Hylton B; Munro, Bridget J; Lord, Stephen R
2009-07-01
The aim of this study was to systematically investigate the influence of various walking surfaces and footwear characteristics on the ability to terminate gait rapidly in 10 young and 26 older people. Subjects walked at a self-selected speed in eight randomized shoe conditions (standard versus elevated heel, soft sole, hard sole, high-collar, flared sole, bevelled heel and tread sole) on three surfaces: control, irregular and wet. In response to an audible cue, subjects were required to stop as quickly as possible in three out of eight walking trials in each condition. Time to last foot contact, total stopping time, stopping distance, number of steps to stop, step length and step width post-cue and base of support length at total stop were calculated from kinematic data collected using two CODA scanner units. The older subjects took more time and a longer distance to last foot contact and were more frequently classified as using a three or more-steps stopping strategy compared to the young subjects. The wet surface impeded gait termination, as indicated by greater total stopping time and stopping distance. Subjects required more time to terminate gait in the soft sole shoes compared to the standard shoes. In contrast, the high-collar shoes reduced total stopping time on the wet surface. These findings suggest that older adults have more difficulty terminating gait rapidly than their younger counterparts and that footwear is likely to influence whole-body stability during challenging postural tasks on wet surfaces.
Seven Steps to Responsible Software Selection. ERIC Digest.
ERIC Educational Resources Information Center
Komoski, P. Kenneth; Plotnick, Eric
Microcomputers in schools contribute significantly to the learning process, and software selection is taken as seriously as the selection of text books. The seven step process for responsible software selection are: (1) analyzing needs, including the differentiation between needs and objectives; (2) specification of requirements; (3) identifying…
Learning and study strategies correlate with medical students' performance in anatomical sciences.
Khalil, Mohammed K; Williams, Shanna E; Gregory Hawkins, H
2018-05-06
Much of the content delivered during medical students' preclinical years is assessed nationally by such testing as the United States Medical Licensing Examination ® (USMLE ® ) Step 1 and Comprehensive Osteopathic Medical Licensing Examination ® (COMPLEX-USA ® ) Step 1. Improvement of student study/learning strategies skills is associated with academic success in internal and external (USMLE Step 1) examinations. This research explores the strength of association between the Learning and Study Strategies Inventory (LASSI) scores and student performance in the anatomical sciences and USMLE Step 1 examinations. The LASSI inventory assesses learning and study strategies based on ten subscale measures. These subscales include three components of strategic learning: skill (Information processing, Selecting main ideas, and Test strategies), will (Anxiety, Attitude, and Motivation) and self-regulation (Concentration, Time management, Self-testing, and Study aid). During second year (M2) orientation, 180 students (Classes of 2016, 2017, and 2018) were administered the LASSI survey instrument. Pearson Product-Moment correlation analyses identified significant associations between five of the ten LASSI subscales (Anxiety, Information processing, Motivation, Selecting main idea, and Test strategies) and students' performance in the anatomical sciences and USMLE Step 1 examinations. Identification of students lacking these skills within the anatomical sciences curriculum allows targeted interventions, which not only maximize academic achievement in an aspect of an institution's internal examinations, but in the external measure of success represented by USMLE Step 1 scores. Anat Sci Educ 11: 236-242. © 2017 American Association of Anatomists. © 2017 American Association of Anatomists.
Da Rocha, Emmanuel S; Kunzler, Marcos R; Bobbert, Maarten F; Duysens, Jacques; Carpes, Felipe P
2018-06-01
Walking is one of the preferred exercises among elderly, but could a prolonged walking increase gait variability, a risk factor for a fall in the elderly? Here we determine whether 30 min of treadmill walking increases coefficient of variation of gait in elderly. Because gait responses to exercise depend on fitness level, we included 15 sedentary and 15 active elderly. Sedentary participants preferred a lower gait speed and made smaller steps than the actives. Step length coefficient of variation decreased ~16.9% by the end of the exercise in both the groups. Stride length coefficient of variation decreased ~9% after 10 minutes of walking, and sedentary elderly showed a slightly larger step width coefficient of variation (~2%) at 10 min than active elderly. Active elderly showed higher walk ratio (step length/cadence) than sedentary in all times of walking, but the times did not differ in both the groups. In conclusion, treadmill gait kinematics differ between sedentary and active elderly, but changes over time are similar in sedentary and active elderly. As a practical implication, 30 min of walking might be a good strategy of exercise for elderly, independently of the fitness level, because it did not increase variability in step and stride kinematics, which is considered a risk of fall in this population.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghobadi, Kimia; Ghaffari, Hamid R.; Aleman, Dionne M.
2013-09-15
Purpose: The purpose of this work is to advance the two-step approach for Gamma Knife{sup ®} Perfexion™ (PFX) optimization to account for dose homogeneity and overlap between the planning target volume (PTV) and organs-at-risk (OARs).Methods: In the first step, a geometry-based algorithm is used to quickly select isocentre locations while explicitly accounting for PTV-OARs overlaps. In this approach, the PTV is divided into subvolumes based on the PTV-OARs overlaps and the distance of voxels to the overlaps. Only a few isocentres are selected in the overlap volume, and a higher number of isocentres are carefully selected among voxels that aremore » immediately close to the overlap volume. In the second step, a convex optimization is solved to find the optimal combination of collimator sizes and their radiation duration for each isocentre location.Results: This two-step approach is tested on seven clinical cases (comprising 11 targets) for which the authors assess coverage, OARs dose, and homogeneity index and relate these parameters to the overlap fraction for each case. In terms of coverage, the mean V{sub 99} for the gross target volume (GTV) was 99.8% while the V{sub 95} for the PTV averaged at 94.6%, thus satisfying the clinical objectives of 99% for GTV and 95% for PTV, respectively. The mean relative dose to the brainstem was 87.7% of the prescription dose (with maximum 108%), while on average, 11.3% of the PTV overlapped with the brainstem. The mean beam-on time per fraction per dose was 8.6 min with calibration dose rate of 3.5 Gy/min, and the computational time averaged at 205 min. Compared with previous work involving single-fraction radiosurgery, the resulting plans were more homogeneous with average homogeneity index of 1.18 compared to 1.47.Conclusions: PFX treatment plans with homogeneous dose distribution can be achieved by inverse planning using geometric isocentre selection and mathematical modeling and optimization techniques. The quality of the obtained treatment plans are clinically satisfactory while the homogeneity index is improved compared to conventional PFX plans.« less
Blanco, Elias; Foster, Christopher W; Cumba, Loanda R; do Carmo, Devaney R; Banks, Craig E
2016-04-25
In this paper the effect of solvent induced chemical surface enhancements upon graphitic screen-printed electrodes (SPEs) is explored. Previous literature has indicated that treating the working electrode of a SPE with the solvent N,N-dimethylformamide (DMF) offers improvements within the electroanalytical response, resulting in a 57-fold increment in the electrode surface area compared to their unmodified counterparts. The protocol involves two steps: (i) the SPE is placed into DMF for a selected time, and (ii) it is cured in an oven at a selected time and temperature. Beneficial electroanalytical outputs are reported to be due to the increased surface area attributed to the binder within the bulk surface of the SPEs dissolving out during the immersion step (step i). We revisit this exciting concept and explore these solvent induced chemical surface enhancements using edge- and basal-plane like SPEs and a new bespoke SPE, utilising the solvent DMF and explore, in detail, the parameters utilised in steps (i) and (ii). The electrochemical performance following steps (i) and (ii) is evaluated using the outer-sphere redox probe hexaammineruthenium(iii) chloride/0.1 M KCl, where it is found that the largest improvement is obtained using DMF with an immersion time of 10 minutes and a curing time of 30 minutes at 100 °C. Solvent induced chemical surface enhancement upon the electrochemical performance of SPEs is also benchmarked in terms of their electroanalytical sensing of NADH (dihydronicotinamide adenine dinucleotide reduced form) and capsaicin both of which are compared to their unmodified SPE counterparts. In both cases, it is apparent that a marginal improvement in the electroanalytical sensitivity (i.e. gradient of calibration plots) of 1.08-fold and 1.38-fold are found respectively. Returning to the original exciting concept, interestingly it was found that when a poor experimental technique was employed, only then significant increases within the working electrode area are evident. In this case, the insulating layer that defines the working electrode surface, which was not protected from the solvent (step (i)) creates cracks within the insulating layer exposing the underlying carbon connections and thus increasing the electrode area by an unknown quantity. We infer that the origin of the response reported within the literature, where an extreme increase in the electrochemical surface area (57-fold) was reported, is unlikely to be solely due to the binder dissolving but rather poor experimental control over step (i).
Gregorini, P; Waghorn, G C; Kuhn-Sherlock, B; Romera, A J; Macdonald, K A
2015-09-01
The aim of this study was to investigate and assess differences in the grazing pattern of 2 groups of mature dairy cows selected as calves for divergent residual feed intake (RFI). Sixteen Holstein-Friesian cows (471±31kg of body weight, 100 d in milk), comprising 8 cows selected as calves (6-8 mo old) for low (most efficient: CSCLowRFI) and 8 cows selected as calves for high (least efficient: CSCHighRFI) RFI, were used for the purpose of this study. Cows (n=16) were managed as a single group, and strip-grazed (24-h pasture allocation at 0800h) a perennial ryegrass sward for 31 d, with measurements taken during the last 21 d. All cows were equipped with motion sensors for the duration of the study, and jaw movements were measured for three 24-h periods during 3 random nonconsecutive days. Measurements included number of steps and jaw movements during grazing and rumination, plus fecal particle size distribution. Jaw movements were analyzed to identify bites, mastication (oral processing of ingesta) during grazing bouts, chewing during rumination, and to calculate grazing and rumination times for 24-h periods. Grazing and walking behavior were also analyzed in relation to the first meal of the day after the new pasture was allocated. Measured variables were subjected to multivariate analysis. Cows selected for low RFI as calves appeared to (a) prioritize grazing and rumination over idling; (b) take fewer steps, but with a higher proportion of grazing steps at the expense of nongrazing steps; and (c) increase the duration of the first meal and commenced their second meal earlier than CSCHighRFI. The CSCLowRFI had fewer jaw movements during eating (39,820 vs. 45,118 for CSCLowRFI and CSCHighRFI, respectively), more intense rumination (i.e., 5 more chews per bolus), and their feces had 30% less large particles than CSCHighRFI. These results suggest that CSCLowRFI concentrate their grazing activity to the time when fresh pasture is allocated, and graze more efficiently by walking and masticating less, hence they are more efficient grazers than CSCHighRFI. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Kang, Ju Hui; Jung, Hyun Jeong; Jung, Mun Yhung
2016-08-31
We developed a new fast and selective analytical method for the determination of inorganic arsenic (iAs) in rice by a gas chromatography - tandem mass spectrometry (GC-MS/MS) in combination with one step derivatization of inorganic arsenic (iAs) with British Anti-Lewsite (BAL). Two step derivatization of iAs with BAL has been previously performed for the GC-MS analysis. In this paper, the quantitative one step derivatization condition was successfully established. The GC-MS/MS was carried out with a short nonpolar capillary column (0.25 mm × 10 m) under the conditions of fast oven temperature ramp rate (4 °C/s) and high linear velocity (108.8 cm/s) of the carrier gas. The established GC-MS/MS method showed an excellent linearity (r(2) > 0.999) in a tested range (0.2-100.0 μg L(-1)), ultra-low limit of detection (LOD, 0.08 pg), and high precision and accuracy. The GC-MS/MS technique showed far greater selectivity (22.5 fold higher signal to noise ratio in rice sample) on iAs than GC-MS method. The gas chromatographic running time was only 2.5 min with the iAs retention time of 1.98 min. The established method was successfully applied to quantify the iAs contents in polished rice. The mean iAs content in the Korean polished rice (n = 27) was 66.1 μg kg(-1) with the range of 37.5-125.0 μg kg(-1). This represents the first report on the GC-tandem mass spectrometry in combination with the one step derivatization with BAL for the iAs speciation in rice. This GC-MS/MS method would be a simple, useful and reliable measure for the iAs analysis in rice in the laboratories in which the expensive and element specific HPLC-ICP-MS is not available. Copyright © 2016 Elsevier B.V. All rights reserved.
Hossner, Ernst-Joachim; Ehrlenspiel, Felix
2010-01-01
The paralysis-by-analysis phenomenon, i.e., attending to the execution of one's movement impairs performance, has gathered a lot of attention over recent years (see Wulf, 2007, for a review). Explanations of this phenomenon, e.g., the hypotheses of constrained action (Wulf et al., 2001) or of step-by-step execution (Masters, 1992; Beilock et al., 2002), however, do not refer to the level of underlying mechanisms on the level of sensorimotor control. For this purpose, a “nodal-point hypothesis” is presented here with the core assumption that skilled motor behavior is internally based on sensorimotor chains of nodal points, that attending to intermediate nodal points leads to a muscular re-freezing of the motor system at exactly and exclusively these points in time, and that this re-freezing is accompanied by the disruption of compensatory processes, resulting in an overall decrease of motor performance. Two experiments, on lever sequencing and basketball free throws, respectively, are reported that successfully tested these time-referenced predictions, i.e., showing that muscular activity is selectively increased and compensatory variability selectively decreased at movement-related nodal points if these points are in the focus of attention. PMID:21833285
Mushroom-free selective epitaxial growth of Si, SiGe and SiGe:B raised sources and drains
NASA Astrophysics Data System (ADS)
Hartmann, J. M.; Benevent, V.; Barnes, J. P.; Veillerot, M.; Lafond, D.; Damlencourt, J. F.; Morvan, S.; Prévitali, B.; Andrieu, F.; Loubet, N.; Dutartre, D.
2013-05-01
We have evaluated various Cyclic Selective Epitaxial Growth/Etch (CSEGE) processes in order to grow "mushroom-free" Si and SiGe:B Raised Sources and Drains (RSDs) on each side of ultra-short gate length Extra-Thin Silicon-On-Insulator (ET-SOI) transistors. The 750 °C, 20 Torr Si CSEGE process we have developed (5 chlorinated growth steps with four HCl etch steps in-between) yielded excellent crystalline quality, typically 18 nm thick Si RSDs. Growth was conformal along the Si3N4 sidewall spacers, without any poly-Si mushrooms on top of unprotected gates. We have then evaluated on blanket 300 mm Si(001) wafers the feasibility of a 650 °C, 20 Torr SiGe:B CSEGE process (5 chlorinated growth steps with four HCl etch steps in-between, as for Si). As expected, the deposited thickness decreased as the total HCl etch time increased. This came hands in hands with unforeseen (i) decrease of the mean Ge concentration (from 30% down to 26%) and (ii) increase of the substitutional B concentration (from 2 × 1020 cm-3 up to 3 × 1020 cm-3). They were due to fluctuations of the Ge concentration and of the atomic B concentration [B] in such layers (drop of the Ge% and increase of [B] at etch step locations). Such blanket layers were a bit rougher than layers grown using a single epitaxy step, but nevertheless of excellent crystalline quality. Transposition of our CSEGE process on patterned ET-SOI wafers did not yield the expected results. HCl etch steps indeed helped in partly or totally removing the poly-SiGe:B mushrooms on top of the gates. This was however at the expense of the crystalline quality and 2D nature of the ˜45 nm thick Si0.7Ge0.3:B recessed sources and drains selectively grown on each side of the imperfectly protected poly-Si gates. The only solution we have so far identified that yields a lesser amount of mushrooms while preserving the quality of the S/D is to increase the HCl flow during growth steps.
Scattering Matrix Elements for the Nonadiabatic Collision
2010-12-01
orthogonality relationship expressed in (77). This technique, known as the Channel Packet Method (CPM), is laid out by Weeks and Tannor [2...time and energy are Fourier transform pairs, and share the same relationship as the coordinate/momentum pairs: max min 2E t t π ∆ = − (99) As...elements, will exibit ringing. Selection of an inappropriatly large time step introduces an erroneous phase shift in the correlation funtion . This
Yankson, Kweku K.; Steck, Todd R.
2009-01-01
We present a simple strategy for isolating and accurately enumerating target DNA from high-clay-content soils: desorption with buffers, an optional magnetic capture hybridization step, and quantitation via real-time PCR. With the developed technique, μg quantities of DNA were extracted from mg samples of pure kaolinite and a field clay soil. PMID:19633108
Processes for producing low cost, high efficiency silicon solar cells
Rohatgi, A.; Doshi, P.; Tate, J.K.; Mejia, J.; Chen, Z.
1998-06-16
Processes which utilize rapid thermal processing (RTP) are provided for inexpensively producing high efficiency silicon solar cells. The RTP processes preserve minority carrier bulk lifetime {tau} and permit selective adjustment of the depth of the diffused regions, including emitter and back surface field (bsf), within the silicon substrate. In a first RTP process, an RTP step is utilized to simultaneously diffuse phosphorus and aluminum into the front and back surfaces, respectively, of a silicon substrate. Moreover, an in situ controlled cooling procedure preserves the carrier bulk lifetime {tau} and permits selective adjustment of the depth of the diffused regions. In a second RTP process, both simultaneous diffusion of the phosphorus and aluminum as well as annealing of the front and back contacts are accomplished during the RTP step. In a third RTP process, the RTP step accomplishes simultaneous diffusion of the phosphorus and aluminum, annealing of the contacts, and annealing of a double-layer antireflection/passivation coating SiN/SiO{sub x}. In a fourth RTP process, the process of applying front and back contacts is broken up into two separate respective steps, which enhances the efficiency of the cells, at a slight time expense. In a fifth RTP process, a second RTP step is utilized to fire and adhere the screen printed or evaporated contacts to the structure. 28 figs.
Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.
Durdu, Omer Faruk
2010-10-01
In the present study, a seasonal and non-seasonal prediction of boron concentrations time series data for the period of 1996-2004 from Büyük Menderes river in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) to predict boron content in the Büyük Menderes catchment. Initially, the Box-Whisker plots and Kendall's tau test are used to identify the trends during the study period. The measurements locations do not show significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. ARIMA modeling approach involves the following three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) results of boron data series, different ARIMA models are identified. The model gives the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models and the results indicate that the residuals are independent, normally distributed, and homoscadastic. For the model validation purposes, the predicted results using the best ARIMA models are compared to the observed data. The predicted data show reasonably good agreement with the actual data. The comparison of the mean and variance of 3-year (2002-2004) observed data vs predicted data from the selected best models show that the boron model from ARIMA modeling approaches could be used in a safe manner since the predicted values from these models preserve the basic statistics of observed data in terms of mean. The ARIMA modeling approach is recommended for predicting boron concentration series of a river.
Zdziarski, Laura Ann; Chen, Cong; Horodyski, Marybeth; Vincent, Kevin R.; Vincent, Heather K.
2017-01-01
Objective To determine the differences in kinematic, cardiopulmonary, and metabolic responses between overweight and healthy weight runners at a self-selected and standard running speed. Design Comparative descriptive study. Setting Tertiary care institution, university-affiliated research laboratory. Participants Overweight runners (n = 21) were matched with runners of healthy weight (n = 42). Methods Participants ran at self-selected and standardized speeds (13.6 km/h). Sagittal plane joint kinematics were captured simultaneously with cardiopulmonary and metabolic measures using a motion capture system and portable gas analyzer, respectively. Main Outcome Measurements Spatiotemporal parameters (cadence, step width and length, center of gravity displacement, stance time) joint kinematics, oxygen cost, heart rate, ventilation and energy expenditure. Results At the self-selected speed, overweight individuals ran slower (8.5 ± 1.3 versus 10.0 ± 1.6 km/h) and had slower cadence (163 versus 169 steps/min; P < .05). The sagittal plane range of motion (ROM) for flexion-extension at the ankle, knee, hip, and anterior pelvic tilt were all less in overweight runners compared to healthy weight runners (all P < .05). At self-selected speed and 13.6 km/h, energy expenditure was higher in the overweight runners compared to their healthy weight counterparts (P < .05). At 13.6 km/h, only the frontal hip and pelvis ROM were higher in the overweight versus the healthy weight runners (P < .05), and energy expenditure, net energy cost, and minute ventilation were higher in the overweight runners compared to the healthy weight runners (P < .05). Conclusion At self-selected running speeds, the overweight runners demonstrated gait strategies (less joint ROM, less vertical displacement, and shorter step lengths) that resulted in cardiopulmonary and energetic responses similar to those of healthy weight individuals. PMID:26146194
Pan, Yupeng; Pan, Cheng-Ling; Zhang, Yufan; Li, Huaifeng; Min, Shixiong; Guo, Xunmun; Zheng, Bin; Chen, Hailong; Anders, Addison; Lai, Zhiping; Zheng, Junrong; Huang, Kuo-Wei
2016-05-06
An unsymmetrically protonated PN(3) -pincer complex in which ruthenium is coordinated by one nitrogen and two phosphorus atoms was employed for the selective generation of hydrogen from formic acid. Mechanistic studies suggest that the imine arm participates in the formic acid activation/deprotonation step. A long life time of 150 h with a turnover number over 1 million was achieved. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Analyte separation utilizing temperature programmed desorption of a preconcentrator mesh
Linker, Kevin L.; Bouchier, Frank A.; Theisen, Lisa; Arakaki, Lester H.
2007-11-27
A method and system for controllably releasing contaminants from a contaminated porous metallic mesh by thermally desorbing and releasing a selected subset of contaminants from a contaminated mesh by rapidly raising the mesh to a pre-determined temperature step or plateau that has been chosen beforehand to preferentially desorb a particular chemical specie of interest, but not others. By providing a sufficiently long delay or dwell period in-between heating pulses, and by selecting the optimum plateau temperatures, then different contaminant species can be controllably released in well-defined batches at different times to a chemical detector in gaseous communication with the mesh. For some detectors, such as an Ion Mobility Spectrometer (IMS), separating different species in time before they enter the IMS allows the detector to have an enhanced selectivity.
Thomson, James R; Kimmerer, Wim J; Brown, Larry R; Newman, Ken B; Mac Nally, Ralph; Bennett, William A; Feyrer, Frederick; Fleishman, Erica
2010-07-01
We examined trends in abundance of four pelagic fish species (delta smelt, longfin smelt, striped bass, and threadfin shad) in the upper San Francisco Estuary, California, USA, over 40 years using Bayesian change point models. Change point models identify times of abrupt or unusual changes in absolute abundance (step changes) or in rates of change in abundance (trend changes). We coupled Bayesian model selection with linear regression splines to identify biotic or abiotic covariates with the strongest associations with abundances of each species. We then refitted change point models conditional on the selected covariates to explore whether those covariates could explain statistical trends or change points in species abundances. We also fitted a multispecies change point model that identified change points common to all species. All models included hierarchical structures to model data uncertainties, including observation errors and missing covariate values. There were step declines in abundances of all four species in the early 2000s, with a likely common decline in 2002. Abiotic variables, including water clarity, position of the 2 per thousand isohaline (X2), and the volume of freshwater exported from the estuary, explained some variation in species' abundances over the time series, but no selected covariates could explain statistically the post-2000 change points for any species.
Means of determining extrusion temperatures
McDonald, Robert E.; Canonico, Domenic A.
1977-01-01
In an extrusion process comprising the steps of fabricating a metal billet, heating said billet for a predetermined time and at a selected temperature to increase its plasticity and then forcing said heated billet through a small orifice to produce a desired extruded object, the improvement comprising the steps of randomly inserting a plurality of small metallic thermal tabs at different cross sectional depths in said billet as a part of said fabricating step, and examining said extruded object at each thermal tab location for determining the crystal structure at each extruded thermal tab thus revealing the maximum temperature reached during extrusion in each respective tab location section of the extruded object, whereby the thermal profile of said extruded object during extrusion may be determined.
Using Dissimilarity Metrics to Identify Interesting Designs
NASA Technical Reports Server (NTRS)
Feather, Martin; Kiper, James
2006-01-01
A computer program helps to blend the power of automated-search software, which is able to generate large numbers of design solutions, with the insight of expert designers, who are able to identify preferred designs but do not have time to examine all the solutions. From among the many automated solutions to a given design problem, the program selects a smaller number of solutions that are worthy of scrutiny by the experts in the sense that they are sufficiently dissimilar from each other. The program makes the selection in an interactive process that involves a sequence of data-mining steps interspersed with visual displays of results of these steps to the experts. At crucial points between steps, the experts provide directives to guide the process. The program uses heuristic search techniques to identify nearly optimal design solutions and uses dissimilarity metrics defined by the experts to characterize the degree to which solutions are interestingly different. The search, data-mining, and visualization features of the program were derived from previously developed risk-management software used to support a risk-centric design methodology
Wang, Li; Yi, Yanhui; Wu, Chunfei; Guo, Hongchen
2017-01-01
Abstract The conversion of CO2 with CH4 into liquid fuels and chemicals in a single‐step catalytic process that bypasses the production of syngas remains a challenge. In this study, liquid fuels and chemicals (e.g., acetic acid, methanol, ethanol, and formaldehyde) were synthesized in a one‐step process from CO2 and CH4 at room temperature (30 °C) and atmospheric pressure for the first time by using a novel plasma reactor with a water electrode. The total selectivity to oxygenates was approximately 50–60 %, with acetic acid being the major component at 40.2 % selectivity, the highest value reported for acetic acid thus far. Interestingly, the direct plasma synthesis of acetic acid from CH4 and CO2 is an ideal reaction with 100 % atom economy, but it is almost impossible by thermal catalysis owing to the significant thermodynamic barrier. The combination of plasma and catalyst in this process shows great potential for manipulating the distribution of liquid chemical products in a given process. PMID:28842938
Optimal Signal Processing of Frequency-Stepped CW Radar Data
NASA Technical Reports Server (NTRS)
Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.
1995-01-01
An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between the first two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-X510 network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussion noise are addressed.
Optimal Signal Processing of Frequency-Stepped CW Radar Data
NASA Technical Reports Server (NTRS)
Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.
1995-01-01
An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between the two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-851O network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussion noise are addressed.
A Selection Method That Succeeds!
ERIC Educational Resources Information Center
Weitman, Catheryn J.
Provided a structural selection method is carried out, it is possible to find quality early childhood personnel. The hiring process involves five definite steps, each of which establishes a base for the next. A needs assessment formulating basic minimal qualifications is the first step. The second step involves review of current job descriptions…
NASA Astrophysics Data System (ADS)
Zhang, F.; Barriot, J. P.; Maamaatuaiahutapu, K.; Sichoix, L.; Xu, G., Sr.
2017-12-01
In order to better understand and predict the complex meteorological context of French Polynesia, we focus on the time evolution of Integrated Precipitable Water (PW) using Radiosoundings (RS) data from 1974 to 2017. In a first step, we make a comparison over selected months between the PW estimate reconstructed from raw two seconds acquisition and the PW estimate reconstructed from the highly compressed and undersampled Integrated Global Radiosonde Archive (IGRA). In a second step, we make a comparison with other techniques of PW acquisition (radio delays, temperature of sky, infrared bands absorption) in order to assess the intrinsic biases of RS acquisition. In a last step, we analyze the PW time series in our area validated at the light of the first and second step, w.r.t seasonality (dry season and wet season) and spatial location. During the wet season (November to April), the PW values are higher than the corresponding values observed during the dry season (May to October). The PW values are smaller with higher latitudes, but there are higher PW values in Tahiti than in other islands because of the presence of the South Pacific Convergence Zone (SPCZ) around Tahiti. All the PW time series show the same uptrend in French Polynesia in recent years. This study provides further evidence that the PW time series derived from RS can be assimilated in weather forecasting and climate warming models.
Velasco, Valeria; Sherwood, Julie S.; Rojas-García, Pedro P.; Logue, Catherine M.
2014-01-01
The aim of this study was to compare a real-time PCR assay, with a conventional culture/PCR method, to detect S. aureus, mecA and Panton-Valentine Leukocidin (PVL) genes in animals and retail meat, using a two-step selective enrichment protocol. A total of 234 samples were examined (77 animal nasal swabs, 112 retail raw meat, and 45 deli meat). The multiplex real-time PCR targeted the genes: nuc (identification of S. aureus), mecA (associated with methicillin resistance) and PVL (virulence factor), and the primary and secondary enrichment samples were assessed. The conventional culture/PCR method included the two-step selective enrichment, selective plating, biochemical testing, and multiplex PCR for confirmation. The conventional culture/PCR method recovered 95/234 positive S. aureus samples. Application of real-time PCR on samples following primary and secondary enrichment detected S. aureus in 111/234 and 120/234 samples respectively. For detection of S. aureus, the kappa statistic was 0.68–0.88 (from substantial to almost perfect agreement) and 0.29–0.77 (from fair to substantial agreement) for primary and secondary enrichments, using real-time PCR. For detection of mecA gene, the kappa statistic was 0–0.49 (from no agreement beyond that expected by chance to moderate agreement) for primary and secondary enrichment samples. Two pork samples were mecA gene positive by all methods. The real-time PCR assay detected the mecA gene in samples that were negative for S. aureus, but positive for Staphylococcus spp. The PVL gene was not detected in any sample by the conventional culture/PCR method or the real-time PCR assay. Among S. aureus isolated by conventional culture/PCR method, the sequence type ST398, and multi-drug resistant strains were found in animals and raw meat samples. The real-time PCR assay may be recommended as a rapid method for detection of S. aureus and the mecA gene, with further confirmation of methicillin-resistant S. aureus (MRSA) using the standard culture method. PMID:24849624
Velasco, Valeria; Sherwood, Julie S; Rojas-García, Pedro P; Logue, Catherine M
2014-01-01
The aim of this study was to compare a real-time PCR assay, with a conventional culture/PCR method, to detect S. aureus, mecA and Panton-Valentine Leukocidin (PVL) genes in animals and retail meat, using a two-step selective enrichment protocol. A total of 234 samples were examined (77 animal nasal swabs, 112 retail raw meat, and 45 deli meat). The multiplex real-time PCR targeted the genes: nuc (identification of S. aureus), mecA (associated with methicillin resistance) and PVL (virulence factor), and the primary and secondary enrichment samples were assessed. The conventional culture/PCR method included the two-step selective enrichment, selective plating, biochemical testing, and multiplex PCR for confirmation. The conventional culture/PCR method recovered 95/234 positive S. aureus samples. Application of real-time PCR on samples following primary and secondary enrichment detected S. aureus in 111/234 and 120/234 samples respectively. For detection of S. aureus, the kappa statistic was 0.68-0.88 (from substantial to almost perfect agreement) and 0.29-0.77 (from fair to substantial agreement) for primary and secondary enrichments, using real-time PCR. For detection of mecA gene, the kappa statistic was 0-0.49 (from no agreement beyond that expected by chance to moderate agreement) for primary and secondary enrichment samples. Two pork samples were mecA gene positive by all methods. The real-time PCR assay detected the mecA gene in samples that were negative for S. aureus, but positive for Staphylococcus spp. The PVL gene was not detected in any sample by the conventional culture/PCR method or the real-time PCR assay. Among S. aureus isolated by conventional culture/PCR method, the sequence type ST398, and multi-drug resistant strains were found in animals and raw meat samples. The real-time PCR assay may be recommended as a rapid method for detection of S. aureus and the mecA gene, with further confirmation of methicillin-resistant S. aureus (MRSA) using the standard culture method.
Sentiment analysis of feature ranking methods for classification accuracy
NASA Astrophysics Data System (ADS)
Joseph, Shashank; Mugauri, Calvin; Sumathy, S.
2017-11-01
Text pre-processing and feature selection are important and critical steps in text mining. Text pre-processing of large volumes of datasets is a difficult task as unstructured raw data is converted into structured format. Traditional methods of processing and weighing took much time and were less accurate. To overcome this challenge, feature ranking techniques have been devised. A feature set from text preprocessing is fed as input for feature selection. Feature selection helps improve text classification accuracy. Of the three feature selection categories available, the filter category will be the focus. Five feature ranking methods namely: document frequency, standard deviation information gain, CHI-SQUARE, and weighted-log likelihood -ratio is analyzed.
Change classification in SAR time series: a functional approach
NASA Astrophysics Data System (ADS)
Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan
2017-10-01
Change detection represents a broad field of research in SAR remote sensing, consisting of many different approaches. Besides the simple recognition of change areas, the analysis of type, category or class of the change areas is at least as important for creating a comprehensive result. Conventional strategies for change classification are based on supervised or unsupervised landuse / landcover classifications. The main drawback of such approaches is that the quality of the classification result directly depends on the selection of training and reference data. Additionally, supervised processing methods require an experienced operator who capably selects the training samples. This training step is not necessary when using unsupervised strategies, but nevertheless meaningful reference data must be available for identifying the resulting classes. Consequently, an experienced operator is indispensable. In this study, an innovative concept for the classification of changes in SAR time series data is proposed. Regarding the drawbacks of traditional strategies given above, it copes without using any training data. Moreover, the method can be applied by an operator, who does not have detailed knowledge about the available scenery yet. This knowledge is provided by the algorithm. The final step of the procedure, which main aspect is given by the iterative optimization of an initial class scheme with respect to the categorized change objects, is represented by the classification of these objects to the finally resulting classes. This assignment step is subject of this paper.
A General Method for Solving Systems of Non-Linear Equations
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)
1995-01-01
The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root finding method not infrequently diverges if the starting point is far from the root. However, the current method in these regions merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root finding method since they both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for coefficients of linear equations. The current method which does not require the solution of linear equations requires more time for additional function and gradient evaluations. The classic trade off of time for space separates the two methods.
NASA Astrophysics Data System (ADS)
Syamsuri, B. S.; Anwar, S.; Sumarna, O.
2017-09-01
This research aims to develop oxidation-reduction reactions (redox) teaching material used the Four Steps Teaching Material Development (4S TMD) method consists of four steps: selection, structuring, characterization and didactical reduction. This paper is the first part of the development of teaching material that includes selection and structuring steps. At the selection step, the development of teaching material begins with the development concept of redox based on curriculum demands, then the development of fundamental concepts sourced from the international textbook, and last is the development of values or skills can be integrated with redox concepts. The results of this selection step are the subject matter of the redox concept and values can be integrated with it. In the structuring step was developed concept map that provide on the relationship between redox concepts; Macro structure that guide systematic on the writing of teaching material; And multiple representations which are the development of teaching material that connection between macroscopic, submicroscopic, and symbolic level representations. The result of the two steps in this first part of the study produced a draft of teaching material. Evaluation of the draft of teaching material is done by an expert lecturer in the field of chemical education to assess the feasibility of teaching material.
2017-01-01
Area-selective atomic layer deposition (ALD) is rapidly gaining interest because of its potential application in self-aligned fabrication schemes for next-generation nanoelectronics. Here, we introduce an approach for area-selective ALD that relies on the use of chemoselective inhibitor molecules in a three-step (ABC-type) ALD cycle. A process for area-selective ALD of SiO2 was developed comprising acetylacetone inhibitor (step A), bis(diethylamino)silane precursor (step B), and O2 plasma reactant (step C) pulses. Our results show that this process allows for selective deposition of SiO2 on GeO2, SiNx, SiO2, and WO3, in the presence of Al2O3, TiO2, and HfO2 surfaces. In situ Fourier transform infrared spectroscopy experiments and density functional theory calculations underline that the selectivity of the approach stems from the chemoselective adsorption of the inhibitor. The selectivity between different oxide starting surfaces and the compatibility with plasma-assisted or ozone-based ALD are distinct features of this approach. Furthermore, the approach offers the opportunity of tuning the substrate-selectivity by proper selection of inhibitor molecules. PMID:28850774
New insights into time series analysis. II - Non-correlated observations
NASA Astrophysics Data System (ADS)
Ferreira Lopes, C. E.; Cross, N. J. G.
2017-08-01
Context. Statistical parameters are used to draw conclusions in a vast number of fields such as finance, weather, industrial, and science. These parameters are also used to identify variability patterns on photometric data to select non-stochastic variations that are indicative of astrophysical effects. New, more efficient, selection methods are mandatory to analyze the huge amount of astronomical data. Aims: We seek to improve the current methods used to select non-stochastic variations on non-correlated data. Methods: We used standard and new data-mining parameters to analyze non-correlated data to find the best way to discriminate between stochastic and non-stochastic variations. A new approach that includes a modified Strateva function was performed to select non-stochastic variations. Monte Carlo simulations and public time-domain data were used to estimate its accuracy and performance. Results: We introduce 16 modified statistical parameters covering different features of statistical distribution such as average, dispersion, and shape parameters. Many dispersion and shape parameters are unbound parameters, I.e. equations that do not require the calculation of average. Unbound parameters are computed with single loop and hence decreasing running time. Moreover, the majority of these parameters have lower errors than previous parameters, which is mainly observed for distributions with few measurements. A set of non-correlated variability indices, sample size corrections, and a new noise model along with tests of different apertures and cut-offs on the data (BAS approach) are introduced. The number of mis-selections are reduced by about 520% using a single waveband and 1200% combining all wavebands. On the other hand, the even-mean also improves the correlated indices introduced in Paper I. The mis-selection rate is reduced by about 18% if the even-mean is used instead of the mean to compute the correlated indices in the WFCAM database. Even-statistics allows us to improve the effectiveness of both correlated and non-correlated indices. Conclusions: The selection of non-stochastic variations is improved by non-correlated indices. The even-averages provide a better estimation of mean and median for almost all statistical distributions analyzed. The correlated variability indices, which are proposed in the first paper of this series, are also improved if the even-mean is used. The even-parameters will also be useful for classifying light curves in the last step of this project. We consider that the first step of this project, where we set new techniques and methods that provide a huge improvement on the efficiency of selection of variable stars, is now complete. Many of these techniques may be useful for a large number of fields. Next, we will commence a new step of this project regarding the analysis of period search methods.
Demura, Tomohiro; Demura, Shin-ichi; Uchiyama, Masanobu; Sugiura, Hiroki
2014-01-01
Gait properties change with age because of a decrease in lower limb strength and visual acuity or knee joint disorders. Gait changes commonly result from these combined factors. This study aimed to examine the effects of knee extension strength, visual acuity, and knee joint pain on gait properties of for 181 healthy female older adults (age: 76.1 (5.7) years). Walking speed, cadence, stance time, swing time, double support time, step length, step width, walking angle, and toe angle were selected as gait parameters. Knee extension strength was measured by isometric dynamometry; and decreased visual acuity and knee joint pain were evaluated by subjective judgment whether or not such factors created a hindrance during walking. Among older adults without vision problems and knee joint pain that affected walking, those with superior knee extension strength had significantly greater walking speed and step length than those with inferior knee extension strength (P < .05). Persons with visual acuity problems had higher cadence and shorter stance time. In addition, persons with pain in both knees showed slower walking speed and longer stance time and double support time. A decrease of knee extension strength and visual acuity and knee joint pain are factors affecting gait in the female older adults. Decreased knee extension strength and knee joint pain mainly affect respective distance and time parameters of the gait.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isotalo, Aarno
A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as a part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletionmore » algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation without a need for a reference solution.« less
Rashed-Ul Islam, S M; Jahan, Munira; Tabassum, Shahina
2015-01-01
Virological monitoring is the best predictor for the management of chronic hepatitis B virus (HBV) infections. Consequently, it is important to use the most efficient, rapid and cost-effective testing systems for HBV DNA quantification. The present study compared the performance characteristics of a one-step HBV polymerase chain reaction (PCR) vs the two-step HBV PCR method for quantification of HBV DNA from clinical samples. A total of 100 samples consisting of 85 randomly selected samples from patients with chronic hepatitis B (CHB) and 15 samples from apparently healthy individuals were enrolled in this study. Of the 85 CHB clinical samples tested, HBV DNA was detected from 81% samples by one-step PCR method with median HBV DNA viral load (VL) of 7.50 × 10 3 lU/ml. In contrast, 72% samples were detected by the two-step PCR system with median HBV DNA of 3.71 × 10 3 lU/ml. The one-step method showed strong linear correlation with two-step PCR method (r = 0.89; p < 0.0001). Both methods showed good agreement at Bland-Altman plot, with a mean difference of 0.61 log 10 IU/ml and limits of agreement of -1.82 to 3.03 log 10 IU/ml. The intra-assay and interassay coefficients of variation (CV%) of plasma samples (4-7 log 10 IU/ml) for the one-step PCR method ranged between 0.33 to 0.59 and 0.28 to 0.48 respectively, thus demonstrating a high level of concordance between the two methods. Moreover, elimination of the DNA extraction step in the one-step PCR kit allowed time-efficient and significant labor and cost savings for the quantification of HBV DNA in a resource limited setting. Rashed-Ul Islam SM, Jahan M, Tabassum S. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting. Euroasian J Hepato-Gastroenterol 2015;5(1):11-15.
Jahan, Munira; Tabassum, Shahina
2015-01-01
Virological monitoring is the best predictor for the management of chronic hepatitis B virus (HBV) infections. Consequently, it is important to use the most efficient, rapid and cost-effective testing systems for HBV DNA quantification. The present study compared the performance characteristics of a one-step HBV polymerase chain reaction (PCR) vs the two-step HBV PCR method for quantification of HBV DNA from clinical samples. A total of 100 samples consisting of 85 randomly selected samples from patients with chronic hepatitis B (CHB) and 15 samples from apparently healthy individuals were enrolled in this study. Of the 85 CHB clinical samples tested, HBV DNA was detected from 81% samples by one-step PCR method with median HBV DNA viral load (VL) of 7.50 × 103 lU/ml. In contrast, 72% samples were detected by the two-step PCR system with median HBV DNA of 3.71 × 103 lU/ml. The one-step method showed strong linear correlation with two-step PCR method (r = 0.89; p < 0.0001). Both methods showed good agreement at Bland-Altman plot, with a mean difference of 0.61 log10 IU/ml and limits of agreement of -1.82 to 3.03 log10 IU/ml. The intra-assay and interassay coefficients of variation (CV%) of plasma samples (4-7 log10 IU/ml) for the one-step PCR method ranged between 0.33 to 0.59 and 0.28 to 0.48 respectively, thus demonstrating a high level of concordance between the two methods. Moreover, elimination of the DNA extraction step in the one-step PCR kit allowed time-efficient and significant labor and cost savings for the quantification of HBV DNA in a resource limited setting. How to cite this article Rashed-Ul Islam SM, Jahan M, Tabassum S. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting. Euroasian J Hepato-Gastroenterol 2015;5(1):11-15. PMID:29201678
Arias, Karen S; Climent, Maria J; Corma, Avelino; Iborra, Sara
2014-01-01
A new class of biodegradable anionic surfactants with structures based on 5-alkoxymethylfuroate was prepared starting from 5-hydroxymethylfurfural (HMF), through a one-pot-two-steps process which involves the selective etherification of HMF with fatty alcohols using heterogeneous solid acid, followed by a highly selective oxidation of the formyl group with a gold catalyst. The etherification step was optimized using aluminosilicates as acid catalysts with different pore topologies (H-Beta, HY, Mordenite, ZSM-5, ITQ-2, and MCM-41), different active sites (Bronsted or Lewis) and different adsorption properties. It was shown that highly hydrophobic defect-free H-Beta zeolites with Si/Al ratios higher than 25 are excellent acid catalysts to perform the selective etherification of HMF with fatty alcohols, avoiding the competitive self-etherification of HMF. Moreover, the 5-alkoxymethylfurfural derivatives obtained can be selectively oxidized to the corresponding furoic salts in excellent yield using Au/CeO2 as catalyst and air as oxidant, at moderated temperatures. Both H-Beta zeolite and Au/CeO2 could be reused several times without loss of activity. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Evaluation of new collision-pair selection models in DSMC
NASA Astrophysics Data System (ADS)
Akhlaghi, Hassan; Roohi, Ehsan
2017-10-01
The current paper investigates new collision-pair selection procedures in a direct simulation Monte Carlo (DSMC) method. Collision partner selection based on the random procedure from nearest neighbor particles and deterministic selection of nearest neighbor particles have already been introduced as schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made considering appropriate test cases including fluctuations in homogeneous gas, 2D equilibrium flow, and Fourier flow problem. Distribution functions for number of particles and collisions in cell, velocity components, and collisional parameters (collision separation, time spacing, relative velocity, and the angle between relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model in the prediction of the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For new and existing collision-pair selection schemes, the effect of an alternative formula for the number of collision-pair selections and avoiding repetitive collisions are investigated via the prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model in different test cases.
JCDSA: a joint covariate detection tool for survival analysis on tumor expression profiles.
Wu, Yiming; Liu, Yanan; Wang, Yueming; Shi, Yan; Zhao, Xudong
2018-05-29
Survival analysis on tumor expression profiles has always been a key issue for subsequent biological experimental validation. It is crucial how to select features which closely correspond to survival time. Furthermore, it is important how to select features which best discriminate between low-risk and high-risk group of patients. Common features derived from the two aspects may provide variable candidates for prognosis of cancer. Based on the provided two-step feature selection strategy, we develop a joint covariate detection tool for survival analysis on tumor expression profiles. Significant features, which are not only consistent with survival time but also associated with the categories of patients with different survival risks, are chosen. Using the miRNA expression data (Level 3) of 548 patients with glioblastoma multiforme (GBM) as an example, miRNA candidates for prognosis of cancer are selected. The reliability of selected miRNAs using this tool is demonstrated by 100 simulations. Furthermore, It is discovered that significant covariates are not directly composed of individually significant variables. Joint covariate detection provides a viewpoint for selecting variables which are not individually but jointly significant. Besides, it helps to select features which are not only consistent with survival time but also associated with prognosis risk. The software is available at http://bio-nefu.com/resource/jcdsa .
Beyond space and time: advanced selection for seismological data
NASA Astrophysics Data System (ADS)
Trabant, C. M.; Van Fossen, M.; Ahern, T. K.; Casey, R. E.; Weertman, B.; Sharer, G.; Benson, R. B.
2017-12-01
Separating the available raw data from that useful for any given study is often a tedious step in a research project, particularly for first-order data quality problems such as broken sensors, incorrect response information, and non-continuous time series. With the ever increasing amounts of data available to researchers, this chore becomes more and more time consuming. To assist users in this pre-processing of data, the IRIS Data Management Center (DMC) has created a system called Research Ready Data Sets (RRDS). The RRDS system allows researchers to apply filters that constrain their data request using criteria related to signal quality, response correctness, and high resolution data availability. In addition to the traditional selection methods of stations at a geographic location for given time spans, RRDS will provide enhanced criteria for data selection based on many of the measurements available in the DMC's MUSTANG quality control system. This means that data may be selected based on background noise (tolerance relative to high and low noise Earth models), signal-to-noise ratio for earthquake arrivals, signal RMS, instrument response corrected signal correlation with Earth tides, time tear (gaps/overlaps) counts, timing quality (when reported in the raw data by the datalogger) and more. The new RRDS system is available as a web service designed to operate as a request filter. A request is submitted containing the traditional station and time constraints as well as data quality constraints. The request is then filtered and a report is returned that indicates 1) the request that would subsequently be submitted to a data access service, 2) a record of the quality criteria specified and 3) a record of the data rejected based on those criteria, including the relevant values. This service can be used to either filter a request prior to requesting the actual data or to explore which data match a set of enhanced criteria without downloading the data. We are optimistic this capability will reduce the initial data culling steps most researchers go through. Additionally, use of this service should reduce the amount of data transmitted from the DMC, easing the workload for our finite shared resources.
NASA Astrophysics Data System (ADS)
Jansen, H V; de Boer, M J; Unnikrishnan, S; Louwerse, M C; Elwenspoek, M C
2009-03-01
An intensive study has been performed to understand and tune deep reactive ion etch (DRIE) processes for optimum results with respect to the silicon etch rate, etch profile and mask etch selectivity (in order of priority) using state-of-the-art dual power source DRIE equipment. The research compares pulsed-mode DRIE processes (e.g. Bosch technique) and mixed-mode DRIE processes (e.g. cryostat technique). In both techniques, an inhibitor is added to fluorine-based plasma to achieve directional etching, which is formed out of an oxide-forming (O2) or a fluorocarbon (FC) gas (C4F8 or CHF3). The inhibitor can be introduced together with the etch gas, which is named a mixed-mode DRIE process, or the inhibitor can be added in a time-multiplexed manner, which will be termed a pulsed-mode DRIE process. Next, the most convenient mode of operation found in this study is highlighted including some remarks to ensure proper etching (i.e. step synchronization in pulsed-mode operation and heat control of the wafer). First of all, for the fabrication of directional profiles, pulsed-mode DRIE is far easier to handle, is more robust with respect to the pattern layout and has the potential of achieving much higher mask etch selectivity, whereas in a mixed-mode the etch rate is higher and sidewall scalloping is prohibited. It is found that both pulsed-mode CHF3 and C4F8 are perfectly suited to perform high speed directional etching, although they have the drawback of leaving the FC residue at the sidewalls of etched structures. They show an identical result when the flow of CHF3 is roughly 30 times the flow of C4F8, and the amount of gas needed for a comparable result decreases rapidly while lowering the temperature from room down to cryogenic (and increasing the etch rate). Moreover, lowering the temperature lowers the mask erosion rate substantially (and so the mask selectivity improves). The pulsed-mode O2 is FC-free but shows only tolerable anisotropic results at -120 °C. The downside of needing liquid nitrogen to perform cryogenic etching can be improved by using a new approach in which both the pulsed and mixed modes are combined into the so-called puffed mode. Alternatively, the use of tetra-ethyl-ortho-silicate (TEOS) as a silicon oxide precursor is proposed to enable sufficient inhibiting strength and improved profile control up to room temperature. Pulsed-mode processing, the second important aspect, is commonly performed in a cycle using two separate steps: etch and deposition. Sometimes, a three-step cycle is adopted using a separate step to clean the bottom of etching features. This study highlights an issue, known by the authors but not discussed before in the literature: the need for proper synchronization between gas and bias pulses to explore the benefit of three steps. The transport of gas from the mass flow controller towards the wafer takes time, whereas the application of bias to the wafer is relatively instantaneous. This delay causes a problem with respect to synchronization when decreasing the step time towards a value close to the gas residence time. It is proposed to upgrade the software with a delay time module for the bias pulses to be in pace with the gas pulses. If properly designed, the delay module makes it possible to switch on the bias exactly during the arrival of the gas for the bottom removal step and so it will minimize the ionic impact because now etch and deposition steps can be performed virtually without bias. This will increase the mask etch selectivity and lower the heat impact significantly. Moreover, the extra bottom removal step can be performed at (also synchronized!) low pressure and therefore opens a window for improved aspect ratios. The temperature control of the wafer, a third aspect of this study, at a higher etch rate and longer etch time, needs critical attention, because it drastically limits the DRIE performance. It is stressed that the exothermic reaction (high silicon loading) and ionic impact (due to metallic masks and/or exposed silicon) are the main sources of heat that might raise the wafer temperature uncontrollably, and they show the weakness of the helium backside technique using mechanical clamping. Electrostatic clamping, an alternative technique, should minimize this problem because it is less susceptible to heat transfer when its thermal resistance and the gap of the helium backside cavity are minimized; however, it is not a subject of the current study. Because oxygen-growth-based etch processes (due to their ultra thin inhibiting layer) rely more heavily on a constant wafer temperature than fluorocarbon-based processes, oxygen etches are more affected by temperature fluctuations and drifts during the etching. The fourth outcome of this review is a phenomenological model, which explains and predicts many features with respect to loading, flow and pressure behaviour in DRIE equipment including a diffusion zone. The model is a reshape of the flow model constructed by Mogab, who studied the loading effect in plasma etching. Despite the downside of needing a cryostat, it is shown that—when selecting proper conditions—a cryogenic two-step pulsed mode can be used as a successful technique to achieve high speed and selective plasma etching with an etch rate around 25 µm min-1 (<1% silicon load) with nearly vertical walls and resist etch selectivity beyond 1000. With the model in hand, it can be predicted that the etch rate can be doubled (50 µm min-1 at an efficiency of 33% for the fluorine generation from the SF6 feed gas) by minimizing the time the free radicals need to pass the diffusion zone. It is anticipated that this residence time can be reduced sufficiently by a proper inductive coupled plasma (ICP) source design (e.g. plasma shower head and concentrator). In order to preserve the correct profile at such high etch rates, the pressure during the bottom removal step should be minimized and, therefore, the synchronized three-step pulsed mode is believed to be essential to reach such high etch rates with sufficient profile control. In order to improve the etch rate even further, the ICP power should be enhanced; the upgrading of the turbopump seems not yet to be relevant because the throttle valve in the current study had to be used to restrict the turbo efficiency. In order to have a versatile list of state-of-the-art references, it has been decided to arrange it in subjects. The categories concerning plasma physics and applications are, for example, books, reviews, general topics, fluorine-based plasmas, plasma mixtures with oxygen at room temperature, wafer heat transfer and high aspect ratio trench (HART) etching. For readers 'new' to this field, it is advisable to study at least one (but rather more than one) of the reviews concerning plasma as found in the first 30 references. In many cases, a paper can be classified into more than one category. In such cases, the paper is directed to the subject most suited for the discussion of the current review. For example, many papers on heat transfer also treat cryogenic conditions and all the references dealing with highly anisotropic behaviour have been directed to the category HARTs. Additional pointers could get around this problem but have the disadvantage of creating a kind of written spaghetti. I hope that the adapted organization structure will help to have a quick look at and understanding of current developments in high aspect ratio plasma etching. Enjoy reading... Henri Jansen 18 June 2008
Bair, Woei-Nan; Prettyman, Michelle G; Beamer, Brock A; Rogers, Mark W
2016-07-01
Protective stepping evoked by externally applied lateral perturbations reveals balance deficits underlying falls. However, a lack of comprehensive information about the control of different stepping strategies in relation to the magnitude of perturbation limits understanding of balance control in relation to age and fall status. The aim of this study was to investigate different protective stepping strategies and their kinematic and behavioral control characteristics in response to different magnitudes of lateral waist-pulls between older fallers and non-fallers. Fifty-two community-dwelling older adults (16 fallers) reacted naturally to maintain balance in response to five magnitudes of lateral waist-pulls. The balance tolerance limit (BTL, waist-pull magnitude where protective steps transitioned from single to multiple steps), first step control characteristics (stepping frequency and counts, spatial-temporal kinematic, and trunk position at landing) of four naturally selected protective step types were compared between fallers and non-fallers at- and above-BTL. Fallers took medial-steps most frequently while non-fallers most often took crossover-back-steps. Only non-fallers varied their step count and first step control parameters by step type at the instants of step initiation (onset time) and termination (trunk position), while both groups modulated step execution parameters (single stance duration and step length) by step type. Group differences were generally better demonstrated above-BTL. Fallers primarily used a biomechanically less effective medial-stepping strategy that may be partially explained by reduced somato-sensation. Fallers did not modulate their step parameters by step type at first step initiation and termination, instances particularly vulnerable to instability, reflecting their limitations in balance control during protective stepping. Copyright © 2016. Published by Elsevier Ltd.
Preparing to take the USMLE Step 1: a survey on medical students' self-reported study habits.
Kumar, Andre D; Shah, Monisha K; Maley, Jason H; Evron, Joshua; Gyftopoulos, Alex; Miller, Chad
2015-05-01
The USA Medical Licensing Examination Step 1 is a computerised multiple-choice examination that tests the basic biomedical sciences. It is administered after the second year in a traditional four-year MD programme. Most Step 1 scores fall between 140 and 260, with a mean (SD) of 227 (22). Step 1 scores are an important selection criterion for residency choice. Little is known about which study habits are associated with a higher score. To identify which self-reported study habits correlate with a higher Step 1 score. A survey regarding Step 1 study habits was sent to third year medical students at Tulane University School of Medicine every year between 2009 and 2011. The survey was sent approximately 3 months after the examination. 256 out of 475 students (54%) responded. The mean (SD) Step 1 score was 229.5 (22.1). Students who estimated studying more than 8-11 h per day had higher scores (p<0.05), but there was no added benefit with additional study time. Those who reported studying <40 days achieved higher scores (p<0.05). Those who estimated completing >2000 practice questions also obtained higher scores (p<0.01). Students who reported studying in a group, spending the majority of study time on practice questions or taking >40 preparation days did not achieve higher scores. Certain self-reported study habits may correlate with a higher Step 1 score compared with others. Given the importance of achieving a high Step 1 score on residency choice, it is important to further identify which characteristics may lead to a higher score. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Solution procedure of dynamical contact problems with friction
NASA Astrophysics Data System (ADS)
Abdelhakim, Lotfi
2017-07-01
Dynamical contact is one of the common research topics because of its wide applications in the engineering field. The main goal of this work is to develop a time-stepping algorithm for dynamic contact problems. We propose a finite element approach for elastodynamics contact problems [1]. Sticking, sliding and frictional contact can be taken into account. Lagrange multipliers are used to enforce non-penetration condition. For the time discretization, we propose a scheme equivalent to the explicit Newmark scheme. Each time step requires solving a nonlinear problem similar to a static friction problem. The nonlinearity of the system of equation needs an iterative solution procedure based on Uzawa's algorithm [2][3]. The applicability of the algorithm is illustrated by selected sample numerical solutions to static and dynamic contact problems. Results obtained with the model have been compared and verified with results from an independent numerical method.
Clustering of financial time series with application to index and enhanced index tracking portfolio
NASA Astrophysics Data System (ADS)
Dose, Christian; Cincotti, Silvano
2005-09-01
A stochastic-optimization technique based on time series cluster analysis is described for index tracking and enhanced index tracking problems. Our methodology solves the problem in two steps, i.e., by first selecting a subset of stocks and then setting the weight of each stock as a result of an optimization process (asset allocation). Present formulation takes into account constraints on the number of stocks and on the fraction of capital invested in each of them, whilst not including transaction costs. Computational results based on clustering selection are compared to those of random techniques and show the importance of clustering in noise reduction and robust forecasting applications, in particular for enhanced index tracking.
Nanolaminate microfluidic device for mobility selection of particles
Surh, Michael P [Livermore, CA; Wilson, William D [Pleasanton, CA; Barbee, Jr., Troy W.; Lane, Stephen M [Oakland, CA
2006-10-10
A microfluidic device made from nanolaminate materials that are capable of electrophoretic selection of particles on the basis of their mobility. Nanolaminate materials are generally alternating layers of two materials (one conducting, one insulating) that are made by sputter coating a flat substrate with a large number of layers. Specific subsets of the conducting layers are coupled together to form a single, extended electrode, interleaved with other similar electrodes. Thereby, the subsets of conducting layers may be dynamically charged to create time-dependent potential fields that can trap or transport charge colloidal particles. The addition of time-dependence is applicable to all geometries of nanolaminate electrophoretic and electrochemical designs from sinusoidal to nearly step-like.
Sugarman, R.M.
1960-08-30
An oscilloscope is designed for displaying transient signal waveforms having random time and amplitude distributions. The oscilloscopc is a sampling device that selects for display a portion of only those waveforms having a particular range of amplitudes. For this purpose a pulse-height analyzer is provided to screen the pulses. A variable voltage-level shifter and a time-scale rampvoltage generator take the pulse height relative to the start of the waveform. The variable voltage shifter produces a voltage level raised one step for each sequential signal waveform to be sampled and this results in an unsmeared record of input signal waveforms. Appropriate delay devices permit each sample waveform to pass its peak amplitude before the circuit selects it for display.
Identifying or measuring selected substances or toxins in a subject using resonant raman signals
NASA Technical Reports Server (NTRS)
Borchert, Mark S. (Inventor); Lambert, James L. (Inventor)
2005-01-01
Methods and systems of the present invention identify the presence of and/or the concentration of a selected analyte in a subject by: (a) illuminating a selected region of the eye of a subject with an optical excitation beam, wherein the excitation beam wavelength is selected to generate a resonant Raman spectrum of the selected analyte with a signal strength that is at least 100 times greater than Raman spectrums generated by non-resonant wavelengths and/or relative to signals of normal constituents present in the selected region of the eye; (b) detecting a resonant Raman spectrum corresponding to the selected illuminated region of the eye; and (c) identifying the presence, absence and/or the concentration of the selected analyte in the subject based on said detecting step. The apparatus may also be configured to be able to obtain biometric data of the eye to identify (confirm the identity of) the subject.
An autonomous organic reaction search engine for chemical reactivity.
Dragone, Vincenza; Sans, Victor; Henson, Alon B; Granda, Jaroslaw M; Cronin, Leroy
2017-06-09
The exploration of chemical space for new reactivity, reactions and molecules is limited by the need for separate work-up-separation steps searching for molecules rather than reactivity. Herein we present a system that can autonomously evaluate chemical reactivity within a network of 64 possible reaction combinations and aims for new reactivity, rather than a predefined set of targets. The robotic system combines chemical handling, in-line spectroscopy and real-time feedback and analysis with an algorithm that is able to distinguish and select the most reactive pathways, generating a reaction selection index (RSI) without need for separate work-up or purification steps. This allows the automatic navigation of a chemical network, leading to previously unreported molecules while needing only to do a fraction of the total possible reactions without any prior knowledge of the chemistry. We show the RSI correlates with reactivity and is able to search chemical space using the most reactive pathways.
An autonomous organic reaction search engine for chemical reactivity
NASA Astrophysics Data System (ADS)
Dragone, Vincenza; Sans, Victor; Henson, Alon B.; Granda, Jaroslaw M.; Cronin, Leroy
2017-06-01
The exploration of chemical space for new reactivity, reactions and molecules is limited by the need for separate work-up-separation steps searching for molecules rather than reactivity. Herein we present a system that can autonomously evaluate chemical reactivity within a network of 64 possible reaction combinations and aims for new reactivity, rather than a predefined set of targets. The robotic system combines chemical handling, in-line spectroscopy and real-time feedback and analysis with an algorithm that is able to distinguish and select the most reactive pathways, generating a reaction selection index (RSI) without need for separate work-up or purification steps. This allows the automatic navigation of a chemical network, leading to previously unreported molecules while needing only to do a fraction of the total possible reactions without any prior knowledge of the chemistry. We show the RSI correlates with reactivity and is able to search chemical space using the most reactive pathways.
An autonomous organic reaction search engine for chemical reactivity
Dragone, Vincenza; Sans, Victor; Henson, Alon B.; Granda, Jaroslaw M.; Cronin, Leroy
2017-01-01
The exploration of chemical space for new reactivity, reactions and molecules is limited by the need for separate work-up-separation steps searching for molecules rather than reactivity. Herein we present a system that can autonomously evaluate chemical reactivity within a network of 64 possible reaction combinations and aims for new reactivity, rather than a predefined set of targets. The robotic system combines chemical handling, in-line spectroscopy and real-time feedback and analysis with an algorithm that is able to distinguish and select the most reactive pathways, generating a reaction selection index (RSI) without need for separate work-up or purification steps. This allows the automatic navigation of a chemical network, leading to previously unreported molecules while needing only to do a fraction of the total possible reactions without any prior knowledge of the chemistry. We show the RSI correlates with reactivity and is able to search chemical space using the most reactive pathways. PMID:28598440
Storytelling, behavior planning, and language evolution in context.
McBride, Glen
2014-01-01
An attempt is made to specify the structure of the hominin bands that began steps to language. Storytelling could evolve without need for language yet be strongly subject to natural selection and could provide a major feedback process in evolving language. A storytelling model is examined, including its effects on the evolution of consciousness and the possible timing of language evolution. Behavior planning is presented as a model of language evolution from storytelling. The behavior programming mechanism in both directions provide a model of creating and understanding behavior and language. Culture began with societies, then family evolution, family life in troops, but storytelling created a culture of experiences, a final step in the long process of achieving experienced adults by natural selection. Most language evolution occurred in conversations where evolving non-verbal feedback ensured mutual agreements on understanding. Natural language evolved in conversations with feedback providing understanding of changes.
Storytelling, behavior planning, and language evolution in context
McBride, Glen
2014-01-01
An attempt is made to specify the structure of the hominin bands that began steps to language. Storytelling could evolve without need for language yet be strongly subject to natural selection and could provide a major feedback process in evolving language. A storytelling model is examined, including its effects on the evolution of consciousness and the possible timing of language evolution. Behavior planning is presented as a model of language evolution from storytelling. The behavior programming mechanism in both directions provide a model of creating and understanding behavior and language. Culture began with societies, then family evolution, family life in troops, but storytelling created a culture of experiences, a final step in the long process of achieving experienced adults by natural selection. Most language evolution occurred in conversations where evolving non-verbal feedback ensured mutual agreements on understanding. Natural language evolved in conversations with feedback providing understanding of changes. PMID:25360123
Soós, Reka; Whiteman, Andrew D; Wilson, David C; Briciu, Cosmin; Nürnberger, Sofia; Oelz, Barbara; Gunsilius, Ellen; Schwehn, Ekkehard
2017-08-01
This is the second of two papers reporting the results of a major study considering 'operator models' for municipal solid waste management (MSWM) in emerging and developing countries. Part A documents the evidence base, while Part B presents a four-step decision support system for selecting an appropriate operator model in a particular local situation. Step 1 focuses on understanding local problems and framework conditions; Step 2 on formulating and prioritising local objectives; and Step 3 on assessing capacities and conditions, and thus identifying strengths and weaknesses, which underpin selection of the operator model. Step 4A addresses three generic questions, including public versus private operation, inter-municipal co-operation and integration of services. For steps 1-4A, checklists have been developed as decision support tools. Step 4B helps choose locally appropriate models from an evidence-based set of 42 common operator models ( coms); decision support tools here are a detailed catalogue of the coms, setting out advantages and disadvantages of each, and a decision-making flowchart. The decision-making process is iterative, repeating steps 2-4 as required. The advantages of a more formal process include avoiding pre-selection of a particular com known to and favoured by one decision maker, and also its assistance in identifying the possible weaknesses and aspects to consider in the selection and design of operator models. To make the best of whichever operator models are selected, key issues which need to be addressed include the capacity of the public authority as 'client', management in general and financial management in particular.
NASA Technical Reports Server (NTRS)
Rodgers, T. E.; Johnson, J. F.
1977-01-01
The logic and methodology for a preliminary grouping of Spacelab and mixed-cargo payloads is proposed in a form that can be readily coded into a computer program by NASA. The logic developed for this preliminary cargo grouping analysis is summarized. Principal input data include the NASA Payload Model, payload descriptive data, Orbiter and Spacelab capabilities, and NASA guidelines and constraints. The first step in the process is a launch interval selection in which the time interval for payload grouping is identified. Logic flow steps are then taken to group payloads and define flight configurations based on criteria that includes dedication, volume, area, orbital parameters, pointing, g-level, mass, center of gravity, energy, power, and crew time.
NASA Technical Reports Server (NTRS)
Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John
2016-01-01
In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact which limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are close to lined up. Because of the shape of a curve plotting input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then is scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size image processing of the shift estimate is efficient, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps for registering GOES-R ABI images.
A methodology to event reconstruction from trace images.
Milliet, Quentin; Delémont, Olivier; Sapin, Eric; Margot, Pierre
2015-03-01
The widespread use of digital imaging devices for surveillance (CCTV) and entertainment (e.g., mobile phones, compact cameras) has increased the number of images recorded and opportunities to consider the images as traces or documentation of criminal activity. The forensic science literature focuses almost exclusively on technical issues and evidence assessment [1]. Earlier steps in the investigation phase have been neglected and must be considered. This article is the first comprehensive description of a methodology to event reconstruction using images. This formal methodology was conceptualised from practical experiences and applied to different contexts and case studies to test and refine it. Based on this practical analysis, we propose a systematic approach that includes a preliminary analysis followed by four main steps. These steps form a sequence for which the results from each step rely on the previous step. However, the methodology is not linear, but it is a cyclic, iterative progression for obtaining knowledge about an event. The preliminary analysis is a pre-evaluation phase, wherein potential relevance of images is assessed. In the first step, images are detected and collected as pertinent trace material; the second step involves organising and assessing their quality and informative potential. The third step includes reconstruction using clues about space, time and actions. Finally, in the fourth step, the images are evaluated and selected as evidence. These steps are described and illustrated using practical examples. The paper outlines how images elicit information about persons, objects, space, time and actions throughout the investigation process to reconstruct an event step by step. We emphasise the hypothetico-deductive reasoning framework, which demonstrates the contribution of images to generating, refining or eliminating propositions or hypotheses. This methodology provides a sound basis for extending image use as evidence and, more generally, as clues in investigation and crime reconstruction processes. Copyright © 2015 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.
Hümmer, Christiane; Poppe, Carolin; Bunos, Milica; Stock, Belinda; Wingenfeld, Eva; Huppert, Volker; Stuth, Juliane; Reck, Kristina; Essl, Mike; Seifried, Erhard; Bonig, Halvard
2016-03-16
Automation of cell therapy manufacturing promises higher productivity of cell factories, more economical use of highly-trained (and costly) manufacturing staff, facilitation of processes requiring manufacturing steps at inconvenient hours, improved consistency of processing steps and other benefits. One of the most broadly disseminated engineered cell therapy products is immunomagnetically selected CD34+ hematopoietic "stem" cells (HSCs). As the clinical GMP-compliant automat CliniMACS Prodigy is being programmed to perform ever more complex sequential manufacturing steps, we developed a CD34+ selection module for comparison with the standard semi-automatic CD34 "normal scale" selection process on CliniMACS Plus, applicable for 600 × 10(6) target cells out of 60 × 10(9) total cells. Three split-validation processings with healthy donor G-CSF-mobilized apheresis products were performed; feasibility, time consumption and product quality were assessed. All processes proceeded uneventfully. Prodigy runs took about 1 h longer than CliniMACS Plus runs, albeit with markedly less hands-on operator time and therefore also suitable for less experienced operators. Recovery of target cells was the same for both technologies. Although impurities, specifically T- and B-cells, were 5 ± 1.6-fold and 4 ± 0.4-fold higher in the Prodigy products (p = ns and p = 0.013 for T and B cell depletion, respectively), T cell contents per kg of a virtual recipient receiving 4 × 10(6) CD34+ cells/kg was below 10 × 10(3)/kg even in the worst Prodigy product and thus more than fivefold below the specification of CD34+ selected mismatched-donor stem cell products. The products' theoretical clinical usability is thus confirmed. This split validation exercise of a relatively short and simple process exemplifies the potential of automatic cell manufacturing. Automation will further gain in attractiveness when applied to more complex processes, requiring frequent interventions or handling at unfavourable working hours, such as re-targeting of T-cells.
NASA Technical Reports Server (NTRS)
Getty, Stephanie; Brickerhoff, William; Cornish, Timothy; Ecelberger, Scott; Floyd, Melissa
2012-01-01
RATIONALE A miniature time-of-flight mass spectrometer has been adapted to demonstrate two-step laser desorption-ionization (LOI) in a compact instrument package for enhanced organics detection. Two-step LDI decouples the desorption and ionization processes, relative to traditional laser ionization-desorption, in order to produce low-fragmentation conditions for complex organic analytes. Tuning UV ionization laser energy allowed control ofthe degree of fragmentation, which may enable better identification of constituent species. METHODS A reflectron time-of-flight mass spectrometer prototype measuring 20 cm in length was adapted to a two-laser configuration, with IR (1064 nm) desorption followed by UV (266 nm) postionization. A relatively low ion extraction voltage of 5 kV was applied at the sample inlet. Instrument capabilities and performance were demonstrated with analysis of a model polycyclic aromatic hydrocarbon, representing a class of compounds important to the fields of Earth and planetary science. RESULTS L2MS analysis of a model PAH standard, pyrene, has been demonstrated, including parent mass identification and the onset o(tunable fragmentation as a function of ionizing laser energy. Mass resolution m/llm = 380 at full width at half-maximum was achieved which is notable for gas-phase ionization of desorbed neutrals in a highly-compact mass analyzer. CONCLUSIONS Achieving two-step laser mass spectrometry (L2MS) in a highly-miniature instrument enables a powerful approach to the detection and characterization of aromatic organics in remote terrestrial and planetary applications. Tunable detection of parent and fragment ions with high mass resolution, diagnostic of molecular structure, is possible on such a compact L2MS instrument. Selectivity of L2MS against low-mass inorganic salt interferences is a key advantage when working with unprocessed, natural samples, and a mechanism for the observed selectivity is presented.
Suin, Vanessa; Nazé, Florence; Francart, Aurélie; Lamoral, Sophie; De Craeye, Stéphane; Kalai, Michael; Van Gucht, Steven
2014-01-01
A generic two-step lyssavirus real-time reverse transcriptase polymerase chain reaction (qRT-PCR), based on a nested PCR strategy, was validated for the detection of different lyssavirus species. Primers with 17 to 30% of degenerate bases were used in both consecutive steps. The assay could accurately detect RABV, LBV, MOKV, DUVV, EBLV-1, EBLV-2, and ABLV. In silico sequence alignment showed a functional match with the remaining lyssavirus species. The diagnostic specificity was 100% and the sensitivity proved to be superior to that of the fluorescent antigen test. The limit of detection was ≤ 1 50% tissue culture infectious dose. The related vesicular stomatitis virus was not recognized, confirming the selectivity for lyssaviruses. The assay was applied to follow the evolution of rabies virus infection in the brain of mice from 0 to 10 days after intranasal inoculation. The obtained RNA curve corresponded well with the curves obtained by a one-step monospecific RABV-qRT-PCR, the fluorescent antigen test, and virus titration. Despite the presence of degenerate bases, the assay proved to be highly sensitive, specific, and reproducible.
Nazé, Florence; Francart, Aurélie; Lamoral, Sophie; De Craeye, Stéphane; Kalai, Michael
2014-01-01
A generic two-step lyssavirus real-time reverse transcriptase polymerase chain reaction (qRT-PCR), based on a nested PCR strategy, was validated for the detection of different lyssavirus species. Primers with 17 to 30% of degenerate bases were used in both consecutive steps. The assay could accurately detect RABV, LBV, MOKV, DUVV, EBLV-1, EBLV-2, and ABLV. In silico sequence alignment showed a functional match with the remaining lyssavirus species. The diagnostic specificity was 100% and the sensitivity proved to be superior to that of the fluorescent antigen test. The limit of detection was ≤1 50% tissue culture infectious dose. The related vesicular stomatitis virus was not recognized, confirming the selectivity for lyssaviruses. The assay was applied to follow the evolution of rabies virus infection in the brain of mice from 0 to 10 days after intranasal inoculation. The obtained RNA curve corresponded well with the curves obtained by a one-step monospecific RABV-qRT-PCR, the fluorescent antigen test, and virus titration. Despite the presence of degenerate bases, the assay proved to be highly sensitive, specific, and reproducible. PMID:24822188
Systemic safety project selection tool.
DOT National Transportation Integrated Search
2013-07-01
"The Systemic Safety Project Selection Tool presents a process for incorporating systemic safety planning into traditional safety management processes. The Systemic Tool provides a step-by-step process for conducting systemic safety analysis; conside...
Food Preparation. I: Food Facts for Home. II: Facts about Foodservice.
ERIC Educational Resources Information Center
Procter and Gamble Educational Services, Cincinnati, OH.
This package is intended for use in home economics classes focusing on nutrition and food preparation and service. The teaching guide is divided into two parts. The first centers on selected first-time facts on nutrition, meal planning, and basic food preparation skills. It includes modules on nutrition, meal management, initial steps in food…
Research on the range side lobe suppression method for modulated stepped frequency radar signals
NASA Astrophysics Data System (ADS)
Liu, Yinkai; Shan, Tao; Feng, Yuan
2018-05-01
The magnitude of time-domain range sidelobe of modulated stepped frequency radar affects the imaging quality of inverse synthetic aperture radar (ISAR). In this paper, the cause of high sidelobe in modulated stepped frequency radar imaging is analyzed first in real environment. Then, the chaos particle swarm optimization (CPSO) is used to select the amplitude and phase compensation factors according to the minimum sidelobe criterion. Finally, the compensated one-dimensional range images are obtained. Experimental results show that the amplitude-phase compensation method based on CPSO algorithm can effectively reduce the sidelobe peak value of one-dimensional range images, which outperforms the common sidelobe suppression methods and avoids the coverage of weak scattering points by strong scattering points due to the high sidelobes.
1990-08-01
the guidance in this report. 1-4. Scope This guidance covers selection of projects suitable for a One-Step or Two-Step approach, development of design...conducted, focus on resolving proposal deficiencies; prices are not "negotiated" in the common use of the term. A Request for Proposal (RFP) states project ...carefully examines experience and past performance in the design of similar projects and building types. Quality of
2013-05-31
j] (11) A MATLAB code was written for finding the displacement at each node for all time steps. Material selected for the study was steel with 1 m...some of the dislocations are annihilated or rearranged. Various stages in the recovery are, entanglement of dislocations, cell formation, annihilation...frequency domain using an in-house pro- gram written in MATLAB . A time-domain signal obtained from nonlinear measurement and its corresponding fast
Akbar, Jamshed; Iqbal, Shahid; Batool, Fozia; Karim, Abdul; Chan, Kim Wei
2012-01-01
Quantitative structure-retention relationships (QSRRs) have successfully been developed for naturally occurring phenolic compounds in a reversed-phase liquid chromatographic (RPLC) system. A total of 1519 descriptors were calculated from the optimized structures of the molecules using MOPAC2009 and DRAGON softwares. The data set of 39 molecules was divided into training and external validation sets. For feature selection and mapping we used step-wise multiple linear regression (SMLR), unsupervised forward selection followed by step-wise multiple linear regression (UFS-SMLR) and artificial neural networks (ANN). Stable and robust models with significant predictive abilities in terms of validation statistics were obtained with negation of any chance correlation. ANN models were found better than remaining two approaches. HNar, IDM, Mp, GATS2v, DISP and 3D-MoRSE (signals 22, 28 and 32) descriptors based on van der Waals volume, electronegativity, mass and polarizability, at atomic level, were found to have significant effects on the retention times. The possible implications of these descriptors in RPLC have been discussed. All the models are proven to be quite able to predict the retention times of phenolic compounds and have shown remarkable validation, robustness, stability and predictive performance. PMID:23203132
Applications of step-selection functions in ecology and conservation.
Thurfjell, Henrik; Ciuti, Simone; Boyce, Mark S
2014-01-01
Recent progress in positioning technology facilitates the collection of massive amounts of sequential spatial data on animals. This has led to new opportunities and challenges when investigating animal movement behaviour and habitat selection. Tools like Step Selection Functions (SSFs) are relatively new powerful models for studying resource selection by animals moving through the landscape. SSFs compare environmental attributes of observed steps (the linear segment between two consecutive observations of position) with alternative random steps taken from the same starting point. SSFs have been used to study habitat selection, human-wildlife interactions, movement corridors, and dispersal behaviours in animals. SSFs also have the potential to depict resource selection at multiple spatial and temporal scales. There are several aspects of SSFs where consensus has not yet been reached such as how to analyse the data, when to consider habitat covariates along linear paths between observations rather than at their endpoints, how many random steps should be considered to measure availability, and how to account for individual variation. In this review we aim to address all these issues, as well as to highlight weak features of this modelling approach that should be developed by further research. Finally, we suggest that SSFs could be integrated with state-space models to classify behavioural states when estimating SSFs.
Steps to consider for effective decision making when selecting and prioritizing eHealth services.
Vimarlund, Vivian; Davoody, Nadia; Koch, Sabine
2013-01-01
Making the best choice for an organization when selecting IT applications or eHealth services is not always easy as there are a lot of parameters to take into account. The aim of this paper is to explore some steps to support effective decision making when selecting and prioritizing eHealth services prior to implementation and/or procurement. The steps presented in this paper were identified by interviewing nine key stakeholders at Stockholm County Council. They are supposed to work as a guide for decision making and aim to identify objectives and expected effects, technical, organizational, and economic requirements, and opportunities important to consider before decisions are taken. The steps and their respective issues and variables are concretized in a number of templates to be filled in by decision makers when selecting and prioritizing eHealth services.
Method and system for radioisotope generation
Toth, James J.; Soderquist, Chuck Z.; Greenwood, Lawrence R.; Mattigod, Shas V.; Fryxell, Glen E.; O'Hara, Matthew J.
2014-07-15
A system and a process for producing selected isotopic daughter products from parent materials characterized by the steps of loading the parent material upon a sorbent having a functional group configured to selectively bind the parent material under designated conditions, generating the selected isotopic daughter products, and eluting said selected isotopic daughter products from the sorbent. In one embodiment, the process also includes the step of passing an eluent formed by the elution step through a second sorbent material that is configured to remove a preselected material from said eluent. In some applications a passage of the material through a third sorbent material after passage through the second sorbent material is also performed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, Y; Li, T; Yoo, S
2016-06-15
Purpose: To enable near-real-time (<20sec) and interactive planning without compromising quality for whole breast RT treatment planning using tangential fields. Methods: Whole breast RT plans from 20 patients treated with single energy (SE, 6MV, 10 patients) or mixed energy (ME, 6/15MV, 10 patients) were randomly selected for model training. Additional 20 cases were used as validation cohort. The planning process for a new case consists of three fully automated steps:1. Energy Selection. A classification model automatically selects energy level. To build the energy selection model, principle component analysis (PCA) was applied to the digital reconstructed radiographs (DRRs) of training casesmore » to extract anatomy-energy relationship.2. Fluence Estimation. Once energy is selected, a random forest (RF) model generates the initial fluence. This model summarizes the relationship between patient anatomy’s shape based features and the output fluence. 3. Fluence Fine-tuning. This step balances the overall dose contribution throughout the whole breast tissue by automatically selecting reference points and applying centrality correction. Fine-tuning works at beamlet-level until the dose distribution meets clinical objectives. Prior to finalization, physicians can also make patient-specific trade-offs between target coverage and high-dose volumes.The proposed method was validated by comparing auto-plans with manually generated clinical-plans using Wilcoxon Signed-Rank test. Results: In 19/20 cases the model suggested the same energy combination as clinical-plans. The target volume coverage V100% was 78.1±4.7% for auto-plans, and 79.3±4.8% for clinical-plans (p=0.12). Volumes receiving 105% Rx were 69.2±78.0cc for auto-plans compared to 83.9±87.2cc for clinical-plans (p=0.13). The mean V10Gy, V20Gy of the ipsilateral lung was 24.4±6.7%, 18.6±6.0% for auto plans and 24.6±6.7%, 18.9±6.1% for clinical-plans (p=0.04, <0.001). Total computational time for auto-plans was < 20s. Conclusion: We developed an automated method that generates breast radiotherapy plans with accurate energy selection, similar target volume coverage, reduced hotspot volumes, and significant reduction in planning time, allowing for near-real-time planning.« less
Configuring Airspace Sectors with Approximate Dynamic Programming
NASA Technical Reports Server (NTRS)
Bloem, Michael; Gupta, Pramod
2010-01-01
In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
NASA Astrophysics Data System (ADS)
Guo, Linjuan; Zu, Baiyi; Yang, Zheng; Cao, Hongyu; Zheng, Xuefang; Dou, Xincun
2014-01-01
For the first time, flexible PVP/pyrene/APTS/rGO fluorescent nanonets were designed and synthesized via a one-step electrospinning method to detect representative subsaturated nitroaromatic explosive vapor. The functional fluorescent nanonets, which were highly stable in air, showed an 81% quenching efficiency towards TNT vapor (~10 ppb) with an exposure time of 540 s at room temperature. The nice performance of the nanonets was ascribed to the synergistic effects induced by the specific adsorption properties of APTS, the fast charge transfer properties and the effective π-π interaction with pyrene and TNT of rGO. Compared to the analogues of TNT, the PVP/pyrene/APTS/rGO nanonets showed notable selectivity towards TNT and DNT vapors. The explored functionalization method opens up brand new insight into sensitive and selective detection of vapor phase nitroaromatic explosives.For the first time, flexible PVP/pyrene/APTS/rGO fluorescent nanonets were designed and synthesized via a one-step electrospinning method to detect representative subsaturated nitroaromatic explosive vapor. The functional fluorescent nanonets, which were highly stable in air, showed an 81% quenching efficiency towards TNT vapor (~10 ppb) with an exposure time of 540 s at room temperature. The nice performance of the nanonets was ascribed to the synergistic effects induced by the specific adsorption properties of APTS, the fast charge transfer properties and the effective π-π interaction with pyrene and TNT of rGO. Compared to the analogues of TNT, the PVP/pyrene/APTS/rGO nanonets showed notable selectivity towards TNT and DNT vapors. The explored functionalization method opens up brand new insight into sensitive and selective detection of vapor phase nitroaromatic explosives. Electronic supplementary information (ESI) available: Vapor pressure of TNT and its analogues, fluorescence quenching kinetics, fluorescence quenching efficiencies and additional SEM images. See DOI: 10.1039/c3nr04960d
Activity Monitors Step Count Accuracy in Community-Dwelling Older Adults.
Johnson, Marquell
2015-01-01
Objective: To examine the step count accuracy of activity monitors in community-dwelling older adults. Method : Twenty-nine participants aged 67.70 ± 6.07 participated. Three pedometers and the Actical accelerometer step count functions were compared with actual steps taken during a 200-m walk around an indoor track and during treadmill walking at three different speeds. Results : There was no statistical difference between activity monitors step counts and actual steps during self-selected pace walking. During treadmill walking at 0.67 m∙s -1 , all activity monitors step counts were significantly different from actual steps. During treadmill walking at 0.894m∙s -1 , the Omron HJ-112 pedometer step counts were not significantly different from actual steps. During treadmill walking at 1.12 m∙s -1 , the Yamax SW-200 pedometer steps were significantly different from actual steps. Discussion : Activity monitor selection should be deliberate when examining the walking behaviors of community-dwelling older adults, especially for those who walk at a slower pace.
Activity Monitors Step Count Accuracy in Community-Dwelling Older Adults
2015-01-01
Objective: To examine the step count accuracy of activity monitors in community-dwelling older adults. Method: Twenty-nine participants aged 67.70 ± 6.07 participated. Three pedometers and the Actical accelerometer step count functions were compared with actual steps taken during a 200-m walk around an indoor track and during treadmill walking at three different speeds. Results: There was no statistical difference between activity monitors step counts and actual steps during self-selected pace walking. During treadmill walking at 0.67 m∙s−1, all activity monitors step counts were significantly different from actual steps. During treadmill walking at 0.894m∙s−1, the Omron HJ-112 pedometer step counts were not significantly different from actual steps. During treadmill walking at 1.12 m∙s−1, the Yamax SW-200 pedometer steps were significantly different from actual steps. Discussion: Activity monitor selection should be deliberate when examining the walking behaviors of community-dwelling older adults, especially for those who walk at a slower pace. PMID:28138464
Toney, Megan E.; Chang, Young-Hui
2016-01-01
Human walking is a complex task, and we lack a complete understanding of how the neuromuscular system organizes its numerous muscles and joints to achieve consistent and efficient walking mechanics. Focused control of select influential task-level variables may simplify the higher-level control of steady state walking and reduce demand on the neuromuscular system. As trailing leg power generation and force application can affect the mechanical efficiency of step-to-step transitions, we investigated how joint torques are organized to control leg force and leg power during human walking. We tested whether timing of trailing leg force control corresponded with timing of peak leg power generation. We also applied a modified uncontrolled manifold analysis to test whether individual or coordinated joint torque strategies most contributed to leg force control. We found that leg force magnitude was adjusted from step-to-step to maintain consistent leg power generation. Leg force modulation was primarily determined by adjustments in the timing of peak ankle plantar-flexion torque, while knee torque was simultaneously covaried to dampen the effect of ankle torque on leg force. We propose a coordinated joint torque control strategy in which the trailing leg ankle acts as a motor to drive leg power production while trailing leg knee torque acts as a brake to refine leg power production. PMID:27334888
Assawamakin, Anunchai; Prueksaaroon, Supakit; Kulawonganunchai, Supasak; Shaw, Philip James; Varavithya, Vara; Ruangrajitpakorn, Taneth; Tongsima, Sissades
2013-01-01
Identification of suitable biomarkers for accurate prediction of phenotypic outcomes is a goal for personalized medicine. However, current machine learning approaches are either too complex or perform poorly. Here, a novel two-step machine-learning framework is presented to address this need. First, a Naïve Bayes estimator is used to rank features from which the top-ranked will most likely contain the most informative features for prediction of the underlying biological classes. The top-ranked features are then used in a Hidden Naïve Bayes classifier to construct a classification prediction model from these filtered attributes. In order to obtain the minimum set of the most informative biomarkers, the bottom-ranked features are successively removed from the Naïve Bayes-filtered feature list one at a time, and the classification accuracy of the Hidden Naïve Bayes classifier is checked for each pruned feature set. The performance of the proposed two-step Bayes classification framework was tested on different types of -omics datasets including gene expression microarray, single nucleotide polymorphism microarray (SNParray), and surface-enhanced laser desorption/ionization time-of-flight (SELDI-TOF) proteomic data. The proposed two-step Bayes classification framework was equal to and, in some cases, outperformed other classification methods in terms of prediction accuracy, minimum number of classification markers, and computational time.
A new time-frequency method for identification and classification of ball bearing faults
NASA Astrophysics Data System (ADS)
Attoui, Issam; Fergani, Nadir; Boutasseta, Nadir; Oudjani, Brahim; Deliou, Adel
2017-06-01
In order to fault diagnosis of ball bearing that is one of the most critical components of rotating machinery, this paper presents a time-frequency procedure incorporating a new feature extraction step that combines the classical wavelet packet decomposition energy distribution technique and a new feature extraction technique based on the selection of the most impulsive frequency bands. In the proposed procedure, firstly, as a pre-processing step, the most impulsive frequency bands are selected at different bearing conditions using a combination between Fast-Fourier-Transform FFT and Short-Frequency Energy SFE algorithms. Secondly, once the most impulsive frequency bands are selected, the measured machinery vibration signals are decomposed into different frequency sub-bands by using discrete Wavelet Packet Decomposition WPD technique to maximize the detection of their frequency contents and subsequently the most useful sub-bands are represented in the time-frequency domain by using Short Time Fourier transform STFT algorithm for knowing exactly what the frequency components presented in those frequency sub-bands are. Once the proposed feature vector is obtained, three feature dimensionality reduction techniques are employed using Linear Discriminant Analysis LDA, a feedback wrapper method and Locality Sensitive Discriminant Analysis LSDA. Lastly, the Adaptive Neuro-Fuzzy Inference System ANFIS algorithm is used for instantaneous identification and classification of bearing faults. In order to evaluate the performances of the proposed method, different testing data set to the trained ANFIS model by using different conditions of healthy and faulty bearings under various load levels, fault severities and rotating speed. The conclusion resulting from this paper is highlighted by experimental results which prove that the proposed method can serve as an intelligent bearing fault diagnosis system.
Thomas, Minta; De Brabanter, Kris; De Moor, Bart
2014-05-10
DNA microarrays are potentially powerful technology for improving diagnostic classification, treatment selection, and prognostic assessment. The use of this technology to predict cancer outcome has a history of almost a decade. Disease class predictors can be designed for known disease cases and provide diagnostic confirmation or clarify abnormal cases. The main input to this class predictors are high dimensional data with many variables and few observations. Dimensionality reduction of these features set significantly speeds up the prediction task. Feature selection and feature transformation methods are well known preprocessing steps in the field of bioinformatics. Several prediction tools are available based on these techniques. Studies show that a well tuned Kernel PCA (KPCA) is an efficient preprocessing step for dimensionality reduction, but the available bandwidth selection method for KPCA was computationally expensive. In this paper, we propose a new data-driven bandwidth selection criterion for KPCA, which is related to least squares cross-validation for kernel density estimation. We propose a new prediction model with a well tuned KPCA and Least Squares Support Vector Machine (LS-SVM). We estimate the accuracy of the newly proposed model based on 9 case studies. Then, we compare its performances (in terms of test set Area Under the ROC Curve (AUC) and computational time) with other well known techniques such as whole data set + LS-SVM, PCA + LS-SVM, t-test + LS-SVM, Prediction Analysis of Microarrays (PAM) and Least Absolute Shrinkage and Selection Operator (Lasso). Finally, we assess the performance of the proposed strategy with an existing KPCA parameter tuning algorithm by means of two additional case studies. We propose, evaluate, and compare several mathematical/statistical techniques, which apply feature transformation/selection for subsequent classification, and consider its application in medical diagnostics. Both feature selection and feature transformation perform well on classification tasks. Due to the dynamic selection property of feature selection, it is hard to define significant features for the classifier, which predicts classes of future samples. Moreover, the proposed strategy enjoys a distinctive advantage with its relatively lesser time complexity.
Monte, Andrea; Muollo, Valentina; Nardello, Francesca; Zamparo, Paola
2017-02-01
The purpose of this study was to investigate the changes in selected biomechanical variables in 80-m maximal sprint runs while imposing changes in step frequency (SF) and to investigate if these adaptations differ based on gender and training level. A total of 40 athletes (10 elite men and 10 women, 10 intermediate men and 10 women) participated in this study; they were requested to perform 5 trials at maximal running speed (RS): at the self-selected frequency (SF s ) and at SF ±15% and ±30%SF s . Contact time (CT) and flight time (FT) as well as step length (SL) decreased with increasing SF, while k vert increased with it. At SF s , k leg was the lowest (a 20% decrease at ±30%SF s ), while RS was the largest (a 12% decrease at ±30%SF s ). Only small changes (1.5%) in maximal vertical force (F max ) were observed as a function of SF, but maximum leg spring compression (ΔL) was largest at SF s and decreased by about 25% at ±30%SF s . Significant differences in F max , Δy, k leg and k vert were observed as a function of skill and gender (P < 0.001). Our results indicate that RS is optimised at SF s and that, while k vert follows the changes in SF, k leg is lowest at SF s .
Gait impairment precedes clinical symptoms in spinocerebellar ataxia type 6.
Rochester, Lynn; Galna, Brook; Lord, Sue; Mhiripiri, Dadirayi; Eglon, Gail; Chinnery, Patrick F
2014-02-01
Spinocerebellar ataxia type 6 (SCA6) is an inherited ataxia with no established treatment. Gait ataxia is a prominent feature causing substantial disability. Understanding the evolution of the gait disturbance is a key step in developing treatment strategies. We studied 9 gait variables in 24 SCA6 (6 presymptomatic; 18 symptomatic) and 24 controls and correlated gait with clinical severity (presymptomatic and symptomatic). Discrete gait characteristics precede symptoms in SCA6 with significantly increased variability of step width and step time, whereas a more global gait deficit was evident in symptomatic individuals. Gait characteristics discriminated between presymptomatic and symptomatic individuals and were selectively associated with disease severity. This is the largest study to include a detailed characterization of gait in SCA6, including presymptomatic subjects, allowing changes across the disease spectrum to be compared. Selective gait disturbance is already present in SCA6 before clinical symptoms appear and gait characteristics are also sensitive to disease progression. Early gait disturbance likely reflects primary pathology distinct from secondary changes. These findings open the opportunity for early evaluation and sensitive measures of therapeutic efficacy using instrumented gait analysis which may have broader relevance for all degenerative ataxias. © 2013 Movement Disorder Society.
Wang, Li; Yi, Yanhui; Wu, Chunfei; Guo, Hongchen; Tu, Xin
2017-10-23
The conversion of CO 2 with CH 4 into liquid fuels and chemicals in a single-step catalytic process that bypasses the production of syngas remains a challenge. In this study, liquid fuels and chemicals (e.g., acetic acid, methanol, ethanol, and formaldehyde) were synthesized in a one-step process from CO 2 and CH 4 at room temperature (30 °C) and atmospheric pressure for the first time by using a novel plasma reactor with a water electrode. The total selectivity to oxygenates was approximately 50-60 %, with acetic acid being the major component at 40.2 % selectivity, the highest value reported for acetic acid thus far. Interestingly, the direct plasma synthesis of acetic acid from CH 4 and CO 2 is an ideal reaction with 100 % atom economy, but it is almost impossible by thermal catalysis owing to the significant thermodynamic barrier. The combination of plasma and catalyst in this process shows great potential for manipulating the distribution of liquid chemical products in a given process. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
Electronic-carrier-controlled photochemical etching process in semiconductor device fabrication
Ashby, C.I.H.; Myers, D.R.; Vook, F.L.
1988-06-16
An electronic-carrier-controlled photochemical etching process for carrying out patterning and selective removing of material in semiconductor device fabrication includes the steps of selective ion implanting, photochemical dry etching, and thermal annealing, in that order. In the selective ion implanting step, regions of the semiconductor material in a desired pattern are damaged and the remainder of the regions of the material not implanted are left undamaged. The rate of recombination of electrons and holes is increased in the damaged regions of the pattern compared to undamaged regions. In the photochemical dry etching step which follows ion implanting step, the material in the undamaged regions of the semiconductor are removed substantially faster than in the damaged regions representing the pattern, leaving the ion-implanted, damaged regions as raised surface structures on the semiconductor material. After completion of photochemical dry etching step, the thermal annealing step is used to restore the electrical conductivity of the damaged regions of the semiconductor material.
Electronic-carrier-controlled photochemical etching process in semiconductor device fabrication
Ashby, Carol I. H.; Myers, David R.; Vook, Frederick L.
1989-01-01
An electronic-carrier-controlled photochemical etching process for carrying out patterning and selective removing of material in semiconductor device fabrication includes the steps of selective ion implanting, photochemical dry etching, and thermal annealing, in that order. In the selective ion implanting step, regions of the semiconductor material in a desired pattern are damaged and the remainder of the regions of the material not implanted are left undamaged. The rate of recombination of electrons and holes is increased in the damaged regions of the pattern compared to undamaged regions. In the photochemical dry etching step which follows ion implanting step, the material in the undamaged regions of the semiconductor are removed substantially faster than in the damaged regions representing the pattern, leaving the ion-implanted, damaged regions as raised surface structures on the semiconductor material. After completion of photochemical dry etching step, the thermal annealing step is used to restore the electrical conductivity of the damaged regions of the semiconductor material.
Shaheen, Nusrat; Lu, Yanzhen; Geng, Ping; Shao, Qian; Wei, Yun
2017-03-01
Two-step high speed countercurrent chromatography method, following normal phase and elution-extrusion mode of operation by using selected solvent systems, was introduced for phenolic compounds separation. Phenolic compounds including gallic acid, ethyl gallate, ethyl digallate and ellagic acid were separated from the ethanol extract of mango (Mangifera indica L.) flowers for the first time. In the first step, gallic acid of 3.7mg and ethyl gallate of 3.9mg with the purities of 98.87% and 99.55%, respectively, were isolated by using hexane-ethylacetate-methanol-water (4:6:4:6, v/v) in normal phase high speed countercurrent chromatography from 200mg of crude extract, while ethyl digallate and ellagic acid were collected in the form of mixture fraction. In the second step, further purification of the mixture was carried out with the help of another selected solvent system of dichloromethane-methanol-water (4:3:2, v/v) following elusion-extrusion mode of operation. Ethyl digallate of 3.8mg and ellagic acid of 5.7mg were separated well with high purities of 98.68% and 99.71%, respectively. The separated phenolic compounds were identified and confirmed by HPLC, UPLC-QTOF/ESI-MS, 1 H and 13 C NMR spectrometric analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
Selective thermal transformation of old computer printed circuit boards to Cu-Sn based alloy.
Shokri, Ali; Pahlevani, Farshid; Cole, Ivan; Sahajwalla, Veena
2017-09-01
This study investigates, verifies and determines the optimal parameters for the selective thermal transformation of problematic electronic waste (e-waste) to produce value-added copper-tin (Cu-Sn) based alloys; thereby demonstrating a novel new pathway for the cost-effective recovery of resources from one of the world's fastest growing and most challenging waste streams. Using outdated computer printed circuit boards (PCBs), a ubiquitous component of e-waste, we investigated transformations across a range of temperatures and time frames. Results indicate a two-step heat treatment process, using a low temperature step followed by a high temperature step, can be used to produce and separate off, first, a lead (Pb) based alloy and, subsequently, a Cu-Sn based alloy. We also found a single-step heat treatment process at a moderate temperature of 900 °C can be used to directly transform old PCBs to produce a Cu-Sn based alloy, while capturing the Pb and antimony (Sb) as alloying elements to prevent the emission of these low melting point elements. These results demonstrate old computer PCBs, large volumes of which are already within global waste stockpiles, can be considered a potential source of value-added metal alloys, opening up a new opportunity for utilizing e-waste to produce metal alloys in local micro-factories. Copyright © 2017 Elsevier Ltd. All rights reserved.
Switchable Chiral Selection of Aspartic Acids by Dynamic States of Brushite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Wenge; Pan, Haihua; Zhang, Zhisen
Here, we show the chiral recognition and separation of aspartic acid (Asp) enantiomers by achiral brushite due to the asymmetries of their dynamical steps in its nonequilibrium states. Growing brushite has a higher adsorption affinity to d-Asp, while l-Asp is predominant on the dissolving brushite surface. Microstructural characterization reveals that chiral selection is mainly attributed to brushite [101] steps, which exhibit two different configurations during crystal growth and dissolution, respectively, with each preferring a distinct enantiomer due to this asymmetry. Because these transition step configurations have different stabilities, they subsequently result in asymmetric adsorption. Furthermore, by varying free energy barriersmore » through solution thermodynamic driving force (i.e., supersaturation), the dominant nonequilibrium intermediate states can be switched and chiral selection regulated. This finding highlights that the dynamic steps can be vital for chiral selection, which may provide a potential pathway for chirality generation through the dynamic nature.« less
Switchable Chiral Selection of Aspartic Acids by Dynamic States of Brushite
Jiang, Wenge; Pan, Haihua; Zhang, Zhisen; ...
2017-06-15
Here, we show the chiral recognition and separation of aspartic acid (Asp) enantiomers by achiral brushite due to the asymmetries of their dynamical steps in its nonequilibrium states. Growing brushite has a higher adsorption affinity to d-Asp, while l-Asp is predominant on the dissolving brushite surface. Microstructural characterization reveals that chiral selection is mainly attributed to brushite [101] steps, which exhibit two different configurations during crystal growth and dissolution, respectively, with each preferring a distinct enantiomer due to this asymmetry. Because these transition step configurations have different stabilities, they subsequently result in asymmetric adsorption. Furthermore, by varying free energy barriersmore » through solution thermodynamic driving force (i.e., supersaturation), the dominant nonequilibrium intermediate states can be switched and chiral selection regulated. This finding highlights that the dynamic steps can be vital for chiral selection, which may provide a potential pathway for chirality generation through the dynamic nature.« less
Quantization selection in the high-throughput H.264/AVC encoder based on the RD
NASA Astrophysics Data System (ADS)
Pastuszak, Grzegorz
2013-10-01
In the hardware video encoder, the quantization is responsible for quality losses. On the other hand, it allows the reduction of bit rates to the target one. If the mode selection is based on the rate-distortion criterion, the quantization can also be adjusted to obtain better compression efficiency. Particularly, the use of Lagrangian function with a given multiplier enables the encoder to select the most suitable quantization step determined by the quantization parameter QP. Moreover, the quantization offset added before discarding the fraction value after quantization can be adjusted. In order to select the best quantization parameter and offset in real time, the HD/SD encoder should be implemented in the hardware. In particular, the hardware architecture should embed the transformation and quantization modules able to process the same residuals many times. In this work, such an architecture is used. Experimental results show what improvements in terms of compression efficiency are achievable for Intra coding.
NASA Technical Reports Server (NTRS)
Getty, Stephanie A.; Brinckerhoff, William B.; Li, Xiang; Elsila, Jamie; Cornish, Timothy; Ecelberger, Scott; Wu, Qinghao; Zare, Richard
2014-01-01
Two-step laser desorption mass spectrometry is a well suited technique to the analysis of high priority classes of organics, such as polycyclic aromatic hydrocarbons, present in complex samples. The use of decoupled desorption and ionization laser pulses allows for sensitive and selective detection of structurally intact organic species. We have recently demonstrated the implementation of this advancement in laser mass spectrometry in a compact, flight-compatible instrument that could feasibly be the centerpiece of an analytical science payload as part of a future spaceflight mission to a small body or icy moon.
Shirasu, Naoto; Kuroki, Masahide
2014-01-01
We developed a time- and cost-effective multiplex allele-specific polymerase chain reaction (AS-PCR) method based on the two-step PCR thermal cycles for genotyping single-nucleotide polymorphisms in three alcoholism-related genes: alcohol dehydrogenase 1B, aldehyde dehydrogenase 2 and μ-opioid receptor. Applying MightyAmp(®) DNA polymerase with optimized AS-primers and PCR conditions enabled us to achieve effective and selective amplification of the target alleles from alkaline lysates of a human hair root, and simultaneously to determine the genotypes within less than 1.5 h using minimal lab equipment.
Using step and path selection functions for estimating resistance to movement: Pumas as a case study
Katherine A. Zeller; Kevin McGarigal; Samuel A. Cushman; Paul Beier; T. Winston Vickers; Walter M. Boyce
2015-01-01
GPS telemetry collars and their ability to acquire accurate and consistently frequent locations have increased the use of step selection functions (SSFs) and path selection functions (PathSFs) for studying animal movement and estimating resistance. However, previously published SSFs and PathSFs often do not accommodate multiple scales or multiscale modeling....
Does your equipment maintenance management program measure up?
Deinstadt, Deborah C
2003-01-01
Identifying a clear maintenance philosophy is the first step toward choosing the right program for your healthcare organization. The second step is gaining a clear understanding of how proposed savings and improvements will be delivered. The third and last step is requiring that the proposed company or manager have specific tools in place for measuring and analyzing program performance. There are three primary philosophies underlying current equipment management options. These include risk-transfer philosophy (e.g., maintenance insurance, service contracts, multi-vendor and outsource programs), asset management philosophy (e.g., programs delivering a management system based on managed time-and-materials), and internal management (in-house managed programs). The last step in selecting the right program is insisting that proper performance measurements be built into the proposed management program. A well-managed program provides results in three general areas: financial outcomes, operational improvements and process improvements. Financial outcomes are the easiest to measure. Operational and process improvements are more challenging to assess but equally important to the program's overall success. To accurately identify results in these three areas, the overall management program should measure the following eight separate components: procedures and support for department staff; equipment inventory, benchmark costs, and budget guidelines; experienced equipment support team; objective, independent analysis of maintenance events; repair documentation and reporting; vendor relations; equipment acquisition analysis; and recommendations for improvement. Do everything you reasonably can to assure that the selected company can work side-by-side with you, providing objective, measurable advice that is ultimately in your best interest. You will then know that you have been thorough in your marketplace selection and can confidently move into implementation, expecting tangible and successful results.
DOT National Transportation Integrated Search
2007-10-01
The goal of Selective Traffic Enforcement Programs (STEPs) is to induce motorists to drive safely. To achieve this goal, the STEP model combines intensive enforcement of a specific traffic safety law with extensive communication, education, and outre...
McGaghie, William C; Cohen, Elaine R; Wayne, Diane B
2011-01-01
United States Medical Licensing Examination (USMLE) scores are frequently used by residency program directors when evaluating applicants. The objectives of this report are to study the chain of reasoning and evidence that underlies the use of USMLE Step 1 and 2 scores for postgraduate medical resident selection decisions and to evaluate the validity argument about the utility of USMLE scores for this purpose. This is a research synthesis using the critical review approach. The study first describes the chain of reasoning that underlies a validity argument about using test scores for a specific purpose. It continues by summarizing correlations of USMLE Step 1 and 2 scores and reliable measures of clinical skill acquisition drawn from nine studies involving 393 medical learners from 2005 to 2010. The integrity of the validity argument about using USMLE Step 1 and 2 scores for postgraduate residency selection decisions is tested. The research synthesis shows that USMLE Step 1 and 2 scores are not correlated with reliable measures of medical students', residents', and fellows' clinical skill acquisition. The validity argument about using USMLE Step 1 and 2 scores for postgraduate residency selection decisions is neither structured, coherent, nor evidence based. The USMLE score validity argument breaks down on grounds of extrapolation and decision/interpretation because the scores are not associated with measures of clinical skill acquisition among advanced medical students, residents, and subspecialty fellows. Continued use of USMLE Step 1 and 2 scores for postgraduate medical residency selection decisions is discouraged.
Poonam Khanijo Ahluwalia; Nema, Arvind K
2011-07-01
Selection of optimum locations for locating new facilities and decision regarding capacities at the proposed facilities is a major concern for municipal authorities/managers. The decision as to whether a single facility is preferred over multiple facilities of smaller capacities would vary with varying priorities to cost and associated risks such as environmental or health risk or risk perceived by the society. Currently management of waste streams such as that of computer waste is being done using rudimentary practices and is flourishing as an unorganized sector, mainly as backyard workshops in many cities of developing nations such as India. Uncertainty in the quantification of computer waste generation is another major concern due to the informal setup of present computer waste management scenario. Hence, there is a need to simultaneously address uncertainty in waste generation quantities while analyzing the tradeoffs between cost and associated risks. The present study aimed to address the above-mentioned issues in a multi-time-step, multi-objective decision-support model, which can address multiple objectives of cost, environmental risk, socially perceived risk and health risk, while selecting the optimum configuration of existing and proposed facilities (location and capacities).
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng
2006-12-01
An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and squared error variation, denoted by ([InlineEquation not available: see fulltext.],[InlineEquation not available: see fulltext.]), into a forgetting factor[InlineEquation not available: see fulltext.]. For the real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using the fuzzy-inference-controlled step-size[InlineEquation not available: see fulltext.]. This receiver is capable of providing both fast convergence/tracking capability as well as small steady-state misadjustment as compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform, respectively, other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms at least 3 dB and 1.5 dB in bit-error-rate (BER) for multipath fading channels.
Dallum, Gregory E.; Pratt, Garth C.; Haugen, Peter C.; Romero, Carlos E.
2013-01-15
An ultra-wideband (UWB) dual impulse transmitter is made up of a trigger edge selection circuit actuated by a single trigger input pulse; a first step recovery diode (SRD) based pulser connected to the trigger edge selection circuit to generate a first impulse output; and a second step recovery diode (SRD) based pulser connected to the trigger edge selection circuit in parallel to the first pulser to generate a second impulse output having a selected delay from the first impulse output.
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid serious variance under-estimation by conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of cluster event times.
Re-Organizing Earth Observation Data Storage to Support Temporal Analysis of Big Data
NASA Technical Reports Server (NTRS)
Lynnes, Christopher
2017-01-01
The Earth Observing System Data and Information System archives many datasets that are critical to understanding long-term variations in Earth science properties. Thus, some of these are large, multi-decadal datasets. Yet the challenge in long time series analysis comes less from the sheer volume than the data organization, which is typically one (or a small number of) time steps per file. The overhead of opening and inventorying complex, API-driven data formats such as Hierarchical Data Format introduces a small latency at each time step, which nonetheless adds up for datasets with O(10^6) single-timestep files. Several approaches to reorganizing the data can mitigate this overhead by an order of magnitude: pre-aggregating data along the time axis (time-chunking); storing the data in a highly distributed file system; or storing data in distributed columnar databases. Storing a second copy of the data incurs extra costs, so some selection criteria must be employed, which would be driven by expected or actual usage by the end user community, balanced against the extra cost.
Re-organizing Earth Observation Data Storage to Support Temporal Analysis of Big Data
NASA Astrophysics Data System (ADS)
Lynnes, C.
2017-12-01
The Earth Observing System Data and Information System archives many datasets that are critical to understanding long-term variations in Earth science properties. Thus, some of these are large, multi-decadal datasets. Yet the challenge in long time series analysis comes less from the sheer volume than the data organization, which is typically one (or a small number of) time steps per file. The overhead of opening and inventorying complex, API-driven data formats such as Hierarchical Data Format introduces a small latency at each time step, which nonetheless adds up for datasets with O(10^6) single-timestep files. Several approaches to reorganizing the data can mitigate this overhead by an order of magnitude: pre-aggregating data along the time axis (time-chunking); storing the data in a highly distributed file system; or storing data in distributed columnar databases. Storing a second copy of the data incurs extra costs, so some selection criteria must be employed, which would be driven by expected or actual usage by the end user community, balanced against the extra cost.
Fuhrman, Susan I.; Redfern, Mark S.; Jennings, J. Richard; Perera, Subashan; Nebes, Robert D.; Furman, Joseph M.
2013-01-01
Postural dual-task studies have demonstrated effects of various executive function components on gait and postural control in older adults. The purpose of the study was to explore the role of inhibition during lateral step initiation. Forty older adults participated (range 70–94 yr). Subjects stepped to the left or right in response to congruous and incongruous visual cues that consisted of left and right arrows appearing on left or right sides of a monitor. The timing of postural adjustments was identified by inflection points in the vertical ground reaction forces (VGRF) measured separately under each foot. Step responses could be classified into preferred and nonpreferred step behavior based on the number of postural adjustments that were made. Delays in onset of the first postural adjustment (PA1) and liftoff (LO) of the step leg during preferred steps progressively increased among the simple, choice, congruous, and incongruous tasks, indicating interference in processing the relevant visuospatial cue. Incongruous cues induced subjects to make more postural adjustments than they typically would (i.e., nonpreferred steps), representing errors in selection of the appropriate motor program. During these nonpreferred steps, the onset of the PA1 was earlier than during the preferred steps, indicating a failure to inhibit an inappropriate initial postural adjustment. The functional consequence of the additional postural adjustments was a delay in the LO compared with steps in which they did not make an error. These results suggest that deficits in inhibitory function may detrimentally affect step decision processing, by delaying voluntary step responses. PMID:23114211
Omelyan, Igor; Kovalenko, Andriy
2015-04-14
We developed a generalized solvation force extrapolation (GSFE) approach to speed up multiple time step molecular dynamics (MTS-MD) of biomolecules steered with mean solvation forces obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model with the Kovalenko-Hirata closure). GSFE is based on a set of techniques including the non-Eckart-like transformation of coordinate space separately for each solute atom, extension of the force-coordinate pair basis set followed by selection of the best subset, balancing the normal equations by modified least-squares minimization of deviations, and incremental increase of outer time step in motion integration. Mean solvation forces acting on the biomolecule atoms in conformations at successive inner time steps are extrapolated using a relatively small number of best (closest) solute atomic coordinates and corresponding mean solvation forces obtained at previous outer time steps by converging the 3D-RISM-KH integral equations. The MTS-MD evolution steered with GSFE of 3D-RISM-KH mean solvation forces is efficiently stabilized with our optimized isokinetic Nosé-Hoover chain (OIN) thermostat. We validated the hybrid MTS-MD/OIN/GSFE/3D-RISM-KH integrator on solvated organic and biomolecules of different stiffness and complexity: asphaltene dimer in toluene solvent, hydrated alanine dipeptide, miniprotein 1L2Y, and protein G. The GSFE accuracy and the OIN efficiency allowed us to enlarge outer time steps up to huge values of 1-4 ps while accurately reproducing conformational properties. Quasidynamics steered with 3D-RISM-KH mean solvation forces achieves time scale compression of conformational changes coupled with solvent exchange, resulting in further significant acceleration of protein conformational sampling with respect to real time dynamics. Overall, this provided a 50- to 1000-fold effective speedup of conformational sampling for these systems, compared to conventional MD with explicit solvent. We have been able to fold the miniprotein from a fully denatured, extended state in about 60 ns of quasidynamics steered with 3D-RISM-KH mean solvation forces, compared to the average physical folding time of 4-9 μs observed in experiment.
NASA Astrophysics Data System (ADS)
Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.
Kim, Jaerok; Choi, Yoonseok
2014-01-01
BACKGROUND/OBJECTIVES Educational interventions targeted food selection perception, knowledge, attitude, and behavior. Education regarding irradiated food was intended to change food selection behavior specific to it. SUBJECTS AND METHODS There were 43 elementary students (35.0%), 45 middle school students (36.6%), and 35 high school students (28.5%). The first step was research design. Educational targets were selected and informed consent was obtained in step two. An initial survey was conducted as step three. Step four was a 45 minute-long theoretical educational intervention. Step five concluded with a survey and experiment on food selection behavior. RESULTS As a result of conducting a 45 minute-long education on the principles, actual state of usage, and pros and cons of irradiated food for elementary, middle, and high-school students in Korea, perception, knowledge, attitude, and behavior regarding the irradiated food was significantly higher after the education than before the education (P < 0.000). CONCLUSIONS The behavior of irradiated food selection shows high correlation with all variables of perception, knowledge, and attitude, and it is necessary to provide information of each level of change in perception, knowledge, and attitude in order to derive proper behavior change, which is the ultimate goal of the education. PMID:25324942
A Heckman selection model for the safety analysis of signalized intersections
Wong, S. C.; Zhu, Feng; Pei, Xin; Huang, Helai; Liu, Youjun
2017-01-01
Purpose The objective of this paper is to provide a new method for estimating crash rate and severity simultaneously. Methods This study explores a Heckman selection model of the crash rate and severity simultaneously at different levels and a two-step procedure is used to investigate the crash rate and severity levels. The first step uses a probit regression model to determine the sample selection process, and the second step develops a multiple regression model to simultaneously evaluate the crash rate and severity for slight injury/kill or serious injury (KSI), respectively. The model uses 555 observations from 262 signalized intersections in the Hong Kong metropolitan area, integrated with information on the traffic flow, geometric road design, road environment, traffic control and any crashes that occurred during two years. Results The results of the proposed two-step Heckman selection model illustrate the necessity of different crash rates for different crash severity levels. Conclusions A comparison with the existing approaches suggests that the Heckman selection model offers an efficient and convenient alternative method for evaluating the safety performance at signalized intersections. PMID:28732050
Bohre, Ashish; Saha, Basudeb; Abu-Omar, Mahdi M
2015-12-07
Design and synthesis of effective heterogeneous catalysts for the conversion of biomass intermediates into long chain hydrocarbon precursors and their subsequent deoxygenation to hydrocarbons is a viable strategy for upgrading lignocellulose into distillate range drop-in biofuels. Herein, we report a two-step process for upgrading 5-hydroxymethylfurfural (HMF) to C9 and C11 fuels with high yield and selectivity. The first step involves aldol condensation of HMF and acetone with a water tolerant solid base catalyst, zirconium carbonate (Zr(CO3 )x ), which gave 92 % C9 -aldol product with high selectivity at nearly 100 % HMF conversion. The as-synthesised Zr(CO3 )x was analysed by several analytical methods for elucidating its structural properties. Recyclability studies of Zr(CO3 )x revealed a negligible loss of its activity after five consecutive cycles over 120 h of operation. Isolated aldol product from the first step was hydrodeoxygenated with a bifunctional Pd/Zeolite-β catalyst in ethanol, which showed quantitative conversion of the aldol product to n-nonane and 1-ethoxynonane with 40 and 56 % selectivity, respectively. 1-Ethoxynonane, a low oxygenate diesel range fuel, which we report for the first time in this paper, is believed to form through etherification of the hydroxymethyl group of the aldol product with ethanol followed by opening of the furan ring and hydrodeoxygenation of the ether intermediate. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Vanderhaeghe, F; Smolders, A J P; Roelofs, J G M; Hoffmann, M
2012-03-01
Selecting an appropriate variable subset in linear multivariate methods is an important methodological issue for ecologists. Interest often exists in obtaining general predictive capacity or in finding causal inferences from predictor variables. Because of a lack of solid knowledge on a studied phenomenon, scientists explore predictor variables in order to find the most meaningful (i.e. discriminating) ones. As an example, we modelled the response of the amphibious softwater plant Eleocharis multicaulis using canonical discriminant function analysis. We asked how variables can be selected through comparison of several methods: univariate Pearson chi-square screening, principal components analysis (PCA) and step-wise analysis, as well as combinations of some methods. We expected PCA to perform best. The selected methods were evaluated through fit and stability of the resulting discriminant functions and through correlations between these functions and the predictor variables. The chi-square subset, at P < 0.05, followed by a step-wise sub-selection, gave the best results. In contrast to expectations, PCA performed poorly, as so did step-wise analysis. The different chi-square subset methods all yielded ecologically meaningful variables, while probable noise variables were also selected by PCA and step-wise analysis. We advise against the simple use of PCA or step-wise discriminant analysis to obtain an ecologically meaningful variable subset; the former because it does not take into account the response variable, the latter because noise variables are likely to be selected. We suggest that univariate screening techniques are a worthwhile alternative for variable selection in ecology. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
Fabrication of Large Bulk High Temperature Superconducting Articles
NASA Technical Reports Server (NTRS)
Koczor, Ronald (Inventor); Hiser, Robert A. (Inventor)
2003-01-01
A method of fabricating large bulk high temperature superconducting articles which comprises the steps of selecting predetermined sizes of crystalline superconducting materials and mixing these specific sizes of particles into a homogeneous mixture which is then poured into a die. The die is placed in a press and pressurized to predetermined pressure for a predetermined time and is heat treated in the furnace at predetermined temperatures for a predetermined time. The article is left in the furnace to soak at predetermined temperatures for a predetermined period of time and is oxygenated by an oxygen source during the soaking period.
Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration
NASA Technical Reports Server (NTRS)
Scott, James R.; Martini, Michael C.
2011-01-01
A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge- Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include J2 spherical harmonic for central body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time. The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables and sets up initial conditions and integrates; (2) a routine that calculates system reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
Altstein, L.; Li, G.
2012-01-01
Summary This paper studies a semiparametric accelerated failure time mixture model for estimation of a biological treatment effect on a latent subgroup of interest with a time-to-event outcome in randomized clinical trials. Latency is induced because membership is observable in one arm of the trial and unidentified in the other. This method is useful in randomized clinical trials with all-or-none noncompliance when patients in the control arm have no access to active treatment and in, for example, oncology trials when a biopsy used to identify the latent subgroup is performed only on subjects randomized to active treatment. We derive a computational method to estimate model parameters by iterating between an expectation step and a weighted Buckley-James optimization step. The bootstrap method is used for variance estimation, and the performance of our method is corroborated in simulation. We illustrate our method through an analysis of a multicenter selective lymphadenectomy trial for melanoma. PMID:23383608
Preparation of microcapsules by complex coacervation of gum Arabic and chitosan.
Butstraen, Chloé; Salaün, Fabien
2014-01-01
Gum Arabic-chitosan microcapsules containing a commercially available blend of triglycerides (Miglyol 812 N) as core phase were synthesized by complex coacervation. This study was conducted to clarify the influence of different parameters on the encapsulation process, i.e. during the emulsion formation steps and during the shell formation, using conductometry, zeta potential, surface and interface tension measurement and Fourier-transform infrared spectroscopy. By carefully analyzing the influencing factors including phase volume ratio, stirring rate and time, pH, reaction time, biopolymer ratio and crosslinking effect, the optimum synthetic conditions were found out. For the emulsion step, the optimum phase volume ratio chosen was 0.10 and an emulsion time of 15 min at 11,000 rpm was selected. The results also indicated that the optimum formation of these complexes appears at a pH value of 3.6 and a weight ratio of chitosan to gum Arabic mixtures of 0.25. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Huo, Lin; Cheng, Xing-Hua; Yang, Tao
2015-05-01
This paper presents a study of aerothermoelastic response of a C/SiC panel, which is a primary structure for ceramic matrix composite shingle thermal protection system for hypersonic vehicles. It is based on a three dimensional thermal protection shingle panel on a quasi-waverider vehicle model. Firstly, the Thin Shock Layer and piston theory are adopted to compute the aerodynamic pressure of rigid body and deformable body, and a series of engineering methods are used to compute the aerodynamic heating. Then an aerothermoelastic loosely-coupled time marching strategy and self-adapting aerodynamic heating time step are developed to analyze the aerothermoelastic response of the panel, with an aerodynamic heating and temperature field coupling parameter selection method being adopted to increase the efficiency. Finally, a few revealing conclusions are reached by analyzing how coupling at different degrees influences the quasi-static aerothermoelastic response of the panel and how aerodynamic pressure of rigid body time step influences the quasi-static aerothermoelastic response on a glide trajectory.
Moghtader, Farzaneh; Tomak, Aysel; Zareie, Hadi M; Piskin, Erhan
2018-03-27
This study attemps to develop bacterial detection strategies using bacteriophages and gold nanorods (GNRs) by Raman spectral analysis. Escherichia coli was selected as the target and its specific phage was used as the bioprobe. Target bacteria and phages were propagated/purified by traditional techniques. GNRs were synthesized by using hexadecyltrimethyl ammonium bromide (CTAB) as stabilizer. A two-step detection strategy was applied: Firstly, the target bacteria were interacted with GNRs in suspensions, and then they were dropped onto silica substrates for detection. It was possible to obtain clear surface-enchanced Raman spectroscopy (SERS) peaks of the target bacteria, even without using phages. In the second step, the phage nanoemulsions were droped onto the bacterial-GNRs complexes on those surfaces and time-dependent changes in the Raman spectra were monitored at different time intervals upto 40 min. These results demonstrated that how one can apply phages with plasmonic nanoparticles for detection of pathogenic bacteria very effectively in a quite simple test.
Validation of a One-Step Method for Extracting Fatty Acids from Salmon, Chicken and Beef Samples.
Zhang, Zhichao; Richardson, Christine E; Hennebelle, Marie; Taha, Ameer Y
2017-10-01
Fatty acid extraction methods are time-consuming and expensive because they involve multiple steps and copious amounts of extraction solvents. In an effort to streamline the fatty acid extraction process, this study compared the standard Folch lipid extraction method to a one-step method involving a column that selectively elutes the lipid phase. The methods were tested on raw beef, salmon, and chicken. Compared to the standard Folch method, the one-step extraction process generally yielded statistically insignificant differences in chicken and salmon fatty acid concentrations, percent composition and weight percent. Initial testing showed that beef stearic, oleic and total fatty acid concentrations were significantly lower by 9-11% with the one-step method as compared to the Folch method, but retesting on a different batch of samples showed a significant 4-8% increase in several omega-3 and omega-6 fatty acid concentrations with the one-step method relative to the Folch. Overall, the findings reflect the utility of a one-step extraction method for routine and rapid monitoring of fatty acids in chicken and salmon. Inconsistencies in beef concentrations, although minor (within 11%), may be due to matrix effects. A one-step fatty acid extraction method has broad applications for rapidly and routinely monitoring fatty acids in the food supply and formulating controlled dietary interventions. © 2017 Institute of Food Technologists®.
Defense Acquisitions: Assessments of Selected Weapon Programs
2011-03-01
Frequency (UHF) Follow-On ( UFO ) satellite system currently in operation and provide interoperability with legacy terminals. MUOS consists of a...delivery of MUOS capabilities is time-critical due to the operational failures of two UFO satellites. The MUOS program has taken several steps to...launch increased due to the unexpected failures of two UFO satellites. Based on the current health of on-orbit satellites, UHF communication
Microfluidics for mammalian embryo culture and selection: where do we stand now?
Le Gac, Séverine; Nordhoff, Verena
2017-04-01
The optimization of in-vitro culture conditions and the selection of the embryo(s) with the highest developmental competence are essential components in an ART program. Culture conditions are manifold and they underlie not always evidence-based research but also trends entering the IVF laboratory. At the moment, the idea of using sequential media according to the embryo requirements has been given up in favor of the use of single step media in an uninterrupted manner due to practical issues such as time-lapse incubators. The selection of the best embryo is performed using morphological and, recently, also morphokinetic criteria. In this review, we aim to demonstrate how the ART field may benefit from the use of microfluidic technology, with a particular focus on specific steps, namely the embryo in-vitro culture, embryo scoring and selection, and embryo cryopreservation. We first provide an overview of microfluidic and microfabricated devices, which have been developed for embryo culture, characterization of pre-implantation embryos (or in some instances a combination of both steps) and embryo cryopreservation. Building upon these existing platforms and the various capabilities offered by microfluidics, we discuss how this technology could provide integrated and automated systems, not only for real-time and multi-parametric monitoring of embryo development, but also for performing the entire ART procedure. Although microfluidic technology has been around for a couple of decades already, it has still not made its way into the clinics and IVF laboratories, which we discuss in terms of: (i) a lack of user-friendliness and automation of the microfluidic platforms, (ii) a lack of robust and convincing validation using human embryos and (iii) some psychological threshold for embryologists and practitioners to test and use microfluidic technology. In spite of these limitations, we envision that microfluidics is likely to have a significant impact in the field of ART, for fundamental research in the near future and, in the longer term, for providing a novel generation of clinical tools. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved.For Permissions, please email: journals.permissions@oup.com.
Tsang, William W N; Lam, Nazca K Y; Lau, Kit N L; Leung, Harry C H; Tsang, Crystal M S; Lu, Xi
2013-12-01
To investigate the effects of aging on postural control and cognitive performance in single- and dual-tasking. A cross-sectional comparative design was conducted in a university motion analysis laboratory. Young adults (n = 30; age 21.9 ± 2.4 years) and older adults (n = 30; age 71.9 ± 6.4 years) were recruited. Postural control after stepping down was measured with and without performing a concurrent auditory response task. Measurement included: (1) reaction time and (2) error rate in performing the cognitive task; (3) total sway path and (4) total sway area after stepping down. Our findings showed that the older adults had significantly longer reaction times and higher error rates than the younger subjects in both the single-tasking and dual-tasking conditions. The older adults had significantly longer reaction times and higher error rates when dual-tasking compared with single-tasking, but the younger adults did not. The older adults demonstrated significantly less total sway path, but larger total sway area in single-leg stance after stepping down than the young adults. The older adults showed no significant change in total sway path and area between the dual-tasking and when compared with single-tasking conditions, while the younger adults showed significant decreases in sway. Older adults prioritize postural control by sacrificing cognitive performance when faced with dual-tasking.
Strategy Guideline. Proper Water Heater Selection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoeschele, M.; Springer, D.; German, A.
2015-04-09
This Strategy Guideline on proper water heater selection was developed by the Building America team Alliance for Residential Building Innovation to provide step-by-step procedures for evaluating preferred cost-effective options for energy efficient water heater alternatives based on local utility rates, climate, and anticipated loads.
Strategy Guideline: Proper Water Heater Selection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoeschele, M.; Springer, D.; German, A.
2015-04-01
This Strategy Guideline on proper water heater selection was developed by the Building America team Alliance for Residential Building Innovation to provide step-by-step procedures for evaluating preferred cost-effective options for energy efficient water heater alternatives based on local utility rates, climate, and anticipated loads.
Selective catalytic two-step process for ethylene glycol from carbon monoxide
Dong, Kaiwu; Elangovan, Saravanakumar; Sang, Rui; Spannenberg, Anke; Jackstell, Ralf; Junge, Kathrin; Li, Yuehui; Beller, Matthias
2016-01-01
Upgrading C1 chemicals (for example, CO, CO/H2, MeOH and CO2) with C–C bond formation is essential for the synthesis of bulk chemicals. In general, these industrially important processes (for example, Fischer Tropsch) proceed at drastic reaction conditions (>250 °C; high pressure) and suffer from low selectivity, which makes high capital investment necessary and requires additional purifications. Here, a different strategy for the preparation of ethylene glycol (EG) via initial oxidative coupling and subsequent reduction is presented. Separating coupling and reduction steps allows for a completely selective formation of EG (99%) from CO. This two-step catalytic procedure makes use of a Pd-catalysed oxycarbonylation of amines to oxamides at room temperature (RT) and subsequent Ru- or Fe-catalysed hydrogenation to EG. Notably, in the first step the required amines can be efficiently reused. The presented stepwise oxamide-mediated coupling provides the basis for a new strategy for selective upgrading of C1 chemicals. PMID:27377550
Júnez-Ferreira, H E; Herrera, G S
2013-04-01
This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The selection of the space-time monitoring points is done using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. A sequential optimization method that selects the space-time point that minimizes a function of the variance, in each step, is used. We demonstrate the methodology applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer with the objective of selecting from a set of monitoring positions and times, those that minimize the spatiotemporal redundancy. The database for the geostatistical space-time analysis corresponds to information of 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that from the existing monitoring program that consists of 418 space-time monitoring points, only 178 are not redundant. The implied reduction of monitoring costs was possible because the proposed method is successful in propagating information in space and time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu,Y.; Li, M.; Bansil, R.
2007-01-01
We examined the kinetics of the transformation from the lamellar (LAM) to the hexagonally packed cylinder (HEX) phase for the triblock copolymer, polystyrene-b-poly (ethylene-co-butylene)-b-polystyrene (SEBS) in dibutyl phthalate (DBP), a selective solvent for polystyrene (PS), using time-resolved small-angle X-ray scattering (SAXS). We observe the HEX phase with the EB block in the cores at a lower temperature than that observed for the LAM phase due to the solvent selectivity of DBP for the PS block. Analysis of the SAXS data for a deep temperature quench well below the LAM-HEX transition shows that the transformation occurs in a one-step process. Wemore » calculate the scattering using a geometric model of rippled layers with adjacent layers totally out of phase during the transformation. The agreement of the calculations with the data further supports the continuous transformation mechanism from the LAM to HEX for a deep quench. In contrast, for a shallow quench close to the order-order transition, we find agreement with a two-step nucleation and growth mechanism.« less
Edwards, Alyn C; Mocilac, Pavle; Geist, Andreas; Harwood, Laurence M; Sharrad, Clint A; Burton, Neil A; Whitehead, Roger C; Denecke, Melissa A
2017-05-02
The first hydrophilic, 1,10-phenanthroline derived ligands consisting of only C, H, O and N atoms for the selective extraction of Am(iii) from spent nuclear fuel are reported herein. One of these 2,9-bis-triazolyl-1,10-phenanthroline (BTrzPhen) ligands combined with a non-selective extracting agent, was found to exhibit process-suitable selectivity for Am(iii) over Eu(iii) and Cm(iii), providing a clear step forward.
Ito, Yuji
2017-01-01
As an alternative to hybridoma technology, the antibody phage library system can also be used for antibody selection. This method enables the isolation of antigen-specific binders through an in vitro selection process known as biopanning. While it has several advantages, such as an avoidance of animal immunization, the phage cloning and screening steps of biopanning are time-consuming and problematic. Here, we introduce a novel biopanning method combined with high-throughput sequencing (HTS) using a next-generation sequencer (NGS) to save time and effort in antibody selection, and to increase the diversity of acquired antibody sequences. Biopannings against a target antigen were performed using a human single chain Fv (scFv) antibody phage library. VH genes in pooled phages at each round of biopanning were analyzed by HTS on a NGS. The obtained data were trimmed, merged, and translated into amino acid sequences. The frequencies (%) of the respective VH sequences at each biopanning step were calculated, and the amplification factor (change of frequency through biopanning) was obtained to estimate the potential for antigen binding. A phylogenetic tree was drawn using the top 50 VH sequences with high amplification factors. Representative VH sequences forming the cluster were then picked up and used to reconstruct scFv genes harboring these VHs. Their derived scFv-Fc fusion proteins showed clear antigen binding activity. These results indicate that a combination of biopanning and HTS enables the rapid and comprehensive identification of specific binders from antibody phage libraries.
An innovative cascade system for simultaneous separation of multiple cell types.
Pierzchalski, Arkadiusz; Mittag, Anja; Bocsi, Jozsef; Tarnok, Attila
2013-01-01
Isolation of different cell types from one sample by fluorescence activated cell sorting is standard but expensive and time consuming. Magnetic separation is more cost effective and faster by but requires substantial effort. An innovative pluriBead-cascade cell isolation system (pluriSelect GmbH, Leipzig, Germany) simultaneously separates two or more different cell types. It is based on antibody-mediated binding of cells to beads of different size and their isolation with sieves of different mesh-size. For the first time, we validated the pluriSelect system for simultaneous separation of CD4+- and CD8+-cells from human EDTA-blood samples. Results were compared with those obtained by magnetic activated cell sorting (MACS; two steps -first isolation of CD4+, then restaining of the residual cell suspension with anti-human CD8+ MACS antibody followed by the second isolation). pluriSelect separation was done in whole blood, MACS separation on density gradient isolated mononuclear cells. Isolated and residual cells were immunophenotyped by 7-color 9-marker panel (CD3; CD16/56; CD4; CD8; CD14; CD19; CD45; HLA-DR) using flow cytometry. Cell count, purity, yield and viability (7-AAD exclusion) were determined. There were no significant differences between both systems regarding purity (MACS (median[range]: 92.4% [91.5-94.9] vs. pluriSelect 95% [94.9-96.8])) of CD4+ cells, however CD8+ isolation showed lower purity by MACS (74.8% [67.6-77.9], pluriSelect 89.9% [89.0-95.7]). Yield was not significantly different for CD4 (MACS 58.5% [54.1-67.5], pluriSelect 67.9% [56.8-69.8]) and for CD8 (MACS 57.2% [41.3-72.0], pluriSelect 67.2% [60.0-78.5]). Viability was slightly higher with MACS for CD4+ (98.4% [97.8-99.0], pluriSelect 94.1% [92.1-95.2]) and for CD8+-cells (98.8% [98.3-99.1], pluriSelect 86.7% [84.2-89.9]). pluriSelect separation was substantially faster than MACS (1h vs. 2.5h) and no pre-enrichment steps were necessary. In conclusion, pluriSelect is a fast, simple and gentle system for efficient simultaneous separation of two and more cell subpopulation directly from whole blood and provides a simple alternative to magnetic separation.
NASA Astrophysics Data System (ADS)
Jothiprakash, V.; Magar, R. B.
2012-07-01
SummaryIn this study, artificial intelligent (AI) techniques such as artificial neural network (ANN), Adaptive neuro-fuzzy inference system (ANFIS) and Linear genetic programming (LGP) are used to predict daily and hourly multi-time-step ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, intermittent Koyna river watershed in Maharashtra, India is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow various types of time-series, cause-effect and combined models are developed with lumped and distributed input data. Further, the model performance was evaluated using various performance criteria. From the results, it is found that the performances of LGP models are found to be superior to ANN and ANFIS models especially in predicting the peak inflows for both daily and hourly time-step. A detailed comparison of the overall performance indicated that the combined input model (combination of rainfall and inflow) performed better in both lumped and distributed input data modelling. It was observed that the lumped input data models performed slightly better because; apart from reducing the noise in the data, the better techniques and their training approach, appropriate selection of network architecture, required inputs, and also training-testing ratios of the data set. The slight poor performance of distributed data is due to large variations and lesser number of observed values.
Legerstee, Jeroen S; Tulen, Joke H M; Dierckx, Bram; Treffers, Philip D A; Verhulst, Frank C; Utens, Elisabeth M W J
2010-02-01
This study examined whether treatment response to stepped-care cognitive-behavioural treatment (CBT) is associated with changes in threat-related selective attention and its specific components in a large clinical sample of anxiety-disordered children. Ninety-one children with an anxiety disorder were included in the present study. Children received a standardized stepped-care CBT. Three treatment response groups were distinguished: initial responders (anxiety disorder free after phase one: child-focused CBT), secondary responders (anxiety disorder free after phase two: child-parent-focused CBT), and treatment non-responders. Treatment response was determined using a semi-structured clinical interview. Children performed a pictorial dot-probe task before and after stepped-care CBT (i.e., before phase one and after phase two CBT). Changes in selective attention to severely threatening pictures, but not to mildly threatening pictures, were significantly associated with treatment success. At pre-treatment assessment, initial responders selectively attended away from severely threatening pictures, whereas secondary responders selectively attended toward severely threatening pictures. After stepped-care CBT, initial and secondary responders did not show any selectivity in the attentional processing of severely threatening pictures. Treatment non-responders did not show any changes in selective attention due to CBT. Initial and secondary treatment responders showed a reduction of their predisposition to selectively attend away or toward severely threatening pictures, respectively. Treatment non-responders did not show any changes in selective attention. The pictorial dot-probe task can be considered a potentially valuable tool in assigning children to appropriate treatment formats as well as for monitoring changes in selective attention during the course of CBT.
Multi-Wavelength Laser Transmitter for the Two-Step Laser Time-of-Flight Mass Spectrometer
NASA Technical Reports Server (NTRS)
Yu, Anthony W.; Li, Steven X.; Fahey, Molly E.; Grubisic, Andrej; Farcy, Benjamin J.; Uckert, Kyle; Li, Xiang; Getty, Stephanie
2017-01-01
Missions to diverse Outer Solar System bodies will require investigations that can detect a wide range of organics in complex mixtures, determine the structure of selected molecules, and provide powerful insights into their origin and evolution. Previous studies from remote spectroscopy of the Outer Solar System showed a diverse population of macromolecular species that are likely to include aromatic and conjugated hydrocarbons with varying degrees of methylation and nitrile incorporation. In situ exploration of Titan's upper atmosphere via mass and plasma spectrometry has revealed a complex mixture of organics. Similar material is expected on the Ice Giants, their moons, and other Outer Solar System bodies, where it may subsequently be deposited onto surface ices. It is evident that the detection of organics on other planetary surfaces provides insight into the chemical and geological evolution of a Solar System body of interest and can inform our understanding of its potential habitability. We have developed a prototype two-step laser desorption/ionization time-of-flight mass spectrometer (L2MS) instrument by exploiting the resonance-enhanced desorption of analyte. We have successfully demonstrated the ability of the L2MS to detect hydrocarbons in organically-doped analog minerals, including cryogenic Ocean World-relevant ices and mixtures. The L2MS instrument operates by generating a neutral plume of desorbed analyte with an IR desorption laser pulse, followed at a delay by a ultraviolet (UV) laser pulse, ionizing the plume. Desorption of the analyte, including trace organic species, may be enhanced by selecting the wavelength of the IR desorption laser to coincide with IR absorption features associated with vibration transitions of minerals or organic functional groups. In this effort, a preliminary laser developed for the instrument uses a breadboard mid-infrared (MIR) desorption laser operating at a discrete 3.475 µm wavelength, and a breadboard UV ionization laser operating at a wavelength of 266 nm. The MIR wavelength was selected to overlap the C-H stretch vibrational transition of certain aromatic hydrocarbons, and the UV wavelength provides additional selectivity to aromatic species via UV resonance-enhanced multiphoton ionization effects. The use of distinct laser wavelengths allows efficient coupling to the vibrational and electronic spectra of the analyte in independent desorption and ionization steps, mitigating excess energy that can lead to fragmentation during the ionization process and leading to selectivity that can aid in data interpretation.
DATA QUALITY OBJECTIVES FOR SELECTING WASTE SAMPLES FOR THE BENCH STEAM REFORMER TEST
DOE Office of Scientific and Technical Information (OSTI.GOV)
BANNING DL
2010-08-03
This document describes the data quality objectives to select archived samples located at the 222-S Laboratory for Fluid Bed Steam Reformer testing. The type, quantity and quality of the data required to select the samples for Fluid Bed Steam Reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluid bed steam reformer (FBSR). A determination of the adequacy of the FBSR process to treat Hanford tank waste is required.more » The initial step in determining the adequacy of the FBSR process is to select archived waste samples from the 222-S Laboratory that will be used to test the FBSR process. Analyses of the selected samples will be required to confirm the samples meet the testing criteria.« less
Barreto, Goncalo; Soininen, Antti; Sillat, Tarvo; Konttinen, Yrjö T; Kaivosoja, Emilia
2014-01-01
Time-of-flight secondary ion mass spectrometry (ToF-SIMS) is increasingly being used in analysis of biological samples. For example, it has been applied to distinguish healthy and osteoarthritic human cartilage. This chapter discusses ToF-SIMS principle and instrumentation including the three modes of analysis in ToF-SIMS. ToF-SIMS sets certain requirements for the samples to be analyzed; for example, the samples have to be vacuum compatible. Accordingly, sample processing steps for different biological samples, i.e., proteins, cells, frozen and paraffin-embedded tissues and extracellular matrix for the ToF-SIMS are presented. Multivariate analysis of the ToF-SIMS data and the necessary data preprocessing steps (peak selection, data normalization, mean-centering, and scaling and transformation) are discussed in this chapter.
Population viability and connectivity of the Louisiana black bear (Ursus americanus luteolus)
Laufenberg, Jared S.; Clark, Joseph D.
2014-01-01
From April 2010 to April 2012, global positioning system (GPS) radio collars were placed on 8 female and 23 male bears ranging from 1 to 11 years of age to develop a step-selection function model to predict routes and rates of interchange. For both males and females, the probability of a step being selected increased as the distance to natural land cover and agriculture at the end of the step decreased and as distance from roads at the end of a step increased. Of 4,000 correlated random walks, the least potential interchange was between TRB and TRC and between UARB and LARB, but the relative potential for natural interchange between UARB and TRC was high. The step-selection model predicted that dispersals between the LARB and UARB populations were infrequent but possible for males and nearly nonexistent for females. No evidence of natural female dispersal between subpopulations has been documented thus far, which is also consistent with model predictions.
Method for localizing and isolating an errant process step
Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Ferrell, Regina K.
2003-01-01
A method for localizing and isolating an errant process includes the steps of retrieving from a defect image database a selection of images each image having image content similar to image content extracted from a query image depicting a defect, each image in the selection having corresponding defect characterization data. A conditional probability distribution of the defect having occurred in a particular process step is derived from the defect characterization data. A process step as a highest probable source of the defect according to the derived conditional probability distribution is then identified. A method for process step defect identification includes the steps of characterizing anomalies in a product, the anomalies detected by an imaging system. A query image of a product defect is then acquired. A particular characterized anomaly is then correlated with the query image. An errant process step is then associated with the correlated image.
Shear Bond Strengths of Different Adhesive Systems to Biodentine
Odabaş, Mesut Enes; Bani, Mehmet; Tirali, Resmiye Ebru
2013-01-01
The aim of this study was to measure the shear bond strength of different adhesive systems to Biodentine with different time intervals. Eighty specimens of Biodentine were prepared and divided into 8 groups. After 12 minutes, 40 samples were randomly selected and divided into 4 groups of 10 each: group 1: (etch-and-rinse adhesive system) Prime & Bond NT; group 2: (2-step self-etch adhesive system) Clearfil SE Bond; group 3: (1-step self-etch adhesive systems) Clearfil S3 Bond; group 4: control (no adhesive). After the application of adhesive systems, composite resin was applied over Biodentine. This procedure was repeated 24 hours after mixing additional 40 samples, respectively. Shear bond strengths were measured using a universal testing machine, and the data were subjected to 1-way analysis of variance and Scheffé post hoc test. No significant differences were found between all of the adhesive groups at the same time intervals (12 minutes and 24 hours) (P > .05). Among the two time intervals, the lowest value was obtained for group 1 (etch-and-rinse adhesive) at a 12-minute period, and the highest was obtained for group 2 (two-step self-etch adhesive) at a 24-hour period. The placement of composite resin used with self-etch adhesive systems over Biodentine showed better shear bond strength. PMID:24222742
Shear bond strengths of different adhesive systems to biodentine.
Odabaş, Mesut Enes; Bani, Mehmet; Tirali, Resmiye Ebru
2013-01-01
The aim of this study was to measure the shear bond strength of different adhesive systems to Biodentine with different time intervals. Eighty specimens of Biodentine were prepared and divided into 8 groups. After 12 minutes, 40 samples were randomly selected and divided into 4 groups of 10 each: group 1: (etch-and-rinse adhesive system) Prime & Bond NT; group 2: (2-step self-etch adhesive system) Clearfil SE Bond; group 3: (1-step self-etch adhesive systems) Clearfil S(3) Bond; group 4: control (no adhesive). After the application of adhesive systems, composite resin was applied over Biodentine. This procedure was repeated 24 hours after mixing additional 40 samples, respectively. Shear bond strengths were measured using a universal testing machine, and the data were subjected to 1-way analysis of variance and Scheffé post hoc test. No significant differences were found between all of the adhesive groups at the same time intervals (12 minutes and 24 hours) (P > .05). Among the two time intervals, the lowest value was obtained for group 1 (etch-and-rinse adhesive) at a 12-minute period, and the highest was obtained for group 2 (two-step self-etch adhesive) at a 24-hour period. The placement of composite resin used with self-etch adhesive systems over Biodentine showed better shear bond strength.
ASPECTS: an automation-assisted SPE method development system.
Li, Ming; Chou, Judy; King, Kristopher W; Yang, Liyu
2013-07-01
A typical conventional SPE method development (MD) process usually involves deciding the chemistry of the sorbent and eluent based on information about the analyte; experimentally preparing and trying out various combinations of adsorption chemistry and elution conditions; quantitatively evaluating the various conditions; and comparing quantitative results from all combination of conditions to select the best condition for method qualification. The second and fourth steps have mostly been performed manually until now. We developed an automation-assisted system that expedites the conventional SPE MD process by automating 99% of the second step, and expedites the fourth step by automatically processing the results data and presenting it to the analyst in a user-friendly format. The automation-assisted SPE MD system greatly saves the manual labor in SPE MD work, prevents analyst errors from causing misinterpretation of quantitative results, and shortens data analysis and interpretation time.
Architectural programming for the workplace and the careplace.
Easter, James G
2002-01-01
Sensitive planning and architectural design will impact long-term costs and daily operations. At the same time, the quality of the total environment has a direct impact on the patient, the family and the staff. These needs should be carefully balanced with the emotions of the patient, the care partner (parent, husband, wife or guardian) and those of the clinical team (physicians, nurses and staff). This article addresses the first step in the process; the master plan and then focuses in detail on one aspect of the architectural work referred to as architectural programming. The key to the process is selecting the best team of consultants, following the steps carefully, involving the client at every appropriate milestone along the way and asking the right questions. With this experienced team on board; following the proper steps, listening carefully to the answers and observing the daily process one can expect a successful product.
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, C.; Smith, Charles A. (Technical Monitor)
1998-01-01
Performance of the two commonly used numerical procedures, one based on artificial compressibility method and the other pressure projection method, are compared. These formulations are selected primarily because they are designed for three-dimensional applications. The computational procedures are compared by obtaining steady state solutions of a wake vortex and unsteady solutions of a curved duct flow. For steady computations, artificial compressibility was very efficient in terms of computing time and robustness. For an unsteady flow which requires small physical time step, pressure projection method was found to be computationally more efficient than an artificial compressibility method. This comparison is intended to give some basis for selecting a method or a flow solution code for large three-dimensional applications where computing resources become a critical issue.
Antibiotic Combinations That Enable One-Step, Targeted Mutagenesis of Chromosomal Genes.
Lee, Wonsik; Do, Truc; Zhang, Ge; Kahne, Daniel; Meredith, Timothy C; Walker, Suzanne
2018-06-08
Targeted modification of bacterial chromosomes is necessary to understand new drug targets, investigate virulence factors, elucidate cell physiology, and validate results of -omics-based approaches. For some bacteria, reverse genetics remains a major bottleneck to progress in research. Here, we describe a compound-centric strategy that combines new negative selection markers with known positive selection markers to achieve simple, efficient one-step genome engineering of bacterial chromosomes. The method was inspired by the observation that certain nonessential metabolic pathways contain essential late steps, suggesting that antibiotics targeting a late step can be used to select for the absence of genes that control flux into the pathway. Guided by this hypothesis, we have identified antibiotic/counterselectable markers to accelerate reverse engineering of two increasingly antibiotic-resistant pathogens, Staphylococcus aureus and Acinetobacter baumannii. For S. aureus, we used wall teichoic acid biosynthesis inhibitors to select for the absence of tarO and for A. baumannii, we used colistin to select for the absence of lpxC. We have obtained desired gene deletions, gene fusions, and promoter swaps in a single plating step with perfect efficiency. Our method can also be adapted to generate markerless deletions of genes using FLP recombinase. The tools described here will accelerate research on two important pathogens, and the concept we outline can be readily adapted to any organism for which a suitable target pathway can be identified.
NASA Technical Reports Server (NTRS)
Tuttle, M. E.; Brinson, H. F.
1986-01-01
The impact of flight error in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicated that long-term predictions were most sensitive to errors in the power law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to insure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical for T300/5208 graphite-epoxy at 149 C. The process of selection is described, and its individual steps are itemized.
Olenšek, Andrej; Zadravec, Matjaž; Matjačić, Zlatko
2016-06-11
The most common approach to studying dynamic balance during walking is by applying perturbations. Previous studies that investigated dynamic balance responses predominantly focused on applying perturbations in frontal plane while walking on treadmill. The goal of our work was to develop balance assessment robot (BAR) that can be used during overground walking and to assess normative balance responses to perturbations in transversal plane in a group of neurologically healthy individuals. BAR provides three passive degrees of freedom (DoF) and three actuated DoF in pelvis that are admittance-controlled in such a way that the natural movement of pelvis is not significantly affected. In this study BAR was used to assess normative balance responses in neurologically healthy individuals by applying linear perturbations in frontal and sagittal planes and angular perturbations in transversal plane of pelvis. One way repeated measure ANOVA was used to statistically evaluate the effect of selected perturbations on stepping responses. Standard deviations of assessed responses were similar in unperturbed and perturbed walking. Perturbations in frontal direction evoked substantial pelvis displacement and caused statistically significant effect on step length, step width and step time. Likewise, perturbations in sagittal plane also caused statistically significant effect on step length, step width and step time but with less explicit impact on pelvis movement in frontal plane. On the other hand, except from substantial pelvis rotation angular perturbations did not have substantial effect on pelvis movement in frontal and sagittal planes while statistically significant effect was noted only in step length and step width after perturbation in clockwise direction. Results indicate that the proposed device can repeatedly reproduce similar experimental conditions. Results also suggest that "stepping strategy" is the dominant strategy for coping with perturbations in frontal plane, perturbations in sagittal plane are to greater extent handled by "ankle strategy" while angular perturbations in transversal plane do not pose substantial challenge for balance. Results also show that specific perturbation in general elicits responses that extend also to other planes of movement that are not directly associated with plane of perturbation as well as to spatio temporal parameters of gait.
Theoretical Study of the Mechanism Behind the para-Selective Nitration of Toluene in Zeolite H-Beta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, Amity; Govind, Niranjan; Subramanian, Lalitha
Periodic density functional theory calculations were performed to investigate the origin of the favorable para-selective nitration of toluene exhibited by zeolite H-beta with acetyl nitrate nitration agent. Energy calculations were performed for each of the 32 crystallographically unique Bronsted acid sites of a beta polymorph B zeolite unit cell with multiple Bronsted acid sites of comparable stability. However, one particular aluminum T-site with three favorable Bronsted site oxygens embedded in a straight 12-T channel wall provides multiple favorable proton transfer sites. Transition state searches around this aluminum site were performed to determine the barrier to reaction for both para andmore » ortho nitration of toluene. A three-step process was assumed for the nitration of toluene with two organic intermediates: the pi- and sigma-complexes. The rate limiting step is the proton transfer from the sigma-complex to a zeolite Bronsted site. The barrier for this step in ortho nitration is shown to be nearly 2.5 times that in para nitration. This discrepancy appears to be due to steric constraints imposed by the curvature of the large 12-T pore channels of beta and the toluene methyl group in the ortho approach that are not present in the para approach.« less
Physical Activity of Nurse Clinical Practitioners and Managers.
Jirathananuwat, Areeya; Pongpirul, Krit
2017-11-01
This study was aimed (1) to compare the level of physical activity (PA) between working and nonworking hours and (2) to compare the level of PA during working hours of nurse clinical practitioners (NCPs) with that of nurse managers (NMs). This cross-sectional survey was conducted at a Thai university hospital from October 2015 to March 2016. All randomly selected participants wore an activity tracker on their hip for 5 days, except during bathing and sleeping periods, to record step counts and time points. Of 884 nurses, 289 (142 NCPs and 147 NMs) were randomly selected. The average age was 35.87 years. They spent 9.76 and 6.01 hours on work and nonwork activities, respectively. Daily steps per hour were significantly lower during work than nonwork periods (P < .001). An NCP had significantly higher overall hourly PA (P = .002). The number of steps per hour during work period of NCP was significantly higher than that of NM even after adjusting for age, work experience, and body mass index (P = .034). NCP had higher overall PA than NM, which was partly contributed by work-related PA. Level of PA for a professional with variation of actual work hours should be measured on hourly basis.
Testing Students for Chapter 1 Eligibility: ECIA Chapter 1.
ERIC Educational Resources Information Center
Davis, Walter E.
This document summarizes the criteria for Chapter 1 eligibility, discusses a step-by-step selection procedure, used in the Austin Independent School District, explains the laws and regulations concerning how students are to be selected, emphasizes that special testing should be administered to students whose scores are clearly discrepant from…
Stable and verifiable state estimation methods and systems with spacecraft applications
NASA Technical Reports Server (NTRS)
Li, Rongsheng (Inventor); Wu, Yeong-Wei Andy (Inventor)
2001-01-01
The stability of a recursive estimator process (e.g., a Kalman filter is assured for long time periods by periodically resetting an error covariance P(t.sub.n) of the system to a predetermined reset value P.sub.r. The recursive process is thus repetitively forced to start from a selected covariance and continue for a time period that is short compared to the system's total operational time period. The time period in which the process must maintain its numerical stability is significantly reduced as is the demand on the system's numerical stability. The process stability for an extended operational time period T.sub.o is verified by performing the resetting step at the end of at least one reset time period T.sub.r whose duration is less than the operational time period T.sub.o and then confirming stability of the process over the reset time period T.sub.r. Because the recursive process starts from a selected covariance at the beginning of each reset time period T.sub.r, confirming stability of the process over at least one reset time period substantially confirms stability over the longer operational time period T.sub.o.
Effect of Pd surface structure on the activation of methyl acetate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Lijun; Xu, Ye
2011-01-01
The activation of methyl acetate (CH3COOCH3; MA) has been studied using periodic density functional theory calculations to probe the effect of Pd surface structure on the selectivity in MA activation. The adsorption of MA, dehydrogenated derivatives, enolate (CH2COOCH3; ENL) and methylene acetate (CH3COOCH2; MeA), and several dissociation products (including acetate, acetyl, ketene, methoxy, formaldehyde, CO, C, O, and H); and C-H and C-O (mainly in the RCO-OR position) bond dissociation in MA, ENL, and MeA, are calculated on Pd(111) terrace, step, and kink; and on Pd(100) terrace and step. The adsorption of most species is not strongly affected between (111)-more » to (100)-type surfaces, but is clearly enhanced by step/kink compared to the corresponding terrace. Going from terrace to step edge and from (111)- to (100)-type surfaces both stabilize the transition states of C-O bond dissociation steps. Going from terrace to step edge also stabilizes the transition states of C-H bond dissociation steps, but going from (111)- to (100)-type surfaces does not clearly do so. We propose that compared to the Pd(111) terrace, the Pd(100) terrace is more selective for C-O bond dissociation that is desirable for alcohol formation, whereas the Pd step edges are more selective for C-H bond dissociation.« less
Application configuration selection for energy-efficient execution on multicore systems
Wang, Shinan; Luo, Bing; Shi, Weisong; ...
2015-09-21
Balanced performance and energy consumption are incorporated in the design of modern computer systems. Several runtime factors, such as concurrency levels, thread mapping strategies, and dynamic voltage and frequency scaling (DVFS) should be considered in order to achieve optimal energy efficiency fora workload. Selecting appropriate run-time factors, however, is one of the most challenging tasks because the run-time factors are architecture-specific and workload-specific. And while most existing works concentrate on either static analysis of the workload or run-time prediction results, we present a hybrid two-step method that utilizes concurrency levels and DVFS settings to achieve the energy efficiency configuration formore » a worldoad. The experimental results based on a Xeon E5620 server with NPB and PARSEC benchmark suites show that the model is able to predict the energy efficient configuration accurately. On average, an additional 10% EDP (Energy Delay Product) saving is obtained by using run-time DVFS for the entire system. An off-line optimal solution is used to compare with the proposed scheme. Finally, the experimental results show that the average extra EDP saved by the optimal solution is within 5% on selective parallel benchmarks.« less
Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella
2016-12-09
Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of the potential drugs. In this study, we used a dataset, which consists of four toxicity effects:mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features for reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and Synthetic Minority Oversampling Technique are used to solve the problem of imbalanced datasets. ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. ITS method has two steps. The first step (sampling step) iteratively modifies the prior distribution of the minority and majority classes. In the second step, a data cleaning method is used to remove the overlapping that is produced from the first step. In the third phase, Bagging classifier is used to classify an unknown drug into toxic or non-toxic. The experimental results proved that the proposed model performed well in classifying the unknown samples according to all toxic effects in the imbalanced datasets.
LHCb Kalman Filter cross architecture studies
NASA Astrophysics Data System (ADS)
Cámpora Pérez, Daniel Hugo
2017-10-01
The 2020 upgrade of the LHCb detector will vastly increase the rate of collisions the Online system needs to process in software, in order to filter events in real time. 30 million collisions per second will pass through a selection chain, where each step is executed conditional to its prior acceptance. The Kalman Filter is a fit applied to all reconstructed tracks which, due to its time characteristics and early execution in the selection chain, consumes 40% of the whole reconstruction time in the current trigger software. This makes the Kalman Filter a time-critical component as the LHCb trigger evolves into a full software trigger in the Upgrade. I present a new Kalman Filter algorithm for LHCb that can efficiently make use of any kind of SIMD processor, and its design is explained in depth. Performance benchmarks are compared between a variety of hardware architectures, including x86_64 and Power8, and the Intel Xeon Phi accelerator, and the suitability of said architectures to efficiently perform the LHCb Reconstruction process is determined.
Defense Acquisitions: Assessments of Selected Weapon Programs
2010-03-01
improved availability for small terminals. It is to replace the Ultra High Frequency (UHF) Follow-On ( UFO ) satellite system currently in operation...of MUOS capabilities is time-critical due to the operational failures of two UFO satellites. The MUOS program has taken several steps to address...failures of two UFO satellites. Based on the current health of on-orbit satellites, UHF communication capabilities are predicted to fall below the
Manoochehri, Mahboobeh; Asgharinezhad, Ali Akbar; Shekari, Nafiseh
2015-01-01
This work describes a novel Fe₃O₄@SiO₂@polyaminoquinoline magnetic nanocomposite and its application in the pre-concentration of Cd(II) and Pb(II) ions. The parameters affecting the pre-concentration procedure were optimised by a Box-Behnken design through response surface methodology. Three variables (extraction time, magnetic sorbent amount and pH) were selected as the main factors affecting the sorption step, while four variables (type, volume and concentration of the eluent, and elution time) were selected as main factors in the optimisation study of the elution step. Following the sorption and elution of analytes, the ions were quantified by flame atomic absorption spectrometry (FASS). The limits of detection were 0.1 and 0.7 ng ml(-1) for Cd(II) and Pb(II) ions, respectively. All the relative standard deviations were less than 7.6%. The sorption capacities of this new sorbent were 57 mg g(-)(1) for Cd(II) and 73 mg g(-1) for Pb(II). Ultimately, this nanocomposite was successfully applied to the rapid extraction of trace quantities of these heavy metal ions from seafood and agricultural samples and satisfactory results were obtained.
Wit, Jan M.; Himes, John H.; van Buuren, Stef; Denno, Donna M.; Suchdev, Parminder S.
2017-01-01
Background/Aims Childhood stunting is a prevalent problem in low- and middle-income countries and is associated with long-term adverse neurodevelopment and health outcomes. In this review, we define indicators of growth, discuss key challenges in their analysis and application, and offer suggestions for indicator selection in clinical research contexts. Methods Critical review of the literature. Results Linear growth is commonly expressed as length-for-age or height-for-age z-score (HAZ) in comparison to normative growth standards. Conditional HAZ corrects for regression to the mean where growth changes relate to previous status. In longitudinal studies, growth can be expressed as ΔHAZ at 2 time points. Multilevel modeling is preferable when more measurements per individual child are available over time. Height velocity z-score reference standards are available for children under the age of 2 years. Adjusting for covariates or confounders (e.g., birth weight, gestational age, sex, parental height, maternal education, socioeconomic status) is recommended in growth analyses. Conclusion The most suitable indicator(s) for linear growth can be selected based on the number of available measurements per child and the child's age. By following a step-by-step algorithm, growth analyses can be precisely and accurately performed to allow for improved comparability within and between studies. PMID:28196362
Debrus, Benjamin; Guillarme, Davy; Rudaz, Serge
2013-10-01
A complete strategy dedicated to quality-by-design (QbD) compliant method development using design of experiments (DOE), multiple linear regressions responses modelling and Monte Carlo simulations for error propagation was evaluated for liquid chromatography (LC). The proposed approach includes four main steps: (i) the initial screening of column chemistry, mobile phase pH and organic modifier, (ii) the selectivity optimization through changes in gradient time and mobile phase temperature, (iii) the adaptation of column geometry to reach sufficient resolution, and (iv) the robust resolution optimization and identification of the method design space. This procedure was employed to obtain a complex chromatographic separation of 15 antipsychotic basic drugs, widely prescribed. To fully automate and expedite the QbD method development procedure, short columns packed with sub-2 μm particles were employed, together with a UHPLC system possessing columns and solvents selection valves. Through this example, the possibilities of the proposed QbD method development workflow were exposed and the different steps of the automated strategy were critically discussed. A baseline separation of the mixture of antipsychotic drugs was achieved with an analysis time of less than 15 min and the robustness of the method was demonstrated simultaneously with the method development phase. Copyright © 2013 Elsevier B.V. All rights reserved.
Mathew, Hanna; Kunde, Wilfried; Herbort, Oliver
2017-05-01
When someone grasps an object, the grasp depends on the intended object manipulation and usually facilitates it. If several object manipulation steps are planned, the first step has been reported to primarily determine the grasp selection. We address whether the grasp can be aligned to the second step, if the second step's requirements exceed those of the first step. Participants grasped and rotated a dial first by a small extent and then by various extents in the opposite direction, without releasing the dial. On average, when the requirements of the first and the second step were similar, participants mostly aligned the grasp to the first step. When the requirements of the second step were considerably higher, participants aligned the grasp to the second step, even though the first step still had a considerable impact. Participants employed two different strategies. One subgroup initially aligned the grasp to the first step and then ceased adjusting the grasp to either step. Another group also initially aligned the grasp to the first step and then switched to aligning it primarily to the second step. The data suggest that participants are more likely to switch to the latter strategy when they experienced more awkward arm postures. In summary, grasp selections for multi-step object manipulations can be aligned to the second object manipulation step, if the requirements of this step clearly exceed those of the first step and if participants have some experience with the task.
McGrath, Leslie J; Hinckson, Erica A; Hopkins, Will G; Mavoa, Suzanne; Witten, Karen; Schofield, Grant
2016-07-01
Urban design may affect children's habitual physical activity by influencing active commuting and neighborhood play. Our objective was to examine associations between neighborhood built-environment features near children's homes and objectively measured physical activity. We used geographical information system (GIS) protocols to select 2016 households from 48 low- and high-walkability neighborhoods within four New Zealand cities. Children (n = 227; mean age ± standard deviation [SD] 9.3 ± 2.1 years) from the selected households wore accelerometers that recorded physical activity in the period 2008-2010. We used multilevel linear models to examine the associations of GIS and street-audit measures, using the systematic pedestrian and cycling environmental scan (SPACES), of the residential environment (ranked into tertiles) on children's hourly step counts and proportions of time spent at moderate-to-vigorous intensity on school and non-school days. During school-travel times (8:00-8:59 a.m. and 15:00-15:59 p.m.), children in the mid-tertile distance from school (~1 to 2 km) were more active than children with shorter or longer commute distances (1290 vs. 1130 and 1140 steps·h(-1); true between-child SD 440). After school (16:00-17:59 p.m.), children residing closest to school were more active (890 vs. 800 and 790 steps·h(-1); SD 310). Neighborhoods with more green space, attractive streets, or low-walkability streets showed a moderate positive association on non-school day moderate-to-vigorous steps, whereas neighborhoods with additional pedestrian infrastructure or more food outlets showed moderate negative associations. Other associations of residential neighborhoods were unclear but, at most, small. Designing the urban environment to promote safe child-pedestrian roaming may increase children's moderate-to-vigorous physical activity.
Chen, Hui-Ya; Wing, Alan M; Pratt, David
2006-04-01
Stepping in time with a metronome has been reported to improve pathological gait. Although there have been many studies of finger tapping synchronisation tasks with a metronome, the specific details of the influences of metronome timing on walking remain unknown. As a preliminary to studying pathological control of gait timing, we designed an experiment with four synchronisation tasks, unilateral heel tapping in sitting, bilateral heel tapping in sitting, bilateral heel tapping in standing, and stepping on the spot, in order to examine the influence of biomechanical constraints on metronome timing. These four conditions allow study of the effects of bilateral co-ordination and maintenance of balance on timing. Eight neurologically normal participants made heel tapping and stepping responses in synchrony with a metronome producing 500 ms interpulse intervals. In each trial comprising 40 intervals, one interval, selected at random between intervals 15 and 30, was lengthened or shortened, which resulted in a shift in phase of all subsequent metronome pulses. Performance measures were the speed of compensation for the phase shift, in terms of the temporal difference between the response and the metronome pulse, i.e. asynchrony, and the standard deviation of the asynchronies and interresponse intervals of steady state synchronisation. The speed of compensation decreased with increase in the demands of maintaining balance. The standard deviation varied across conditions but was not related to the compensation speed. The implications of these findings for metronome assisted gait are discussed in terms of a first-order linear correction account of synchronisation.
van Mierlo, Pieter; Lie, Octavian; Staljanssens, Willeke; Coito, Ana; Vulliémoz, Serge
2018-04-26
We investigated the influence of processing steps in the estimation of multivariate directed functional connectivity during seizures recorded with intracranial EEG (iEEG) on seizure-onset zone (SOZ) localization. We studied the effect of (i) the number of nodes, (ii) time-series normalization, (iii) the choice of multivariate time-varying connectivity measure: Adaptive Directed Transfer Function (ADTF) or Adaptive Partial Directed Coherence (APDC) and (iv) graph theory measure: outdegree or shortest path length. First, simulations were performed to quantify the influence of the various processing steps on the accuracy to localize the SOZ. Afterwards, the SOZ was estimated from a 113-electrodes iEEG seizure recording and compared with the resection that rendered the patient seizure-free. The simulations revealed that ADTF is preferred over APDC to localize the SOZ from ictal iEEG recordings. Normalizing the time series before analysis resulted in an increase of 25-35% of correctly localized SOZ, while adding more nodes to the connectivity analysis led to a moderate decrease of 10%, when comparing 128 with 32 input nodes. The real-seizure connectivity estimates localized the SOZ inside the resection area using the ADTF coupled to outdegree or shortest path length. Our study showed that normalizing the time-series is an important pre-processing step, while adding nodes to the analysis did only marginally affect the SOZ localization. The study shows that directed multivariate Granger-based connectivity analysis is feasible with many input nodes (> 100) and that normalization of the time-series before connectivity analysis is preferred.
Time resolved infrared studies of C-H bond activation by organometallics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asplund, M.C.
This work describes how step-scan Fourier Transform Infrared spectroscopy and visible and near infrared ultrafast lasers have been applied to the study of the photochemical activation of C-H bonds in organometallic systems, which allow for the selective breaking of C-H bonds in alkanes. The author has established the photochemical mechanism of C-H activation by Tp{sup *}Rh(CO){sub 2}(Tp{sup *} = HB-Pz{sup *}{sub 3}, Pz = 3,5-dimethylpyrazolyl) in alkane solution. The initially formed monocarbonyl forms a weak solvent complex, which undergoes a change in Tp{sup *} ligand connectivity. The final C-H bond breaking step occurs at different time scales depending on themore » structure of the alkane. In linear solvents, the time scale is <50 ns and cyclic alkanes is {approximately}200 ps. The reactivity of the Tp{sup *}Rh(CO){sub 2} system has also been studied in aromatic solvents. Here the reaction proceeds through two different pathways, with very different time scales. The first proceeds in a manner analogous to alkanes and takes <50 ns. The second proceeds through a Rh-C-C complex, and takes place on a time scale of 1.8 {micro}s.« less
The time course of saccadic decision making: dynamic field theory.
Wilimzig, Claudia; Schneider, Stefan; Schöner, Gregor
2006-10-01
Making a saccadic eye movement involves two decisions, the decision to initiate the saccade and the selection of the visual target of the saccade. Here we provide a theoretical account for the time-courses of these two processes, whose instabilities are the basis of decision making. We show how the cross-over from spatial averaging for fast saccades to selection for slow saccades arises from the balance between excitatory and inhibitory processes. Initiating a saccade involves overcoming fixation, as can be observed in the countermanding paradigm, which we model accounting both for the temporal evolution of the suppression probability and its dependence on fixation activity. The interaction between the two forms of decision making is demonstrated by predicting how the cross-over from averaging to selection depends on the fixation stimulus in gap-step-overlap paradigms. We discuss how the activation dynamics of our model may be mapped onto neuronal structures including the motor map and the fixation cells in superior colliculus.
Process, including PSA and membrane separation, for separating hydrogen from hydrocarbons
Baker, Richard W.; Lokhandwala, Kaaeid A.; He, Zhenjie; Pinnau, Ingo
2001-01-01
An improved process for separating hydrogen from hydrocarbons. The process includes a pressure swing adsorption step, a compression/cooling step and a membrane separation step. The membrane step relies on achieving a methane/hydrogen selectivity of at least about 2.5 under the conditions of the process.
A method for tailoring the information content of a software process model
NASA Technical Reports Server (NTRS)
Perkins, Sharon; Arend, Mark B.
1990-01-01
The framework is defined for a general method for selecting a necessary and sufficient subset of a general software life cycle's information products, to support new software development process. Procedures for characterizing problem domains in general and mapping to a tailored set of life cycle processes and products is presented. An overview of the method is shown using the following steps: (1) During the problem concept definition phase, perform standardized interviews and dialogs between developer and user, and between user and customer; (2) Generate a quality needs profile of the software to be developed, based on information gathered in step 1; (3) Translate the quality needs profile into a profile of quality criteria that must be met by the software to satisfy the quality needs; (4) Map the quality criteria to set of accepted processes and products for achieving each criterion; (5) Select the information products which match or support the accepted processes and product of step 4; and (6) Select the design methodology which produces the information products selected in step 5.
A method for tailoring the information content of a software process model
NASA Technical Reports Server (NTRS)
Perkins, Sharon; Arend, Mark B.
1990-01-01
The framework is defined for a general method for selecting a necessary and sufficient subset of a general software life cycle's information products, to support new software development process. Procedures for characterizing problem domains in general and mapping to a tailored set of life cycle processes and products is presented. An overview of the method is shown using the following steps: (1) During the problem concept definition phase, perform standardized interviews and dialogs between developer and user, and between user and customer; (2) Generate a quality needs profile of the software to be developed, based on information gathered in step 1; (3) Translate the quality needs profile into a profile of quality criteria that must be met by the software to satisfy the quality needs; (4) Map the quality criteria to a set of accepted processes and products for achieving each criterion; (5) select the information products which match or support the accepted processes and product of step 4; and (6) Select the design methodology which produces the information products selected in step 5.
NASA Astrophysics Data System (ADS)
Warmer, F.; Beidler, C. D.; Dinklage, A.; Wolf, R.; The W7-X Team
2016-07-01
As a starting point for a more in-depth discussion of a research strategy leading from Wendelstein 7-X to a HELIAS power plant, the respective steps in physics and engineering are considered from different vantage points. The first approach discusses the direct extrapolation of selected physics and engineering parameters. This is followed by an examination of advancing the understanding of stellarator optimisation. Finally, combining a dimensionless parameter approach with an empirical energy confinement time scaling, the necessary development steps are highlighted. From this analysis it is concluded that an intermediate-step burning-plasma stellarator is the most prudent approach to bridge the gap between W7-X and a HELIAS power plant. Using a systems code approach in combination with transport simulations, a range of possible conceptual designs is analysed. This range is exemplified by two bounding cases, a fast-track, cost-efficient device with low magnetic field and without a blanket and a device similar to a demonstration power plant with blanket and net electricity power production.
Why the null matters: statistical tests, random walks and evolution.
Sheets, H D; Mitchell, C E
2001-01-01
A number of statistical tests have been developed to determine what type of dynamics underlie observed changes in morphology in evolutionary time series, based on the pattern of change within the time series. The theory of the 'scaled maximum', the 'log-rate-interval' (LRI) method, and the Hurst exponent all operate on the same principle of comparing the maximum change, or rate of change, in the observed dataset to the maximum change expected of a random walk. Less change in a dataset than expected of a random walk has been interpreted as indicating stabilizing selection, while more change implies directional selection. The 'runs test' in contrast, operates on the sequencing of steps, rather than on excursion. Applications of these tests to computer generated, simulated time series of known dynamical form and various levels of additive noise indicate that there is a fundamental asymmetry in the rate of type II errors of the tests based on excursion: they are all highly sensitive to noise in models of directional selection that result in a linear trend within a time series, but are largely noise immune in the case of a simple model of stabilizing selection. Additionally, the LRI method has a lower sensitivity than originally claimed, due to the large range of LRI rates produced by random walks. Examination of the published results of these tests show that they have seldom produced a conclusion that an observed evolutionary time series was due to directional selection, a result which needs closer examination in light of the asymmetric response of these tests.
Dong, Jun; Li, Wang; Zeng, Qi; Li, Sheng; Gong, Xiao; Shen, Lujun; Mao, Siyue; Dong, Annan; Wu, Peihong
2015-01-01
Abstract The location of the caudate lobe and its complex anatomy make caudate lobectomy and radiofrequency ablation (RFA) under ultrasound guidance technically challenging. The objective of the exploratory study was to introduce a novel modality of treatment of lesions in caudate lobe and discuss all details with our experiences to make this novel treatment modality repeatable and educational. The study enrolled 39 patients with liver caudate lobe tumor first diagnosed by computerized tomography (CT) or magnetic resonance imaging (MRI). After consultation of multi-disciplinary team, 7 patients with hepatic caudate lobe lesions were enrolled and accepted CT-guided percutaneous step-by-step RFA treatment. A total of 8 caudate lobe lesions of the 7 patients were treated by RFA in 6 cases and RFA combined with percutaneous ethanol injection (PEI) in 1 case. Median tumor diameter was 29 mm (range, 18–69 mm). A right approach was selected for 6 patients and a dorsal approach for 1 patient. Median operative time was 64 min (range, 59–102 min). Median blood loss was 10 mL (range, 8-16 mL) and mainly due to puncture injury. Median hospitalization time was 4 days (range, 2–5 days). All lesions were completely ablated (8/8; 100%) and no recurrence at the site of previous RFA was observed during median 8 months follow-up (range 3–11 months). No major or life-threatening complications or deaths occurred. In conclusion, percutaneous step-by-step RFA under CT guidance is a novel and effective minimally invasive therapy for hepatic caudate lobe lesions with well repeatability. PMID:26426638
Huang, Chang-Ming; Huang, Ze-Ning; Zheng, Chao-Hui; Li, Ping; Xie, Jian-Wei; Wang, Jia-Bin; Lin, Jian-Xian; Jun, Lu; Chen, Qi-Yue; Cao, Long-Long; Lin, Mi; Tu, Ru-Hong
2017-12-01
The goal of this study was to investigate the difference between the learning curves of different maneuvers in laparoscopic spleen-preserving splenic hilar lymphadenectomy for advanced upper gastric cancer. From January 2010 to April 2014, 53 consecutive patients who underwent laparoscopic spleen-preserving splenic hilar lymphadenectomy via the traditional-step maneuver (group A) and 53 consecutive patients via Huang's three-step maneuver (group B) were retrospectively analyzed. No significant difference in patient characteristics were found between the two groups. The learning curves of groups A and B were divided into phase 1 (1-43 cases and 1-30 cases, respectively) and phase 2 (44-53 cases and 31-53 cases, respectively). Compared with group A, the dissection time, bleeding loss and vascular injury were significantly decreased in group B. No significant differences in short-term outcomes were found between the two maneuvers. The multivariate analysis indicated that the body mass index, short gastric vessels, splenic artery type and maneuver were significantly associated with the dissection time in group B. No significant difference in the survival curve was found between the maneuvers. The learning curve of Huang's three-step maneuver was shorter than that of the traditional-step maneuver, and the former represents an ideal maneuver for laparoscopic spleen-preserving splenic hilar lymphadenectomy.To shorten the learning curve at the beginning of laparoscopic spleen-preserving splenic hilar lymphadenectomy, beginners should beneficially use Huang's three-step maneuver and select patients with advanced upper gastric cancer with a body mass index of less than 25 kg/m 2 and the concentrated type of splenic artery. Copyright © 2017. Published by Elsevier Ltd.
Effectiveness of en masse versus two-step retraction: a systematic review and meta-analysis.
Rizk, Mumen Z; Mohammed, Hisham; Ismael, Omar; Bearn, David R
2018-01-05
This review aims to compare the effectiveness of en masse and two-step retraction methods during orthodontic space closure regarding anchorage preservation and anterior segment retraction and to assess their effect on the duration of treatment and root resorption. An electronic search for potentially eligible randomized controlled trials and prospective controlled trials was performed in five electronic databases up to July 2017. The process of study selection, data extraction, and quality assessment was performed by two reviewers independently. A narrative review is presented in addition to a quantitative synthesis of the pooled results where possible. The Cochrane risk of bias tool and the Newcastle-Ottawa Scale were used for the methodological quality assessment of the included studies. Eight studies were included in the qualitative synthesis in this review. Four studies were included in the quantitative synthesis. En masse/miniscrew combination showed a statistically significant standard mean difference regarding anchorage preservation - 2.55 mm (95% CI - 2.99 to - 2.11) and the amount of upper incisor retraction - 0.38 mm (95% CI - 0.70 to - 0.06) when compared to a two-step/conventional anchorage combination. Qualitative synthesis suggested that en masse retraction requires less time than two-step retraction with no difference in the amount of root resorption. Both en masse and two-step retraction methods are effective during the space closure phase. The en masse/miniscrew combination is superior to the two-step/conventional anchorage combination with regard to anchorage preservation and amount of retraction. Limited evidence suggests that anchorage reinforcement with a headgear produces similar results with both retraction methods. Limited evidence also suggests that en masse retraction may require less time and that no significant differences exist in the amount of root resorption between the two methods.
Simulation methods with extended stability for stiff biochemical Kinetics.
Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin
2010-08-11
With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
Frontal and Parietal Cortices Show Different Spatiotemporal Dynamics across Problem-solving Stages.
Tschentscher, Nadja; Hauk, Olaf
2016-08-01
Arithmetic problem-solving can be conceptualized as a multistage process ranging from task encoding over rule and strategy selection to step-wise task execution. Previous fMRI research suggested a frontal-parietal network involved in the execution of complex numerical and nonnumerical tasks, but evidence is lacking on the particular contributions of frontal and parietal cortices across time. In an arithmetic task paradigm, we evaluated individual participants' "retrieval" and "multistep procedural" strategies on a trial-by-trial basis and contrasted those in time-resolved analyses using combined EEG and MEG. Retrieval strategies relied on direct retrieval of arithmetic facts (e.g., 2 + 3 = 5). Procedural strategies required multiple solution steps (e.g., 12 + 23 = 12 + 20 + 3 or 23 + 10 + 2). Evoked source analyses revealed independent activation dynamics within the first second of problem-solving in brain areas previously described as one network, such as the frontal-parietal cognitive control network: The right frontal cortex showed earliest effects of strategy selection for multistep procedural strategies around 300 msec, before parietal cortex activated around 700 msec. In time-frequency source power analyses, memory retrieval and multistep procedural strategies were differentially reflected in theta, alpha, and beta frequencies: Stronger beta and alpha desynchronizations emerged for procedural strategies in right frontal, parietal, and temporal regions as function of executive demands. Arithmetic fact retrieval was reflected in right prefrontal increases in theta power. Our results demonstrate differential brain dynamics within frontal-parietal networks across the time course of a problem-solving process, and analyses of different frequency bands allowed us to disentangle cortical regions supporting the underlying memory and executive functions.
Web processing service for landslide hazard assessment
NASA Astrophysics Data System (ADS)
Sandric, I.; Ursaru, P.; Chitu, D.; Mihai, B.; Savulescu, I.
2012-04-01
Hazard analysis requires heavy computation and specialized software. Web processing services can offer complex solutions that can be accessed through a light client (web or desktop). This paper presents a web processing service (both WPS and Esri Geoprocessing Service) for landslides hazard assessment. The web processing service was build with Esri ArcGIS Server solution and Python, developed using ArcPy, GDAL Python and NumPy. A complex model for landslide hazard analysis using both predisposing and triggering factors combined into a Bayesian temporal network with uncertainty propagation was build and published as WPS and Geoprocessing service using ArcGIS Standard Enterprise 10.1. The model uses as predisposing factors the first and second derivatives from DEM, the effective precipitations, runoff, lithology and land use. All these parameters can be served by the client from other WFS services or by uploading and processing the data on the server. The user can select the option of creating the first and second derivatives from the DEM automatically on the server or to upload the data already calculated. One of the main dynamic factors from the landslide analysis model is leaf area index. The LAI offers the advantage of modelling not just the changes from different time periods expressed in years, but also the seasonal changes in land use throughout a year. The LAI index can be derived from various satellite images or downloaded as a product. The upload of such data (time series) is possible using a NetCDF file format. The model is run in a monthly time step and for each time step all the parameters values, a-priory, conditional and posterior probability are obtained and stored in a log file. The validation process uses landslides that have occurred during the period up to the active time step and checks the records of the probabilities and parameters values for those times steps with the values of the active time step. Each time a landslide has been positive identified new a-priory probabilities are recorded for each parameter. A complete log for the entire model is saved and used for statistical analysis and a NETCDF file is created and it can be downloaded from the server with the log file
Prinsen, Cecilia A C; Vohra, Sunita; Rose, Michael R; Boers, Maarten; Tugwell, Peter; Clarke, Mike; Williamson, Paula R; Terwee, Caroline B
2016-09-13
In cooperation with the Core Outcome Measures in Effectiveness Trials (COMET) initiative, the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative aimed to develop a guideline on how to select outcome measurement instruments for outcomes (i.e., constructs or domains) included in a "Core Outcome Set" (COS). A COS is an agreed minimum set of outcomes that should be measured and reported in all clinical trials of a specific disease or trial population. Informed by a literature review to identify potentially relevant tasks on outcome measurement instrument selection, a Delphi study was performed among a panel of international experts, representing diverse stakeholders. In three consecutive rounds, panelists were asked to rate the importance of different tasks in the selection of outcome measurement instruments, to justify their choices, and to add other relevant tasks. Consensus was defined as being achieved when 70 % or more of the panelists agreed and when fewer than 15 % of the panelists disagreed. Of the 481 invited experts, 120 agreed to participate of whom 95 (79 %) completed the first Delphi questionnaire. We reached consensus on four main steps in the selection of outcome measurement instruments for COS: Step 1, conceptual considerations; Step 2, finding existing outcome measurement instruments, by means of a systematic review and/or a literature search; Step 3, quality assessment of outcome measurement instruments, by means of the evaluation of the measurement properties and feasibility aspects of outcome measurement instruments; and Step 4, generic recommendations on the selection of outcome measurement instruments for outcomes included in a COS (consensus ranged from 70 to 99 %). This study resulted in a consensus-based guideline on the methods for selecting outcome measurement instruments for outcomes included in a COS. This guideline can be used by COS developers in defining how to measure core outcomes.
Use of models to map potential capture of surface water
Leake, Stanley A.
2006-01-01
The effects of ground-water withdrawals on surface-water resources and riparian vegetation have become important considerations in water-availability studies. Ground water withdrawn by a well initially comes from storage around the well, but with time can eventually increase inflow to the aquifer and (or) decrease natural outflow from the aquifer. This increased inflow and decreased outflow is referred to as “capture.” For a given time, capture can be expressed as a fraction of withdrawal rate that is accounted for as increased rates of inflow and decreased rates of outflow. The time frames over which capture might occur at different locations commonly are not well understood by resource managers. A ground-water model, however, can be used to map potential capture for areas and times of interest. The maps can help managers visualize the possible timing of capture over large regions. The first step in the procedure to map potential capture is to run a ground-water model in steady-state mode without withdrawals to establish baseline total flow rates at all sources and sinks. The next step is to select a time frame and appropriate withdrawal rate for computing capture. For regional aquifers, time frames of decades to centuries may be appropriate. The model is then run repeatedly in transient mode, each run with one well in a different model cell in an area of interest. Differences in inflow and outflow rates from the baseline conditions for each model run are computed and saved. The differences in individual components are summed and divided by the withdrawal rate to obtain a single capture fraction for each cell. Values are contoured to depict capture fractions for the time of interest. Considerations in carrying out the analysis include use of realistic physical boundaries in the model, understanding the degree of linearity of the model, selection of an appropriate time frame and withdrawal rate, and minimizing error in the global mass balance of the model.
Bébéar, Cécile M.; Renaudin, Hélène; Charron, Alain; Bové, Joseph M.; Bébéar, Christiane; Renaudin, Joel
1998-01-01
Mycoplasma hominis mutants were selected stepwise for resistance to ofloxacin and sparfloxacin, and their gyrA, gyrB, parC, and parE quinolone resistance-determining regions were characterized. For ofloxacin, four rounds of selection yielded six first-, six second-, five third-, and two fourth-step mutants. The first-step mutants harbored a single Asp426→Asn substitution in ParE. GyrA changes (Ser83→Leu or Trp) were found only from the third round of selection. With sparfloxacin, three rounds of selection generated 4 first-, 7 second-, and 10 third-step mutants. In contrast to ofloxacin resistance, GyrA mutations (Ser83→Leu or Ser84→Trp) were detected in the first-step mutants prior to ParC changes (Glu84→Lys), which appeared only after the second round of selection. Further analysis of eight multistep-selected mutants of M. hominis that were previously described (2) revealed that they carried mutations in ParE (Asp426→Asn), GyrA (Ser83→Leu) and ParE (Asp426→Asn), GyrA (Ser83→Leu) and ParC (Ser80→Ile), or ParC (Ser80→Ile) alone, depending on the fluoroquinolone used for selection, i.e., ciprofloxacin, norfloxacin, ofloxacin, or pefloxacin, respectively. These data indicate that in M. hominis DNA gyrase is the primary target of sparfloxacin whereas topoisomerase IV is the primary target of pefloxacin, ofloxacin, and ciprofloxacin. PMID:9736554
Exactly energy conserving semi-implicit particle in cell formulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapenta, Giovanni, E-mail: giovanni.lapenta@kuleuven.be
We report a new particle in cell (PIC) method based on the semi-implicit approach. The novelty of the new method is that unlike any of its semi-implicit predecessors at the same time it retains the explicit computational cycle and conserves energy exactly. Recent research has presented fully implicit methods where energy conservation is obtained as part of a non-linear iteration procedure. The new method (referred to as Energy Conserving Semi-Implicit Method, ECSIM), instead, does not require any non-linear iteration and its computational cycle is similar to that of explicit PIC. The properties of the new method are: i) it conservesmore » energy exactly to round-off for any time step or grid spacing; ii) it is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency and allowing the user to select any desired time step; iii) it eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length; iv) the particle mover has a computational complexity identical to that of the explicit PIC, only the field solver has an increased computational cost. The new ECSIM is tested in a number of benchmarks where accuracy and computational performance are tested. - Highlights: • We present a new fully energy conserving semi-implicit particle in cell (PIC) method based on the implicit moment method (IMM). The new method is called Energy Conserving Implicit Moment Method (ECIMM). • The novelty of the new method is that unlike any of its predecessors at the same time it retains the explicit computational cycle and conserves energy exactly. • The new method is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency. • The new method eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length. • These features are achieved at a reduced cost compared with either previous IMM or fully implicit implementation of PIC.« less
Selectivity Mechanism of the Nuclear Pore Complex Characterized by Single Cargo Tracking
Lowe, Alan R.; Siegel, Jake J.; Kalab, Petr; Siu, Merek; Weis, Karsten; Liphardt, Jan T.
2010-01-01
The Nuclear Pore Complex (NPC) mediates all exchange between the cytoplasm and the nucleus. Small molecules can passively diffuse through the NPC, while larger cargos require transport receptors to translocate1. How the NPC facilitates the translocation of transport receptor/cargo complexes remains unclear. Here, we track single protein-functionalized Quantum Dot (QD) cargos as they translocate the NPC. Import proceeds by successive sub-steps comprising cargo capture, filtering and translocation, and release into the nucleus. The majority of QDs are rejected at one of these steps and return to the cytoplasm including very large cargos that abort at a size-selective barrier. Cargo movement in the central channel is subdiffusive and cargos that can bind more transport receptors diffuse more freely. Without Ran, cargos still explore the entire NPC, but have a markedly reduced probability of exit into the nucleus, suggesting that NPC entry and exit steps are not equivalent and that the pore is functionally asymmetric to importing cargos. The overall selectivity of the NPC appears to arise from the cumulative action of multiple reversible sub-steps and a final irreversible exit step. PMID:20811366
NASA Astrophysics Data System (ADS)
Kozikowski, Raymond T.; Sorg, Brian S.
2012-03-01
Chemotherapy is a standard treatment for metastatic cancer. However drug toxicity limits the dosage that can safely be used, thus reducing treatment efficacy. Drug carrier particles, like liposomes, can help reduce toxicity by shielding normal tissue from drug and selectively depositing drug in tumors. Over years of development, liposomes have been optimized to avoid uptake by the Reticuloendothelial System (RES) as well as effectively retain their drug content during circulation. As a result, liposomes release drug passively, by slow leakage, but this uncontrolled drug release can limit treatment efficacy as it can be difficult to achieve therapeutic concentrations of drug at tumor sites even with tumor-specific accumulation of the carriers. Lipid membranes can be photochemically lysed by both Type I (photosensitizer-substrate) and Type II (photosensitizer-oxygen) reactions. It has been demonstrated in red blood cells (RBCs) in vitro that these photolysis reactions can occur in two distinct steps: a light-initiated reaction followed by a thermally-initiated reaction. These separable activation steps allow for the delay of photohemolysis in a controlled manner using the irradiation energy, temperature and photosensitizer concentration. In this work we have translated this technique from RBCs to liposomal nanoparticles. To that end, we present in vitro data demonstrating this delayed bolus release from liposomes, as well as the ability to control the timing of this event. Further, we demonstrate for the first time the improved delivery of bioavailable cargo selectively to target sites in vivo.
Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L; Armour, Wes; Waterman, David G; Iwata, So; Evans, Gwyndaf
2013-08-01
The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.
Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L.; Armour, Wes; Waterman, David G.; Iwata, So; Evans, Gwyndaf
2013-01-01
The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein. PMID:23897484
NASA Astrophysics Data System (ADS)
Gao, M.; Li, J.
2018-04-01
Geometric correction is an important preprocessing process in the application of GF4 PMS image. The method of geometric correction that is based on the manual selection of geometric control points is time-consuming and laborious. The more common method, based on a reference image, is automatic image registration. This method involves several steps and parameters. For the multi-spectral sensor GF4 PMS, it is necessary for us to identify the best combination of parameters and steps. This study mainly focuses on the following issues: necessity of Rational Polynomial Coefficients (RPC) correction before automatic registration, base band in the automatic registration and configuration of GF4 PMS spatial resolution.
Method of Simulating Flow-Through Area of a Pressure Regulator
NASA Technical Reports Server (NTRS)
Hass, Neal E. (Inventor); Schallhorn, Paul A. (Inventor)
2011-01-01
The flow-through area of a pressure regulator positioned in a branch of a simulated fluid flow network is generated. A target pressure is defined downstream of the pressure regulator. A projected flow-through area is generated as a non-linear function of (i) target pressure, (ii) flow-through area of the pressure regulator for a current time step and a previous time step, and (iii) pressure at the downstream location for the current time step and previous time step. A simulated flow-through area for the next time step is generated as a sum of (i) flow-through area for the current time step, and (ii) a difference between the projected flow-through area and the flow-through area for the current time step multiplied by a user-defined rate control parameter. These steps are repeated for a sequence of time steps until the pressure at the downstream location is approximately equal to the target pressure.
OrthoSelect: a protocol for selecting orthologous groups in phylogenomics.
Schreiber, Fabian; Pick, Kerstin; Erpenbeck, Dirk; Wörheide, Gert; Morgenstern, Burkhard
2009-07-16
Phylogenetic studies using expressed sequence tags (EST) are becoming a standard approach to answer evolutionary questions. Such studies are usually based on large sets of newly generated, unannotated, and error-prone EST sequences from different species. A first crucial step in EST-based phylogeny reconstruction is to identify groups of orthologous sequences. From these data sets, appropriate target genes are selected, and redundant sequences are eliminated to obtain suitable sequence sets as input data for tree-reconstruction software. Generating such data sets manually can be very time consuming. Thus, software tools are needed that carry out these steps automatically. We developed a flexible and user-friendly software pipeline, running on desktop machines or computer clusters, that constructs data sets for phylogenomic analyses. It automatically searches assembled EST sequences against databases of orthologous groups (OG), assigns ESTs to these predefined OGs, translates the sequences into proteins, eliminates redundant sequences assigned to the same OG, creates multiple sequence alignments of identified orthologous sequences and offers the possibility to further process this alignment in a last step by excluding potentially homoplastic sites and selecting sufficiently conserved parts. Our software pipeline can be used as it is, but it can also be adapted by integrating additional external programs. This makes the pipeline useful for non-bioinformaticians as well as to bioinformatic experts. The software pipeline is especially designed for ESTs, but it can also handle protein sequences. OrthoSelect is a tool that produces orthologous gene alignments from assembled ESTs. Our tests show that OrthoSelect detects orthologs in EST libraries with high accuracy. In the absence of a gold standard for orthology prediction, we compared predictions by OrthoSelect to a manually created and published phylogenomic data set. Our tool was not only able to rebuild the data set with a specificity of 98%, but it detected four percent more orthologous sequences. Furthermore, the results OrthoSelect produces are in absolut agreement with the results of other programs, but our tool offers a significant speedup and additional functionality, e.g. handling of ESTs, computing sequence alignments, and refining them. To our knowledge, there is currently no fully automated and freely available tool for this purpose. Thus, OrthoSelect is a valuable tool for researchers in the field of phylogenomics who deal with large quantities of EST sequences. OrthoSelect is written in Perl and runs on Linux/Mac OS X. The tool can be downloaded at (http://gobics.de/fabian/orthoselect.php).
Dong, Hongjuan; Marchetti-Deschmann, Martina; Allmaier, Günter
2014-01-01
Traditionally characterization of microbial proteins is performed by a complex sequence of steps with the final step to be either Edman sequencing or mass spectrometry, which generally takes several weeks or months to be complete. In this work, we proposed a strategy for the characterization of tryptic peptides derived from Giberella zeae (anamorph: Fusarium graminearum) proteins in parallel to intact cell mass spectrometry (ICMS) in which no complicated and time-consuming steps were needed. Experimentally, after a simple washing treatment of the spores, the aliquots of the intact G. zeae macro conidia spores solution, were deposited two times onto one MALDI (matrix-assisted laser desorption ionization) mass spectrometry (MS) target (two spots). One spot was used for ICMS and the second spot was subject to a brief on-target digestion with bead-immobilized or non-immobilized trypsin. Subsequently, one spot was analyzed immediately by MALDI MS in the linear mode (ICMS) whereas the second spot containing the digested material was investigated by MALDI MS in the reflectron mode ("peptide mass fingerprint") followed by protonated peptide selection for MS/MS (post source decay (PSD) fragment ion) analysis. Based on the formed fragment ions of selected tryptic peptides a complete or partial amino acid sequence was generated by manual de novo sequencing. These sequence data were used for homology search for protein identification. Finally four different peptides of varying abundances have been identified successfully allowing the verification that our desorbed/ionized surface compounds were indeed derived from proteins. The presence of three different proteins could be found unambiguously. Interestingly, one of these proteins is belonging to the ribosomal superfamily which indicates that not only surface-associated proteins were digested. This strategy minimized the amount of time and labor required for obtaining deeper information on spore preparations within the nowadays widely used ICMS approach. Copyright © 2013 Elsevier Ltd. All rights reserved.
Näreoja, Tuomas; Rosenholm, Jessica M; Lamminmäki, Urpo; Hänninen, Pekka E
2017-05-01
Thyrotropin or thyroid-stimulating hormone (TSH) is used as a marker for thyroid function. More precise and more sensitive immunoassays are needed to facilitate continuous monitoring of thyroid dysfunctions and to assess the efficacy of the selected therapy and dosage of medication. Moreover, most thyroid diseases are autoimmune diseases making TSH assays very prone to immunoassay interferences due to autoantibodies in the sample matrix. We have developed a super-sensitive TSH immunoassay utilizing nanoparticle labels with a detection limit of 60 nU L -1 in preprocessed serum samples by reducing nonspecific binding. The developed preprocessing step by affinity purification removed interfering compounds and improved the recovery of spiked TSH from serum. The sensitivity enhancement was achieved by stabilization of the protein corona of the nanoparticle bioconjugates and a spot-coated configuration of the active solid-phase that reduced sedimentation of the nanoparticle bioconjugates and their contact time with antibody-coated solid phase, thus making use of the higher association rate of specific binding due to high avidity nanoparticle bioconjugates. Graphical Abstract We were able to decrease the lowest limit of detection and increase sensitivity of TSH immunoassay using Eu(III)-nanoparticles. The improvement was achieved by decreasing binding time of nanoparticle bioconjugates by small capture area and fast circular rotation. Also, we applied a step to stabilize protein corona of the nanoparticles and a serum-preprocessing step with a structurally related antibody.
Selected Physical Properties of 2-Chloroethyl-3-Chloropropyl Sulfide (CECPRS)
2010-10-01
Analysis * For this work, a TA Instruments 910 Differential Scanning Calorimeter and 2200 Controller were used. Prior to sample measurements, the DSC...controlled mass flow rate over a known time, concentrated, and the mass quantified by GC-FID analysis . This step enables vapor pressure measurements for low...Bellefonte, PA), with a 1.0 (im RTx-1 ( polydimethylsiloxane ) stationary phase, was maintained at 40 °C for 2 min following sample introduction, then heated
2012-09-01
make end of life ( EOL ) and remaining useful life (RUL) estimations. Model-based prognostics approaches perform these tasks with the help of first...in parameters Degradation Modeling Parameter estimation Prediction Thermal / Electrical Stress Experimental Data State Space model RUL EOL ...distribution at given single time point kP , and use this for multi-step predictions to EOL . There are several methods which exits for selecting the sigma
Tsunekawa, Ryuji; Hanaya, Kengo; Higashibayashi, Shuhei; Sugai, Takeshi
2018-04-26
Fisetin and 2',4',6'-trihydroxydihyrochalcone 4'-O-β-neohesperidoside were synthesized from commercially available quercetin and naringin in five steps. The key steps are site-selective deacetylation and subsequent deoxygenation. The target molecules were obtained in 37% and 23% yields from the starting materials, respectively.
Determining the optimum solar water pumping system for domestic use, livestock water, or irrigation
USDA-ARS?s Scientific Manuscript database
For several years we have field tested many different types of solar powered water pumping systems. In this paper, several steps are given to select a solar-PV water pumping system. The steps for selection of stand-alone water pumping system were: deciding whether a wind or solar water pumping sys...
Selection of Yeasts as Starter Cultures for Table Olives: A Step-by-Step Procedure
Bevilacqua, Antonio; Corbo, Maria Rosaria; Sinigaglia, Milena
2012-01-01
The selection of yeasts intended as starters for table olives is a complex process, including a characterization step at laboratory level and a validation at lab level and factory-scale. The characterization at lab level deals with the assessment of some technological traits (growth under different temperatures and at alkaline pHs, effect of salt, and for probiotic strains the resistance to preservatives), enzymatic activities, and some new functional properties (probiotic traits, production of vitamin B-complex, biological debittering). The paper reports on these traits, focusing both on their theoretical implications and lab protocols; moreover, there are some details on predictive microbiology for yeasts of table olives and on the use of multivariate approaches to select suitable starters. PMID:22666220
CLARIPED: a new tool for risk classification in pediatric emergencies.
Magalhães-Barbosa, Maria Clara de; Prata-Barbosa, Arnaldo; Alves da Cunha, Antonio José Ledo; Lopes, Cláudia de Souza
2016-09-01
To present a new pediatric risk classification tool, CLARIPED, and describe its development steps. Development steps: (i) first round of discussion among experts, first prototype; (ii) pre-test of reliability, 36 hypothetical cases; (iii) second round of discussion to perform adjustments; (iv) team training; (v) pre-test with patients in real time; (vi) third round of discussion to perform new adjustments; (vii) final pre-test of validity (20% of medical treatments in five days). CLARIPED features five urgency categories: Red (Emergency), Orange (very urgent), Yellow (urgent), Green (little urgent) and Blue (not urgent). The first classification step includes the measurement of four vital signs (Vipe score); the second step consists in the urgency discrimination assessment. Each step results in assigning a color, selecting the most urgent one for the final classification. Each color corresponds to a maximum waiting time for medical care and referral to the most appropriate physical area for the patient's clinical condition. The interobserver agreement was substantial (kappa=0.79) and the final pre-test, with 82 medical treatments, showed good correlation between the proportion of patients in each urgency category and the number of used resources (p<0.001). CLARIPED is an objective and easy-to-use tool for simple risk classification, of which pre-tests suggest good reliability and validity. Larger-scale studies on its validity and reliability in different health contexts are ongoing and can contribute to the implementation of a nationwide pediatric risk classification system. Copyright © 2016 Sociedade de Pediatria de São Paulo. Publicado por Elsevier Editora Ltda. All rights reserved.
Alsulays, Bader B; Fayed, Mohamed H; Alalaiwe, Ahmed; Alshahrani, Saad M; Alshetaili, Abdullah S; Alshehri, Sultan M; Alanazi, Fars K
2018-05-16
The objective of this study was to examine the influence of drug amount and mixing time on the homogeneity and content uniformity of a low-dose drug formulation during the dry mixing step using a new gentle-wing high-shear mixer. Moreover, the study investigated the influence of drug incorporation mode on the content uniformity of tablets manufactured by different methods. Albuterol sulfate was selected as a model drug and was blended with the other excipients at two different levels, 1% w/w and 5% w/w at impeller speed of 300 rpm and chopper speed of 3000 rpm for 30 min. Utilizing a 1 ml unit side-sampling thief probe, triplicate samples were taken from nine different positions in the mixer bowl at selected time points. Two methods were used for manufacturing of tablets, direct compression and wet granulation. The produced tablets were sampled at the beginning, middle, and end of the compression cycle. An analysis of variance analysis indicated the significant effect (p < .05) of drug amount on the content uniformity of the powder blend and the corresponding tablets. For 1% w/w and 5% w/w formulations, incorporation of the drug in the granulating fluid provided tablets with excellent content uniformity and very low relative standard deviation (∼0.61%) during the whole tableting cycle compared to direct compression and granulation method with dry incorporation mode of the drug. Overall, gentle-wing mixer is a good candidate for mixing of low-dose cohesive drug and provides tablets with acceptable content uniformity with no need for pre-blending step.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Havasy, C.K.; Quach, T.K.; Bozada, C.A.
1995-12-31
This work is the development of a single-layer integrated-metal field effect transistor (SLIMFET) process for a high performance 0.2 {mu}m AlGaAs/InGaAs pseudomorphic high electron mobility transistor (PHEMT). This process is compatible with MMIC fabrication and minimizes process variations, cycle time, and cost. This process uses non-alloyed ohmic contacts, a selective gate-recess etching process, and a single gate/source/drain metal deposition step to form both Schottky and ohmic contacts at the same time.
Cosmetic surgery in times of recession: macroeconomics for plastic surgeons.
Krieger, Lloyd M
2002-10-01
Periods of economic downturn place special demands on the plastic surgeon whose practice involves a large amount of cosmetic surgery. When determining strategy during difficult economic times, it is useful to understand the macroeconomic background of these downturns and to draw lessons from businesses in other service industries. Business cycles and monetary policy determine the overall environment in which plastic surgery is practiced. Plastic surgeons can take both defensive and proactive steps to maintain their profits during recessions and to prepare for the inevitable upturn. Care should also be taken when selecting pricing strategy during economic slowdowns.
Methods, systems and devices for detecting and locating ferromagnetic objects
Roybal, Lyle Gene [Idaho Falls, ID; Kotter, Dale Kent [Shelley, ID; Rohrbaugh, David Thomas [Idaho Falls, ID; Spencer, David Frazer [Idaho Falls, ID
2010-01-26
Methods for detecting and locating ferromagnetic objects in a security screening system. One method includes a step of acquiring magnetic data that includes magnetic field gradients detected during a period of time. Another step includes representing the magnetic data as a function of the period of time. Another step includes converting the magnetic data to being represented as a function of frequency. Another method includes a step of sensing a magnetic field for a period of time. Another step includes detecting a gradient within the magnetic field during the period of time. Another step includes identifying a peak value of the gradient detected during the period of time. Another step includes identifying a portion of time within the period of time that represents when the peak value occurs. Another step includes configuring the portion of time over the period of time to represent a ratio.
Liu, Yu; Holmstrom, Erik; Yu, Ping; Tan, Kemin; Zuo, Xiaobing; Nesbitt, David J; Sousa, Rui; Stagno, Jason R; Wang, Yun-Xing
2018-05-01
Site-specific incorporation of labeled nucleotides is an extremely useful synthetic tool for many structural studies (e.g., NMR, electron paramagnetic resonance (EPR), fluorescence resonance energy transfer (FRET), and X-ray crystallography) of RNA. However, specific-position-labeled RNAs >60 nt are not commercially available on a milligram scale. Position-selective labeling of RNA (PLOR) has been applied to prepare large RNAs labeled at desired positions, and all the required reagents are commercially available. Here, we present a step-by-step protocol for the solid-liquid hybrid phase method PLOR to synthesize 71-nt RNA samples with three different modification applications, containing (i) a 13 C 15 N-labeled segment; (ii) discrete residues modified with Cy3, Cy5, or biotin; or (iii) two iodo-U residues. The flexible procedure enables a wide range of downstream biophysical analyses using precisely localized functionalized nucleotides. All three RNAs were obtained in <2 d, excluding time for preparing reagents and optimizing experimental conditions. With optimization, the protocol can be applied to other RNAs with various labeling schemes, such as ligation of segmentally labeled fragments.
Fine-scale movement decisions of tropical forest birds in a fragmented landscape.
Gillies, Cameron S; Beyer, Hawthorne L; St Clair, Colleen Cassady
2011-04-01
The persistence of forest-dependent species in fragmented landscapes is fundamentally linked to the movement of individuals among subpopulations. The paths taken by dispersing individuals can be considered a series of steps built from individual route choices. Despite the importance of these fine-scale movement decisions, it has proved difficult to collect such data that reveal how forest birds move in novel landscapes. We collected unprecedented route information about the movement of translocated forest birds from two species in the highly fragmented tropical dry forest of Costa Rica. In this pasture-dominated landscape, forest remains in patches or riparian corridors, with lesser amounts of living fencerows and individual trees or "stepping stones." We used step selection functions to quantify how route choice was influenced by these habitat elements. We found that the amount of risk these birds were willing to take by crossing open habitat was context dependent. The forest-specialist Barred Antshrike (Thamnophilus doliatus) exhibited stronger selection for forested routes when moving in novel landscapes distant from its territory relative to locations closer to its territory. It also selected forested routes when its step originated in forest habitat. It preferred steps ending in stepping stones when the available routes had little forest cover, but avoided them when routes had greater forest cover. The forest-generalist Rufous-naped Wren (Campylorhynchus rufinucha) preferred steps that contained more pasture, but only when starting from non-forest habitats. Our results showed that forested corridors (i.e., riparian corridors) best facilitated the movement of a sensitive forest specialist through this fragmented landscape. They also suggested that stepping stones can be important in highly fragmented forests with little remaining forest cover. We expect that naturally dispersing birds and species with greater forest dependence would exhibit even stronger selection for forested routes than did the birds in our experiments.
Evaluating and selecting an information system, Part 1.
Neal, T
1993-01-01
Initial steps in the process of evaluating and selecting a computerized information system for the pharmacy department are described. The first step in the selection process is to establish a steering committee and a project committee. The steering committee oversees the project, providing policy guidance, making major decisions, and allocating budgeted expenditures. The project committee conducts the departmental needs assessment, identifies system requirements, performs day-to-day functions, evaluates vendor proposals, trains personnel, and implements the system chosen. The second step is the assessment of needs in terms of personnel, workload, physical layout, and operating requirements. The needs assessment should be based on the department's mission statement and strategic plan. The third step is the development of a request for information (RFI) and a request for proposal (RFP). The RFI is a document designed for gathering preliminary information from a wide range of vendors; this general information is used in deciding whether to send the RFP to a given vendor. The RFP requests more detailed information and gives the purchaser's exact specifications for a system; the RFP also includes contractual information. To help ensure project success, many institutions turn to computer consultants for guidance. The initial steps in selecting a computerized pharmacy information system are establishing computerization committees, conducting a needs assessment, and writing an RFI and an RFP. A crucial early decision is whether to seek a consultant's expertise.
Accuracy of an unstructured-grid upwind-Euler algorithm for the ONERA M6 wing
NASA Technical Reports Server (NTRS)
Batina, John T.
1991-01-01
Improved algorithms for the solution of the three-dimensional, time-dependent Euler equations are presented for aerodynamic analysis involving unstructured dynamic meshes. The improvements have been developed recently to the spatial and temporal discretizations used by unstructured-grid flow solvers. The spatial discretization involves a flux-split approach that is naturally dissipative and captures shock waves sharply with at most one grid point within the shock structure. The temporal discretization involves either an explicit time-integration scheme using a multistage Runge-Kutta procedure or an implicit time-integration scheme using a Gauss-Seidel relaxation procedure, which is computationally efficient for either steady or unsteady flow problems. With the implicit Gauss-Seidel procedure, very large time steps may be used for rapid convergence to steady state, and the step size for unsteady cases may be selected for temporal accuracy rather than for numerical stability. Steady flow results are presented for both the NACA 0012 airfoil and the Office National d'Etudes et de Recherches Aerospatiales M6 wing to demonstrate applications of the new Euler solvers. The paper presents a description of the Euler solvers along with results and comparisons that assess the capability.
An automated workflow for parallel processing of large multiview SPIM recordings
Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel
2016-01-01
Summary: Selective Plane Illumination Microscopy (SPIM) allows to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data requires extensive processing interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on a HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585
Method and Apparatus for Monitoring of Daily Activity in Terms of Ground Reaction Forces
NASA Technical Reports Server (NTRS)
Whalen, Robert T. (Inventor); Breit, Gregory A. (Inventor)
2001-01-01
A device to record and analyze habitual daily activity in terms of the history of gait-related musculoskeletal loading is disclosed. The device consists of a pressure-sensing insole placed into the shoe or embedded in a shoe sole, which detects contact of the foot with the ground. The sensor is coupled to a portable battery-powered digital data logger clipped to the shoe or worn around the ankle or waist. During the course of normal daily activity, the system maintains a record of time-of-occurrence of all non-spurious foot-down and lift-off events. Off line, these data are filtered and converted to a history of foot-ground contact times, from which measures of cumulative musculoskeletal loading, average walking- and running-specific gait speed, total time spent walking and running, total number of walking steps and running steps, and total gait-related energy expenditure are estimated from empirical regressions of various gait parameters to the contact time reciprocal. Data are available as cumulative values or as daily averages by menu selection. The data provided by this device are useful for assessment of musculoskeletal and cardiovascular health and risk factors associated with habitual patterns of daily activity.
An automated workflow for parallel processing of large multiview SPIM recordings.
Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel
2016-04-01
Selective Plane Illumination Microscopy (SPIM) allows to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data requires extensive processing interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on a HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction : schmied@mpi-cbg.de Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
Liu, Chiung-ju; Xu, Huiping; Keith, NiCole R; Clark, Daniel O
2017-01-01
Background Resistance exercise is effective to increase muscle strength for older adults; however, its effect on the outcome of activities of daily living is often limited. The purpose of this study was to examine whether 3-Step Workout for Life (which combines resistance exercise, functional exercise, and activities of daily living exercise) would be more beneficial than resistance exercise alone. Methods A single-blind randomized controlled trial was conducted. Fifty-two inactive, community-dwelling older adults (mean age =73 years) with muscle weakness and difficulty in activities of daily living were randomized to receive 3-Step Workout for Life or resistance exercise only. Participants in the 3-Step Workout for Life Group performed functional movements and selected activities of daily living at home in addition to resistance exercise. Participants in the Resistance Exercise Only Group performed resistance exercise only. Both groups were comparable in exercise intensity (moderate), duration (50–60 minutes each time for 10 weeks), and frequency (three times a week). Assessment of Motor and Process Skills, a standard performance test on activities of daily living, was administered at baseline, postintervention, and 6 months after intervention completion. Results At postintervention, the 3-Step Workout for Life Group showed improvement on the outcome measure (mean change from baseline =0.29, P=0.02), but the improvement was not greater than the Resistance Exercise Only Group (group mean difference =0.24, P=0.13). However, the Resistance Exercise Only Group showed a significant decline (mean change from baseline =−0.25, P=0.01) 6 months after the intervention completion. Meanwhile, the superior effect of 3-Step Workout for Life was observed (group mean difference =0.37, P<0.01). Conclusion Compared to resistance exercise alone, 3-Step Workout for Life improves the performance of activities of daily living and attenuates the disablement process in older adults. PMID:28769559
Brown, Guy C
2010-10-01
Control analysis can be used to try to understand why (quantitatively) systems are the way that they are, from rate constants within proteins to the relative amount of different tissues in organisms. Many biological parameters appear to be optimized to maximize rates under the constraint of minimizing space utilization. For any biological process with multiple steps that compete for control in series, evolution by natural selection will tend to even out the control exerted by each step. This is for two reasons: (i) shared control maximizes the flux for minimum protein concentration, and (ii) the selection pressure on any step is proportional to its control, and selection will, by increasing the rate of a step (relative to other steps), decrease its control over a pathway. The control coefficient of a parameter P over fitness can be defined as (∂N/N)/(∂P/P), where N is the number of individuals in the population, and ∂N is the change in that number as a result of the change in P. This control coefficient is equal to the selection pressure on P. I argue that biological systems optimized by natural selection will conform to a principle of sufficiency, such that the control coefficient of all parameters over fitness is 0. Thus in an optimized system small changes in parameters will have a negligible effect on fitness. This principle naturally leads to (and is supported by) the dominance of wild-type alleles over null mutants.
The Chemical Basis for the Origin of the Genetic Code and the Process of Protein Synthesis
NASA Technical Reports Server (NTRS)
Lacey, James C., Jr.
1990-01-01
A model for the origin of protein synthesis. The essential features of the model are that 5'-AMP and perhaps other monoribonucleotides can serve as catalysts for the selective synthesis of L-based peptides. A unique set of characteristics of 5'-AMP is responsible for the selective catalysts and these characteristics are described in detail. The model involves the formation of diesters as intermediates and selectivity for use of the L-isomer occurs principally at the step of forming the diester. However, in the formation of acetyl phenylalanine-AMP monoester there is a selectivity for esterification by the D-isomer. Data showing this selectivity is presented. This selectivity for D-isomer disappears after the first step. The identity was confirmed of all four of possible diesters of acetyl-D- and -L phenylaline with 5'-AMP by nuclear magnetic resonance (NMR). The data using flourescence and NMR show the Trp ring can associate with the adenine ring more strongly when the D-isomer is in the 2' position than it can when in the 3' position. These same data also suggest a molecular mechanisim for the faster esterificaton of 5'-AMP by acetyl-D-phenylaline. Some new data is also presented on the possible structure of the 2' isomer of acetyl-D-tryptophan-AMP monoester. The HPLC elution times of all four possible acetyl diphenylalanine esters of 5'-AMP were studied, these peptidyl esters will be products in the studies of peptide formation on the ribose of 5'-AMP. Other studies were on the rate of synthesis and the identity of the product when producing 3'Ac-Phe-2'tBOC-Phe-AMP diester. HPLC purification and identification of this product were accomplished.
Novel Two-Step Hierarchical Screening of Mutant Pools Reveals Mutants under Selection in Chicks
Yang, Hee-Jeong; Bogomolnaya, Lydia M.; Elfenbein, Johanna R.; Endicott-Yazdani, Tiana; Reynolds, M. Megan; Porwollik, Steffen; Cheng, Pui; Xia, Xiao-Qin
2016-01-01
Contaminated chicken/egg products are major sources of human salmonellosis, yet the strategies used by Salmonella to colonize chickens are poorly understood. We applied a novel two-step hierarchical procedure to identify new genes important for colonization and persistence of Salmonella enterica serotype Typhimurium in chickens. A library of 182 S. Typhimurium mutants each containing a targeted deletion of a group of contiguous genes (for a total of 2,069 genes deleted) was used to identify regions under selection at 1, 3, and 9 days postinfection in chicks. Mutants in 11 regions were under selection at all assayed times (colonization mutants), and mutants in 15 regions were under selection only at day 9 (persistence mutants). We assembled a pool of 92 mutants, each deleted for a single gene, representing nearly all genes in nine regions under selection. Twelve single gene deletion mutants were under selection in this assay, and we confirmed 6 of 9 of these candidate mutants via competitive infections and complementation analysis in chicks. STM0580, STM1295, STM1297, STM3612, STM3615, and STM3734 are needed for Salmonella to colonize and persist in chicks and were not previously associated with this ability. One of these key genes, STM1297 (selD), is required for anaerobic growth and supports the ability to utilize formate under these conditions, suggesting that metabolism of formate is important during infection. We report a hierarchical screening strategy to interrogate large portions of the genome during infection of animals using pools of mutants of low complexity. Using this strategy, we identified six genes not previously known to be needed during infection in chicks, and one of these (STM1297) suggests an important role for formate metabolism during infection. PMID:26857572
Multiple stage miniature stepping motor
Niven, William A.; Shikany, S. David; Shira, Michael L.
1981-01-01
A stepping motor comprising a plurality of stages which may be selectively activated to effect stepping movement of the motor, and which are mounted along a common rotor shaft to achieve considerable reduction in motor size and minimum diameter, whereby sequential activation of the stages results in successive rotor steps with direction being determined by the particular activating sequence followed.
Optical Trap Loading of Dielectric Microparticles In Air.
Park, Haesung; LeBrun, Thomas W
2017-02-05
We demonstrate a method to trap a selected dielectric microparticle in air using radiation pressure from a single-beam gradient optical trap. Randomly scattered dielectric microparticles adhered to a glass substrate are momentarily detached using ultrasonic vibrations generated by a piezoelectric transducer (PZT). Then, the optical beam focused on a selected particle lifts it up to the optical trap while the vibrationally excited microparticles fall back to the substrate. A particle may be trapped at the nominal focus of the trapping beam or at a position above the focus (referred to here as the levitation position) where gravity provides the restoring force. After the measurement, the trapped particle can be placed at a desired position on the substrate in a controlled manner. In this protocol, an experimental procedure for selective optical trap loading in air is outlined. First, the experimental setup is briefly introduced. Second, the design and fabrication of a PZT holder and a sample enclosure are illustrated in detail. The optical trap loading of a selected microparticle is then demonstrated with step-by-step instructions including sample preparation, launching into the trap, and use of electrostatic force to excite particle motion in the trap and measure charge. Finally, we present recorded particle trajectories of Brownian and ballistic motions of a trapped microparticle in air. These trajectories can be used to measure stiffness or to verify optical alignment through time domain and frequency domain analysis. Selective trap loading enables optical tweezers to track a particle and its changes over repeated trap loadings in a reversible manner, thereby enabling studies of particle-surface interaction.
Shah, Shaheen; Hao, Ce
2017-07-01
Sulfamethoxypyridazine (SMP) is one of the commonly used sulfonamide antibiotics (SAs). SAs are mainly studied to undergo triplet-sensitized photodegradation in water under natural sunlight with other coexisting aquatic environmental organic pollutants. In this work, SMP was selected as a representative of SAs. We studied the mechanisms of triplet-sensitized photodegradation of SMP and the influence of selected dissolved inorganic matter, i.e., anions (Br - , Cl - , and NO 3 - ) and cations ions (Ca 2+ , Mg 2+ , and Zn 2+ ) on SMP photodegradation mechanism by quantum chemical methods. In addition, the degradation mechanisms of SMP by hydroxyl radical (OH) were also investigated. The creation of SO 2 extrusion product was accessed with two different energy pathways (pathway-1 and pathway-2) by following two steps (step-I and step-II) in the triplet-sensitized photodegradation of SMP. Due to low activation energy, the pathway-1 was considered as the main pathway to obtain SO 2 extrusion product. Step-II of pathway-1 was measured to be the rate-limiting step (RLS) of SMP photodegradation mechanism and the effect of the selected anions and cations was estimated for this step. All selected anions and cations promoted photodegradation of SMP by dropping the activation energy of pathway-1. The estimated low activation energies of different degradation pathways of SMP with OH radical indicate that OH radical is a very powerful oxidizing agent for SMP degradation via attack through benzene derivative and pyridazine derivative ring. Copyright © 2016. Published by Elsevier B.V.
Cobb, Zoe; Sellergren, Börje; Andersson, Lars I
2007-12-01
Two novel molecularly imprinted polymers (MIPs) selected from a combinatorial library of bupivacaine imprinted polymers were used for selective on-line solid-phase extraction of bupivacaine and ropivacaine from human plasma. The MIPs were prepared using methacrylic acid as the functional monomer, ethylene glycol dimethacrylate as the cross-linking monomer and in addition hydroxyethylmethacrylate to render the polymer surface hydrophilic. The novel MIPs showed high selectivity for the analytes and required fewer and lower concentrations of additives to suppress non-specific adsorption compared with a conventional MIP. This enabled the development of an on-line system for direct extraction of buffered plasma. Selective extraction was achieved without the use of time-consuming solvent switch steps, and transfer of the analytes from the MIP column to the analytical column was carried out under aqueous conditions fully compatible with reversed-phase LC gradient separation of analyte and internal standard. The MIPs showed excellent aqueous compatibility and yielded extractions with acceptable recovery and high selectivity.
Eini C. Lowell; Dennis R. Becker; Robert Rummer; Debra Larson; Linda Wadleigh
2008-01-01
This research provides an important step in the conceptualization and development of an integrated wildfire fuels reduction system from silvicultural prescription, through stem selection, harvesting, in-woods processing, transport, and market selection. Decisions made at each functional step are informed by knowledge about subsequent functions. Data on the resource...
Eini C. Lowell; Dennis R. Becker; Robert Rummer; Debra Larson; Linda Wadleigh
2008-01-01
This research provides an important step in the conceptualization and development of an integrated wildfire fuels reduction system from silvicultural prescription, through stem selection, harvesting, in-woods processing, transport, and market selection. Decisions made at each functional step are informed by knowledge about subsequent functions. Data on the resource...
Evaluation of isolation methods for pathogenic Yersinia enterocolitica from pig intestinal content.
Laukkanen, R; Hakkinen, M; Lundén, J; Fredriksson-Ahomaa, M; Johansson, T; Korkeala, H
2010-03-01
The aim of this study was to evaluate the efficiency of four isolation methods for the detection of pathogenic Yersinia enterocolitica from pig intestinal content. The four methods comprised of 15 isolation steps using selective enrichments (irgasan-ticarcillin-potassium chlorate and modified Rappaport broth) and mildly selective enrichments at 4 or 25 degrees C. Salmonella-Shigella-desoxycholate-calcium chloride agar, cefsulodin-irgasan-novobiocin agar were used as plating media. The most sensitive method detected 78% (53/68) of the positive samples. Individual isolation steps using cold enrichment as the only enrichment or as a pre-enrichment step with further selective enrichment showed the highest sensitivities (55-66%). All isolation methods resulted in high numbers of suspected colonies not confirmed as pathogenic Y. enterocolitica. Cold enrichment should be used in the detection of pathogenic Y. enterocolitica from pig intestinal contents. In addition, more than one parallel isolation step is needed. The study shows that depending on the isolation method used for Y. enterocolitica, the detected prevalence of Y. enterocolitica in pig intestinal contents varies greatly. More selective and sensitive isolation methods need to be developed for pathogenic Y. enterocolitica.
Sustainable Production of o-Xylene from Biomass-Derived Pinacol and Acrolein.
Hu, Yancheng; Li, Ning; Li, Guangyi; Wang, Aiqin; Cong, Yu; Wang, Xiaodong; Zhang, Tao
2017-07-21
o-Xylene (OX) is a large-volume commodity chemical that is conventionally produced from fossil fuels. In this study, an efficient and sustainable two-step route is used to produce OX from biomass-derived pinacol and acrolein. In the first step, the phosphotungstic acid (HPW)-catalyzed pinacol dehydration in 1-ethyl-3-methylimidazolium chloride ([emim]Cl) selectively affords 2,3-dimethylbutadiene. The high selectivity of this reaction can be ascribed to the H-bonding interaction between Cl - and the hydroxy group of pinacol. The stabilization of the carbocation intermediate by the surrounding anion Cl - may be another reason for the high selectivity. Notably, the good reusability of the HPW/[emim]Cl system can reduce the waste output and production cost. In the second step, OX is selectively produced by a Diels-Alder reaction of 2,3-dimethylbutadiene and acrolein, followed by a Pd/C-catalyzed decarbonylation/aromatization cascade in a one-pot fashion. The sustainable two-step process efficiently produces renewable OX in 79 % overall yield. Analogously, biomass-derived crotonaldehyde and pinacol can also serve as the feedstocks for the production of 1,2,4-trimethylbenzene. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Design of a high definition imaging (HDI) analysis technique adapted to challenging environments
NASA Astrophysics Data System (ADS)
Laurent, Sophie Nathalie
2005-11-01
This dissertation describes a new comprehensive, flexible, highly-automated and computationally-robust approach for high definition imaging (HDI), a data acquisition technique for video-rate imaging through a turbulent atmosphere with telescopes not equipped with adaptive optics (AO). The HDI process, when applied to astronomical objects, involves the recording of a large number of images (10 3 -10 5 ) from the Earth and, in post-processing mode, selection of the very best ones to create a "perfect-seeing" diffraction-limited image via a three-step process. First, image registration is performed to find the exact position of the object in each field, using a template similar in size and shape to the target. The next task is to select only higher-quality fields using a criterion based on a measure of the blur in a region of interest around that object. The images are then shifted and added together to create an effective time exposure under ideal observing conditions. The last step's objective is to remove residual distortions in the image caused by the atmosphere and the optical equipment, using a point spread function (PSF), and a technique called "l 1 regularization" that has been adapted to this type of environment. In order to study the tenuous sodium atmospheres around solar system bodies, the three-step HDI procedure is done first in the white light domain (695-950 nm), where the Signal-to-Noise Ratio (SNR) of the images is high, resulting in an image with a sharp limb. Then the known selection and registration results are mapped to the simultaneously recorded spectral data (sodium lines: 589 and 589.6 nm), where the lower-SNR images cannot support independent registration and selection. Science results can then be derived from this spectral study to understand the structure of the atmospheres of moons and planets. This dissertation's contribution to space physics deals with locating the source of escaping sodium from Jupiter's moon lo. The results show, for the first time, that the source region is not homogeneously distributed around the small moon, but concentrated on its side of orbital motion. This identifies for modelers the physical mechanisms taking place around the most volcanic moon in the solar system.
Limited-memory adaptive snapshot selection for proper orthogonal decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill
2015-04-02
Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory boundingmore » the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.« less
Perovskite nanocomposites as effective CO2-splitting agents in a cyclic redox scheme
Zhang, Junshe; Haribal, Vasudev; Li, Fanxing
2017-01-01
We report iron-containing mixed-oxide nanocomposites as highly effective redox materials for thermochemical CO2 splitting and methane partial oxidation in a cyclic redox scheme, where methane was introduced as an oxygen “sink” to promote the reduction of the redox materials followed by reoxidation through CO2 splitting. Up to 96% syngas selectivity in the methane partial oxidation step and close to complete conversion of CO2 to CO in the CO2-splitting step were achieved at 900° to 980°C with good redox stability. The productivity and production rate of CO in the CO2-splitting step were about seven times higher than those in state-of-the-art solar-thermal CO2-splitting processes, which are carried out at significantly higher temperatures. The proposed approach can potentially be applied for acetic acid synthesis with up to 84% reduction in CO2 emission when compared to state-of-the-art processes. PMID:28875171
New subtraction algorithms for evaluation of lesions on dynamic contrast-enhanced MR mammography.
Choi, Byung Gil; Kim, Hak Hee; Kim, Euy Neyng; Kim, Bum-soo; Han, Ji-Youn; Yoo, Seung-Schik; Park, Seog Hee
2002-12-01
We report new subtraction algorithms for the detection of lesions in dynamic contrast-enhanced MR mammography(CE MRM). Twenty-five patients with suspicious breast lesions underwent dynamic CE MRM using 3D fast low-angle shot. After the acquisition of the T1-weighted scout images, dynamic images were acquired six times after the bolus injection of contrast media. Serial subtractions, step-by-step subtractions, and reverse subtractions, were performed. Two radiologists attempted to differentiate benign from malignant lesion in consensus. The sensitivity, specificity, and accuracy of the method leading to the differentiation of malignant tumor from benign lesions were 85.7, 100, and 96%, respectively. Subtraction images allowed for better visualization of the enhancement as well as its temporal pattern than visual inspection of dynamic images alone. Our findings suggest that the new subtraction algorithm is adequate for screening malignant breast lesions and can potentially replace the time-intensity profile analysis on user-selected regions of interest.
Sort-Mid tasks scheduling algorithm in grid computing.
Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M
2015-11-01
Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim for several researchers is to develop variant scheduling algorithms for achieving optimality, and they have shown a good performance for tasks scheduling regarding resources selection. However, using of the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximizing the utilization and minimizing the makespan. The new strategy of Sort-Mid algorithm is to find appropriate resources. The base step is to get the average value via sorting list of completion time of each task. Then, the maximum average is obtained. Finally, the task has the maximum average is allocated to the machine that has the minimum completion time. The allocated task is deleted and then, these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms almost other algorithms in terms of resources utilization and makespan.
Sort-Mid tasks scheduling algorithm in grid computing
Reda, Naglaa M.; Tawfik, A.; Marzok, Mohamed A.; Khamis, Soheir M.
2014-01-01
Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim for several researchers is to develop variant scheduling algorithms for achieving optimality, and they have shown a good performance for tasks scheduling regarding resources selection. However, using of the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximizing the utilization and minimizing the makespan. The new strategy of Sort-Mid algorithm is to find appropriate resources. The base step is to get the average value via sorting list of completion time of each task. Then, the maximum average is obtained. Finally, the task has the maximum average is allocated to the machine that has the minimum completion time. The allocated task is deleted and then, these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms almost other algorithms in terms of resources utilization and makespan. PMID:26644937
Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions
NASA Astrophysics Data System (ADS)
Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.
2016-09-01
Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
Biosensor method and system based on feature vector extraction
Greenbaum, Elias [Knoxville, TN; Rodriguez, Jr., Miguel; Qi, Hairong [Knoxville, TN; Wang, Xiaoling [San Jose, CA
2012-04-17
A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.
A Markovian state-space framework for integrating flexibility into space system design decisions
NASA Astrophysics Data System (ADS)
Lafleur, Jarret M.
The past decades have seen the state of the art in aerospace system design progress from a scope of simple optimization to one including robustness, with the objective of permitting a single system to perform well even in off-nominal future environments. Integrating flexibility, or the capability to easily modify a system after it has been fielded in response to changing environments, into system design represents a further step forward. One challenge in accomplishing this rests in that the decision-maker must consider not only the present system design decision, but also sequential future design and operation decisions. Despite extensive interest in the topic, the state of the art in designing flexibility into aerospace systems, and particularly space systems, tends to be limited to analyses that are qualitative, deterministic, single-objective, and/or limited to consider a single future time period. To address these gaps, this thesis develops a stochastic, multi-objective, and multi-period framework for integrating flexibility into space system design decisions. Central to the framework are five steps. First, system configuration options are identified and costs of switching from one configuration to another are compiled into a cost transition matrix. Second, probabilities that demand on the system will transition from one mission to another are compiled into a mission demand Markov chain. Third, one performance matrix for each design objective is populated to describe how well the identified system configurations perform in each of the identified mission demand environments. The fourth step employs multi-period decision analysis techniques, including Markov decision processes from the field of operations research, to find efficient paths and policies a decision-maker may follow. The final step examines the implications of these paths and policies for the primary goal of informing initial system selection. Overall, this thesis unifies state-centric concepts of flexibility from economics and engineering literature with sequential decision-making techniques from operations research. The end objective of this thesis’ framework and its supporting tools is to enable selection of the next-generation space systems today, tailored to decision-maker budget and performance preferences, that will be best able to adapt and perform in a future of changing environments and requirements. Following extensive theoretical development, the framework and its steps are applied to space system planning problems of (1) DARPA-motivated multiple- or distributed-payload satellite selection and (2) NASA human space exploration architecture selection.
Halstead, S B; Marchette, N J; Diwan, A R; Palumbo, N E; Putvatana, R
1984-07-01
Uncloned dengue (DEN) 4 (H-241) which had been passaged 15, 30 and 50 times in primary dog kidney (PDK) cells were subjected to two successive terminal dilution procedures. In the first (3Cl), virus was diluted in 10-fold steps in 10 replicate tubes. An infected tube from a dilution row with three or fewer virus-infected tubes was selected for two further passages. In the second (TD3), virus was triple terminal diluted using 2-fold dilution steps and selecting one positive tube out of 10. Both procedures selected virus population which differed from antecedents. Plaque size of PDK 15 was medium, PDK 30, small and PDK 50, pin-point. PDK 19-3Cl were medium and 56-3Cl, 24-TD3, 35-TD3 and 61-TD3 were all small. All cloned virus replication was completely shut-off at 38.5 degrees C; PDK 15 and 30 continued to replicate at this temperature. Uncloned viruses showed a graduated decrease in monkey virulence with PDK passage; cloned viruses were either avirulent for monkeys (19-3Cl, 56-31Cl, 24-TD3 and 35-TD3) or produced revertant large plaque parental-type viremia (35-3Cl and 61-TD3). Those cloned viruses which exhibited temperature sensitivity, reduced monkey virulence and stability after monkey passage may be suitable as vaccine candidates for evaluation in human beings.
Influence of BMI and dietary restraint on self-selected portions of prepared meals in US women.
Labbe, David; Rytz, Andréas; Brunstrom, Jeffrey M; Forde, Ciarán G; Martin, Nathalie
2017-04-01
The rise of obesity prevalence has been attributed in part to an increase in food and beverage portion sizes selected and consumed among overweight and obese consumers. Nevertheless, evidence from observations of adults is mixed and contradictory findings might reflect the use of small or unrepresentative samples. The objective of this study was i) to determine the extent to which BMI and dietary restraint predict self-selected portion sizes for a range of commercially available prepared savoury meals and ii) to consider the importance of these variables relative to two previously established predictors of portion selection, expected satiation and expected liking. A representative sample of female consumers (N = 300, range 18-55 years) evaluated 15 frozen savoury prepared meals. For each meal, participants rated their expected satiation and expected liking, and selected their ideal portion using a previously validated computer-based task. Dietary restraint was quantified using the Dutch Eating Behaviour Questionnaire (DEBQ-R). Hierarchical multiple regression was performed on self-selected portions with age, hunger level, and meal familiarity entered as control variables in the first step of the model, expected satiation and expected liking as predictor variables in the second step, and DEBQ-R and BMI as exploratory predictor variables in the third step. The second and third steps significantly explained variance in portion size selection (18% and 4%, respectively). Larger portion selections were significantly associated with lower dietary restraint and with lower expected satiation. There was a positive relationship between BMI and portion size selection (p = 0.06) and between expected liking and portion size selection (p = 0.06). Our discussion considers future research directions, the limited variance explained by our model, and the potential for portion size underreporting by overweight participants. Copyright © 2016 Nestec S.A. Published by Elsevier Ltd.. All rights reserved.
User's instructions for the cardiovascular Walters model
NASA Technical Reports Server (NTRS)
Croston, R. C.
1973-01-01
The model is a combined, steady-state cardiovascular and thermal model. It was originally developed for interactive use, but was converted to batch mode simulation for the Sigma 3 computer. The model has the purpose to compute steady-state circulatory and thermal variables in response to exercise work loads and environmental factors. During a computer simulation run, several selected variables are printed at each time step. End conditions are also printed at the completion of the run.
Bouck, Emily C; Satsangi, Rajiv; Bartlett, Whitney
2016-01-01
Price comparison is an important and complex skill, but it lacks sufficient research attention in terms of educating secondary students with intellectual disability and/or autism spectrum disorder. This alternating treatment design study compared the use of a paper-based number line and audio prompts delivered via an audio recorder to support three secondary students with intellectual disability to independently and accuracy compare the price of three separate grocery items. The study consisted of 22 sessions, spread across baseline, intervention, best treatment, and two different generalization phases. Data were collected on the percent of task analysis steps completed independently, the type of prompts needed, students' accuracy selecting the lowest priced item, and task completion time. With both intervention conditions, students were able to independently complete the task analysis steps as well as accurately select the lowest priced item and decrease their task completion time. For two of the students, the audio recorder condition resulted in the greatest independence and for one the number line. For only one student was the condition with the greatest independence also the condition for the highest rate of accuracy. The results suggest both tools can support students with price comparison. Yet, audio recorders offer students and teachers an age-appropriate and setting-appropriate option. Copyright © 2016 Elsevier Ltd. All rights reserved.
Robust perception algorithms for road and track autonomous following
NASA Astrophysics Data System (ADS)
Marion, Vincent; Lecointe, Olivier; Lewandowski, Cecile; Morillon, Joel G.; Aufrere, Romuald; Marcotegui, Beatrix; Chapuis, Roland; Beucher, Serge
2004-09-01
The French Military Robotic Study Program (introduced in Aerosense 2003), sponsored by the French Defense Procurement Agency and managed by Thales Airborne Systems as the prime contractor, focuses on about 15 robotic themes, which can provide an immediate "operational add-on value." The paper details the "road and track following" theme (named AUT2), which main purpose was to develop a vision based sub-system to automatically detect roadsides of an extended range of roads and tracks suitable to military missions. To achieve the goal, efforts focused on three main areas: (1) Improvement of images quality at algorithms inputs, thanks to the selection of adapted video cameras, and the development of a THALES patented algorithm: it removes in real time most of the disturbing shadows in images taken in natural environments, enhances contrast and lowers reflection effect due to films of water. (2) Selection and improvement of two complementary algorithms (one is segment oriented, the other region based) (3) Development of a fusion process between both algorithms, which feeds in real time a road model with the best available data. Each previous step has been developed so that the global perception process is reliable and safe: as an example, the process continuously evaluates itself and outputs confidence criteria qualifying roadside detection. The paper presents the processes in details, and the results got from passed military acceptance tests, which trigger the next step: autonomous track following (named AUT3).
Knob, Radim; Hanson, Robert L; Tateoka, Olivia B; Wood, Ryan L; Guerrero-Arguero, Israel; Robison, Richard A; Pitt, William G; Woolley, Adam T
2018-05-21
Fast determination of antibiotic resistance is crucial in selecting appropriate treatment for sepsis patients, but current methods based on culture are time consuming. We are developing a microfluidic platform with a monolithic column modified with oligonucleotides designed for sequence-specific capture of target DNA related to the Klebsiella pneumoniae carbapenemase (KPC) gene. We developed a novel single-step monolith fabrication method with an acrydite-modified capture oligonucleotide in the polymerization mixture, enabling fast monolith preparation in a microfluidic channel using UV photopolymerization. These prepared columns had a threefold higher capacity compared to monoliths prepared in a multistep process involving Schiff-base DNA attachment. Conditions for denaturing, capture and fluorescence labeling using hybridization probes were optimized with synthetic 90-mer oligonucleotides. These procedures were applied for extraction of a PCR amplicon from the KPC antibiotic resistance gene in bacterial lysate obtained from a blood sample spiked with E. coli. The results showed similar eluted peak areas for KPC amplicon extracted from either hybridization buffer or bacterial lysate. Selective extraction of the KPC DNA was verified by real time PCR on eluted fractions. These results show great promise for application in an integrated microfluidic diagnostic system that combines upstream blood sample preparation and downstream single-molecule counting detection. Copyright © 2018 Elsevier B.V. All rights reserved.
Scarafoni, Alessio; Ronchi, Alessandro; Prinsi, Bhakti; Espen, Luca; Assante, Gemma; Venturini, Giovanni; Duranti, Marcello
2013-03-01
The general knowledge of defence activity during the first steps of seed germination is still largely incomplete. The present study focused on the proteins released in the exudates of germinating white lupin seeds. During the first 24 h, a release of proteins was observed. Initially (i.e. during the first 12 h), the proteins found in exudates reflected the composition of the seed, indicating a passive extrusion of pre-formed proteins. Subsequently, when the rate of protein release was at its highest, the composition of the released proteome changed drastically. This transition occurred in a short time, indicating that more selective and regulated events, such as secretory processes, took place soon after the onset of germination. The present study considered: (a) the characterization of the proteome accumulated in the germinating medium collected after the appearance of the post-extrusion events; (b) the biosynthetic origin and the modalities that are the basis of protein release outside the seeds; and (c) an assessment of antifungal activity of these exudates. The most represented protein in the exudate was chitinase, which was synthesized de novo. The other proteins are involved in the cellular mechanisms responding to stress events, including biotic ones. This exudate was effectively able to inhibit fungal growth. The results of the present study indicate that seed exudation is a dual-step process that leads to the secretion of selected proteins and thus is not a result of passive leakage. The released proteome is involved in protecting the spermosphere environment and thus may act as first defence against pathogens. © 2013 The Authors Journal compilation © 2013 FEBS.
Guo, Linjuan; Zu, Baiyi; Yang, Zheng; Cao, Hongyu; Zheng, Xuefang; Dou, Xincun
2014-01-01
For the first time, flexible PVP/pyrene/APTS/rGO fluorescent nanonets were designed and synthesized via a one-step electrospinning method to detect representative subsaturated nitroaromatic explosive vapor. The functional fluorescent nanonets, which were highly stable in air, showed an 81% quenching efficiency towards TNT vapor (∼10 ppb) with an exposure time of 540 s at room temperature. The nice performance of the nanonets was ascribed to the synergistic effects induced by the specific adsorption properties of APTS, the fast charge transfer properties and the effective π-π interaction with pyrene and TNT of rGO. Compared to the analogues of TNT, the PVP/pyrene/APTS/rGO nanonets showed notable selectivity towards TNT and DNT vapors. The explored functionalization method opens up brand new insight into sensitive and selective detection of vapor phase nitroaromatic explosives.
2011-01-01
Background Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. Conclusions We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting. PMID:22023778
Kennedy, Curtis E; Turley, James P
2011-10-24
Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting.
Qi, Miao; Wang, Ting; Yi, Yugen; Gao, Na; Kong, Jun; Wang, Jianzhong
2017-04-01
Feature selection has been regarded as an effective tool to help researchers understand the generating process of data. For mining the synthesis mechanism of microporous AlPOs, this paper proposes a novel feature selection method by joint l 2,1 norm and Fisher discrimination constraints (JNFDC). In order to obtain more effective feature subset, the proposed method can be achieved in two steps. The first step is to rank the features according to sparse and discriminative constraints. The second step is to establish predictive model with the ranked features, and select the most significant features in the light of the contribution of improving the predictive accuracy. To the best of our knowledge, JNFDC is the first work which employs the sparse representation theory to explore the synthesis mechanism of six kinds of pore rings. Numerical simulations demonstrate that our proposed method can select significant features affecting the specified structural property and improve the predictive accuracy. Moreover, comparison results show that JNFDC can obtain better predictive performances than some other state-of-the-art feature selection methods. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
A process for the preparation of cysteine from cystine
Chang, Shih-Ger; Liu, David K.; Griffiths, Elizabeth A.; Littlejohn, David
1989-01-01
The present invention in one aspect relates to a process for the simultaneous removal of NO.sub.x and SO.sub.2 from a fluid stream comprising mixtures thereof and in another aspect relates to the separation, use and/or regeneration of various chemicals contaminated or spent in the process and which includes the steps of: (A) contacting the fluid stream at a temperature of between about 105.degree. and 180.degree. C. with a liquid aqueous slurry or solution comprising an effective amount of an iron chelate of an amino acid moiety having at least one --SH group; (B) separating the fluid stream from the particulates formed in step (A) comprising the chelate of the amino acid moiety and fly ash; (C) washing and separating the particulates of step (B) with an aqeous solution having a pH value of between about 5 to 8; (D) subsequently washing and separating the particulates of step (C) with a strongly acidic aqueous solution having a pH value of between about 1 to 3; (E) washing and separating the particulates of step (D) with an basic aqueous solution having a pH value of between about 9 to 12; (F) optionally adding additional amino acid moiety, iron (II) and alkali to the aqueous liquid from step (D) to produce an aqueous solution or slurry similar to that in step (A) having a pH value of between about 4 to 12; and (G) recycling the aqueous slurry of step (F) to the contacting zone of step (A). Steps (D) and (E) can be carried out in the reverse sequence, however the preferred order is (D) and then (E). In a preferred embodiment the present invention provides an improved process for the preparation (regeneration) of cysteine from cystine, which includes reacting an aqueous solution of cystine at a pH of between about 9 to 13 with a reducing agent selected from hydrogen sulfide or alkali metal sulfides, sulfur dioxide, an alkali metal sulfite or mixtures thereof for a time and at a temperature effective to cleave and reduce the cystine to cysteine with subsequent recovery of the cysteine. In another preferred embodiment the present invention provides a process for the removal of NO.sub.x, SO.sub.2 and particulates from a fluid stream which includes the steps of (A) injecting into a reaction zone an aqueous solution itself comprising (i) an amino acid moiety selected from those described above; (ii) iron (II) ion; and (iii) an alkali, wherein the aqueous solution has a pH of between about 4 and 11; followed by solids separation and washing as is described in steps (B), (C), (D) and (E) above. The overall process is useful to reduce acid rain components from combustion gas sources.
Li, Liyuan; Huang, Weimin; Gu, Irene Yu-Hua; Luo, Ruijiang; Tian, Qi
2008-10-01
Efficiency and robustness are the two most important issues for multiobject tracking algorithms in real-time intelligent video surveillance systems. We propose a novel 2.5-D approach to real-time multiobject tracking in crowds, which is formulated as a maximum a posteriori estimation problem and is approximated through an assignment step and a location step. Observing that the occluding object is usually less affected by the occluded objects, sequential solutions for the assignment and the location are derived. A novel dominant color histogram (DCH) is proposed as an efficient object model. The DCH can be regarded as a generalized color histogram, where dominant colors are selected based on a given distance measure. Comparing with conventional color histograms, the DCH only requires a few color components (31 on average). Furthermore, our theoretical analysis and evaluation on real data have shown that DCHs are robust to illumination changes. Using the DCH, efficient implementations of sequential solutions for the assignment and location steps are proposed. The assignment step includes the estimation of the depth order for the objects in a dispersing group, one-by-one assignment, and feature exclusion from the group representation. The location step includes the depth-order estimation for the objects in a new group, the two-phase mean-shift location, and the exclusion of tracked objects from the new position in the group. Multiobject tracking results and evaluation from public data sets are presented. Experiments on image sequences captured from crowded public environments have shown good tracking results, where about 90% of the objects have been successfully tracked with the correct identification numbers by the proposed method. Our results and evaluation have indicated that the method is efficient and robust for tracking multiple objects (>or= 3) in complex occlusion for real-world surveillance scenarios.
dos Santos, Bruno César Diniz Brito; Flumignan, Danilo Luiz; de Oliveira, José Eduardo
2012-10-01
A three-step development, optimization and validation strategy is described for gas chromatography (GC) fingerprints of Brazilian commercial diesel fuel. A suitable GC-flame ionization detection (FID) system was selected to assay a complex matrix such as diesel. The next step was to improve acceptable chromatographic resolution with reduced analysis time, which is recommended for routine applications. Full three-level factorial designs were performed to improve flow rate, oven ramps, injection volume and split ratio in the GC system. Finally, several validation parameters were performed. The GC fingerprinting can be coupled with pattern recognition and multivariate regressions analyses to determine fuel quality and fuel physicochemical parameters. This strategy can also be applied to develop fingerprints for quality control of other fuel types.
Effect of a Starting Model on the Solution of a Travel Time Seismic Tomography Problem
NASA Astrophysics Data System (ADS)
Yanovskaya, T. B.; Medvedev, S. V.; Gobarenko, V. S.
2018-03-01
In the problems of three-dimensional (3D) travel time seismic tomography where the data are travel times of diving waves and the starting model is a system of plane layers where the velocity is a function of depth alone, the solution turns out to strongly depend on the selection of the starting model. This is due to the fact that in the different starting models, the rays between the same points can intersect different layers, which makes the tomography problem fundamentally nonlinear. This effect is demonstrated by the model example. Based on the same example, it is shown how the starting model should be selected to ensure a solution close to the true velocity distribution. The starting model (the average dependence of the seismic velocity on depth) should be determined by the method of successive iterations at each step of which the horizontal velocity variations in the layers are determined by solving the two-dimensional tomography problem. An example illustrating the application of this technique to the P-wave travel time data in the region of the Black Sea basin is presented.
Efficient Encoding and Rendering of Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Smith, Diann; Shih, Ming-Yun; Shen, Han-Wei
1998-01-01
Visualization of time-varying volumetric data sets, which may be obtained from numerical simulations or sensing instruments, provides scientists insights into the detailed dynamics of the phenomenon under study. This paper describes a coherent solution based on quantization, coupled with octree and difference encoding for visualizing time-varying volumetric data. Quantization is used to attain voxel-level compression and may have a significant influence on the performance of the subsequent encoding and visualization steps. Octree encoding is used for spatial domain compression, and difference encoding for temporal domain compression. In essence, neighboring voxels may be fused into macro voxels if they have similar values, and subtrees at consecutive time steps may be merged if they are identical. The software rendering process is tailored according to the tree structures and the volume visualization process. With the tree representation, selective rendering may be performed very efficiently. Additionally, the I/O costs are reduced. With these combined savings, a higher level of user interactivity is achieved. We have studied a variety of time-varying volume datasets, performed encoding based on data statistics, and optimized the rendering calculations wherever possible. Preliminary tests on workstations have shown in many cases tremendous reduction by as high as 90% in both storage space and inter-frame delay.
Performance analysis and kernel size study of the Lynx real-time operating system
NASA Technical Reports Server (NTRS)
Liu, Yuan-Kwei; Gibson, James S.; Fernquist, Alan R.
1993-01-01
This paper analyzes the Lynx real-time operating system (LynxOS), which has been selected as the operating system for the Space Station Freedom Data Management System (DMS). The features of LynxOS are compared to other Unix-based operating system (OS). The tools for measuring the performance of LynxOS, which include a high-speed digital timer/counter board, a device driver program, and an application program, are analyzed. The timings for interrupt response, process creation and deletion, threads, semaphores, shared memory, and signals are measured. The memory size of the DMS Embedded Data Processor (EDP) is limited. Besides, virtual memory is not suitable for real-time applications because page swap timing may not be deterministic. Therefore, the DMS software, including LynxOS, has to fit in the main memory of an EDP. To reduce the LynxOS kernel size, the following steps are taken: analyzing the factors that influence the kernel size; identifying the modules of LynxOS that may not be needed in an EDP; adjusting the system parameters of LynxOS; reconfiguring the device drivers used in the LynxOS; and analyzing the symbol table. The reductions in kernel disk size, kernel memory size and total kernel size reduction from each step mentioned above are listed and analyzed.
Okubo, Yoshiro; Menant, Jasmine; Udyavar, Manasa; Brodie, Matthew A; Barry, Benjamin K; Lord, Stephen R; L Sturnieks, Daina
2017-05-01
Although step training improves the ability of quick stepping, some home-based step training systems train limited stepping directions and may cause harm by reducing stepping performance in untrained directions. This study examines the possible transfer effects of step training on stepping performance in untrained directions in older people. Fifty four older adults were randomized into: forward step training (FT); lateral plus forward step training (FLT); or no training (NT) groups. FT and FLT participants undertook a 15-min training session involving 200 step repetitions. Prior to and post training, choice stepping reaction time and stepping kinematics in untrained, diagonal and lateral directions were assessed. Significant interactions of group and time (pre/post-assessment) were evident for the first step after training indicating negative (delayed response time) and positive (faster peak stepping speed) transfer effects in the diagonal direction in the FT group. However, when the second to the fifth steps after training were included in the analysis, there were no significant interactions of group and time for measures in the diagonal stepping direction. Step training only in the forward direction improved stepping speed but may acutely slow response times in the untrained diagonal direction. However, this acute effect appears to dissipate after a few repeated step trials. Step training in both forward and lateral directions appears to induce no negative transfer effects in diagonal stepping. These findings suggest home-based step training systems present low risk of harm through negative transfer effects in untrained stepping directions. ANZCTR 369066. Copyright © 2017 Elsevier B.V. All rights reserved.
Silsupadol, Patima; Teja, Kunlanan; Lugade, Vipul
2017-10-01
The assessment of spatiotemporal gait parameters is a useful clinical indicator of health status. Unfortunately, most assessment tools require controlled laboratory environments which can be expensive and time consuming. As smartphones with embedded sensors are becoming ubiquitous, this technology can provide a cost-effective, easily deployable method for assessing gait. Therefore, the purpose of this study was to assess the reliability and validity of a smartphone-based accelerometer in quantifying spatiotemporal gait parameters when attached to the body or in a bag, belt, hand, and pocket. Thirty-four healthy adults were asked to walk at self-selected comfortable, slow, and fast speeds over a 10-m walkway while carrying a smartphone. Step length, step time, gait velocity, and cadence were computed from smartphone-based accelerometers and validated with GAITRite. Across all walking speeds, smartphone data had excellent reliability (ICC 2,1 ≥0.90) for the body and belt locations, with bag, hand, and pocket locations having good to excellent reliability (ICC 2,1 ≥0.69). Correlations between the smartphone-based and GAITRite-based systems were very high for the body (r=0.89, 0.98, 0.96, and 0.87 for step length, step time, gait velocity, and cadence, respectively). Similarly, Bland-Altman analysis demonstrated that the bias approached zero, particularly in the body, bag, and belt conditions under comfortable and fast speeds. Thus, smartphone-based assessments of gait are most valid when placed on the body, in a bag, or on a belt. The use of a smartphone to assess gait can provide relevant data to clinicians without encumbering the user and allow for data collection in the free-living environment. Copyright © 2017 Elsevier B.V. All rights reserved.
Masarwa, Nader; Mohamed, Ahmed; Abou-Rabii, Iyad; Abu Zaghlan, Rawan; Steier, Liviu
2016-06-01
A systematic review and meta-analysis were performed to compare longevity of Self-Etch Dentin Bonding Adhesives to Etch-and-Rinse Dentin Bonding Adhesives. The following databases were searched for PubMed, MEDLINE, Web of Science, CINAHL, the Cochrane Library complemented by a manual search of the Journal of Adhesive Dentistry. The MESH keywords used were: "etch and rinse," "total etch," "self-etch," "dentin bonding agent," "bond durability," and "bond degradation." Included were in-vitro experimental studies performed on human dental tissues of sound tooth structure origin. The examined Self-Etch Bonds were of two subtypes; Two Steps and One Step Self-Etch Bonds, while Etch-and-Rinse Bonds were of two subtypes; Two Steps and Three Steps. The included studies measured micro tensile bond strength (μTBs) to evaluate bond strength and possible longevity of both types of dental adhesives at different times. The selected studies depended on water storage as the aging technique. Statistical analysis was performed for outcome measurements compared at 24 h, 3 months, 6 months and 12 months of water storage. After 24 hours (p-value = 0.051), 3 months (p-value = 0.756), 6 months (p-value=0.267), 12 months (p-value=0.785) of water storage self-etch adhesives showed lower μTBs when compared to the etch-and-rinse adhesives, but the comparisons were statistically insignificant. In this study, longevity of Dentin Bonds was related to the measured μTBs. Although Etch-and-Rinse bonds showed higher values at all times, the meta-analysis found no difference in longevity of the two types of bonds at the examined aging times. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Bo; Yang, Minglei; Li, Kehuang; Huang, Zhen; Siniscalchi, Sabato Marco; Wang, Tong; Lee, Chin-Hui
2017-12-01
A reverberation-time-aware deep-neural-network (DNN)-based multi-channel speech dereverberation framework is proposed to handle a wide range of reverberation times (RT60s). There are three key steps in designing a robust system. First, to accomplish simultaneous speech dereverberation and beamforming, we propose a framework, namely DNNSpatial, by selectively concatenating log-power spectral (LPS) input features of reverberant speech from multiple microphones in an array and map them into the expected output LPS features of anechoic reference speech based on a single deep neural network (DNN). Next, the temporal auto-correlation function of received signals at different RT60s is investigated to show that RT60-dependent temporal-spatial contexts in feature selection are needed in the DNNSpatial training stage in order to optimize the system performance in diverse reverberant environments. Finally, the RT60 is estimated to select the proper temporal and spatial contexts before feeding the log-power spectrum features to the trained DNNs for speech dereverberation. The experimental evidence gathered in this study indicates that the proposed framework outperforms the state-of-the-art signal processing dereverberation algorithm weighted prediction error (WPE) and conventional DNNSpatial systems without taking the reverberation time into account, even for extremely weak and severe reverberant conditions. The proposed technique generalizes well to unseen room size, array geometry and loudspeaker position, and is robust to reverberation time estimation error.
Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella
2015-10-30
The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is cluster formation, cluster selection and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics can significantly improve the positioning accuracy while preserving the efficiency in computational complexity typical of two-step algorithms.
Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella
2015-01-01
The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is cluster formation, cluster selection and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics can significantly improve the positioning accuracy while preserving the efficiency in computational complexity typical of two-step algorithms. PMID:26528984
Gulab, Hussain; Jan, Muhammad Rasul; Shah, Jasmin; Manos, George
2010-01-01
This paper presents results regarding the effect of various process conditions on the performance of a zeolite catalyst in pyrolysis of high density polyethylene. The results show that polymer catalytic degradation can be operated at relatively low catalyst content reducing the cost of a potential industrial process. As the polymer to catalyst mass ratio increases, the system becomes less active, but high temperatures compensate for this activity loss resulting in high conversion values at usual batch times and even higher yields of liquid products due to less overcracking. The results also show that high flow rate of carrier gas causes evaporation of liquid products falsifying results, as it was obvious from liquid yield results at different reaction times as well as the corresponding boiling point distributions. Furthermore, results are presented regarding temperature effects on liquid selectivity. Similar values resulted from different final reactor temperatures, which are attributed to the batch operation of the experimental equipment. Since polymer and catalyst both undergo the same temperature profile, which is the same up to a specific time independent of the final temperature. Obviously, this common temperature step determines the selectivity to specific products. However, selectivity to specific products is affected by the temperature, as shown in the corresponding boiling point distributions, with higher temperatures showing an increased selectivity to middle boiling point components (C(8)-C(9)) and lower temperatures increased selectivity to heavy components (C(14)-C(18)).
The potential role of real-time geodetic observations in tsunami early warning
NASA Astrophysics Data System (ADS)
Tinti, Stefano; Armigliato, Alberto
2016-04-01
Tsunami warning systems (TWS) have the final goal to launch a reliable alert of an incoming dangerous tsunami to coastal population early enough to allow people to flee from the shore and coastal areas according to some evacuation plans. In the last decade, especially after the catastrophic 2004 Boxing Day tsunami in the Indian Ocean, much attention has been given to filling gaps in the existing TWSs (only covering the Pacific Ocean at that time) and to establishing new TWSs in ocean regions that were uncovered. Typically, TWSs operating today work only on earthquake-induced tsunamis. TWSs estimate quickly earthquake location and size by real-time processing seismic signals; on the basis of some pre-defined "static" procedures (either based on decision matrices or on pre-archived tsunami simulations), assess the tsunami alert level on a large regional scale and issue specific bulletins to a pre-selected recipients audience. Not unfrequently these procedures result in generic alert messages with little value. What usually operative TWSs do not do, is to compute earthquake focal mechanism, to calculate the co-seismic sea-floor displacement, to assess the initial tsunami conditions, to input these data into tsunami simulation models and to compute tsunami propagation up to the threatened coastal districts. This series of steps is considered nowadays too time consuming to provide the required timely alert. An equivalent series of steps could start from the same premises (earthquake focal parameters) and reach the same result (tsunami height at target coastal areas) by replacing the intermediate steps of real-time tsunami simulations with proper selection from a large archive of pre-computed tsunami scenarios. The advantage of real-time simulations and of archived scenarios selection is that estimates are tailored to the specific occurring tsunami and alert can be more detailed (less generic) and appropriate for local needs. Both these procedures are still at an experimental or testing stage and haven't been implemented yet in any standard TWS operations. Nonetheless, this is seen to be the future and the natural TWS evolving enhancement. In this context, improvement of the real-time estimates of tsunamigenic earthquake focal mechanism is of fundamental importance to trigger the appropriate computational chain. Quick discrimination between strike-slip and thrust-fault earthquakes, and equally relevant, quick assessment of co-seismic on-fault slip distribution, are exemplary cases to which a real-time geodetic monitoring system can contribute significantly. Robust inversion of geodetic data can help to reconstruct the sea floor deformation pattern especially if two conditions are met: the source is not too far from network stations and is well covered azimuthally. These two conditions are sometimes hard to satisfy fully, but in certain regions, like the Mediterranean and the Caribbean sea, this is quite possible due to the limited size of the ocean basins. Close cooperation between the Global Geodetic Observing System (GGOS) community, seismologists, tsunami scientists and TWS operators is highly recommended to obtain significant progresses in the quick determination of the earthquake source, which can trigger a timely estimation of the ensuing tsunami and a more reliable and detailed assessment of the tsunami size at the coast.
Brestrich, Nina; Briskot, Till; Osberghaus, Anna; Hubbuch, Jürgen
2014-07-01
Selective quantification of co-eluting proteins in chromatography is usually performed by offline analytics. This is time-consuming and can lead to late detection of irregularities in chromatography processes. To overcome this analytical bottleneck, a methodology for selective protein quantification in multicomponent mixtures by means of spectral data and partial least squares regression was presented in two previous studies. In this paper, a powerful integration of software and chromatography hardware will be introduced that enables the applicability of this methodology for a selective inline quantification of co-eluting proteins in chromatography. A specific setup consisting of a conventional liquid chromatography system, a diode array detector, and a software interface to Matlab® was developed. The established tool for selective inline quantification was successfully applied for a peak deconvolution of a co-eluting ternary protein mixture consisting of lysozyme, ribonuclease A, and cytochrome c on SP Sepharose FF. Compared to common offline analytics based on collected fractions, no loss of information regarding the retention volumes and peak flanks was observed. A comparison between the mass balances of both analytical methods showed, that the inline quantification tool can be applied for a rapid determination of pool yields. Finally, the achieved inline peak deconvolution was successfully applied to make product purity-based real-time pooling decisions. This makes the established tool for selective inline quantification a valuable approach for inline monitoring and control of chromatographic purification steps and just in time reaction on process irregularities. © 2014 Wiley Periodicals, Inc.
Baker, Richard W.; Pinnau, Ingo; He, Zhenjie; Da Costa, Andre R.; Daniels, Ramin; Amo, Karl D.; Wijmans, Johannes G.
2003-06-03
A process for treating a gas mixture containing at least an organic compound gas or vapor and a second gas, such as natural gas, refinery off-gas or air. The process uses two sequential membrane separation steps, one using membrane selective for the organic compound over the second gas, the other selective for the second gas over the organic vapor. The second-gas-selective membranes use a selective layer made from a polymer having repeating units of a fluorinated polymer, and demonstrate good resistance to plasticization by the organic components in the gas mixture under treatment, and good recovery after exposure to liquid aromatic hydrocarbons. The membrane steps can be combined in either order.
Impens, Saartje; Chen, Yantian; Mullens, Steven; Luyten, Frank; Schrooten, Jan
2010-12-01
The repair of large and complex bone defects could be helped by a cell-based bone tissue engineering strategy. A reliable and consistent cell-seeding methodology is a mandatory step in bringing bone tissue engineering into the clinic. However, optimization of the cell-seeding step is only relevant when it can be reliably evaluated. The cell seeding efficiency (CSE) plays a fundamental role herein. Results showed that cell lysis and the definition used to determine the CSE played a key role in quantifying the CSE. The definition of CSE should therefore be consistent and unambiguous. The study of the influence of five drop-seeding-related parameters within the studied test conditions showed that (i) the cell density and (ii) the seeding vessel did not significantly affect the CSE, whereas (iii) the volume of seeding medium-to-free scaffold volume ratio (MFR), (iv) the seeding time, and (v) the scaffold morphology did. Prolonging the incubation time increased the CSE up to a plateau value at 4 h. Increasing the MFR or permeability by changing the morphology of the scaffolds significantly reduced the CSE. These results confirm that cell seeding optimization is needed and that an evidence-based selection of the seeding conditions is favored.
DATA QUALITY OBJECTIVES FOR SELECTING WASTE SAMPLES FOR BENCH-SCALE REFORMER TREATABILITY STUDIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
BANNING DL
2011-02-11
This document describes the data quality objectives to select archived samples located at the 222-S Laboratory for Bench-Scale Reforming testing. The type, quantity, and quality of the data required to select the samples for Fluid Bed Steam Reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluidized bed steam reformer. A determination of the adequacy of the fluidized bed steam reformer process to treat Hanford tank waste is required.more » The initial step in determining the adequacy of the fluidized bed steam reformer process is to select archived waste samples from the 222-S Laboratory that will be used in a bench scale tests. Analyses of the selected samples will be required to confirm the samples meet the shipping requirements and for comparison to the bench scale reformer (BSR) test sample selection requirements.« less
Three-step HPLC-ESI-MS/MS procedure for screening and identifying non-target flavonoid derivatives
NASA Astrophysics Data System (ADS)
Rak, Gábor; Fodor, Péter; Abrankó, László
2010-02-01
A three-step HPLC-ESI-MS/MS procedure is designed for screening and identification of non-target flavonoid derivatives of selected flavonoid aglycones. In this method the five commonly appearing aglycones (apigenin, luteolin, myricetin, naringenin and quercetin) were selected. The method consists of three individual mass spectrometric experiments of which the first two were implemented within a single chromatographic acquisition. The third step was carried out during a replicate chromatographic run using the same RP-HPLC conditions. The first step, a multiple reaction monitoring (MRM) scan of the aglycones was performed to define the number of derivatives relating to the selected aglycones. For this purpose the characteristic aglycone parts of the unknowns were used as specific tags of the molecules, which were generated as in-source fragments. Secondly, a full scan MS experiment is performed to identify the masses of the potential derivatives of the selected aglycones. Finally, the third step had the capability to confirm the supposed derivatives. The developed method was applied to a commercially available black currant juice to demonstrate its capability to detect and identify various flavonoid glycosides without any preliminary information about their presence in the sample. As a result 13 compounds were detected and identified in total. Namely, 3 different myricetin glycosides and the myricetin aglycone 2 luteolin glycosides plus the aglycone and 3 quercetin glycosides plus the aglycone could be identified from the tested black currant sample. In the case of apigenin and naringenin only the aglycones could be detected.
A clinical test of stepping and change of direction to identify multiple falling older adults.
Dite, Wayne; Temple, Viviene A
2002-11-01
To establish the reliability and validity of a new clinical test of dynamic standing balance, the Four Square Step Test (FSST), to evaluate its sensitivity, specificity, and predictive value in identifying subjects who fall, and to compare it with 3 established balance and mobility tests. A 3-group comparison performed by using 3 validated tests and 1 new test. A rehabilitation center and university medical school in Australia. Eighty-one community-dwelling adults over the age of 65 years. Subjects were age- and gender-matched to form 3 groups: multiple fallers, nonmultiple fallers, and healthy comparisons. Not applicable. Time to complete the FSST and Timed Up and Go test and the number of steps to complete the Step Test and Functional Reach Test distance. High reliability was found for interrater (n=30, intraclass correlation coefficient [ICC]=.99) and retest reliability (n=20, ICC=.98). Evidence for validity was found through correlation with other existing balance tests. Validity was supported, with the FSST showing significantly better performance scores (P<.01) for each of the healthier and less impaired groups. The FSST also revealed a sensitivity of 85%, a specificity of 88% to 100%, and a positive predictive value of 86%. As a clinical test, the FSST is reliable, valid, easy to score, quick to administer, requires little space, and needs no special equipment. It is unique in that it involves stepping over low objects (2.5cm) and movement in 4 directions. The FSST had higher combined sensitivity and specificity for identifying differences between groups in the selected sample population of older adults than the 3 tests with which it was compared. Copyright 2002 by the American Congress of Rehabilitation Medicine and the American Academy of Physical Medicine and Rehabilitation
Tan, Swee Jin; Phan, Huan; Gerry, Benjamin Michael; Kuhn, Alexandre; Hong, Lewis Zuocheng; Min Ong, Yao; Poon, Polly Suk Yean; Unger, Marc Alexander; Jones, Robert C; Quake, Stephen R; Burkholder, William F
2013-01-01
Library preparation for next-generation DNA sequencing (NGS) remains a key bottleneck in the sequencing process which can be relieved through improved automation and miniaturization. We describe a microfluidic device for automating laboratory protocols that require one or more column chromatography steps and demonstrate its utility for preparing Next Generation sequencing libraries for the Illumina and Ion Torrent platforms. Sixteen different libraries can be generated simultaneously with significantly reduced reagent cost and hands-on time compared to manual library preparation. Using an appropriate column matrix and buffers, size selection can be performed on-chip following end-repair, dA tailing, and linker ligation, so that the libraries eluted from the chip are ready for sequencing. The core architecture of the device ensures uniform, reproducible column packing without user supervision and accommodates multiple routine protocol steps in any sequence, such as reagent mixing and incubation; column packing, loading, washing, elution, and regeneration; capture of eluted material for use as a substrate in a later step of the protocol; and removal of one column matrix so that two or more column matrices with different functional properties can be used in the same protocol. The microfluidic device is mounted on a plastic carrier so that reagents and products can be aliquoted and recovered using standard pipettors and liquid handling robots. The carrier-mounted device is operated using a benchtop controller that seals and operates the device with programmable temperature control, eliminating any requirement for the user to manually attach tubing or connectors. In addition to NGS library preparation, the device and controller are suitable for automating other time-consuming and error-prone laboratory protocols requiring column chromatography steps, such as chromatin immunoprecipitation.
Costa-Borges, Nuno; Bellés, Marta; Meseguer, Marcos; Galliano, Daniela; Ballesteros, Agustin; Calderón, Gloria
2016-03-01
To evaluate the efficiency of using a continuous (one-step) protocol with a single medium for the culture of human embryos in a time-lapse incubator (TLI). Prospective cohort study on sibling donor oocytes. University-affiliated in vitro fertilization (IVF) center. Embryos from 59 patients. Culture in a TLI in a single medium with or without renewal of the medium on day-3. Embryo morphology and morphokinetic parameters, clinical pregnancy, take-home baby rate, and perinatal outcomes. The blastocyst rates (68.3 vs. 66.8%) and the proportion of good-quality blastocysts (transferred plus frozen) obtained with the two-step (80.0%) protocol were statistically significantly similar to those obtained in the one-step protocol (72.2%). Similarly, morphokinetic events from early cleavage until late blastocyst stages were statistically significantly equivalent between both groups. No differences were found either in clinical pregnancy rates when comparing pure transfers performed with embryos selected from the two-step (75.0%), one-step (70.0%, respectively), and mixed (57.1%) groups. A total of 55 out of 91 embryos transferred implanted successfully (60.4%), resulting in a total of 37 newborns with a comparable birth weight mean among groups. Our findings support the idea that in a TLI with a controlled air purification system, human embryos can be successfully cultured continuously from day 0 onward in single medium with no need to renew it on day-3. This strategy does not affect embryo morphokinetics or development to term and offers more stable culture conditions for embryos as well as practical advantages and reduced costs for the IVF laboratory. Copyright © 2016 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Tan, Swee Jin; Phan, Huan; Gerry, Benjamin Michael; Kuhn, Alexandre; Hong, Lewis Zuocheng; Min Ong, Yao; Poon, Polly Suk Yean; Unger, Marc Alexander; Jones, Robert C.; Quake, Stephen R.; Burkholder, William F.
2013-01-01
Library preparation for next-generation DNA sequencing (NGS) remains a key bottleneck in the sequencing process which can be relieved through improved automation and miniaturization. We describe a microfluidic device for automating laboratory protocols that require one or more column chromatography steps and demonstrate its utility for preparing Next Generation sequencing libraries for the Illumina and Ion Torrent platforms. Sixteen different libraries can be generated simultaneously with significantly reduced reagent cost and hands-on time compared to manual library preparation. Using an appropriate column matrix and buffers, size selection can be performed on-chip following end-repair, dA tailing, and linker ligation, so that the libraries eluted from the chip are ready for sequencing. The core architecture of the device ensures uniform, reproducible column packing without user supervision and accommodates multiple routine protocol steps in any sequence, such as reagent mixing and incubation; column packing, loading, washing, elution, and regeneration; capture of eluted material for use as a substrate in a later step of the protocol; and removal of one column matrix so that two or more column matrices with different functional properties can be used in the same protocol. The microfluidic device is mounted on a plastic carrier so that reagents and products can be aliquoted and recovered using standard pipettors and liquid handling robots. The carrier-mounted device is operated using a benchtop controller that seals and operates the device with programmable temperature control, eliminating any requirement for the user to manually attach tubing or connectors. In addition to NGS library preparation, the device and controller are suitable for automating other time-consuming and error-prone laboratory protocols requiring column chromatography steps, such as chromatin immunoprecipitation. PMID:23894273
Attitude-Independent Magnetometer Calibration for Spin-Stabilized Spacecraft
NASA Technical Reports Server (NTRS)
Natanson, Gregory
2005-01-01
The paper describes a three-step estimator to calibrate a Three-Axis Magnetometer (TAM) using TAM and slit Sun or star sensor measurements. In the first step, the Calibration Utility forms a loss function from the residuals of the magnitude of the geomagnetic field. This loss function is minimized with respect to biases, scale factors, and nonorthogonality corrections. The second step minimizes residuals of the projection of the geomagnetic field onto the spin axis under the assumption that spacecraft nutation has been suppressed by a nutation damper. Minimization is done with respect to various directions of the body spin axis in the TAM frame. The direction of the spin axis in the inertial coordinate system required for the residual computation is assumed to be unchanged with time. It is either determined independently using other sensors or included in the estimation parameters. In both cases all estimation parameters can be found using simple analytical formulas derived in the paper. The last step is to minimize a third loss function formed by residuals of the dot product between the geomagnetic field and Sun or star vector with respect to the misalignment angle about the body spin axis. The method is illustrated by calibrating TAM for the Fast Auroral Snapshot Explorer (FAST) using in-flight TAM and Sun sensor data. The estimated parameters include magnetic biases, scale factors, and misalignment angles of the spin axis in the TAM frame. Estimation of the misalignment angle about the spin axis was inconclusive since (at least for the selected time interval) the Sun vector was about 15 degrees from the direction of the spin axis; as a result residuals of the dot product between the geomagnetic field and Sun vectors were to a large extent minimized as a by-product of the second step.
Bean, William T.; Stafford, Robert; Butterfield, H. Scott; Brashares, Justin S.
2014-01-01
Species distributions are known to be limited by biotic and abiotic factors at multiple temporal and spatial scales. Species distribution models, however, frequently assume a population at equilibrium in both time and space. Studies of habitat selection have repeatedly shown the difficulty of estimating resource selection if the scale or extent of analysis is incorrect. Here, we present a multi-step approach to estimate the realized and potential distribution of the endangered giant kangaroo rat. First, we estimate the potential distribution by modeling suitability at a range-wide scale using static bioclimatic variables. We then examine annual changes in extent at a population-level. We define “available” habitat based on the total suitable potential distribution at the range-wide scale. Then, within the available habitat, model changes in population extent driven by multiple measures of resource availability. By modeling distributions for a population with robust estimates of population extent through time, and ecologically relevant predictor variables, we improved the predictive ability of SDMs, as well as revealed an unanticipated relationship between population extent and precipitation at multiple scales. At a range-wide scale, the best model indicated the giant kangaroo rat was limited to areas that received little to no precipitation in the summer months. In contrast, the best model for shorter time scales showed a positive relation with resource abundance, driven by precipitation, in the current and previous year. These results suggest that the distribution of the giant kangaroo rat was limited to the wettest parts of the drier areas within the study region. This multi-step approach reinforces the differing relationship species may have with environmental variables at different scales, provides a novel method for defining “available” habitat in habitat selection studies, and suggests a way to create distribution models at spatial and temporal scales relevant to theoretical and applied ecologists. PMID:25237807
Is USMLE Step 1 score a valid predictor of success in surgical residency?
Sutton, Erica; Richardson, James David; Ziegler, Craig; Bond, Jordan; Burke-Poole, Molly; McMasters, Kelly M
2014-12-01
Many programs rely extensively on United States Medical Licensing Examination (USMLE) scores for interviews/selection of surgical residents. However, their predictive ability remains controversial. We examined the association between USMLE scores and success in surgical residency. We compared USMLE scores for 123 general surgical residents who trained in the past 20 years and their performance evaluation. Scores were normalized to the mean for the testing year and expressed as a ratio (1 = mean). Performances were evaluated by (1) rotation evaluations; (2) "dropouts;" (3) overall American Board of Surgery pass rate; (4) first-time American Board of Surgery pass rate; and (5) a retrospective comprehensive faculty evaluation. For the latter, 16 surgeons (average faculty tenure 22 years) rated residents on a 1 to 4 score (1 = fair; 4 = excellent). Rotation evaluations by faculty and "drop out" rates were not associated with USMLE score differences (dropouts had average above the mean). One hundred percent of general surgery practitioners achieved board certification regardless of USMLE score but trainees with an average above the mean had a higher first-time pass rate (P = .04). Data from the comprehensive faculty evaluations were conflicting: there was a moderate degree of correlation between board scores and faculty evaluations (r = .287, P = .001). However, a score above the mean was associated with a faculty ranking of 3 to 4 in only 51.7% of trainees. Higher USMLE scores were associated with higher faculty evaluations and first-time board pass rates. However, their positive predictive value was only 50% for higher faculty evaluations and a high overall board pass rate can be achieved regardless of USMLE scores. USMLE Step 1 score is a valid tool for selecting residents but caution might be indicated in using it as a single selection factor. Copyright © 2014 Elsevier Inc. All rights reserved.
Gobi, K Vengatajalabathy; Matsumoto, Kiyoshi; Toko, Kiyoshi; Ikezaki, Hidekazu; Miura, Norio
2007-04-01
This paper describes the fabrication and sensing characteristics of a self-assembled monolayer (SAM)-based surface plasmon resonance (SPR) immunosensor for detection of benzaldehyde (BZ). The functional sensing surface was fabricated by the immobilization of a benzaldehyde-ovalbumin conjugate (BZ-OVA) on Au-thiolate SAMs containing carboxyl end groups. Covalent binding of BZ-OVA on SAM was found to be dependent on the composition of the base SAM, and it is improved very much with the use of a mixed monolayer strategy. Based on SPR angle measurements, the functional sensor surface is established as a compact monolayer of BZ-OVA bound on the mixed SAM. The BZ-OVA-bound sensor surface undergoes immunoaffinity binding with anti-benzaldehyde antibody (BZ-Ab) selectively. An indirect inhibition immunoassay principle has been applied, in which analyte benzaldehyde solution was incubated with an optimal concentration of BZ-Ab for 5 min and injected over the sensor chip. Analyte benzaldehyde undergoes immunoreaction with BZ-Ab and makes it inactive for binding to BZ-OVA on the sensor chip. As a result, the SPR angle response decreases with an increase in the concentration of benzaldehyde. The fabricated immunosensor demonstrates a low detection limit (LDL) of 50 ppt (pg mL(-1)) with a response time of 5 min. Antibodies bound to the sensor chip during an immunoassay could be detached by a brief exposure to acidic pepsin. With this surface regeneration, reusability of the same sensor chip for as many as 30 determination cycles has been established. Sensitivity has been enhanced further with the application of an additional single-step multi-sandwich immunoassay step, in which the BZ-Ab bound to the sensor chip was treated with a mixture of biotin-labeled secondary antibody, streptavidin and biotin-bovine serum albumin (Bio-BSA) conjugate. With this approach, the SPR sensor signal increased by ca. 12 times and the low detection limit improved to 5 ppt with a total response time of no more than ca. 10 min. Figure A single-step multi-sandwich immunoassay step increases SPR sensor signal by ca. 12 times affording a low detection limit for benzaldehyde of 5 ppt.
Variable-mesh method of solving differential equations
NASA Technical Reports Server (NTRS)
Van Wyk, R.
1969-01-01
Multistep predictor-corrector method for numerical solution of ordinary differential equations retains high local accuracy and convergence properties. In addition, the method was developed in a form conducive to the generation of effective criteria for the selection of subsequent step sizes in step-by-step solution of differential equations.
2012-01-01
Background Previous studies demonstrated that stroke survivors have a limited capacity to increase their walking speeds beyond their self-selected maximum walking speed (SMWS). The purpose of this study was to determine the capacity of stroke survivors to reach faster speeds than their SMWS while walking on a treadmill belt or while being pushed by a robotic system (i.e. “push mode”). Methods Eighteen chronic stroke survivors with hemiplegia were involved in the study. We calculated their self-selected comfortable walking speed (SCWS) and SMWS overground using a 5-meter walk test (5-MWT). Then, they were exposed to walking at increased speeds, on a treadmill and while in “push mode” in an overground robotic device, the KineAssist, until they were tested at a speed that they could not sustain without losing balance. We recorded the time and number of steps during each trial and calculated gait speed, average cadence and average step length. Results Maximum walking speed in the “push mode” was 13% higher than the maximum walking speed on the treadmill and both were higher (“push mode”: 61%; treadmill: 40%) than the maximum walking speed overground. Subjects achieved these faster speeds by initially increasing both step length and cadence and, once individuals stopped increasing their step length, by only increasing cadence. Conclusions With post-stroke hemiplegia, individuals are able to walk at faster speeds than their SMWS overground, when provided with a safe environment that provides external forces that requires them to attempt dynamic stability maintenance at higher gait speeds. Therefore, this study suggests the possibility that, given the appropriate conditions, people post-stroke can be trained at higher speeds than previously attempted. PMID:23057500
NASA Astrophysics Data System (ADS)
Rocha, Humberto; Dias, Joana M.; Ferreira, Brígida C.; Lopes, Maria C.
2013-05-01
Generally, the inverse planning of radiation therapy consists mainly of the fluence optimization. The beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of the IMRT plans, both to enhance better organ sparing and to improve tumor coverage. However, in clinical practice, most of the time, beam directions continue to be manually selected by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam’s-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require a few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search method framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures the convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search since it allows searches away from the neighborhood of the current iterate. Beam’s-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework furnishing a priori knowledge of the problem so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem.
Nine Steps to a Successful Lighting Retrofit.
ERIC Educational Resources Information Center
Ries, Jack
1998-01-01
Presents the steps needed to successfully design a lighting retrofit of school classrooms. Tips cover budgeting, technology, financing, contractor selection, assessing area function, and choosing a light source. (GR)
Steps Toward Effective Production of Speech (STEPS): No. 6--Rewards and How to Use Them.
ERIC Educational Resources Information Center
Sheeley, Eugene C.; McQuiddy, Doris
This guide, part of a series of booklets for parents of deaf-blind children developed by Project STEP (Steps Toward Effective Production of Speech), considers the use of rewards in shaping or changing the behavior of deaf-blind children. The types of rewards (e.g., food, drink, touch, action, something to listen to or look at) and selection of…
Bracci, S; Caruso, O; Galeotti, M; Iannaccone, R; Magrini, D; Picchi, D; Pinna, D; Porcinai, S
2015-06-15
This paper demonstrates that an educated methodology based on both non-invasive and micro invasive techniques in a two-step approach is a powerful tool to characterize the materials and stratigraphies of an Egyptian coffin, which was restored several times. This coffin, belonging to a certain Mesiset, is now located at the Museo Civico Archeologico of Bologna (inventory number MCABo EG 1963). Scholars attributed it to the late 22nd/early 25th dynasty by stylistic comparison. The first step of the diagnostic approach applied imaging techniques on the whole surface in order to select measurements spots and to unveil both original and restored areas. Images and close microscopic examination of the polychrome surface allowed selecting representative areas to be investigated in situ by portable spectroscopic techniques: X-ray Fluorescence (XRF), Fiber Optic Reflectance Spectroscopy (FORS) and Fourier Transform Infrared spectroscopy (FTIR). After the analysis of the results coming from the first step, very few selected samples were taken to clarify the stratigraphy of the polychrome layers. The first step, based on the combination of imaging and spectroscopic techniques in a totally non-invasive modality, is quite unique in the literature on Egyptian coffins and enabled us to reveal many differences in the ground layer's composition and to identify a remarkable number of pigments in the original and restored areas. This work offered also a chance to check the limitations of the non-invasive approach applied on a complex case, namely the right localization of different materials in the stratigraphy and the identification of binding media. Indeed, to dissolve any remaining doubts on superimposed layers belonging to different interventions, it was necessary to sample few micro-fragments in some selected areas and analyze them prepared as cross-sections. The original ground layer is made of calcite, while the restored areas show the presence of either a mixture of calcite and silicates or a gypsum ground, overlapped by lead white. The original pigments were identified as orpiment, cinnabar and red clay, Egyptian blue and green copper based pigments. Some other pigments, such as white lead, Naples yellow, cerulean blue and azurite were only found in the restored areas. Copyright © 2015 Elsevier B.V. All rights reserved.
Deryke, C Andrew; Du, Xiaoli; Nicolau, David P
2006-09-01
The increasingly recognized prevalence of first-step parC mutants in Streptococcus pneumoniae and the development of de novo resistance while on fluoroquinolone therapy are of concern. Previous work by our group demonstrated the ability of moxifloxacin, but not levofloxacin, to eradicate parC mutants. The objective of this experiment was to determine whether these fluoroquinolone antibiotics provided equivalent bacterial kill when similar AUC/MICs were examined. An in vitro pharmacodynamic model was used to simulate the epithelial lining fluid (ELF) concentrations following oral administration of levofloxacin 500 mg once daily and moxifloxacin 400 mg once daily in older adults. In addition, a range of AUC/MICs were also modelled, including levofloxacin 750 mg once daily. Five different S. pneumoniae containing first-step parC mutations and one isolate without mutations were tested for 48 h and time-kill curves were constructed. Samples at 0, 24 and 48 h were collected for phenotypic and genotypic profiling. HPLC was used to verify that target exposures were achieved. The isolate without a parC mutation displayed a 4 log reduction in cfu after treatment with levofloxacin 500 mg and did not select for resistance. In all five isolates containing first-step parC mutations, resistance emerged within 48 h with a > or =16-fold increase in MIC and the acquisition of a gyrA mutant. Increasing the exposure of levofloxacin to approximately 750 mg dose still led to > or =16-fold increase in MIC at 48 h in two of the four isolates containing parC mutations. On the other hand, moxifloxacin 400 mg sustained bacterial killing against the two isolates tested without the selection of resistant mutants. It appears that the critical AUC/MIC necessary to prevent the acquisition of resistance for levofloxacin is 200 and approximately 400 for moxifloxacin. Due to suboptimal exposures, once-daily oral regimens of levofloxacin at both 500 and 750 mg inconsistently led to bactericidal activity and the frequent acquisition of a second-step gyrA mutation in S. pneumoniae isolates already containing a first-step parC mutation. Conversely, once-daily moxifloxacin 400 mg provides exposures that vastly exceed the apparent efficacy breakpoint and did not select for second-step mutants until exposures were decreased 4-fold. As a result of these data and the emerging literature involving mutations in the pneumococcus, caution should be exercised when the respiratory fluoroquinolones are used to treat patients infected with S. pneumoniae suspected of having parC mutations.
Selective robust optimization: A new intensity-modulated proton therapy optimization strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yupeng; Niemela, Perttu; Siljamaki, Sami
2015-08-15
Purpose: To develop a new robust optimization strategy for intensity-modulated proton therapy as an important step in translating robust proton treatment planning from research to clinical applications. Methods: In selective robust optimization, a worst-case-based robust optimization algorithm is extended, and terms of the objective function are selectively computed from either the worst-case dose or the nominal dose. Two lung cancer cases and one head and neck cancer case were used to demonstrate the practical significance of the proposed robust planning strategy. The lung cancer cases had minimal tumor motion less than 5 mm, and, for the demonstration of the methodology,more » are assumed to be static. Results: Selective robust optimization achieved robust clinical target volume (CTV) coverage and at the same time increased nominal planning target volume coverage to 95.8%, compared to the 84.6% coverage achieved with CTV-based robust optimization in one of the lung cases. In the other lung case, the maximum dose in selective robust optimization was lowered from a dose of 131.3% in the CTV-based robust optimization to 113.6%. Selective robust optimization provided robust CTV coverage in the head and neck case, and at the same time improved controls over isodose distribution so that clinical requirements may be readily met. Conclusions: Selective robust optimization may provide the flexibility and capability necessary for meeting various clinical requirements in addition to achieving the required plan robustness in practical proton treatment planning settings.« less
Signatures of ecological processes in microbial community time series.
Faust, Karoline; Bauchinger, Franziska; Laroche, Béatrice; de Buyl, Sophie; Lahti, Leo; Washburne, Alex D; Gonze, Didier; Widder, Stefanie
2018-06-28
Growth rates, interactions between community members, stochasticity, and immigration are important drivers of microbial community dynamics. In sequencing data analysis, such as network construction and community model parameterization, we make implicit assumptions about the nature of these drivers and thereby restrict model outcome. Despite apparent risk of methodological bias, the validity of the assumptions is rarely tested, as comprehensive procedures are lacking. Here, we propose a classification scheme to determine the processes that gave rise to the observed time series and to enable better model selection. We implemented a three-step classification scheme in R that first determines whether dependence between successive time steps (temporal structure) is present in the time series and then assesses with a recently developed neutrality test whether interactions between species are required for the dynamics. If the first and second tests confirm the presence of temporal structure and interactions, then parameters for interaction models are estimated. To quantify the importance of temporal structure, we compute the noise-type profile of the community, which ranges from black in case of strong dependency to white in the absence of any dependency. We applied this scheme to simulated time series generated with the Dirichlet-multinomial (DM) distribution, Hubbell's neutral model, the generalized Lotka-Volterra model and its discrete variant (the Ricker model), and a self-organized instability model, as well as to human stool microbiota time series. The noise-type profiles for all but DM data clearly indicated distinctive structures. The neutrality test correctly classified all but DM and neutral time series as non-neutral. The procedure reliably identified time series for which interaction inference was suitable. Both tests were required, as we demonstrated that all structured time series, including those generated with the neutral model, achieved a moderate to high goodness of fit to the Ricker model. We present a fast and robust scheme to classify community structure and to assess the prevalence of interactions directly from microbial time series data. The procedure not only serves to determine ecological drivers of microbial dynamics, but also to guide selection of appropriate community models for prediction and follow-up analysis.
NASA Astrophysics Data System (ADS)
Nichols, Leannah M.
Commercially pure titanium can take up to six months to successfully manufacture a six-inch in diameter ingot in which can be shipped to be melted and shaped into other useful components. The applications to the corrosion-resistant, light weight, strong metal are endless, yet so is the manufacturing processing time. At a cost of around $80 per pound of certain grades of titanium powder, the everyday consumer cannot afford to use titanium in the many ways it is beneficial simply because the number of processing steps it takes to manufacture consumes too much time, energy, and labor. In this research, the steps it takes from the raw powder form to the final part are proposed to be reduced from 4-8 steps to only 2 steps utilizing a new technology that may even improve upon the titanium properties at the same time as it is reducing the number of steps of manufacture. The two-step procedure involves selecting a cylindrical or rectangular die and punch to compress a small amount of commercially pure titanium to a strong-enough compact for transportation to the friction stir welder to be consolidated. Friction stir welding invented in 1991 in the United Kingdom uses a tool, similar to a drill bit, to approach a sample and gradually plunge into the material at a certain rotation rate of between 100 to 2,100 RPM. In the second step, the friction stir welder is used to process the titanium powder held in a tight holder to consolidate into a harder titanium form. The resulting samples are cut to expose the cross section and then grinded, polished, and cleaned to be observed and tested using scanning electron microscopy (SEM), electron dispersive spectroscopy (EDS), and a Vickers microhardness tester. The results were that the thicker the sample, the harder the resulting consolidated sample peaking at 2 to 3 times harder than that of the original commercially pure titanium in solid form at a peak value of 435.9 hardness and overall average of 251.13 hardness. The combined results of the SEM and EDS have shown that the mixing of the sample holder material, titanium, and tool material were not of a large amount and therefore proves the feasibility of this study. This study should be continued to lessen the labor, energy, and cost of the production of titanium to therefore allow titanium to be improved upon and be more efficient for many applications across many industries.
NASA Astrophysics Data System (ADS)
Siniscalchi, Agata; Romano, Gerardo; Barracano, Fabio; Balasco, Marianna; Tripaldi, Simona
2017-04-01
Analyzing a 4 years of a single site MT continuous monitoring data, a systematic variation of the MT transfer function estimates was observed in the [20-100 s] period range that was shown to be connected to the global geomagnetic activity, Ap index (Romano et al., 2014). The monitored period, from 2007 to 2011, includes the global minimum of solar activity which occurred in 2009 (low MT source amplitude). It was shown that the impedance robust estimations tend to stabilize when the Ap index exceed a value of 10. In order to exclude a possible dependence of the observed fluctuation on the presence of a local cultural noise source, for a shorter period ( 2 months) the monitoring data were also processed by using a remote site. Recently Chave (2012) demonstrated that MT data can be described by alpha stable distribution family that is characterized by four-parameters that must be empirically determined. The Gaussian distribution belongs to this family as a special case when one of the four parameter, α the tail thickness, is equal to 2. Following Chave (2016), MT data are typically stably distributed with the empirical observation that 0.8 ≤α ≤1.8. In order to better understand the observed dependence of the MT continuous monitoring on the global geomagnetic activity, here we present the results a re-analysis of the MT monitoring data with a two steps processing. In the first step, we characterize the time series of the Alpha Stable Distribution Parameters (ASDP) as obtained from the whole processing of the dataset with the aim of checking for possible connections between these last and the Ap index. In the second step, we estimate the ASDP by using only the samples which satisfy the mathematical range of existence of the normalized WAL (Weaver et al.,2000) considering these last as a diagnostic tool to detect which segments of the time series in the frequency domain are strongly contaminated by noise (WAL selection criterion). The comparison between the results of the two above mentioned steps, allow us to understand how the WAL based selection criterion performs.
Perricone, Marianne; Bevilacqua, Antonio; Corbo, Maria Rosaria; Sinigaglia, Milena
2014-04-01
The main topic of this research was to select some suitable functional starter cultures for cereal-based food or beverages. This aim was achieved through a step-by step approach focused on the technological characterization, as well as on the evaluation of the probiotic traits of yeasts; the technological characterization relied on the assessment of enzymatic activities (catalase, urease, β-glucosidase), growth under various conditions (pH, temperature, addition of salt, lactic and acetic acids) and leavening ability. The results of this step were used as input data for a Principal Component Analysis; thus, the most technologically relevant 18 isolates underwent a second selection for their probiotic traits (survival at pH 2.5 and with bile salts added, antibiotic resistance, antimicrobial activity towards foodborne pathogens, hydrophobic properties and biofilm production) and were identified through genotyping. Two isolates (Saccharomyces cerevisiae strain 2 and S. cerevisiae strain 4) were selected and analyzed in the last step for the simulation of the gastric transit; these isolates showed a trend similar to S. cerevisiae var. boulardii ATCC MYA-796, a commercial probiotic yeast used as control. Copyright © 2013 Elsevier Ltd. All rights reserved.
Even-Desrumeaux, Klervi; Nevoltris, Damien; Lavaut, Marie Noelle; Alim, Karima; Borg, Jean-Paul; Audebert, Stéphane; Kerfelec, Brigitte; Baty, Daniel; Chames, Patrick
2014-01-01
Phage display is a well-established procedure to isolate binders against a wide variety of antigens that can be performed on purified antigens, but also on intact cells. As selection steps are performed in vitro, it is possible to focus the outcome of the selection on relevant epitopes by performing some additional steps, such as depletion or competitive elutions. However in practice, the efficiency of these steps is often limited and can lead to inconsistent results. We have designed a new selection method named masked selection, based on the blockade of unwanted epitopes to favor the targeting of relevant ones. We demonstrate the efficiency and flexibility of this method by selecting single-domain antibodies against a specific portion of a fusion protein, by selecting binders against several members of the seven transmembrane receptor family using transfected HEK cells, or by selecting binders against unknown breast cancer markers not expressed on normal samples. The relevance of this approach for antibody-based therapies was further validated by the identification of four of these markers, Epithelial cell adhesion molecule, Transferrin receptor 1, Metastasis cell adhesion molecule, and Sushi containing domain 2, using immunoprecipitation and mass spectrometry. This new phage display strategy can be applied to any type of antibody fragments or alternative scaffolds, and is especially suited for the rapid discovery and identification of cell surface markers. PMID:24361863
Burst switching without guard interval in all-optical software-define star intra-data center network
NASA Astrophysics Data System (ADS)
Ji, Philip N.; Wang, Ting
2014-02-01
Optical switching has been introduced in intra-data center networks (DCNs) to increase capacity and to reduce power consumption. Recently we proposed a star MIMO OFDM-based all-optical DCN with burst switching and software-defined networking. Here, we introduce the control procedure for the star DCN in detail for the first time. The timing, signaling, and operation are described for each step to achieve efficient bandwidth resource utilization. Furthermore, the guidelines for the burst assembling period selection that allows burst switching without guard interval are discussed. The star all-optical DCN offers flexible and efficient control for next-generation data center application.
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Smith, G. E.; Springer, G. S.; Rimon, Y.
1983-01-01
A method is presented for formulating the boundary conditions in implicit finite-difference form needed for obtaining solutions to the compressible Navier-Stokes equations by the Beam and Warming implicit factored method. The usefulness of the method was demonstrated (a) by establishing the boundary conditions applicable to the analysis of the flow inside an axisymmetric piston-cylinder configuration and (b) by calculating velocities and mass fractions inside the cylinder for different geometries and different operating conditions. Stability, selection of time step and grid sizes, and computer time requirements are discussed in reference to the piston-cylinder problem analyzed.
A random rule model of surface growth
NASA Astrophysics Data System (ADS)
Mello, Bernardo A.
2015-02-01
Stochastic models of surface growth are usually based on randomly choosing a substrate site to perform iterative steps, as in the etching model, Mello et al. (2001) [5]. In this paper I modify the etching model to perform sequential, instead of random, substrate scan. The randomicity is introduced not in the site selection but in the choice of the rule to be followed in each site. The change positively affects the study of dynamic and asymptotic properties, by reducing the finite size effect and the short-time anomaly and by increasing the saturation time. It also has computational benefits: better use of the cache memory and the possibility of parallel implementation.
Kennedy, Quinn; Taylor, Joy; Noda, Art; Yesavage, Jerome; Lazzeroni, Laura C.
2015-01-01
Understanding the possible effects of the number of practice sessions (practice) and time between practice sessions (interval) among middle-aged and older adults in real world tasks has important implications for skill maintenance. Prior training and cognitive ability may impact practice and interval effects on real world tasks. In this study, we took advantage of existing practice data from five simulated flights among 263 middle-aged and older pilots with varying levels of flight expertise (defined by FAA proficiency ratings). We developed a new STEP (Simultaneous Time Effects on Practice) model to: (1) model the simultaneous effects of practice and interval on performance of the five flights, and (2) examine the effects of selected covariates (age, flight expertise, and three composite measures of cognitive ability). The STEP model demonstrated consistent positive practice effects, negative interval effects, and predicted covariate effects. Age negatively moderated the beneficial effects of practice. Additionally, cognitive processing speed and intra-individual variability (IIV) in processing speed moderated the benefits of practice and/or the negative influence of interval for particular flight performance measures. Expertise did not interact with either practice or interval. Results indicate that practice and interval effects occur in simulated flight tasks. However, processing speed and IIV may influence these effects, even among high functioning adults. Results have implications for the design and assessment of training interventions targeted at middle-aged and older adults for complex real world tasks. PMID:26280383
Bayesian SEM for Specification Search Problems in Testing Factorial Invariance.
Shi, Dexin; Song, Hairong; Liao, Xiaolan; Terry, Robert; Snyder, Lori A
2017-01-01
Specification search problems refer to two important but under-addressed issues in testing for factorial invariance: how to select proper reference indicators and how to locate specific non-invariant parameters. In this study, we propose a two-step procedure to solve these issues. Step 1 is to identify a proper reference indicator using the Bayesian structural equation modeling approach. An item is selected if it is associated with the highest likelihood to be invariant across groups. Step 2 is to locate specific non-invariant parameters, given that a proper reference indicator has already been selected in Step 1. A series of simulation analyses show that the proposed method performs well under a variety of data conditions, and optimal performance is observed under conditions of large magnitude of non-invariance, low proportion of non-invariance, and large sample sizes. We also provide an empirical example to demonstrate the specific procedures to implement the proposed method in applied research. The importance and influences are discussed regarding the choices of informative priors with zero mean and small variances. Extensions and limitations are also pointed out.
Four Practical Steps to Buying Copiers.
ERIC Educational Resources Information Center
Sturgeon, Julie
1999-01-01
Presents practical steps for avoiding overbuying when selecting copiers for university administration. Evaluating copying needs, eliminating excessive features, examining the dealer's capabilities, and being patient for the right price are discussed. (GR)
The Relaxation of Vicinal (001) with ZigZag [110] Steps
NASA Astrophysics Data System (ADS)
Hawkins, Micah; Hamouda, Ajmi Bh; González-Cabrera, Diego Luis; Einstein, Theodore L.
2012-02-01
This talk presents a kinetic Monte Carlo study of the relaxation dynamics of [110] steps on a vicinal (001) simple cubic surface. This system is interesting because [110] steps have different elementary excitation energetics and favor step diffusion more than close-packed [100] steps. In this talk we show how this leads to relaxation dynamics showing greater fluctuations on a shorter time scale for [110] steps as well as 2-bond breaking processes being rate determining in contrast to 3-bond breaking processes for [100] steps. The existence of a steady state is shown via the convergence of terrace width distributions at times much longer than the relaxation time. In this time regime excellent fits to the modified generalized Wigner distribution (as well as to the Berry-Robnik model when steps can overlap) were obtained. Also, step-position correlation function data show diffusion-limited increase for small distances along the step as well as greater average step displacement for zigzag steps compared to straight steps for somewhat longer distances along the step. Work supported by NSF-MRSEC Grant DMR 05-20471 as well as a DOE-CMCSN Grant.
Protein attributes contribute to halo-stability, bioinformatics approach
2011-01-01
Halophile proteins can tolerate high salt concentrations. Understanding halophilicity features is the first step toward engineering halostable crops. To this end, we examined protein features contributing to the halo-toleration of halophilic organisms. We compared more than 850 features for halophilic and non-halophilic proteins with various screening, clustering, decision tree, and generalized rule induction models to search for patterns that code for halo-toleration. Up to 251 protein attributes selected by various attribute weighting algorithms as important features contribute to halo-stability; from them 14 attributes selected by 90% of models and the count of hydrogen gained the highest value (1.0) in 70% of attribute weighting models, showing the importance of this attribute in feature selection modeling. The other attributes mostly were the frequencies of di-peptides. No changes were found in the numbers of groups when K-Means and TwoStep clustering modeling were performed on datasets with or without feature selection filtering. Although the depths of induced trees were not high, the accuracies of trees were higher than 94% and the frequency of hydrophobic residues pointed as the most important feature to build trees. The performance evaluation of decision tree models had the same values and the best correctness percentage recorded with the Exhaustive CHAID and CHAID models. We did not find any significant difference in the percent of correctness, performance evaluation, and mean correctness of various decision tree models with or without feature selection. For the first time, we analyzed the performance of different screening, clustering, and decision tree algorithms for discriminating halophilic and non-halophilic proteins and the results showed that amino acid composition can be used to discriminate between halo-tolerant and halo-sensitive proteins. PMID:21592393
Collet-Brose, Justine
2016-01-01
The aim of this study was, at the assay development stage and thus with an appropriate degree of rigor, to select the most appropriate technology platform and sample pretreatment procedure for a clinical ADA assay. Thus, ELISA, MSD, Gyrolab, and AlphaLISA immunoassay platforms were evaluated in association with target depletion and acid dissociation sample pretreatment steps. An acid dissociation step successfully improved the drug tolerance for all 4 technology platforms and the required drug tolerance was achieved with the Gyrolab and MSD platforms. The target tolerance was shown to be better for the ELISA format, where an acid dissociation treatment step alone was sufficient to achieve the desired target tolerance. However, inclusion of a target depletion step in conjunction with the acid treatment raised the target tolerance to the desired level for all of the technologies. A higher sensitivity was observed for the MSD and Gyrolab assays and the ELISA, MSD, and Gyrolab all displayed acceptable interdonor variability. This study highlights the usefulness of evaluating the performance of different assay platforms at an early stage in the assay development process to aid in the selection of the best fit-for-purpose technology platform and sample pretreatment steps. PMID:27243038
Integration Of Space Weather Into Space Situational Awareness
NASA Astrophysics Data System (ADS)
Reeves, G.
2010-09-01
Rapid assessment of space weather effects on satellites is a critical step in anomaly resolution and satellite threat assessment. That step, however, is often hindered by a number of factors including timely collection and delivery of space weather data and the inherent complexity of space weather information. As part of a larger, integrated space situational awareness program, Los Alamos National Laboratory has developed prototype operational space weather tools that run in real time and present operators with customized, user-specific information. The Dynamic Radiation Environment Assimilation Model (DREAM) focuses on the penetrating radiation environment from natural or nuclear-produced radiation belts. The penetrating radiation environment is highly dynamic and highly orbitdependent. Operators often must rely only on line plots of 2 MeV electron flux from the NOAA geosynchronous GOES satellites which is then assumed to be representative of the environment at the satellite of interest. DREAM uses data assimilation to produce a global, real-time, energy dependent specification. User tools are built around a distributed service oriented architecture (SOA) which allows operators to select any satellite from the space catalog and examine the environment for that specific satellite and time of interest. Depending on the application operators may need to examine instantaneous dose rates and/or dose accumulated over various lengths of time. Further, different energy thresholds can be selected depending on the shielding on the satellite or instrument of interest. In order to rapidly assess the probability that space weather effects, the current conditions can be compared against the historical distribution of radiation levels for that orbit. In the simplest operation a user would select a satellite and time of interest and immediately see if the environmental conditions were typical, elevated, or extreme based on how often those conditions occur in that orbit. This allows users to rapidly rule in or out environmental causes of anomalies. The same user interface can also allow users to drill down for more detailed quantitative information. DREAM can be run either from a distributed web-based user interface or as a stand-alone application for secure operations. We will discuss the underlying structure of the DREAM model and demonstrate the user interface that we have developed. We will also discuss future development plans for DREAM and how the same paradigm can be applied to integrating other space environment information into operational SSA systems.
Surface-Chemistry-Mediated Control of Individual Magnetic Helical Microswimmers in a Swarm.
Wang, Xiaopu; Hu, Chengzhi; Schurz, Lukas; De Marco, Carmela; Chen, Xiangzhong; Pané, Salvador; Nelson, Bradley J
2018-05-31
Magnetic helical microswimmers, also known as artificial bacterial flagella (ABFs), perform 3D navigation in various liquids under low-strength rotating magnetic fields by converting rotational motion to translational motion. ABFs have been widely studied as carriers for targeted delivery and release of drugs and cells. For in vivo/ in vitro therapeutic applications, control over individual groups of swimmers within a swarm is necessary for several biomedical applications such as drug delivery or small-scale surgery. In this work, we present the selective control of individual swimmers in a swarm of geometrically and magnetically identical ABFs by modifying their surface chemistry. We confirm experimentally and analytically that the forward/rotational velocity ratio of ABFs is independent of their surface coatings when the swimmers are operated below their step-out frequency (the frequency requiring the entire available magnetic torque to maintain synchronous rotation). We also show that ABFs with hydrophobic surfaces exhibit larger step-out frequencies and higher maximum forward velocities compared to their hydrophilic counterparts. Thus, selective control of a group of swimmers within a swarm of ABFs can be achieved by operating the selected ABFs at a frequency that is below their step-out frequencies but higher than the step-out frequencies of unselected ABFs. The feasibility of this method is investigated in water and in biologically relevant solutions. Selective control is also demonstrated inside a Y-shaped microfluidic channel. Our results present a systematic approach for realizing selective control within a swarm of magnetic helical microswimmers.
Consistency of internal fluxes in a hydrological model running at multiple time steps
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2016-04-01
Improving hydrological models remains a difficult task and many ways can be explored, among which one can find the improvement of spatial representation, the search for more robust parametrization, the better formulation of some processes or the modification of model structures by trial-and-error procedure. Several past works indicate that model parameters and structure can be dependent on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps, to find solutions for improvements. Here we analyse the impact of data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, by using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input of a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to be run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance to the identification of the model components that should be improved. Our analysis indicates that the baseline model structure is to be modified at sub-daily time steps to warrant the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception model component, whose output flux showed the strongest sensitivity to modelling time step. The dependency of the optimal model complexity on time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7
Asymmetry of short-term control of spatio-temporal gait parameters during treadmill walking
NASA Astrophysics Data System (ADS)
Kozlowska, Klaudia; Latka, Miroslaw; West, Bruce J.
2017-03-01
Optimization of energy cost determines average values of spatio-temporal gait parameters such as step duration, step length or step speed. However, during walking, humans need to adapt these parameters at every step to respond to exogenous and/or endogenic perturbations. While some neurological mechanisms that trigger these responses are known, our understanding of the fundamental principles governing step-by-step adaptation remains elusive. We determined the gait parameters of 20 healthy subjects with right-foot preference during treadmill walking at speeds of 1.1, 1.4 and 1.7 m/s. We found that when the value of the gait parameter was conspicuously greater (smaller) than the mean value, it was either followed immediately by a smaller (greater) value of the contralateral leg (interleg control), or the deviation from the mean value decreased during the next movement of ipsilateral leg (intraleg control). The selection of step duration and the selection of step length during such transient control events were performed in unique ways. We quantified the symmetry of short-term control of gait parameters and observed the significant dominance of the right leg in short-term control of all three parameters at higher speeds (1.4 and 1.7 m/s).
LOSITAN: a workbench to detect molecular adaptation based on a Fst-outlier method.
Antao, Tiago; Lopes, Ana; Lopes, Ricardo J; Beja-Pereira, Albano; Luikart, Gordon
2008-07-28
Testing for selection is becoming one of the most important steps in the analysis of multilocus population genetics data sets. Existing applications are difficult to use, leaving many non-trivial, error-prone tasks to the user. Here we present LOSITAN, a selection detection workbench based on a well evaluated Fst-outlier detection method. LOSITAN greatly facilitates correct approximation of model parameters (e.g., genome-wide average, neutral Fst), provides data import and export functions, iterative contour smoothing and generation of graphics in a easy to use graphical user interface. LOSITAN is able to use modern multi-core processor architectures by locally parallelizing fdist, reducing computation time by half in current dual core machines and with almost linear performance gains in machines with more cores. LOSITAN makes selection detection feasible to a much wider range of users, even for large population genomic datasets, by both providing an easy to use interface and essential functionality to complete the whole selection detection process.
NASA Astrophysics Data System (ADS)
Locke, Clayton R.; Kobayashi, Tohru; Midorikawa, Katsumi
2017-01-01
Odd-mass-selective ionization of palladium for purposes of resource recycling and management of long-lived fission products can be achieved by exploiting transition selection rules in a well-established three-step excitation process. In this conventional scheme, circularly polarized lasers of the same handedness excite isotopes via two intermediate 2D5/2 core states, and a third laser is then used for ionization via autoionizing Rydberg states. We propose an alternative excitation scheme via intermediate 2D3/2 core states before the autoionizing Rydberg state, improving ionization efficiency by over 130 times. We confirm high selectivity and measure odd-mass isotopes of >99.7(3)% of the total ionized product. We have identified and measured the relative ionization efficiency of the series of Rydberg states that converge to upper ionization limit of the 4 d 9(2D3/2) level, and identify the most efficient excitation is via the Rydberg state at 67668.18(10) cm-1.
NASA Technical Reports Server (NTRS)
Batina, John T.
1990-01-01
Improved algorithms for the solution of the time-dependent Euler equations are presented for unsteady aerodynamic analysis involving unstructured dynamic meshes. The improvements have been developed recently to the spatial and temporal discretizations used by unstructured grid flow solvers. The spatial discretization involves a flux-split approach which is naturally dissipative and captures shock waves sharply with at most one grid point within the shock structure. The temporal discretization involves an implicit time-integration shceme using a Gauss-Seidel relaxation procedure which is computationally efficient for either steady or unsteady flow problems. For example, very large time steps may be used for rapid convergence to steady state, and the step size for unsteady cases may be selected for temporal accuracy rather than for numerical stability. Steady and unsteady flow results are presented for the NACA 0012 airfoil to demonstrate applications of the new Euler solvers. The unsteady results were obtained for the airfoil pitching harmonically about the quarter chord. The resulting instantaneous pressure distributions and lift and moment coefficients during a cycle of motion compare well with experimental data. The paper presents a description of the Euler solvers along with results and comparisons which assess the capability.
NASA Technical Reports Server (NTRS)
Batina, John T.
1990-01-01
Improved algorithm for the solution of the time-dependent Euler equations are presented for unsteady aerodynamic analysis involving unstructured dynamic meshes. The improvements were developed recently to the spatial and temporal discretizations used by unstructured grid flow solvers. The spatial discretization involves a flux-split approach which is naturally dissipative and captures shock waves sharply with at most one grid point within the shock structure. The temporal discretization involves an implicit time-integration scheme using a Gauss-Seidel relaxation procedure which is computationally efficient for either steady or unsteady flow problems. For example, very large time steps may be used for rapid convergence to steady state, and the step size for unsteady cases may be selected for temporal accuracy rather than for numerical stability. Steady and unsteady flow results are presented for the NACA 0012 airfoil to demonstrate applications of the new Euler solvers. The unsteady results were obtained for the airfoil pitching harmonically about the quarter chord. The resulting instantaneous pressure distributions and lift and moment coefficients during a cycle of motion compare well with experimental data. A description of the Euler solvers is presented along with results and comparisons which assess the capability.
NASA Astrophysics Data System (ADS)
Lee, Ji-Seok; Song, Ki-Won
2015-11-01
The objective of the present study is to systematically elucidate the time-dependent rheological behavior of concentrated xanthan gum systems in complicated step-shear flow fields. Using a strain-controlled rheometer (ARES), step-shear flow behaviors of a concentrated xanthan gum model solution have been experimentally investigated in interrupted shear flow fields with a various combination of different shear rates, shearing times and rest times, and step-incremental and step-reductional shear flow fields with various shearing times. The main findings obtained from this study are summarized as follows. (i) In interrupted shear flow fields, the shear stress is sharply increased until reaching the maximum stress at an initial stage of shearing times, and then a stress decay towards a steady state is observed as the shearing time is increased in both start-up shear flow fields. The shear stress is suddenly decreased immediately after the imposed shear rate is stopped, and then slowly decayed during the period of a rest time. (ii) As an increase in rest time, the difference in the maximum stress values between the two start-up shear flow fields is decreased whereas the shearing time exerts a slight influence on this behavior. (iii) In step-incremental shear flow fields, after passing through the maximum stress, structural destruction causes a stress decay behavior towards a steady state as an increase in shearing time in each step shear flow region. The time needed to reach the maximum stress value is shortened as an increase in step-increased shear rate. (iv) In step-reductional shear flow fields, after passing through the minimum stress, structural recovery induces a stress growth behavior towards an equilibrium state as an increase in shearing time in each step shear flow region. The time needed to reach the minimum stress value is lengthened as a decrease in step-decreased shear rate.
NASA Astrophysics Data System (ADS)
Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars
2018-02-01
The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. The selection of the integration scheme and the appropriate time step should possibly take into account the typical altitude ranges as well as the total length of the simulations to achieve the most efficient simulations. However, trying to summarize, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
Bağda, Esra; Altundağ, Huseyin; Tüzen, Mustafa; Soylak, Mustafa
2017-08-01
In the present study, a simple, mono step deep eutectic solvent (DES) extraction was developed for selective extraction of copper from sediment samples. The optimization of all experimental parameters, e.g. DES type, sample/DES ratio, contact time and temperature were performed with using BCR-280 R (lake sediment certified reference material). The limit of detection (LOD) and the limit of quantification (LOQ) were found as 1.2 and 3.97 µg L -1 , respectively. The RSD of the procedure was 7.5%. The proposed extraction method was applied to river and lake sediments sampled from Serpincik, Çeltek, Kızılırmak (Fadl and Tecer region of the river), Sivas-Turkey.
Cakic, Suzana; Lacnjevac, Caslav; Stamenkovic, Jakov; Ristic, Nikola; Takic, Ljiljana; Barac, Miroljub; Gligoric, Miladin
2007-01-01
Two kinds of aqueous acrylic polyols (single step and multi step synthesis type) have been investigated for their performance in the two-component aqueous polyurethane application, by using more selective catalysts. The aliphatic polyfunctional isocyanates based on hexamethylen diisocyanates have been employed as suitable hardeners. The complex of zirconium, commercially known as K-KAT®XC-6212, and manganese (III) complexes with mixed ligands based on the derivative of maleic acid have been used as catalysts in this study. Both of the aqueous polyols give good results, in terms of application and hardness, when elevated temperatures and more selective catalysts are applied. A more selective catalyst promotes the reaction between the isocyanate and polyol component. This increases the percentage of urethane bonds and the degree of hardness in the films formed from the two components of aqueous polyurethane lacquers. The polyol based on the single step synthesis route is favourable concerning potlife and hardness. The obtained results show that the performance of the two-component aqueous polyurethane coatings depends on the polymer structure of the polyols as well as on the selectivity of the employed catalyst.
Selective removal of cesium by ammonium molybdophosphate - polyacrylonitrile bead and membrane.
Ding, Dahu; Zhang, Zhenya; Chen, Rongzhi; Cai, Tianming
2017-02-15
The selective removal of radionuclides with extremely low concentrations from environmental medium remains a big challenge. Ammonium molybdophosphate possess considerable selectivity towards cesium ion (Cs + ) due to the specific ion exchange between Cs + and NH 4 + . Ammonium molybdophosphate - polyacrylonitrile (AMP-PAN) membrane was successfully prepared for the first time in this study. Efficient removal of Cs + (95.7%, 94.1% and 91.3% of 1mgL -1 ) from solutions with high ionic strength (400mgL -1 of Na + , Ca 2+ or K + ) was achieved by AMP-PAN composite. Multilayer chemical adsorption process was testified through kinetic and isotherm studies. The estimated maximum adsorption capacities even reached 138.9±21.3mgg -1 . Specifically, the liquid film diffusion was identified as the rate-limiting step throughout the removal process. Finally, AMP-PAN membrane could eliminate Cs + from water effectively through the filtration adsorption process. Copyright © 2016 Elsevier B.V. All rights reserved.
Large forging manufacturing process
Thamboo, Samuel V.; Yang, Ling
2002-01-01
A process for forging large components of Alloy 718 material so that the components do not exhibit abnormal grain growth includes the steps of: a) providing a billet with an average grain size between ASTM 0 and ASTM 3; b) heating the billet to a temperature of between 1750.degree. F. and 1800.degree. F.; c) upsetting the billet to obtain a component part with a minimum strain of 0.125 in at least selected areas of the part; d) reheating the component part to a temperature between 1750.degree. F. and 1800.degree. F.; e) upsetting the component part to a final configuration such that said selected areas receive no strains between 0.01 and 0.125; f) solution treating the component part at a temperature of between 1725.degree. F. and 1750.degree. F.; and g) aging the component part over predetermined times at different temperatures. A modified process achieves abnormal grain growth in selected areas of a component where desirable.
Selectively manipulable acoustic-powered microswimmers
Ahmed, Daniel; Lu, Mengqian; Nourhani, Amir; Lammert, Paul E.; Stratton, Zak; Muddana, Hari S.; Crespi, Vincent H.; Huang, Tony Jun
2015-01-01
Selective actuation of a single microswimmer from within a diverse group would be a first step toward collaborative guided action by a group of swimmers. Here we describe a new class of microswimmer that accomplishes this goal. Our swimmer design overcomes the commonly-held design paradigm that microswimmers must use non-reciprocal motion to achieve propulsion; instead, the swimmer is propelled by oscillatory motion of an air bubble trapped within the swimmer's polymer body. This oscillatory motion is driven by the application of a low-power acoustic field, which is biocompatible with biological samples and with the ambient liquid. This acoustically-powered microswimmer accomplishes controllable and rapid translational and rotational motion, even in highly viscous liquids (with viscosity 6,000 times higher than that of water). And by using a group of swimmers each with a unique bubble size (and resulting unique resonance frequencies), selective actuation of a single swimmer from among the group can be readily achieved. PMID:25993314
2010-01-01
Background Numerous pen devices are available to administer recombinant Human Growth Hormone (rhGH), and both patients and health plans have varying issues to consider when selecting a particular product and device for daily use. Therefore, the present study utilized multi-dimensional product analysis to assess potential time involvement, required weekly administration steps, and utilization costs relative to daily rhGH administration. Methods Study objectives were to conduct 1) Time-and-Motion (TM) simulations in a randomized block design that allowed time and steps comparisons related to rhGH preparation, administration and storage, and 2) a Cost Minimization Analysis (CMA) relative to opportunity and supply costs. Nurses naïve to rhGH administration and devices were recruited to evaluate four rhGH pen devices (2 in liquid form, 2 requiring reconstitution) via TM simulations. Five videotaped and timed trials for each product were evaluated based on: 1) Learning (initial use instructions), 2) Preparation (arrange device for use), 3) Administration (actual simulation manikin injection), and 4) Storage (maintain product viability between doses), in addition to assessment of steps required for weekly use. The CMA applied micro-costing techniques related to opportunity costs for caregivers (categorized as wages), non-drug medical supplies, and drug product costs. Results Norditropin® NordiFlex and Norditropin® NordiPen (NNF and NNP, Novo Nordisk, Inc., Bagsværd, Denmark) took less weekly Total Time (p < 0.05) to use than either of the comparator products, Genotropin® Pen (GTP, Pfizer, Inc, New York, New York) or HumatroPen® (HTP, Eli Lilly and Company, Indianapolis, Indiana). Time savings were directly related to differences in new package Preparation times (NNF (1.35 minutes), NNP (2.48 minutes) GTP (4.11 minutes), HTP (8.64 minutes), p < 0.05)). Administration and Storage times were not statistically different. NNF (15.8 minutes) and NNP (16.2 minutes) also took less time to Learn than HTP (24.0 minutes) and GTP (26.0 minutes), p < 0.05). The number of weekly required administration steps was also least with NNF and NNP. Opportunity cost savings were greater in devices that were easier to prepare for use; GTP represented an 11.8% drug product savings over NNF, NNP and HTP at time of study. Overall supply costs represented <1% of drug costs for all devices. Conclusions Time-and-motion simulation data used to support a micro-cost analysis demonstrated that the pen device with the greater time demand has highest net costs. PMID:20377905
Nickman, Nancy A; Haak, Sandra W; Kim, Jaewhan
2010-04-08
Numerous pen devices are available to administer recombinant Human Growth Hormone (rhGH), and both patients and health plans have varying issues to consider when selecting a particular product and device for daily use. Therefore, the present study utilized multi-dimensional product analysis to assess potential time involvement, required weekly administration steps, and utilization costs relative to daily rhGH administration. Study objectives were to conduct 1) Time-and-Motion (TM) simulations in a randomized block design that allowed time and steps comparisons related to rhGH preparation, administration and storage, and 2) a Cost Minimization Analysis (CMA) relative to opportunity and supply costs. Nurses naïve to rhGH administration and devices were recruited to evaluate four rhGH pen devices (2 in liquid form, 2 requiring reconstitution) via TM simulations. Five videotaped and timed trials for each product were evaluated based on: 1) Learning (initial use instructions), 2) Preparation (arrange device for use), 3) Administration (actual simulation manikin injection), and 4) Storage (maintain product viability between doses), in addition to assessment of steps required for weekly use. The CMA applied micro-costing techniques related to opportunity costs for caregivers (categorized as wages), non-drug medical supplies, and drug product costs. Norditropin(R) NordiFlex and Norditropin(R) NordiPen (NNF and NNP, Novo Nordisk, Inc., Bagsvaerd, Denmark) took less weekly Total Time (p < 0.05) to use than either of the comparator products, Genotropin(R) Pen (GTP, Pfizer, Inc, New York, New York) or HumatroPen(R) (HTP, Eli Lilly and Company, Indianapolis, Indiana). Time savings were directly related to differences in new package Preparation times (NNF (1.35 minutes), NNP (2.48 minutes) GTP (4.11 minutes), HTP (8.64 minutes), p < 0.05)). Administration and Storage times were not statistically different. NNF (15.8 minutes) and NNP (16.2 minutes) also took less time to Learn than HTP (24.0 minutes) and GTP (26.0 minutes), p < 0.05). The number of weekly required administration steps was also least with NNF and NNP. Opportunity cost savings were greater in devices that were easier to prepare for use; GTP represented an 11.8% drug product savings over NNF, NNP and HTP at time of study. Overall supply costs represented <1% of drug costs for all devices. Time-and-motion simulation data used to support a micro-cost analysis demonstrated that the pen device with the greater time demand has highest net costs.
A novel adaptive, real-time algorithm to detect gait events from wearable sensors.
Chia Bejarano, Noelia; Ambrosini, Emilia; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Monticone, Marco; Ferrante, Simona
2015-05-01
A real-time, adaptive algorithm based on two inertial and magnetic sensors placed on the shanks was developed for gait-event detection. For each leg, the algorithm detected the Initial Contact (IC), as the minimum of the flexion/extension angle, and the End Contact (EC) and the Mid-Swing (MS), as minimum and maximum of the angular velocity, respectively. The algorithm consisted of calibration, real-time detection, and step-by-step update. Data collected from 22 healthy subjects (21 to 85 years) walking at three self-selected speeds were used to validate the algorithm against the GaitRite system. Comparable levels of accuracy and significantly lower detection delays were achieved with respect to other published methods. The algorithm robustness was tested on ten healthy subjects performing sudden speed changes and on ten stroke subjects (43 to 89 years). For healthy subjects, F1-scores of 1 and mean detection delays lower than 14 ms were obtained. For stroke subjects, F1-scores of 0.998 and 0.944 were obtained for IC and EC, respectively, with mean detection delays always below 31 ms. The algorithm accurately detected gait events in real time from a heterogeneous dataset of gait patterns and paves the way for the design of closed-loop controllers for customized gait trainings and/or assistive devices.
Three-dimensional planning in craniomaxillofacial surgery
Rubio-Palau, Josep; Prieto-Gundin, Alejandra; Cazalla, Asteria Albert; Serrano, Miguel Bejarano; Fructuoso, Gemma Garcia; Ferrandis, Francisco Parri; Baró, Alejandro Rivera
2016-01-01
Introduction: Three-dimensional (3D) planning in oral and maxillofacial surgery has become a standard in the planification of a variety of conditions such as dental implants and orthognathic surgery. By using custom-made cutting and positioning guides, the virtual surgery is exported to the operating room, increasing precision and improving results. Materials and Methods: We present our experience in the treatment of craniofacial deformities with 3D planning. Software to plan the different procedures has been selected for each case, depending on the procedure (Nobel Clinician, Kodak 3DS, Simplant O&O, Dolphin 3D, Timeus, Mimics and 3-Matic). The treatment protocol is exposed step by step from virtual planning, design, and printing of the cutting and positioning guides to patients’ outcomes. Conclusions: 3D planning reduces the surgical time and allows predicting possible difficulties and complications. On the other hand, it increases preoperative planning time and needs a learning curve. The only drawback is the cost of the procedure. At present, the additional preoperative work can be justified because of surgical time reduction and more predictable results. In the future, the cost and time investment will be reduced. 3D planning is here to stay. It is already a fact in craniofacial surgery and the investment is completely justified by the risk reduction and precise results. PMID:28299272
Three-dimensional planning in craniomaxillofacial surgery.
Rubio-Palau, Josep; Prieto-Gundin, Alejandra; Cazalla, Asteria Albert; Serrano, Miguel Bejarano; Fructuoso, Gemma Garcia; Ferrandis, Francisco Parri; Baró, Alejandro Rivera
2016-01-01
Three-dimensional (3D) planning in oral and maxillofacial surgery has become a standard in the planification of a variety of conditions such as dental implants and orthognathic surgery. By using custom-made cutting and positioning guides, the virtual surgery is exported to the operating room, increasing precision and improving results. We present our experience in the treatment of craniofacial deformities with 3D planning. Software to plan the different procedures has been selected for each case, depending on the procedure (Nobel Clinician, Kodak 3DS, Simplant O&O, Dolphin 3D, Timeus, Mimics and 3-Matic). The treatment protocol is exposed step by step from virtual planning, design, and printing of the cutting and positioning guides to patients' outcomes. 3D planning reduces the surgical time and allows predicting possible difficulties and complications. On the other hand, it increases preoperative planning time and needs a learning curve. The only drawback is the cost of the procedure. At present, the additional preoperative work can be justified because of surgical time reduction and more predictable results. In the future, the cost and time investment will be reduced. 3D planning is here to stay. It is already a fact in craniofacial surgery and the investment is completely justified by the risk reduction and precise results.
van Blerk, G N; Leibach, L; Mabunda, A; Chapman, A; Louw, D
2011-01-01
A real-time PCR assay combined with a pre-enrichment step for the specific and rapid detection of Salmonella in water samples is described. Following amplification of the invA gene target, High Resolution Melt (HRM) curve analysis was used to discriminate between products formed and to positively identify invA amplification. The real-time PCR assay was evaluated for specificity and sensitivity. The assay displayed 100% specificity for Salmonella and combined with a 16-18 h non-selective pre-enrichment step, the assay proved to be highly sensitive with a detection limit of 1.0 CFU/ml for surface water samples. The detection assay also demonstrated a high intra-run and inter-run repeatability with very little variation in invA amplicon melting temperature. When applied to water samples received routinely by the laboratory, the assay showed the presence of Salmonella in particularly surface water and treated effluent samples. Using the HRM based assay, the time required for Salmonella detection was drastically shortened to less than 24 h compared to several days when using standard culturing methods. This assay provides a useful tool for routine water quality monitoring as well as for quick screening during disease outbreaks.
NASA Astrophysics Data System (ADS)
Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar
2011-12-01
This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. This method uses prior knowledge of the channel impulse response statistic. Accordingly, optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In VSS-SPU adaptive algorithms the filter coefficients are partially updated which reduce the computational complexity. In VSS-SR-APA, the optimal selection of input regressors is performed during the adaptation. The presented algorithms have good convergence speed, low steady state mean square error (MSE), and low computational complexity features. We demonstrate the good performance of the proposed algorithms through several simulations in system identification scenario.
Connectivity among subpopulations of Louisiana black bears as estimated by a step selection function
Clark, Joseph D.; Jared S. Laufenberg,; Maria Davidson,; Jennifer L. Murrow,
2015-01-01
Habitat fragmentation is a fundamental cause of population decline and increased risk of extinction for many wildlife species; animals with large home ranges and small population sizes are particularly sensitive. The Louisiana black bear (Ursus americanus luteolus) exists only in small, isolated subpopulations as a result of land clearing for agriculture, but the relative potential for inter-subpopulation movement by Louisiana black bears has not been quantified, nor have characteristics of effective travel routes between habitat fragments been identified. We placed and monitored global positioning system (GPS) radio collars on 8 female and 23 male bears located in 4 subpopulations in Louisiana, which included a reintroduced subpopulation located between 2 of the remnant subpopulations. We compared characteristics of sequential radiolocations of bears (i.e., steps) with steps that were possible but not chosen by the bears to develop step selection function models based on conditional logistic regression. The probability of a step being selected by a bear increased as the distance to natural land cover and agriculture at the end of the step decreased and as distance from roads at the end of a step increased. To characterize connectivity among subpopulations, we used the step selection models to create 4,000 hypothetical correlated random walks for each subpopulation representing potential dispersal events to estimate the proportion that intersected adjacent subpopulations (hereafter referred to as successful dispersals). Based on the models, movement paths for males intersected all adjacent subpopulations but paths for females intersected only the most proximate subpopulations. Cross-validation and genetic and independent observation data supported our findings. Our models also revealed that successful dispersals were facilitated by a reintroduced population located between 2 distant subpopulations. Successful dispersals for males were dependent on natural land cover in private ownership. The addition of hypothetical 1,000-m- or 3,000-m-wide corridors between the 4 study areas had minimal effects on connectivity among subpopulations. For females, our model suggested that habitat between subpopulations would probably have to be permanently occupied for demographic rescue to occur. Thus, the establishment of stepping-stone populations, such as the reintroduced population that we studied, may be a more effective conservation measure than long corridors without a population presence in between.
Gold glyconanoparticles as new tools in antiadhesive therapy.
Rojo, Javier; Díaz, Vicente; de la Fuente, Jesús M; Segura, Inmaculada; Barrientos, Africa G; Riese, Hans H; Bernad, Antonio; Penadés, Soledad
2004-03-05
Gold glyconanoparticles (GNPs) have been prepared as new multivalent tools that mimic glycosphingolipids on the cell surface. GNPs are highly soluble under physiological conditions, stable against enzymatic degradation and nontoxic. Thereby GNPs open up a novel promising multivalent platform for biological applications. It has recently been demonstrated that specific tumor-associated carbohydrate antigens (glycosphingolipids and glycoproteins) are involved in the initial step of tumor spreading. A mouse melanoma model was selected to test glyconanoparticles as possible inhibitors of experimental lung metastasis. A carbohydrate-carbohydrate interaction is proposed as the first recognition step for this process. Glyconanoparticles presenting lactose (lacto-GNPs) have been used successfully to significantly reduce the progression of experimental metastasis. This result shows for the first time a clear biological effect of lacto-GNPs, demonstrating the potential application of this glyconanotechnology in biological processes.
Lipase-Catalyzed Kinetic Resolution of Novel Antifungal N-Substituted Benzimidazole Derivatives.
Łukowska-Chojnacka, Edyta; Staniszewska, Monika; Bondaryk, Małgorzata; Maurin, Jan K; Bretner, Maria
2016-04-01
A series of new N-substituted benzimidazole derivatives was synthesized and their antifungal activity against Candida albicans was evaluated. The chemical step included synthesis of appropriate ketones containing benzimidazole ring, reduction of ketones to the racemic alcohols, and acetylation of alcohols to the esters. All benzimidazole derivatives were obtained with satisfactory yields and in relatively short times. All synthesized compounds exhibit significant antifungal activity against Candida albicans 900028 ATCC (% cell inhibition at 0.25 μg concentration > 98%). Additionally, racemic mixtures of alcohols were separated by lipase-catalyzed kinetic resolution. In the enzymatic step a transesterification reaction was applied and the influence of a lipase type and solvent on the enantioselectivity of the reaction was studied. The most selective enzymes were Novozyme SP 435 and lipase Amano AK from Pseudomonas fluorescens (E > 100). © 2016 Wiley Periodicals, Inc.
Boron-carbide-aluminum and boron-carbide-reactive metal cermets
Halverson, Danny C.; Pyzik, Aleksander J.; Aksay, Ilhan A.
1986-01-01
Hard, tough, lightweight boron-carbide-reactive metal composites, particularly boron-carbide-aluminum composites, are produced. These composites have compositions with a plurality of phases. A method is provided, including the steps of wetting and reacting the starting materials, by which the microstructures in the resulting composites can be controllably selected. Starting compositions, reaction temperatures, reaction times, and reaction atmospheres are parameters for controlling the process and resulting compositions. The ceramic phases are homogeneously distributed in the metal phases and adhesive forces at ceramic-metal interfaces are maximized. An initial consolidation step is used to achieve fully dense composites. Microstructures of boron-carbide-aluminum cermets have been produced with modulus of rupture exceeding 110 ksi and fracture toughness exceeding 12 ksi.sqroot.in. These composites and methods can be used to form a variety of structural elements.
Numerical prediction of fire resistance of RC beams
NASA Astrophysics Data System (ADS)
Serega, Szymon; Wosatko, Adam
2018-01-01
Fire resistance of different structural members is an important issue of their strength and durability. A simple but effective tool to investigate multi-span reinforced concrete beams exposed to fire is discussed in the paper. Assumptions and simplifications of the theory as well as numerical aspects are briefly reviewed. Two steps of nonlinear finite element analysis and two levels of observation are distinguished. The first step is the solution of transient heat transfer problem in representative two-dimensional reinforced concrete cross-section of a beam. The second part is a nonlinear mechanical analysis of the whole beam. All spans are uniformly loaded, but an additional time-dependent thermal load due to fire acts on selected ones. Global changes of curvature and bending moment functions induce deterioration of the stiffness. Benchmarks are shown to confirm the correctness of the model.
Electrochemical biosensors for hormone analyses.
Bahadır, Elif Burcu; Sezgintürk, Mustafa Kemal
2015-06-15
Electrochemical biosensors have a unique place in determination of hormones due to simplicity, sensitivity, portability and ease of operation. Unlike chromatographic techniques, electrochemical techniques used do not require pre-treatment. Electrochemical biosensors are based on amperometric, potentiometric, impedimetric, and conductometric principle. Amperometric technique is a commonly used one. Although electrochemical biosensors offer a great selectivity and sensitivity for early clinical analysis, the poor reproducible results, difficult regeneration steps remain primary challenges to the commercialization of these biosensors. This review summarizes electrochemical (amperometric, potentiometric, impedimetric and conductometric) biosensors for hormone detection for the first time in the literature. After a brief description of the hormones, the immobilization steps and analytical performance of these biosensors are summarized. Linear ranges, LODs, reproducibilities, regenerations of developed biosensors are compared. Future outlooks in this area are also discussed. Copyright © 2014 Elsevier B.V. All rights reserved.
Boron-carbide-aluminum and boron-carbide-reactive metal cermets. [B/sub 4/C-Al
Halverson, D.C.; Pyzik, A.J.; Aksay, I.A.
1985-05-06
Hard, tough, lighweight boron-carbide-reactive metal composites, particularly boron-carbide-aluminum composites, are produced. These composites have compositions with a plurality of phases. A method is provided, including the steps of wetting and reacting the starting materials, by which the microstructures in the resulting composites can be controllably selected. Starting compositions, reaction temperatures, reaction times, and reaction atmospheres are parameters for controlling the process and resulting compositions. The ceramic phases are homogeneously distributed in the metal phases and adhesive forces at ceramic-metal interfaces are maximized. An initial consolidated step is used to achieve fully dense composites. Microstructures of boron-carbide-aluminum cermets have been produced with modules of rupture exceeding 110 ksi and fracture toughness exceeding 12 ksi..sqrt..in. These composites and methods can be used to form a variety of structural elements.
Using diurnal temperature signals to infer vertical groundwater-surface water exchange
Irvine, Dylan J.; Briggs, Martin A.; Lautz, Laura K.; Gordon, Ryan P.; McKenzie, Jeffrey M.; Cartwright, Ian
2017-01-01
Heat is a powerful tracer to quantify fluid exchange between surface water and groundwater. Temperature time series can be used to estimate pore water fluid flux, and techniques can be employed to extend these estimates to produce detailed plan-view flux maps. Key advantages of heat tracing include cost-effective sensors and ease of data collection and interpretation, without the need for expensive and time-consuming laboratory analyses or induced tracers. While the collection of temperature data in saturated sediments is relatively straightforward, several factors influence the reliability of flux estimates that are based on time series analysis (diurnal signals) of recorded temperatures. Sensor resolution and deployment are particularly important in obtaining robust flux estimates in upwelling conditions. Also, processing temperature time series data involves a sequence of complex steps, including filtering temperature signals, selection of appropriate thermal parameters, and selection of the optimal analytical solution for modeling. This review provides a synthesis of heat tracing using diurnal temperature oscillations, including details on optimal sensor selection and deployment, data processing, model parameterization, and an overview of computing tools available. Recent advances in diurnal temperature methods also provide the opportunity to determine local saturated thermal diffusivity, which can improve the accuracy of fluid flux modeling and sensor spacing, which is related to streambed scour and deposition. These parameters can also be used to determine the reliability of flux estimates from the use of heat as a tracer.
Amezquita-Sanchez, Juan P; Adeli, Anahita; Adeli, Hojjat
2016-05-15
Mild cognitive impairment (MCI) is a cognitive disorder characterized by memory impairment, greater than expected by age. A new methodology is presented to identify MCI patients during a working memory task using MEG signals. The methodology consists of four steps: In step 1, the complete ensemble empirical mode decomposition (CEEMD) is used to decompose the MEG signal into a set of adaptive sub-bands according to its contained frequency information. In step 2, a nonlinear dynamics measure based on permutation entropy (PE) analysis is employed to analyze the sub-bands and detect features to be used for MCI detection. In step 3, an analysis of variation (ANOVA) is used for feature selection. In step 4, the enhanced probabilistic neural network (EPNN) classifier is applied to the selected features to distinguish between MCI and healthy patients. The usefulness and effectiveness of the proposed methodology are validated using the sensed MEG data obtained experimentally from 18 MCI and 19 control patients. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Smith, Varina Campbell
The role of growth steps in inducing disequilibrium is investigated in crystals of vesuvianite from the Jeffrey mine, Asbestos, Quebec, using optical microscopy, atomic force microscopy, electron microprobe analysis, and single-crystal X-ray diffraction. The selective uptake of elements Fe and Al by asymmetric growth-steps on three crystallographic forms, {100}, {110}, and {121}, is documented. The prisms {100} and {110} show hillocks that display kinetically controlled oscillatory zoning along growth steps parallel to <010> and <11¯1>, but not on vicinal faces defined by [001] steps. Sector-specific zoning of extinction angles and 2V angles indicate different degrees of optical dissymmetrization in crystals spanning a range of growth habits. Unit-cell parameters and the presence of violating reflections confirm sectoral deviations from P4/nnc symmetry in the prismatic sectors. The partial loss of three glide planes follows the pattern expected from order of the cations Al and Fe induced by tangential selectivity at the edge of non-equivalent steps during layer-by-layer growth.
Garcia, Justine; Yang, ZhiLin; Mongrain, Rosaire; Leask, Richard L; Lachapelle, Kevin
2018-01-01
3D printing is a new technology in constant evolution. It has rapidly expanded and is now being used in health education. Patient-specific models with anatomical fidelity created from imaging dataset have the potential to significantly improve the knowledge and skills of a new generation of surgeons. This review outlines five technical steps required to complete a printed model: They include (1) selecting the anatomical area of interest, (2) the creation of the 3D geometry, (3) the optimisation of the file for the printing and the appropriate selection of (4) the 3D printer and (5) materials. All of these steps require time, expertise and money. A thorough understanding of educational needs is therefore essential in order to optimise educational value. At present, most of the available printing materials are rigid and therefore not optimum for flexibility and elasticity unlike biological tissue. We believe that the manipuation and tuning of material properties through the creation of composites and/or blending materials will eventually allow for the creation of patient-specific models which have both anatomical and tissue fidelity. PMID:29354281
Garcia, Justine; Yang, ZhiLin; Mongrain, Rosaire; Leask, Richard L; Lachapelle, Kevin
2018-01-01
3D printing is a new technology in constant evolution. It has rapidly expanded and is now being used in health education. Patient-specific models with anatomical fidelity created from imaging dataset have the potential to significantly improve the knowledge and skills of a new generation of surgeons. This review outlines five technical steps required to complete a printed model: They include (1) selecting the anatomical area of interest, (2) the creation of the 3D geometry, (3) the optimisation of the file for the printing and the appropriate selection of (4) the 3D printer and (5) materials. All of these steps require time, expertise and money. A thorough understanding of educational needs is therefore essential in order to optimise educational value. At present, most of the available printing materials are rigid and therefore not optimum for flexibility and elasticity unlike biological tissue. We believe that the manipuation and tuning of material properties through the creation of composites and/or blending materials will eventually allow for the creation of patient-specific models which have both anatomical and tissue fidelity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoang Duc, Albert K., E-mail: albert.hoangduc.ucl@gmail.com; McClelland, Jamie; Modat, Marc
Purpose: The aim of this study was to assess whether clinically acceptable segmentations of organs at risk (OARs) in head and neck cancer can be obtained automatically and efficiently using the novel “similarity and truth estimation for propagated segmentations” (STEPS) compared to the traditional “simultaneous truth and performance level estimation” (STAPLE) algorithm. Methods: First, 6 OARs were contoured by 2 radiation oncologists in a dataset of 100 patients with head and neck cancer on planning computed tomography images. Each image in the dataset was then automatically segmented with STAPLE and STEPS using those manual contours. Dice similarity coefficient (DSC) wasmore » then used to compare the accuracy of these automatic methods. Second, in a blind experiment, three separate and distinct trained physicians graded manual and automatic segmentations into one of the following three grades: clinically acceptable as determined by universal delineation guidelines (grade A), reasonably acceptable for clinical practice upon manual editing (grade B), and not acceptable (grade C). Finally, STEPS segmentations graded B were selected and one of the physicians manually edited them to grade A. Editing time was recorded. Results: Significant improvements in DSC can be seen when using the STEPS algorithm on large structures such as the brainstem, spinal canal, and left/right parotid compared to the STAPLE algorithm (all p < 0.001). In addition, across all three trained physicians, manual and STEPS segmentation grades were not significantly different for the brainstem, spinal canal, parotid (right/left), and optic chiasm (all p > 0.100). In contrast, STEPS segmentation grades were lower for the eyes (p < 0.001). Across all OARs and all physicians, STEPS produced segmentations graded as well as manual contouring at a rate of 83%, giving a lower bound on this rate of 80% with 95% confidence. Reduction in manual interaction time was on average 61% and 93% when automatic segmentations did and did not, respectively, require manual editing. Conclusions: The STEPS algorithm showed better performance than the STAPLE algorithm in segmenting OARs for radiotherapy of the head and neck. It can automatically produce clinically acceptable segmentation of OARs, with results as relevant as manual contouring for the brainstem, spinal canal, the parotids (left/right), and optic chiasm. A substantial reduction in manual labor was achieved when using STEPS even when manual editing was necessary.« less
An adaptive time-stepping strategy for solving the phase field crystal model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zhengru, E-mail: zrzhang@bnu.edu.cn; Ma, Yuan, E-mail: yuner1022@gmail.com; Qiao, Zhonghua, E-mail: zqiao@polyu.edu.hk
2013-09-15
In this work, we will propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs long time to reach steady state, and then large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the use of the proposed time step adaptivity cannot only resolve the steady state solution, but also the dynamical development of the solution efficiently and accurately. Themore » numerical experiments demonstrate that the CPU time is significantly saved for long time simulations.« less
Evaluation of 1983 selective speed enforcement projects in Virginia.
DOT National Transportation Integrated Search
1985-01-01
This report describes and evaluates Virginia's 1983 selective speed enforcement projects. These projects are one of the various types of highway safety programs, classified as selective traffic enforcement projects (STEPs) partially funded by the fed...
Selecting the Administrative Computing Executive.
ERIC Educational Resources Information Center
Bielec, John A.
1985-01-01
Important steps in the computing administrator selection process are outlined, including: reviewing the administrative computing organization, determining a search methodology, selecting a search or screening committee, narrowing the candidate pool, scheduling interviews and evaluating candidates, and conducting negotiations. (MSE)
Ewing, Robert G.; Atkinson, David A.; Clowers, Brian H.
2015-09-01
A method for selective detection of volatile and non-volatile explosives in a mass spectrometer or ion mobility spectrometer at a parts-per-quadrillion level without preconcentration is disclosed. The method comprises the steps of ionizing a carrier gas with an ionization source to form reactant ions or reactant adduct ions comprising nitrate ions (NO.sub.3.sup.-); selectively reacting the reactant ions or reactant adduct ions with at least one volatile or non-volatile explosive analyte at a carrier gas pressure of at least about 100 Ton in a reaction region disposed between the ionization source and an ion detector, the reaction region having a length which provides a residence time (tr) for reactant ions therein of at least about 0.10 seconds, wherein the selective reaction yields product ions comprising reactant ions or reactant adduct ions that are selectively bound to the at least one explosive analyte when present therein; and detecting product ions with the ion detector to determine presence or absence of the at least one explosive analyte.
Bíró, Oszkár; Koczka, Gergely; Preis, Kurt
2014-01-01
An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer. PMID:24829517
Bíró, Oszkár; Koczka, Gergely; Preis, Kurt
2014-05-01
An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer.
NASA Astrophysics Data System (ADS)
Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.
2009-09-01
The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.
Nutt, John G.; Horak, Fay B.
2011-01-01
Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431
Molecular dynamics based enhanced sampling of collective variables with very large time steps.
Chen, Pei-Yang; Tuckerman, Mark E
2018-01-14
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
Molecular dynamics based enhanced sampling of collective variables with very large time steps
NASA Astrophysics Data System (ADS)
Chen, Pei-Yang; Tuckerman, Mark E.
2018-01-01
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
An automated multi-scale network-based scheme for detection and location of seismic sources
NASA Astrophysics Data System (ADS)
Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.
2017-12-01
We present a recently developed method - BackTrackBB (Poiata et al. 2016) - allowing to image energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremors) in different tectonic environments using continuous seismic records. The method exploits multi-scale frequency-selective coherence in the wave field, recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions, projected onto theoretical 3D time-delay grids. This imaging function is interpreted as the location likelihood of the seismic source. A signal pre-processing step constructs a multi-band statistical representation of the non stationary signal, i.e. time series, by means of higher-order statistics or energy envelope characteristic functions. Such signal-processing is designed to detect in time signal transients - of different scales and a priori unknown predominant frequency - potentially associated with a variety of sources (e.g., earthquakes, LFE, tremors), and to improve the performance and the robustness of the detection-and-location location step. The initial detection-location, based on a single phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme - exploiting the 3-component records - makes use of P- and S-phase characteristic functions, extracted after a polarization analysis of the event waveforms, and combines the single phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated here in different tectonic environments: (1) analysis of the one year long precursory phase of 2014 Iquique earthquake in Chile; (2) detection and location of tectonic tremor sources and low-frequency earthquakes during the multiple episodes of tectonic tremor activity in southwestern Japan.
Melzer, Itshak; Goldring, Melissa; Melzer, Yehudit; Green, Elad; Tzedek, Irit
2010-12-01
If balance is lost, quick step execution can prevent falls. Research has shown that speed of voluntary stepping was able to predict future falls in old adults. The aim of the study was to investigate voluntary stepping behavior, as well as to compare timing and leg push-off force-time relation parameters of involved and uninvolved legs in stroke survivors during single- and dual-task conditions. We also aimed to compare timing and leg push-off force-time relation parameters between stroke survivors and healthy individuals in both task conditions. Ten stroke survivors performed a voluntary step execution test with their involved and uninvolved legs under two conditions: while focusing only on the stepping task and while a separate attention-demanding task was performed simultaneously. Temporal parameters related to the step time were measured including the duration of the step initiation phase, the preparatory phase, the swing phase, and the total step time. In addition, force-time parameters representing the push-off power during stepping were calculated from ground reaction data and compared with 10 healthy controls. The involved legs of stroke survivors had a significantly slower stepping time than uninvolved legs due to increased swing phase duration during both single- and dual-task conditions. For dual compared to single task, the stepping time increased significantly due to a significant increase in the duration of step initiation. In general, the force time parameters were significantly different in both legs of stroke survivors as compared to healthy controls, with no significant effect of dual compared with single-task conditions in both groups. The inability of stroke survivors to swing the involved leg quickly may be the most significant factor contributing to the large number of falls to the paretic side. The results suggest that stroke survivors were unable to rapidly produce muscle force in fast actions. This may be the mechanism of delayed execution of a fast step when balance is lost, thus increasing the likelihood of falls in stroke survivors. Copyright © 2010 Elsevier Ltd. All rights reserved.
Energy Data Management Manual for the Wastewater Treatment Sector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lemar, Paul; De Fontaine, Andre
Energy efficiency has become a higher priority within the wastewater treatment sector, with facility operators and state and local governments ramping up efforts to reduce energy costs and improve environmental performance. Across the country, municipal wastewater treatment plants are estimated to consume more than 30 terawatt hours per year of electricity, which equates to about $2 billion in annual electric costs. Electricity alone can constitute 25% to 40% of a wastewater treatment plant’s annual operating budget and make up a significant portion of a given municipality’s total energy bill. These energy needs are expected to grow over time, driven bymore » population growth and increasingly stringent water quality requirements. The purpose of this document is to describe the benefits of energy data management, explain how it can help drive savings when linked to a strong energy management program, and provide clear, step-by-step guidance to wastewater treatment plants on how to appropriately track energy performance. It covers the basics of energy data management and related concepts and describes different options for key steps, recognizing that a single approach may not work for all agencies. Wherever possible, the document calls out simpler, less time-intensive approaches to help smaller plants with more limited resources measure and track energy performance. Reviews of key, publicly available energy-tracking tools are provided to help organizations select a tool that makes the most sense for them. Finally, this document describes additional steps wastewater treatment plant operators can take to build on their energy data management systems and further accelerate energy savings.« less
Watkins, Stephanie; Jonsson-Funk, Michele; Brookhart, M Alan; Rosenberg, Steven A; O'Shea, T Michael; Daniels, Julie
2014-05-01
Children born very low birth weight (VLBW) are at an increased risk of delayed development of motor skills. Physical and occupational therapy services may reduce this risk. Among VLBW children, we evaluated whether receipt of physical or occupational therapy services between 9 months and 2 years of age is associated with improved preschool age motor ability. Using data from the Early Childhood Longitudinal Study Birth Cohort we estimated the association between receipt of therapy and the following preschool motor milestones: skipping eight consecutive steps, hopping five times, standing on one leg for 10 seconds, walking backwards six steps on a line, and jumping distance. We used propensity score methods to adjust for differences in baseline characteristics between children who did and did not receive physical or occupational therapy, since children receiving therapy may be at higher risk of impairment. We applied propensity score weights and modeled the estimated effect of therapy on the distance that the child jumped using linear regression. We modeled all other end points using logistic regression. Treated VLBW children were 1.70 times as likely to skip eight steps (RR 1.70, 95 % CI 0.84, 3.44) compared to the untreated group and 30 % more likely to walk six steps backwards (RR 1.30, 95 % CI 0.63, 2.71), although these differences were not statistically significant. We found little effect of therapy on other endpoints. Providing therapy to VLBW children during early childhood may improve select preschool motor skills involving complex motor planning.
Li, Daojin; Yin, Danyang; Chen, Yang; Liu, Zhen
2017-05-19
Protein phosphorylation is a major post-translational modification, which plays a vital role in cellular signaling of numerous biological processes. Mass spectrometry (MS) has been an essential tool for the analysis of protein phosphorylation, for which it is a key step to selectively enrich phosphopeptides from complex biological samples. In this study, metal-organic frameworks (MOFs)-based monolithic capillary has been successfully prepared as an effective sorbent for the selective enrichment of phosphopeptides and has been off-line coupled with matrix-assisted laser desorption ionization-time-of-flight mass spectrometry (MALDI-TOF MS) for efficient analysis of phosphopeptides. Using š-casein as a representative phosphoprotein, efficient phosphorylation analysis by this off-line platform was verified. Phosphorylation analysis of a nonfat milk sample was also demonstrated. Through introducing large surface areas and highly ordered pores of MOFs into monolithic column, the MOFs-based monolithic capillary exhibited several significant advantages, such as excellent selectivity toward phosphopeptides, superb tolerance to interference and simple operation procedure. Because of these highly desirable properties, the MOFs-based monolithic capillary could be a useful tool for protein phosphorylation analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
Bieri, Stefan; Ilias, Yara; Bicchi, Carlo; Veuthey, Jean-Luc; Christen, Philippe
2006-04-21
An effective combination of focused microwave-assisted extraction (FMAE) with solid-phase microextraction (SPME) prior to gas chromatography (GC) is described for the selective extraction and quantitative analysis of cocaine from coca leaves (Erythroxylum coca). This approach required switching from an organic extraction solvent to an aqueous medium more compatible with SPME liquid sampling. SPME was performed in the direct immersion mode with a universal 100 microm polydimethylsiloxane (PDMS) coated fibre. Parameters influencing this extraction step, such as solution pH, sampling time and temperature are discussed. Furthermore, the overall extraction process takes into account the stability of cocaine in alkaline aqueous solutions at different temperatures. Cocaine degradation rate was determined by capillary electrophoresis using the short end injection procedure. In the selected extraction conditions, less than 5% of cocaine was degraded after 60 min. From a qualitative point of view, a significant gain in selectivity was obtained with the incorporation of SPME in the extraction procedure. As a consequence of SPME clean-up, shorter columns could be used and analysis time was reduced to 6 min compared to 35 min with conventional GC. Quantitative results led to a cocaine content of 0.70 +/- 0.04% in dry leaves (RSD <5%) which agreed with previous investigations.
ERIC Educational Resources Information Center
Musgrave, Chuck; Spencer-Workman, Sarah
2000-01-01
Provides a nine-step process in designing athletic facility laundry rooms that are attractive and functional. Steps include determining the level of laundry services needed, ensuring adequate storage and compatible delivery systems, selecting laundry equipment, and choosing suitable flooring. (GR)
Choosing a Microcomputer for Use as a Teaching Aid.
ERIC Educational Resources Information Center
Visniesky, Cheryl; Hocking, Joan
A step-by-step guide to the selection of a microcomputer system is provided for educators having made the decision to implement computer-assisted instruction. The first step is to clarify reasons for using a microcomputer rather than conventional instructional materials. Next, the degree of use (e.g., types of courses and number of departments…
Comparisons and Selections of Features and Classifiers for Short Text Classification
NASA Astrophysics Data System (ADS)
Wang, Ye; Zhou, Zhi; Jin, Shan; Liu, Debin; Lu, Mi
2017-10-01
Short text is considerably different from traditional long text documents due to its shortness and conciseness, which somehow hinders the applications of conventional machine learning and data mining algorithms in short text classification. According to traditional artificial intelligence methods, we divide short text classification into three steps, namely preprocessing, feature selection and classifier comparison. In this paper, we have illustrated step-by-step how we approach our goals. Specifically, in feature selection, we compared the performance and robustness of the four methods of one-hot encoding, tf-idf weighting, word2vec and paragraph2vec, and in the classification part, we deliberately chose and compared Naive Bayes, Logistic Regression, Support Vector Machine, K-nearest Neighbor and Decision Tree as our classifiers. Then, we compared and analysed the classifiers horizontally with each other and vertically with feature selections. Regarding the datasets, we crawled more than 400,000 short text files from Shanghai and Shenzhen Stock Exchanges and manually labeled them into two classes, the big and the small. There are eight labels in the big class, and 59 labels in the small class.
NASA Astrophysics Data System (ADS)
Mehrvand, Masoud; Baghanam, Aida Hosseini; Razzaghzadeh, Zahra; Nourani, Vahid
2017-04-01
Since statistical downscaling methods are the most largely used models to study hydrologic impact studies under climate change scenarios, nonlinear regression models known as Artificial Intelligence (AI)-based models such as Artificial Neural Network (ANN) and Support Vector Machine (SVM) have been used to spatially downscale the precipitation outputs of Global Climate Models (GCMs). The study has been carried out using GCM and station data over GCM grid points located around the Peace-Tampa Bay watershed weather stations. Before downscaling with AI-based model, correlation coefficient values have been computed between a few selected large-scale predictor variables and local scale predictands to select the most effective predictors. The selected predictors are then assessed considering grid location for the site in question. In order to increase AI-based downscaling model accuracy pre-processing has been developed on precipitation time series. In this way, the precipitation data derived from various GCM data analyzed thoroughly to find the highest value of correlation coefficient between GCM-based historical data and station precipitation data. Both GCM and station precipitation time series have been assessed by comparing mean and variances over specific intervals. Results indicated that there is similar trend between GCM and station precipitation data; however station data has non-stationary time series while GCM data does not. Finally AI-based downscaling model have been applied to several GCMs with selected predictors by targeting local precipitation time series as predictand. The consequences of recent step have been used to produce multiple ensembles of downscaled AI-based models.
Supervised Learning Applied to Air Traffic Trajectory Classification
NASA Technical Reports Server (NTRS)
Bosson, Christabelle S.; Nikoleris, Tasos
2018-01-01
Given the recent increase of interest in introducing new vehicle types and missions into the National Airspace System, a transition towards a more autonomous air traffic control system is required in order to enable and handle increased density and complexity. This paper presents an exploratory effort of the needed autonomous capabilities by exploring supervised learning techniques in the context of aircraft trajectories. In particular, it focuses on the application of machine learning algorithms and neural network models to a runway recognition trajectory-classification study. It investigates the applicability and effectiveness of various classifiers using datasets containing trajectory records for a month of air traffic. A feature importance and sensitivity analysis are conducted to challenge the chosen time-based datasets and the ten selected features. The study demonstrates that classification accuracy levels of 90% and above can be reached in less than 40 seconds of training for most machine learning classifiers when one track data point, described by the ten selected features at a particular time step, per trajectory is used as input. It also shows that neural network models can achieve similar accuracy levels but at higher training time costs.
2014-01-01
In fabrication of nano- and quantum devices, it is sometimes critical to position individual dopants at certain sites precisely to obtain the specific or enhanced functionalities. With first-principles simulations, we propose a method for substitutional doping of individual atom at a certain position on a stepped metal surface by single-atom manipulation. A selected atom at the step of Al (111) surface could be extracted vertically with an Al trimer-apex tip, and then the dopant atom will be positioned to this site. The details of the entire process including potential energy curves are given, which suggests the reliability of the proposed single-atom doping method. PMID:24899871
Chen, Chang; Zhang, Jinhu; Dong, Guofeng; Shao, Hezhu; Ning, Bo-Yuan; Zhao, Li; Ning, Xi-Jing; Zhuang, Jun
2014-01-01
In fabrication of nano- and quantum devices, it is sometimes critical to position individual dopants at certain sites precisely to obtain the specific or enhanced functionalities. With first-principles simulations, we propose a method for substitutional doping of individual atom at a certain position on a stepped metal surface by single-atom manipulation. A selected atom at the step of Al (111) surface could be extracted vertically with an Al trimer-apex tip, and then the dopant atom will be positioned to this site. The details of the entire process including potential energy curves are given, which suggests the reliability of the proposed single-atom doping method.
Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik
Criticality condition of the reactors is one of the important factors for evaluating reactor operation and nuclear fuel breeding ratio is another factor to show nuclear fuel sustainability. This study analyzes the effect of burnup steps and cycle operation step for evaluating the criticality condition of the reactor as well as the performance of nuclear fuel breeding or breeding ratio (BR). Burnup step is performed based on a day step analysis which is varied from 10 days up to 800 days and for cycle operation from 1 cycle up to 8 cycles reactor operations. In addition, calculation efficiency based onmore » the variation of computer processors to run the analysis in term of time (time efficiency in the calculation) have been also investigated. Optimization method for reactor design analysis which is used a large fast breeder reactor type as a reference case was performed by adopting an established reactor design code of JOINT-FR. The results show a criticality condition becomes higher for smaller burnup step (day) and for breeding ratio becomes less for smaller burnup step (day). Some nuclides contribute to make better criticality when smaller burnup step due to individul nuclide half-live. Calculation time for different burnup step shows a correlation with the time consuming requirement for more details step calculation, although the consuming time is not directly equivalent with the how many time the burnup time step is divided.« less
ERIC Educational Resources Information Center
Bavli, Özhan
2016-01-01
The aim of this study was to investigate the effects of eight weeks of step aerobic exercises on static balance, flexibility and selected basketball skills in young basketball players. A total of 20 basketball players (average age 16.1 ± 0.7 years and average sporting age 4.1 ± 0.7 years) voluntarily joined the study. Participants were randomly…
Two-year Randomized Clinical Trial of Self-etching Adhesives and Selective Enamel Etching.
Pena, C E; Rodrigues, J A; Ely, C; Giannini, M; Reis, A F
2016-01-01
The aim of this randomized, controlled prospective clinical trial was to evaluate the clinical effectiveness of restoring noncarious cervical lesions with two self-etching adhesive systems applied with or without selective enamel etching. A one-step self-etching adhesive (Xeno V(+)) and a two-step self-etching system (Clearfil SE Bond) were used. The effectiveness of phosphoric acid selective etching of enamel margins was also evaluated. Fifty-six cavities were restored with each adhesive system and divided into two subgroups (n=28; etch and non-etch). All 112 cavities were restored with the nanohybrid composite Esthet.X HD. The clinical effectiveness of restorations was recorded in terms of retention, marginal integrity, marginal staining, caries recurrence, and postoperative sensitivity after 3, 6, 12, 18, and 24 months (modified United States Public Health Service). The Friedman test detected significant differences only after 18 months for marginal staining in the groups Clearfil SE non-etch (p=0.009) and Xeno V(+) etch (p=0.004). One restoration was lost during the trial (Xeno V(+) etch; p>0.05). Although an increase in marginal staining was recorded for groups Clearfil SE non-etch and Xeno V(+) etch, the clinical effectiveness of restorations was considered acceptable for the single-step and two-step self-etching systems with or without selective enamel etching in this 24-month clinical trial.
Geometric mean for subspace selection.
Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J
2009-02-01
Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in the Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge those classes, which are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, UCI Machine Learning Repository, and handwriting digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem in comparing with the linear dimensionality reduction step in FLDA and its several representative extensions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetzstein, M.; Nelson, Andrew F.; Naab, T.
2009-10-01
We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. Inmore » its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary 'Press' tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose 'GRAPE' hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is {approx}4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation when run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the Gnu General Public License.« less
NASA Astrophysics Data System (ADS)
Wetzstein, M.; Nelson, Andrew F.; Naab, T.; Burkert, A.
2009-10-01
We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary "Press" tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose "GRAPE" hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ~4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation when run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the Gnu General Public License.
Well-balanced compressible cut-cell simulation of atmospheric flow.
Klein, R; Bates, K R; Nikiforakis, N
2009-11-28
Cut-cell meshes present an attractive alternative to terrain-following coordinates for the representation of topography within atmospheric flow simulations, particularly in regions of steep topographic gradients. In this paper, we present an explicit two-dimensional method for the numerical solution on such meshes of atmospheric flow equations including gravitational sources. This method is fully conservative and allows for time steps determined by the regular grid spacing, avoiding potential stability issues due to arbitrarily small boundary cells. We believe that the scheme is unique in that it is developed within a dimensionally split framework, in which each coordinate direction in the flow is solved independently at each time step. Other notable features of the scheme are: (i) its conceptual and practical simplicity, (ii) its flexibility with regard to the one-dimensional flux approximation scheme employed, and (iii) the well-balancing of the gravitational sources allowing for stable simulation of near-hydrostatic flows. The presented method is applied to a selection of test problems including buoyant bubble rise interacting with geometry and lee-wave generation due to topography.
Student’s thinking process in solving word problems in geometry
NASA Astrophysics Data System (ADS)
Khasanah, V. N.; Usodo, B.; Subanti, S.
2018-05-01
This research aims to find out the thinking process of seventh grade of Junior High School in solve word problem solving of geometry. This research was descriptive qualitative research. The subject of the research was selected based on sex and differences in mathematical ability. Data collection was done based on student’s work test, interview, and observation. The result of the research showed that there was no difference of thinking process between male and female with high mathematical ability, and there were differences of thinking process between male and female with moderate and low mathematical ability. Also, it was found that male with moderate mathematical ability took a long time in the step of making problem solving plans. While female with moderate mathematical ability took a long time in the step of understanding the problems. The importance of knowing the thinking process of students in solving word problem solving were that the teacher knows the difficulties faced by students and to minimize the occurrence of the same error in problem solving. Teacher could prepare the right learning strategies which more appropriate with student’s thinking process.
A model of icebergs and sea ice in a joint continuum framework
NASA Astrophysics Data System (ADS)
Vaňková, Irena; Holland, David M.
2017-04-01
The ice mélange, a mixture of sea ice and icebergs, often present in front of tidewater glaciers in Greenland or ice shelves in Antarctica, can have a profound effect on the dynamics of the ice-ocean system. The current inability to numerically model the ice mélange motivates a new modeling approach proposed here. A continuum sea-ice model is taken as a starting point and icebergs are represented as thick and compact pieces of sea ice held together by large tensile and shear strength selectively introduced into the sea ice rheology. In order to modify the rheology correctly, a semi-Lagrangian time stepping scheme is introduced and at each time step a Lagrangian grid is constructed such that iceberg shape is preserved exactly. With the proposed treatment, sea ice and icebergs are considered a single fluid with spatially varying rheological properties, mutual interactions are thus automatically included without the need of further parametrization. An important advantage of the presented framework for an ice mélange model is its potential to be easily included in existing climate models.
Inhibitory ability of children with developmental dyscalculia.
Zhang, Huaiying; Wu, Hanrong
2011-02-01
Inhibitory ability of children with developmental dyscalculia (DD) was investigated to explore the cognitive mechanism underlying DD. According to the definition of developmental dyscalculia, 19 children with DD-only and 10 children with DD&RD (DD combined with reading disability) were selected step by step, children in two control groups were matched with children in case groups by gender and age, and the match ratio was 1:1. Psychological testing software named DMDX was used to measure inhibitory ability of the subjects. The differences of reaction time in number Stroop tasks and differences of accuracy in incongruent condition of color-word Stroop tasks and object inhibition tasks between DD-only children and their controls reached significant levels (P<0.05), and the differences of reaction time in number Stroop tasks between dyscalculic and normal children did not disappear after controlling the non-executive components. The difference of accuracy in color-word incongruent tasks between children with DD&RD and normal children reached significant levels (P<0.05). Children with DD-only confronted with general inhibitory deficits, while children with DD&RD confronted with word inhibitory deficits only.
Abildsnes, Eirik; Rohde, Gudrun; Berntsen, Sveinung; Stea, Tonje H
2017-03-10
Many adolescents do not reach the recommended levels of physical activity (PA), and students attending vocational studies are less committed to take part in physical education (PE) than other students. The purpose of the present study was twofold: 1) to examine differences in physical activity, diet, smoking habits, sleep and screen time among Norwegian vocational high school students who selected either a PE model focusing on PA skills, technique and improvement of physical performance ("Sports enjoyment") or more on health, play and having fun when participating in PE lessons ("Motion enjoyment"), and 2) to explore the students' experiences with PE programs. In this mixed methods study 181 out of 220 invited students (82%) comprising 141 (78%) girls and 40 (22%) boys attending vocational studies of Restaurant and Food Processing (24%), Design, Arts and Crafts (27%) or Healthcare, Childhood and Youth Development (49%) were recruited for participation in the new PE program. PA level, sedentary time and sleep were objectively recorded using the SenseWear Armband Mini. A self-report questionnaire was used to assess dietary habits, smoking and snuffing habits, use of alcohol, screen use and active transportation. Four focus group interviews with 23 students (12 boys) were conducted to explore how the students experienced the new PE program. Students attending "Motion enjoyment" accrued less steps/day compared to the "Sports enjoyment" group (6661 (5514, 7808) vs.9167 (7945, 10390) steps/day) and reported higher screen use (mean, 3.1; 95% CI, 2.8, 3.5) vs. 2.4 (2.0, 2.9) hours/day). Compared to those attending "Sports enjoyment", a higher number of students attending "Motion enjoyment" reported an irregular meal pattern (adjusted odds ratio, 5.40; 95% confidence interval (CI), 2.28, 12.78), and being a current smoker (12.22 (1.62, 107.95)). The students participating in the focus group interviews emphasized the importance of having competent and engaging teachers, being able to influence the content of the PE program themselves, and that PE classes should include a variety of fun activities. Students selecting "Motion enjoyment" accrued less steps/day and reported overall more unhealthy lifestyle habits, including higher screen time, a more irregular meal pattern and a higher number were current smokers, compared to those selecting "Sports enjoyment". Program evaluation revealed that both groups of students valued competent PE teachers and having influence on the content of the PE program.
Bayesian functional integral method for inferring continuous data from discrete measurements.
Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul
2012-02-08
Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Real-time PCR and its application to mumps rapid diagnosis.
Jin, L; Feng, Y; Parry, R; Cui, A; Lu, Y
2007-11-01
A real-time polymerase chain reaction assay was initially developed in China to detect mumps genome. The primers and TaqMan-MGB probe were selected from regions of the hemagglutinin gene of mumps virus. The primers and probe for the real-time PCR were evaluated by both laboratories in China and in the UK using three different pieces of equipment, LightCycler (Roche), MJ DNA Engine Option 2 (BIO-RAD) and TaqMan (ABI Prism) on different samples. The reaction was performed with either a one-step (China) or two-step (UK) process. The sensitivity (10 copies) was estimated using a serial dilution of constructed mumps-plasmid DNA and a linear standard curve was obtained between 10 and 10(7) DNA copies/reaction, which can be used to quantify viral loads. The detection limit on cell culture-grown virus was approximately 2 pfu/ml with a two-step assay on TaqMan, which was equivalent to the sensitivity of the nested PCR routinely used in the UK. The specificity was proved by testing a range of respiratory viruses and several genotypes of mumps strains. The concentration of primers and probe is 22 pmol and 6.25 or 7 pmol respectively for a 25 microl reaction. The assay took 3 hr from viral RNA extraction to complete the detection using any of the three pieces of equipment. Three hundred forty-one (35 in China and 306 in the UK) clinical specimens were tested, the results showing that this real-time PCR assay is suitable for rapid and accurate detection of mumps virus RNA in various types of clinical specimens. (c) 2007 Wiley-Liss, Inc.
McTwo: a two-step feature selection algorithm based on maximal information coefficient.
Ge, Ruiquan; Zhou, Manli; Luo, Youxi; Meng, Qinghan; Mai, Guoqin; Ma, Dongli; Wang, Guoqing; Zhou, Fengfeng
2016-03-23
High-throughput bio-OMIC technologies are producing high-dimension data from bio-samples at an ever increasing rate, whereas the training sample number in a traditional experiment remains small due to various difficulties. This "large p, small n" paradigm in the area of biomedical "big data" may be at least partly solved by feature selection algorithms, which select only features significantly associated with phenotypes. Feature selection is an NP-hard problem. Due to the exponentially increased time requirement for finding the globally optimal solution, all the existing feature selection algorithms employ heuristic rules to find locally optimal solutions, and their solutions achieve different performances on different datasets. This work describes a feature selection algorithm based on a recently published correlation measurement, Maximal Information Coefficient (MIC). The proposed algorithm, McTwo, aims to select features associated with phenotypes, independently of each other, and achieving high classification performance of the nearest neighbor algorithm. Based on the comparative study of 17 datasets, McTwo performs about as well as or better than existing algorithms, with significantly reduced numbers of selected features. The features selected by McTwo also appear to have particular biomedical relevance to the phenotypes from the literature. McTwo selects a feature subset with very good classification performance, as well as a small feature number. So McTwo may represent a complementary feature selection algorithm for the high-dimensional biomedical datasets.
NASA Astrophysics Data System (ADS)
Dohe, S.; Sherlock, V.; Hase, F.; Gisi, M.; Robinson, J.; Sepúlveda, E.; Schneider, M.; Blumenstock, T.
2013-08-01
The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2-0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.
The general alcoholics anonymous tools of recovery: the adoption of 12-step practices and beliefs.
Greenfield, Brenna L; Tonigan, J Scott
2013-09-01
Working the 12 steps is widely prescribed for Alcoholics Anonymous (AA) members although the relative merits of different methods for measuring step work have received minimal attention and even less is known about how step work predicts later substance use. The current study (1) compared endorsements of step work on an face-valid or direct measure, the Alcoholics Anonymous Inventory (AAI), with an indirect measure of step work, the General Alcoholics Anonymous Tools of Recovery (GAATOR); (2) evaluated the underlying factor structure of the GAATOR and changes in step work over time; (3) examined changes in the endorsement of step work over time; and (4) investigated how, if at all, 12-step work predicted later substance use. New AA affiliates (N = 130) completed assessments at intake, 3, 6, and 9 months. Significantly more participants endorsed step work on the GAATOR than on the AAI for nine of the 12 steps. An exploratory factor analysis revealed a two-factor structure for the GAATOR comprising behavioral step work and spiritual step work. Behavioral step work did not change over time, but was predicted by having a sponsor, while Spiritual step work decreased over time and increases were predicted by attending 12-step meetings or treatment. Behavioral step work did not prospectively predict substance use. In contrast, spiritual step work predicted percent days abstinent. Behavioral step work and spiritual step work appear to be conceptually distinct components of step work that have distinct predictors and unique impacts on outcomes. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Method of freeform fabrication by selective gelation of powder suspensions
Baskaran, S.; Graff, G.L.
1997-12-09
The present invention is a novel method for freeform fabrication. Specifically, the method of solid freeform fabrication has the steps of: (a) preparing a slurry by mixing powder particles with a suspension medium and a gelling polysaccharide; (b) making a layer by depositing an amount of said powder slurry in a confined region; (c) hardening a selected portion of the layer by applying a gelling agent to the selected portion; and (d) repeating steps (b) and (c) to make successive layers and forming a layered object. In many applications, it is desirable to remove unhardened material followed by heating to remove gellable polysaccharide then sintering. 2 figs.
Method of freeform fabrication by selective gelation of powder suspensions
Baskaran, Suresh; Graff, Gordon L.
1997-01-01
The present invention is a novel method for freeform fabrication. Specifically, the method of solid freeform fabrication has the steps of: (a) preparing a slurry by mixing powder particles with a suspension medium and a gelling polysaccharide; (b) making a layer by depositing an amount of said powder slurry in a confined region; (c) hardening a selected portion of the layer by applying a gelling agent to the selected portion; and (d) repeating steps (b) and (c) to make successive layers and forming a layered object. In many applications, it is desirable to remove unhardened material followed by heating to remove gellable polysaccharide then sintering.
Two-step chlorination: A new approach to disinfection of a primary sewage effluent.
Li, Yu; Yang, Mengting; Zhang, Xiangru; Jiang, Jingyi; Liu, Jiaqi; Yau, Cie Fu; Graham, Nigel J D; Li, Xiaoyan
2017-01-01
Sewage disinfection aims at inactivating pathogenic microorganisms and preventing the transmission of waterborne diseases. Chlorination is extensively applied for disinfecting sewage effluents. The objective of achieving a disinfection goal and reducing disinfectant consumption and operational costs remains a challenge in sewage treatment. In this study, we have demonstrated that, for the same chlorine dosage, a two-step addition of chlorine (two-step chlorination) was significantly more efficient in disinfecting a primary sewage effluent than a one-step addition of chlorine (one-step chlorination), and shown how the two-step chlorination was optimized with respect to time interval and dosage ratio. Two-step chlorination of the sewage effluent attained its highest disinfection efficiency at a time interval of 19 s and a dosage ratio of 5:1. Compared to one-step chlorination, two-step chlorination enhanced the disinfection efficiency by up to 0.81- or even 1.02-log for two different chlorine doses and contact times. An empirical relationship involving disinfection efficiency, time interval and dosage ratio was obtained by best fitting. Mechanisms (including a higher overall Ct value, an intensive synergistic effect, and a shorter recovery time) were proposed for the higher disinfection efficiency of two-step chlorination in the sewage effluent disinfection. Annual chlorine consumption costs in one-step and two-step chlorination of the primary sewage effluent were estimated. Compared to one-step chlorination, two-step chlorination reduced the cost by up to 16.7%. Copyright © 2016 Elsevier Ltd. All rights reserved.
Teaching with Historical Novels: A Four-Step Approach.
ERIC Educational Resources Information Center
Smith, John A; Dobson, Dorothy
1993-01-01
Asserts that the use of historical novels in the elementary curriculum is becoming increasingly popular. Provides a four-step process that guides instruction using novels. Includes recommendations for selecting the novels, preteaching activities, and enrichment activities. (CFR)
Criado-García, Laura; Arce, Lourdes
2016-09-01
A new sample extraction procedure based on micro-solid-phase extraction (μSPE) using a mixture of sorbents of different polarities (polymeric reversed-phase sorbent HLB, silica-based sorbent C18, and multiwalled carbon nanotubes) was applied to extract benzene, toluene, butyraldehyde, benzaldehyde, and tolualdehyde present in saliva to avoid interference from moisture and matrix components and enhance sensitivity and selectivity of the ion mobility spectrometry (IMS) methodology proposed. The extraction of target analytes from saliva samples by using μSPE were followed by the desorption step carried out in the headspace vials placed in the autosampler of the IMS device. Then, 200 μL of headspace was injected into the GC column coupled to the IMS for its analysis. The method was fully validated in terms of sensitivity, precision, and recovery. The LODs and LOQs obtained, when analytes were dissolved in saliva samples to consider the matrix effect, were within the range of 0.38-0.49 and 1.26-1.66 μg mL(-1), respectively. The relative standard deviations were <3.5 % for retention time and drift time values, which indicate that the method proposed can be applied to determine toxic compounds in saliva samples. Graphical abstract Summary of steps followed in the experimental set up of this work.
TACT: A Set of MSC/PATRAN- and MSC/NASTRAN- based Modal Correlation Tools
NASA Technical Reports Server (NTRS)
Marlowe, Jill M.; Dixon, Genevieve D.
1998-01-01
This paper describes the functionality and demonstrates the utility of the Test Analysis Correlation Tools (TACT), a suite of MSC/PATRAN Command Language (PCL) tools which automate the process of correlating finite element models to modal survey test data. The initial release of TACT provides a basic yet complete set of tools for performing correlation totally inside the PATRAN/NASTRAN environment. Features include a step-by-step menu structure, pre-test accelerometer set evaluation and selection, analysis and test result export/import in Universal File Format, calculation of frequency percent difference and cross-orthogonality correlation results using NASTRAN, creation and manipulation of mode pairs, and five different ways of viewing synchronized animations of analysis and test modal results. For the PATRAN-based analyst, TACT eliminates the repetitive, time-consuming and error-prone steps associated with transferring finite element data to a third-party modal correlation package, which allows the analyst to spend more time on the more challenging task of model updating. The usefulness of this software is presented using a case history, the correlation for a NASA Langley Research Center (LaRC) low aspect ratio research wind tunnel model. To demonstrate the improvements that TACT offers the MSC/PATRAN- and MSC/DIASTRAN- based structural analysis community, a comparison of the modal correlation process using TACT within PATRAN versus external third-party modal correlation packages is presented.
Drive piston assembly for a valve actuator assembly
Sun, Zongxuan
2010-02-23
A drive piston assembly is provided that is operable to selectively open a poppet valve. The drive piston assembly includes a cartridge defining a generally stepped bore. A drive piston is movable within the generally stepped bore and a boost sleeve is coaxially disposed with respect to the drive piston. A main fluid chamber is at least partially defined by the generally stepped bore, drive piston, and boost sleeve. First and second feedback chambers are at least partially defined by the drive piston and each are disposed at opposite ends of the drive piston. At least one of the drive piston and the boost sleeve is sufficiently configured to move within the generally stepped bore in response to fluid pressure within the main fluid chamber to selectively open the poppet valve. A valve actuator assembly and engine are also provided incorporating the disclosed drive piston assembly.
Siah, A; Dohoo, C; McKenna, P; Delaporte, M; Berthe, F C J
2008-09-01
The transcripts involved in the molecular mechanisms of haemic neoplasia in relation to the haemocyte ploidy status of the soft-shell clam, Mya arenaria, have yet to be identified. For this purpose, real-time quantitative RT-PCR constitutes a sensitive and efficient technique, which can help determine the gene expression involved in haemocyte tetraploid status in clams affected by haemic neoplasia. One of the critical steps in comparing transcription profiles is the stability of selected housekeeping genes, as well as an accurate normalization. In this study, we selected five reference genes, S18, L37, EF1, EF2 and actin, generally used as single control genes. Their expression was analyzed by real-time quantitative RT-PCR at different levels of haemocyte ploidy status in order to select the most stable genes. Using the geNorm software, our results showed that L37, EF1 and S18 represent the most stable gene expressions related to various ploidy status ranging from 0 to 78% of tetraploid haemocytes in clams sampled in North River (Prince Edward Island, Canada). However, actin gene expression appeared to be highly regulated. Hence, using it as a housekeeping gene in tetraploid haemocytes can result in inaccurate data. To compare gene expression levels related to haemocyte ploidy status in Mya arenaria, using L37, EF1 and S18 as housekeeping genes for accurate normalization is therefore recommended.
Multi-step process for concentrating magnetic particles in waste sludges
Watson, John L.
1990-01-01
This invention involves a multi-step, multi-force process for dewatering sludges which have high concentrations of magnetic particles, such as waste sludges generated during steelmaking. This series of processing steps involves (1) mixing a chemical flocculating agent with the sludge; (2) allowing the particles to aggregate under non-turbulent conditions; (3) subjecting the mixture to a magnetic field which will pull the magnetic aggregates in a selected direction, causing them to form a compacted sludge; (4) preferably, decanting the clarified liquid from the compacted sludge; and (5) using filtration to convert the compacted sludge into a cake having a very high solids content. Steps 2 and 3 should be performed simultaneously. This reduces the treatment time and increases the extent of flocculation and the effectiveness of the process. As partially formed aggregates with active flocculating groups are pulled through the mixture by the magnetic field, they will contact other particles and form larger aggregates. This process can increase the solids concentration of steelmaking sludges in an efficient and economic manner, thereby accomplishing either of two goals: (a) it can convert hazardous wastes into economic resources for recycling as furnace feed material, or (b) it can dramatically reduce the volume of waste material which must be disposed.
Multi-step process for concentrating magnetic particles in waste sludges
Watson, J.L.
1990-07-10
This invention involves a multi-step, multi-force process for dewatering sludges which have high concentrations of magnetic particles, such as waste sludges generated during steelmaking. This series of processing steps involves (1) mixing a chemical flocculating agent with the sludge; (2) allowing the particles to aggregate under non-turbulent conditions; (3) subjecting the mixture to a magnetic field which will pull the magnetic aggregates in a selected direction, causing them to form a compacted sludge; (4) preferably, decanting the clarified liquid from the compacted sludge; and (5) using filtration to convert the compacted sludge into a cake having a very high solids content. Steps 2 and 3 should be performed simultaneously. This reduces the treatment time and increases the extent of flocculation and the effectiveness of the process. As partially formed aggregates with active flocculating groups are pulled through the mixture by the magnetic field, they will contact other particles and form larger aggregates. This process can increase the solids concentration of steelmaking sludges in an efficient and economic manner, thereby accomplishing either of two goals: (a) it can convert hazardous wastes into economic resources for recycling as furnace feed material, or (b) it can dramatically reduce the volume of waste material which must be disposed. 7 figs.
ovoD Co-selection: A Method for Enriching CRISPR/Cas9-Edited Alleles in Drosophila.
Ewen-Campen, Ben; Perrimon, Norbert
2018-06-22
Screening for successful CRISPR/Cas9 editing events remains a time consuming technical bottleneck in the field of Drosophila genome editing. This step can be particularly laborious for events that do not cause a visible phenotype, or those which occur at relatively low frequency. A promising strategy to enrich for desired CRISPR events is to co-select for an independent CRISPR event that produces an easily detectable phenotype. Here, we describe a simple negative co-selection strategy involving CRISPR-editing of a dominant female sterile allele, ovo D1 In this system (" ovo D co-selection"), the only functional germ cells in injected females are those that have been edited at the ovo D1 locus, and thus all offspring of these flies have undergone editing of at least one locus. We demonstrate that ovo D co-selection can be used to enrich for knock-out mutagenesis via nonhomologous end-joining (NHEJ), and for knock-in alleles via homology-directed repair (HDR). Altogether, our results demonstrate that ovoD co-selection reduces the amount of screening necessary to isolate desired CRISPR events in Drosophila. Copyright © 2018, G3: Genes, Genomes, Genetics.
Reactive Collision Avoidance Algorithm
NASA Technical Reports Server (NTRS)
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on-line. The optimal avoidance trajectory is implemented as a receding-horizon model predictive control law. Therefore, at each time step, the optimal avoidance trajectory is found and the first time step of its acceleration is applied. At the next time step of the control computer, the problem is re-solved and the new first time step is again applied. This continual updating allows the RCA algorithm to adapt to a colliding spacecraft that is making erratic course changes.
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
Martin, Jeffrey D.; Eberle, Michael; Nakagaki, Naomi
2011-01-01
This report updates a previously published water-quality dataset of 44 commonly used pesticides and 8 pesticide degradates suitable for a national assessment of trends in pesticide concentrations in streams of the United States. Water-quality samples collected from January 1992 through September 2010 at stream-water sites of the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) Program and the National Stream Quality Accounting Network (NASQAN) were compiled, reviewed, selected, and prepared for trend analysis. The principal steps in data review for trend analysis were to (1) identify analytical schedule, (2) verify sample-level coding, (3) exclude inappropriate samples or results, (4) review pesticide detections per sample, (5) review high pesticide concentrations, and (6) review the spatial and temporal extent of NAWQA pesticide data and selection of analytical methods for trend analysis. The principal steps in data preparation for trend analysis were to (1) select stream-water sites for trend analysis, (2) round concentrations to a consistent level of precision for the concentration range, (3) identify routine reporting levels used to report nondetections unaffected by matrix interference, (4) reassign the concentration value for routine nondetections to the maximum value of the long-term method detection level (maxLT-MDL), (5) adjust concentrations to compensate for temporal changes in bias of recovery of the gas chromatography/mass spectrometry (GCMS) analytical method, and (6) identify samples considered inappropriate for trend analysis. Samples analyzed at the USGS National Water Quality Laboratory (NWQL) by the GCMS analytical method were the most extensive in time and space and, consequently, were selected for trend analysis. Stream-water sites with 3 or more water years of data with six or more samples per year were selected for pesticide trend analysis. The selection criteria described in the report produced a dataset of 21,988 pesticide samples at 212 stream-water sites. Only 21,144 pesticide samples, however, are considered appropriate for trend analysis.
GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling
NASA Astrophysics Data System (ADS)
Miki, Yohei; Umemura, Masayuki
2017-04-01
The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics well suited for GPU(s). Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distribution performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generation of GPUs, show that the hierarchical time step achieves a speedup by a factor of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 224 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.
An Efficient Pattern Mining Approach for Event Detection in Multivariate Temporal Data
Batal, Iyad; Cooper, Gregory; Fradkin, Dmitriy; Harrison, James; Moerchen, Fabian; Hauskrecht, Milos
2015-01-01
This work proposes a pattern mining approach to learn event detection models from complex multivariate temporal data, such as electronic health records. We present Recent Temporal Pattern mining, a novel approach for efficiently finding predictive patterns for event detection problems. This approach first converts the time series data into time-interval sequences of temporal abstractions. It then constructs more complex time-interval patterns backward in time using temporal operators. We also present the Minimal Predictive Recent Temporal Patterns framework for selecting a small set of predictive and non-spurious patterns. We apply our methods for predicting adverse medical events in real-world clinical data. The results demonstrate the benefits of our methods in learning accurate event detection models, which is a key step for developing intelligent patient monitoring and decision support systems. PMID:26752800
ERIC Educational Resources Information Center
VanVoorhis, Richard W.; Miller, Kenneth L.; Miller, Susan M.; Stull, Judith C.
2015-01-01
The Stepping Stones Positive Parenting Program (Stepping Stones Triple P; SSTP) was designed for caregivers of children with disabilities to improve select parental variables such as parenting styles, parental satisfaction, and parental competency, and to reduce parental stress and child problem behaviors. This study focused on SSTP training for…
2005-03-01
team-wide accountability and rewards Functional focus Group accountability and rewards Employee-owner interest conflicts Process focus Lack of...Collaborative and cross-functional work Incompatible IT Need to share Compartmentalization of functional groups Localized decision making Centralized...Steps are: • Step 1: Analyze Corporate Strategic Objectives Using SWOT (Strengths, Weaknesses, Opportunities, Threats) Methodology • Step 2
McCrorie, P Rw; Duncan, E; Granat, M H; Stansfield, B W
2012-11-01
Evidence suggests that behaviours such as standing are beneficial for our health. Unfortunately, little is known of the prevalence of this state, its importance in relation to time spent stepping or variation across seasons. The aim of this study was to quantify, in young adolescents, the prevalence and seasonal changes in time spent upright and not stepping (UNSt(time)) as well as time spent upright and stepping (USt(time)), and their contribution to overall upright time (U(time)). Thirty-three adolescents (12.2 ± 0.3 y) wore the activPAL activity monitor during four school days on two occasions: November/December (winter) and May/June (summer). UNSt(time) contributed 60% of daily U(time) at winter (Mean = 196 min) and 53% at summer (Mean = 171 min); a significant seasonal effect, p < 0.001. USt(time) was significantly greater in summer compared to winter (153 min versus 131 min, p < 0.001). The effects in UNSt(time) could be explained through significant seasonal differences during the school hours (09:00-16:00), whereas the effects in USt(time) could be explained through significant seasonal differences in the evening period (16:00-22:00). Adolescents spent a greater amount of time upright and not stepping than they did stepping, in both winter and summer. The observed seasonal effects for both UNSt(time) and USt(time) provide important information for behaviour change intervention programs.
Kennedy, Quinn; Taylor, Joy; Noda, Art; Yesavage, Jerome; Lazzeroni, Laura C
2015-09-01
Understanding the possible effects of the number of practice sessions (practice) and time between practice sessions (interval) among middle-aged and older adults in real-world tasks has important implications for skill maintenance. Prior training and cognitive ability may impact practice and interval effects on real-world tasks. In this study, we took advantage of existing practice data from 5 simulated flights among 263 middle-aged and older pilots with varying levels of flight expertise (defined by U.S. Federal Aviation Administration proficiency ratings). We developed a new Simultaneous Time Effects on Practice (STEP) model: (a) to model the simultaneous effects of practice and interval on performance of the 5 flights, and (b) to examine the effects of selected covariates (i.e., age, flight expertise, and 3 composite measures of cognitive ability). The STEP model demonstrated consistent positive practice effects, negative interval effects, and predicted covariate effects. Age negatively moderated the beneficial effects of practice. Additionally, cognitive processing speed and intraindividual variability (IIV) in processing speed moderated the benefits of practice and/or the negative influence of interval for particular flight performance measures. Expertise did not interact with practice or interval. Results indicated that practice and interval effects occur in simulated flight tasks. However, processing speed and IIV may influence these effects, even among high-functioning adults. Results have implications for the design and assessment of training interventions targeted at middle-aged and older adults for complex real-world tasks. (c) 2015 APA, all rights reserved).
Ultrafast optical technique for the characterization of altered materials
Maris, H.J.
1998-01-06
Disclosed herein is a method and a system for non-destructively examining a semiconductor sample having at least one localized region underlying a surface through into which a selected chemical species has been implanted or diffused. A first step induces at least one transient time-varying change in optical constants of the sample at a location at or near to a surface of the sample. A second step measures a response of the sample to an optical probe beam, either pulsed or continuous wave, at least during a time that the optical constants are varying. A third step associates the measured response with at least one of chemical species concentration, chemical species type, implant energy, a presence or absence of an introduced chemical species region at the location, and a presence or absence of implant-related damage. The method and apparatus in accordance with this invention can be employed in conjunction with a measurement of one or more of the following effects arising from a time-dependent change in the optical constants of the sample due to the application of at least one pump pulse: (a) a change in reflected intensity; (b) a change in transmitted intensity; (c) a change in a polarization state of the reflected and/or transmitted light; (d) a change in the optical phase of the reflected and/or transmitted light; (e) a change in direction of the reflected and/or transmitted light; and (f) a change in optical path length between the sample`s surface and a detector. 22 figs.
Ultrafast optical technique for the characterization of altered materials
Maris, Humphrey J.
1998-01-01
Disclosed herein is a method and a system for non-destructively examining a semiconductor sample (30) having at least one localized region underlying a surface (30a) through into which a selected chemical species has been implanted or diffused. A first step induces at least one transient time-varying change in optical constants of the sample at a location at or near to a surface of the sample. A second step measures a response of the sample to an optical probe beam, either pulsed or continuous wave, at least during a time that the optical constants are varying. A third step associates the measured response with at least one of chemical species concentration, chemical species type, implant energy, a presence or absence of an introduced chemical species region at the location, and a presence or absence of implant-related damage. The method and apparatus in accordance with this invention can be employed in conjunction with a measurement of one or more of the following effects arising from a time-dependent change in the optical constants of the sample due to the application of at least one pump pulse: (a) a change in reflected intensity; (b) a change in transmitted intensity; (c) a change in a polarization state of the reflected and/or transmitted light; (d) a change in the optical phase of the reflected and/or transmitted light; (e) a change in direction of the reflected and/or transmitted light; and (f) a change in optical path length between the sample's surface and a detector.
Yang, Wen-Chieh; Hsu, Wei-Li; Wu, Ruey-Meei; Lin, Kwan-Hwa
2016-10-01
Turning difficulty is common in people with Parkinson disease (PD). The clock-turn strategy is a cognitive movement strategy to improve turning performance in people with PD despite its effects are unverified. Therefore, this study aimed to investigate the effects of the clock-turn strategy on the pattern of turning steps, turning performance, and freezing of gait during a narrow turning, and how these effects were influenced by concurrent performance of a cognitive task (dual task). Twenty-five people with PD were randomly assigned to the clock-turn or usual-turn group. Participants performed the Timed Up and Go test with and without concurrent cognitive task during the medication OFF period. The clock-turn group performed the Timed Up and Go test using the clock-turn strategy, whereas participants in the usual-turn group performed in their usual manner. Measurements were taken during the 180° turn of the Timed Up and Go test. The pattern of turning steps was evaluated by step time variability and step time asymmetry. Turning performance was evaluated by turning time and number of turning steps. The number and duration of freezing of gait were calculated by video review. The clock-turn group had lower step time variability and step time asymmetry than the usual-turn group. Furthermore, the clock-turn group turned faster with fewer freezing of gait episodes than the usual-turn group. Dual task increased the step time variability and step time asymmetry in both groups but did not affect turning performance and freezing severity. The clock-turn strategy reduces turning time and freezing of gait during turning, probably by lowering step time variability and asymmetry. Dual task compromises the effects of the clock-turn strategy, suggesting a competition for attentional resources.Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A141).
Sigmund, Erik; El Ansari, Walid; Sigmundová, Dagmar
2012-07-29
Globally, efforts aimed at the prevention of childhood obesity have led to the implementation of a range of school-based interventions. This study assessed whether augmenting physical activity (PA) within the school setting resulted in increased daily PA and decreased overweight/obesity levels in 6-9-year-old children. Across the first to third primary school years, PA of 84 girls and 92 boys was objectively monitored five times (each for seven successive days) using Yamax pedometer (step counts) and Caltrac accelerometer (activity energy expenditure AEE - kcal/kg per day). Four schools were selected to participate in the research (2 intervention, 2 controls), comprising intervention (43 girls, 45 boys) and control children (41 girls, 47 boys). The study was non-randomized and the intervention schools were selected on the basis of existing PA-conducive environment. Analyses of variance (ANOVA) for repeated measures examined the PA programme and gender effects on the step counts and AEE. Logistic regression (Enter method) determined the obesity and overweight occurrence prospect over the course of implementation of the PA intervention. There was a significant increase of school-based PA during schooldays in intervention children (from ≈ 1718 to ≈ 3247 steps per day; and from 2.1 to ≈ 3.6 Kcal/Kg per day) in comparison with the control children. Increased school-based PA of intervention children during schooldays contributed to them achieving >10,500 steps and >10.5 Kcal/Kg per school day across the 2 years of the study, and resulted in a stop of the decline in PA levels that is known to be associated with the increasing age of children. Increased school-based PA had also positive impact on leisure time PA of schooldays and on PA at weekends of intervention children. One year after the start of the PA intervention, the odds of being overweight or obese in the intervention children was almost three times lower than that of control children (p < 0.005), and these odds steadily decreased with the duration of the intervention. The findings suggest that school-based PA (Physical Education lessons, PA during short breaks and longer recesses, PA at after-school nursery) in compatible active environments (child-friendly gym and school playground, corridors with movement and playing around corners and for games) has a vital role in obesity and overweight reduction among younger pupils.
2012-01-01
Background Globally, efforts aimed at the prevention of childhood obesity have led to the implementation of a range of school-based interventions. This study assessed whether augmenting physical activity (PA) within the school setting resulted in increased daily PA and decreased overweight/obesity levels in 6-9-year-old children. Methods Across the first to third primary school years, PA of 84 girls and 92 boys was objectively monitored five times (each for seven successive days) using Yamax pedometer (step counts) and Caltrac accelerometer (activity energy expenditure AEE - kcal/kg per day). Four schools were selected to participate in the research (2 intervention, 2 controls), comprising intervention (43 girls, 45 boys) and control children (41 girls, 47 boys). The study was non-randomized and the intervention schools were selected on the basis of existing PA-conducive environment. Analyses of variance (ANOVA) for repeated measures examined the PA programme and gender effects on the step counts and AEE. Logistic regression (Enter method) determined the obesity and overweight occurrence prospect over the course of implementation of the PA intervention. Results There was a significant increase of school-based PA during schooldays in intervention children (from ≈ 1718 to ≈ 3247 steps per day; and from 2.1 to ≈ 3.6 Kcal/Kg per day) in comparison with the control children. Increased school-based PA of intervention children during schooldays contributed to them achieving >10,500 steps and >10.5 Kcal/Kg per school day across the 2 years of the study, and resulted in a stop of the decline in PA levels that is known to be associated with the increasing age of children. Increased school-based PA had also positive impact on leisure time PA of schooldays and on PA at weekends of intervention children. One year after the start of the PA intervention, the odds of being overweight or obese in the intervention children was almost three times lower than that of control children (p < 0.005), and these odds steadily decreased with the duration of the intervention. Conclusions The findings suggest that school-based PA (Physical Education lessons, PA during short breaks and longer recesses, PA at after-school nursery) in compatible active environments (child-friendly gym and school playground, corridors with movement and playing around corners and for games) has a vital role in obesity and overweight reduction among younger pupils. PMID:22892226
Masurier, Nicolas; Aruta, Roberta; Gaumet, Vincent; Denoyelle, Séverine; Moreau, Emmanuel; Lisowski, Vincent; Martinez, Jean; Maillard, Ludovic T
2012-04-06
A series of 20 optically pure 3,4-dihydro-5H-pyrido[1',2':1,2]imidazo[4,5-d][1,3]diazepin-5-ones which form a new family of azaheterocycle-fused [1,3]diazepines were synthesized in four steps with 17-66% overall yields. The key step consists of a selective C-acylation reaction of easily accessible 2-aminoimidazo[1,2-a]pyridine at C-3.
Efficient feature subset selection with probabilistic distance criteria. [pattern recognition
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
Recursive expressions are derived for efficiently computing the commonly used probabilistic distance measures as a change in the criteria both when a feature is added to and when a feature is deleted from the current feature subset. A combinatorial algorithm for generating all possible r feature combinations from a given set of s features in (s/r) steps with a change of a single feature at each step is presented. These expressions can also be used for both forward and backward sequential feature selection.
Recruitment of new physicians, part II: the interview.
Harolds, Jay A
2013-06-01
A careful, expertly done recruitment process is very important in having a successful group. Selecting a search committee, deciding what characteristics the group wants in a new person, evaluating the candidate's curriculum vitae, speaking to the individual on the phone or during a meeting, and calling references are important steps in selecting the top candidates for a group. The interview at the practice site is the next step, and it is critical. Many tips for planning and conducting a successful interview are given in this article.
2010-01-01
Background Fluoroquinolones are potent antimicrobial agents used for the treatment of a wide variety of community- and nosocomial- infections. However, resistance to fluoroquinolones in Enterobacteriaceae is increasingly reported. Studies assessing the ability of fluoroquinolones to select for resistance have often used antimicrobial concentrations quite different from those actually acquired at the site of infection. The present study compared the ability to select for resistance of levofloxacin, ciprofloxacin and prulifloxacin at concentrations observed in vivo in twenty strains of Escherichia coli and Klebsiella spp. isolated from patients with respiratory and urinary infections. The frequencies of spontaneous single-step mutations at plasma peak and trough antibiotic concentrations were calculated. Multi-step selection of resistance was evaluated by performing 10 serial cultures on agar plates containing a linear gradient from trough to peak antimicrobial concentrations, followed by 10 subcultures on antibiotic-free agar. E. coli resistant strains selected after multi-step selection were characterized for DNA mutations by sequencing gyrA, gyrB, parC and parE genes. Results Frequencies of mutations for levofloxacin and ciprofloxacin were less than 10-11 at peak concentration, while for prulifloxacin they ranged from <10-11 to 10-5. The lowest number of resistant mutants after multistep selection was selected by levofloxacin followed by ciprofloxacin and prulifloxacin. Both ciprofloxacin- and prulifloxacin-resistant mutants presented mutations in gyrA and parC, while levofloxacin resistance was found associated only to mutations in gyrA. Conclusions Among the tested fluoroquinolones, levofloxacin was the most capable of limiting the occurrence of resistance. PMID:20409341
ATCA follow-up of blazar candidates in the H-ATLAS fields
NASA Astrophysics Data System (ADS)
Massardi, Marcella; Ricci, Roberto; de Zotti, Gianfranco; White, Glenn; Michalowski, Michal; Ivison, Rob; Baes, Maarten; Lapi, Andrea; Temi, Pasquale; Lopez-Caniego, Marcos; Herranz, Diego; Seymour, Nick; Gonzalez-Nuevo, Joaquin; Bonavera, Laura; Negrello, Mattia
2012-04-01
The Herschel-ATLAS (H-ATLAS) survey that is covering 550 sq.deg. in 5 bands from 100 to 500 micron, allows for the first time a flux limited selection of blazars at sub-mm wavelengths. This wavelength range is particularly interesting because it is where the most luminous blazars are expected to show the synchrotron peak. The peak frequency and luminosity carry key information on the blazar physics. However, blazars constitute a tiny fraction of H-ATLAS sources and therefore picking them up isn't easy. A criterion to efficiently select candidate blazars exploiting the roughly flat blazar continuum spectrum from radio to sub-mm wavelengths has been devised by Lopez-Caniego et al. (in prep.). Multifrequency radio follow-up is however a necessary step to assess the nature of candidates. We propose to complete the validation of candidates in the H-ATLAS equatorial fields (partly done during few hours of ATCA DDT allocated time and with Medicina radiotelescope observations) and to extend the investigation to the Southern (SGP) fields reconstructing the blazars SED between 1.1 and 40 GHz. This will provide the first statistically significant blazar sample selected at sub-mm wavelengths.
Neighbourhood environment correlates of physical activity: a study of eight Czech regional towns.
Sigmundová, Dagmar; El Ansari, Walid; Sigmund, Erik
2011-02-01
An adequate amount of physical activity (PA) is a key factor that is associated with good health. This study assessed socio-environmental factors associated with meeting the health recommendations for PA (achieving 10,000 steps per day). In total, 1,653 respondents randomly selected from across eight regional towns (each >90,000 inhabitants) in the Czech Republic participated in the study. The ANEWS questionnaire assessed the environment in neighbourhoods, and participants' weekly PA was objectively monitored (Yamax Digiwalker SW-700 pedometer). About 24% of participants were sufficiently active, 27% were highly active; 28% participants were overweight and 5% were obese. Although BMI was significantly inversely associated with the daily step counts achieved only in females, for both genders, BMI was generally not significantly associated with the criterion of achieving 10,000 steps per day during the week. Increased BMI in both genders was accompanied with a decline in participation in organized PA and with increasing age. As regards to the demographic/lifestyle factors, for females, more participation in organized PA was significantly positively correlated with the achieved daily step counts. In contrast, older age and higher BMI (for females) and smoking (for males) were significantly negatively correlated with the achieved daily step counts. In terms of the environmental aspects, pleasant environments were significantly positively correlated to daily step counts for both genders. Additionally, for males, better residencies (more family homes rather than apartment blocks) in the neighbourhood were significantly positively correlated with their daily step counts. For females, less accessibility of shops and non-sport facilities (depending on walking distance in minutes) were significantly negatively correlated to the achieved daily step counts. Individuals who lived in pleasant neighbourhoods, with better access to shops and who participated in organized PA (≥ 2 times a week) tended to meet the recommendations for health-enhancing PA levels. The creation of physical activity-friendly environments could be associated with enhancing people's achieved daily step counts and meeting the health criteria for PA.
NASA Astrophysics Data System (ADS)
Hoepfer, Matthias
Over the last two decades, computer modeling and simulation have evolved as the tools of choice for the design and engineering of dynamic systems. With increased system complexities, modeling and simulation become essential enablers for the design of new systems. Some of the advantages that modeling and simulation-based system design allows for are the replacement of physical tests to ensure product performance, reliability and quality, the shortening of design cycles due to the reduced need for physical prototyping, the design for mission scenarios, the invoking of currently nonexisting technologies, and the reduction of technological and financial risks. Traditionally, dynamic systems are modeled in a monolithic way. Such monolithic models include all the data, relations and equations necessary to represent the underlying system. With increased complexity of these models, the monolithic model approach reaches certain limits regarding for example, model handling and maintenance. Furthermore, while the available computer power has been steadily increasing according to Moore's Law (a doubling in computational power every 10 years), the ever-increasing complexities of new models have negated the increased resources available. Lastly, modern systems and design processes are interdisciplinary, enforcing the necessity to make models more flexible to be able to incorporate different modeling and design approaches. The solution to bypassing the shortcomings of monolithic models is cosimulation. In a very general sense, co-simulation addresses the issue of linking together different dynamic sub-models to a model which represents the overall, integrated dynamic system. It is therefore an important enabler for the design of interdisciplinary, interconnected, highly complex dynamic systems. While a basic co-simulation setup can be very easy, complications can arise when sub-models display behaviors such as algebraic loops, singularities, or constraints. This work frames the co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation, and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and the issues arising with co-simulating sub-models. Possible solutions towards resolving the stated problems are investigated to a certain depth. A particular focus is given to the issue of time stepping. It will be shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.