Thermal modeling of cogging process using finite element method
NASA Astrophysics Data System (ADS)
Khaled, Mahmoud; Ramadan, Mohamad; Fourment, Lionel
2016-10-01
Among forging processes, incremental processes are those in which the workpiece undergoes several thermal and deformation steps, each with a small increment of deformation. They offer high flexibility in terms of workpiece size, since they allow shaping a wide range of parts, from small to large. Because thermal treatment is essential to obtaining the required shape and quality, this paper presents the thermal modeling of incremental processes. The finite element discretization, spatial and temporal, is presented. Simulation is performed using the commercial software Forge 3. Results show the thermal behavior at the beginning and at the end of the process.
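The abstract does not reproduce the discretization itself. As a rough illustration of the kind of spatial and temporal thermal discretization it refers to, the following is a minimal 1D explicit finite-difference sketch of a hot workpiece cooling between cold dies; all material values and temperatures are invented for illustration and are not taken from the paper.

```python
import numpy as np

def heat_step(T, alpha, dx, dt):
    """One explicit time step of the 1D heat equation dT/dt = alpha*d2T/dx2.
    Stable for alpha*dt/dx**2 <= 0.5; boundary nodes are held fixed."""
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T_new

# Workpiece initially at 1200 C between dies held at 200 C (values invented)
T = np.full(51, 1200.0)
T[0] = T[-1] = 200.0
dx, alpha = 0.01, 1e-5          # node spacing (m), diffusivity (m^2/s)
dt = 0.4 * dx**2 / alpha        # respects the explicit stability limit
for _ in range(2000):
    T = heat_step(T, alpha, dx, dt)
```

A production simulation such as Forge 3 uses an implicit finite element scheme on a 3D mesh, but the roles of the spatial increment `dx` and the time step `dt` are the same.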
A triangular thin shell finite element: Nonlinear analysis. [structural analysis]
NASA Technical Reports Server (NTRS)
Thomas, G. R.; Gallagher, R. H.
1975-01-01
Aspects of the formulation of a triangular thin shell finite element which pertain to geometrically nonlinear (small strain, finite displacement) behavior are described. The procedure for solution of the resulting nonlinear algebraic equations combines a one-step incremental (tangent stiffness) approach with one iteration in the Newton-Raphson mode. A method is presented which permits a rational estimation of step size in this procedure. Limit points are calculated by means of a superposition scheme coupled to the incremental side of the solution procedure while bifurcation points are calculated through a process of interpolation of the determinants of the tangent-stiffness matrix. Numerical results are obtained for a flat plate and two curved shell problems and are compared with alternative solutions.
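The combined one-step incremental (tangent stiffness) and one-iteration Newton-Raphson procedure can be illustrated on a scalar toy problem. The hardening "structure" below, with internal force N(u) = u + u^3, is invented for illustration and is not the shell element of the paper.

```python
def solve_incremental(f_total, n_steps):
    """Load the scalar 'structure' N(u) = u + u**3 up to f_total in equal
    increments: tangent-stiffness predictor plus one Newton-Raphson
    correction per step (toy problem, not the shell formulation itself)."""
    N  = lambda u: u + u**3           # internal force
    Kt = lambda u: 1.0 + 3.0 * u**2   # tangent stiffness dN/du
    u = f = 0.0
    df = f_total / n_steps
    for _ in range(n_steps):
        f += df
        u += df / Kt(u)               # incremental (tangent) predictor
        u -= (N(u) - f) / Kt(u)       # one Newton-Raphson correction
    return u

u = solve_incremental(10.0, 20)       # exact solution of u + u**3 = 10 is u = 2
```

The single corrector iteration keeps the cost per step close to a purely incremental scheme while removing most of the drift that the tangent predictor alone would accumulate.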
Computer Processing Of Tunable-Diode-Laser Spectra
NASA Technical Reports Server (NTRS)
May, Randy D.
1991-01-01
Tunable-diode-laser spectrometer measuring transmission spectrum of gas operates under control of computer, which also processes measurement data. Measurements in three channels processed into spectra. Computer controls current supplied to tunable diode laser, stepping it through small increments of wavelength while processing spectral measurements at each step. Program includes library of routines for general manipulation and plotting of spectra, least-squares fitting of direct-transmission and harmonic-absorption spectra, and deconvolution for determination of laser linewidth and for removal of instrumental broadening of spectral lines.
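The least-squares fitting of direct-transmission spectra mentioned above can be sketched on synthetic data: fit a baseline through the line-free regions, then ratio the measured signal against it to recover the fractional absorption. All spectral parameters below are invented for illustration, not instrument values from the paper.

```python
import numpy as np

# Synthetic direct-transmission spectrum: linear baseline with a Gaussian
# absorption dip (all numbers illustrative)
x = np.linspace(0.0, 1.0, 401)
baseline = 2.0 - 0.5 * x
dip = 0.3 * np.exp(-0.5 * ((x - 0.5) / 0.02) ** 2)
signal = baseline * (1.0 - dip)

# Least-squares fit of the baseline using only points far from the line,
# then ratio to recover the fractional absorption
mask = np.abs(x - 0.5) > 0.2
coef = np.polyfit(x[mask], signal[mask], 1)
transmittance = signal / np.polyval(coef, x)
depth = 1.0 - transmittance.min()
```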
Systems Engineering and Integration (SE and I)
NASA Technical Reports Server (NTRS)
Chevers, ED; Haley, Sam
1990-01-01
The issue of technology advancement and future space transportation vehicles is addressed. The challenge is to develop systems which can be evolved and improved in small incremental steps, where each increment reduces present cost, improves reliability, or does neither but sets the stage for a second incremental upgrade that does. Future requirements are interface standards for commercial off-the-shelf products to aid in the development of integrated facilities; an enhanced automated code generation system tightly coupled to specification and design documentation; modeling tools that support data flow analysis; and shared project databases consisting of technical characteristics, cost information, measurement parameters, and reusable software programs. Topics addressed include: advanced avionics development strategy; risk analysis and management; tool quality management; low cost avionics; cost estimation and benefits; computer aided software engineering; computer systems and software safety; system testability; advanced avionics laboratories; and rapid prototyping. This presentation is represented by viewgraphs only.
A table of intensity increments.
DOT National Transportation Integrated Search
1966-01-01
Small intensity increments can be produced by adding larger intensity increments. A table is presented covering the range of small intensity increments from 0.008682 through 6.020 dB in 60 large intensity increments of 1 dB.
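The principle behind such a table (a small level increment results when a weaker signal is added to a stronger one) can be sketched as follows; the formula assumes incoherent intensity addition and is a generic acoustics relation, not the table's exact construction.

```python
import math

def level_increment_db(delta_db):
    """Level increase (dB) of the total intensity when a second, incoherent
    signal delta_db below the first is added to it."""
    return 10.0 * math.log10(1.0 + 10.0 ** (-delta_db / 10.0))

inc_equal = level_increment_db(0.0)    # adding an equal-intensity signal
inc_small = level_increment_db(20.0)   # a signal 20 dB down: small increment
```

Adding an equal-intensity signal raises the level by about 3.01 dB, while a signal 20 dB down raises it by only about 0.043 dB, which is how small increments are generated from 1 dB building blocks.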
Creation of a small high-throughput screening facility.
Flak, Tod
2009-01-01
The creation of a high-throughput screening facility within an organization is a difficult task, requiring a substantial investment of time, money, and organizational effort. Major issues to consider include the selection of equipment, the establishment of data analysis methodologies, and the formation of a group having the necessary competencies. If done properly, it is possible to build a screening system in incremental steps, adding new pieces of equipment and data analysis modules as the need grows. Based upon our experience with the creation of a small screening service, we present some guidelines to consider in planning a screening facility.
Dynamics of nonlinear feedback control.
Snippe, H P; van Hateren, J H
2007-05-01
Feedback control in neural systems is ubiquitous. Here we study the mathematics of nonlinear feedback control. We compare models in which the input is multiplied by a dynamic gain (multiplicative control) with models in which the input is divided by a dynamic attenuation (divisive control). The gain signal (resp. the attenuation signal) is obtained through a concatenation of an instantaneous nonlinearity and a linear low-pass filter operating on the output of the feedback loop. For input steps, the dynamics of gain and attenuation can be very different, depending on the mathematical form of the nonlinearity and the ordering of the nonlinearity and the filtering in the feedback loop. Further, the dynamics of feedback control can be strongly asymmetrical for increment versus decrement steps of the input. Nevertheless, for each of the models studied, the nonlinearity in the feedback loop can be chosen such that immediately after an input step, the dynamics of feedback control is symmetric with respect to increments versus decrements. Finally, we study the dynamics of the output of the control loops and find conditions under which overshoots and undershoots of the output relative to the steady-state output occur when the models are stimulated with low-pass filtered steps. For small steps at the input, overshoots and undershoots of the output do not occur when the filtering in the control path is faster than the low-pass filtering at the input. For large steps at the input, however, results depend on the model, and for some of the models, multiple overshoots and undershoots can occur even with a fast control path.
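One of the configurations discussed above, divisive control with the nonlinearity applied before the low-pass filter, can be sketched in discrete time. The squaring nonlinearity, time constant, and step amplitudes below are illustrative choices, not the paper's parameters; the sketch only shows the qualitative transient (an overshoot at an increment step, followed by settling).

```python
import numpy as np

def divisive_control(x, tau=20.0, nl=lambda y: y**2):
    """Divisive feedback: y[t] = x[t] / (1 + a[t]), with the attenuation a
    driven by a first-order low-pass filter (time constant tau, in samples)
    of an instantaneous nonlinearity of the output. This is one of several
    orderings compared in the paper; parameters are illustrative."""
    a, out = 0.0, []
    for xt in x:
        y = xt / (1.0 + a)
        a += (nl(y) - a) / tau       # low-pass filter update
        out.append(y)
    return np.array(out)

x = np.concatenate([np.ones(50), 4.0 * np.ones(300)])  # increment step
y = divisive_control(x)              # transient overshoot, then settling
```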
Pharmacogenomics: where will it take us?
Felcone, Linda Hull
2004-07-01
Until now, drug research has focused on discovering blockbusters to treat millions of patients. Pharmacogenomics, a multidisciplinary effort arising from the Human Genome Project, strives to deliver "personalized medicine." Researchers use genetic information to understand disease pathways and create drugs designed for small, likely-to-respond populations. The path from research to finished drugs is as logistically complex as landing a human on the moon, but don't expect a giant leap; progress will come throughout the next couple of decades via incremental steps.
Araya, Ricardo; Flynn, Terry; Rojas, Graciela; Fritsch, Rosemarie; Simon, Greg
2006-08-01
The authors compared the incremental cost-effectiveness of a stepped-care, multicomponent program with usual care for the treatment of depressed women in primary care in Santiago, Chile. A cost-effectiveness study was conducted of a previous randomized controlled trial involving 240 eligible women with DSM-IV major depression who were selected from a consecutive sample of adult women attending primary care clinics. The patients were randomly allocated to usual care or a multicomponent stepped-care program led by a nonmedical health care worker. Depression-free days and health care costs derived from local sources were assessed after 3 and 6 months. A health service perspective was used in the economic analysis. Complete data were obtained for 80% of the randomly assigned patients. After we adjusted for initial severity, women receiving the stepped-care program had a mean of 50 additional depression-free days over 6 months relative to patients allocated to usual care. The stepped-care program was marginally more expensive than usual care (an extra 216 Chilean pesos per depression-free day). There was a 90% probability that the incremental cost of obtaining an extra depression-free day with the intervention would not exceed 300 pesos (1.04 US dollars). The stepped-care program was significantly more effective and marginally more expensive than usual care for the treatment of depressed women in primary care. Small investments to improve depression appear to yield larger gains in poorer environments. Simple and inexpensive treatment programs tested in developing countries might provide good study models for developed countries.
Incremental classification learning for anomaly detection in medical images
NASA Astrophysics Data System (ADS)
Giritharan, Balathasan; Yuan, Xiaohui; Liu, Jianguo
2009-02-01
Computer-aided diagnosis usually screens thousands of instances to find only a few positive cases that indicate the probable presence of disease. The amount of patient data grows continually. In the diagnosis of new instances, disagreement occurs between a CAD system and physicians, which suggests the classifier is inaccurate. Intuitively, the misclassified instances and the previously acquired data should be used to retrain the classifier. This, however, is very time consuming and, where the dataset is too large, becomes infeasible. In addition, among the patient data only a small percentage show positive signs, a situation known as imbalanced data. We present an incremental Support Vector Machine (SVM) as a solution to the class imbalance problem in the classification of anomalies in medical images. The support vectors provide a concise representation of the distribution of the training data. Here we use bootstrapping to identify potential candidate support vectors for future iterations. Experiments were conducted using images from endoscopy videos, and the sensitivity and specificity were close to those of an SVM trained using all samples available at a given incremental step, with significantly improved efficiency in training the classifier.
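The paper's exact solver and bootstrapping scheme are not given in the abstract. The sketch below substitutes a simple Pegasos-style linear SVM and illustrates only the general idea of incremental updates that retrain on the retained candidate support vectors plus the new batch, rather than on all accumulated data; all data and parameters are synthetic.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Pegasos-style subgradient training of a linear SVM (a simple stand-in
    for a full SVM solver; labels y must be +/-1)."""
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for k in np.random.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[k] * X[k].dot(w) < 1:
                w = (1 - eta * lam) * w + eta * y[k] * X[k]
            else:
                w = (1 - eta * lam) * w
    return w

def incremental_update(w, X_old, y_old, X_new, y_new):
    """Retrain on the old margin-defining samples (the candidate support
    vectors) plus the new batch, instead of on all accumulated data."""
    sv = y_old * X_old.dot(w) <= 1.0
    return train_linear_svm(np.vstack([X_old[sv], X_new]),
                            np.concatenate([y_old[sv], y_new]))

rng = np.random.default_rng(0)
X_old = np.vstack([rng.normal(+2.0, 0.5, (100, 2)),
                   rng.normal(-2.0, 0.5, (100, 2))])
y_old = np.array([+1] * 100 + [-1] * 100)
w = train_linear_svm(X_old, y_old)
X_new = np.vstack([rng.normal(+2.0, 0.5, (20, 2)),
                   rng.normal(-2.0, 0.5, (20, 2))])
y_new = np.array([+1] * 20 + [-1] * 20)
w = incremental_update(w, X_old, y_old, X_new, y_new)
accuracy = np.mean(np.sign(X_old.dot(w)) == y_old)
```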
NASA Astrophysics Data System (ADS)
Zeng, Guang; Cao, Shuchao; Liu, Chi; Song, Weiguo
2018-06-01
Due to pedestrians' bipedal movement, it is important to study stepping behavior and characteristics for facility design and pedestrian flow studies. In this paper, data on steps are extracted from the trajectories of pedestrians in a single-file experiment. It is found that step length and step frequency decrease by 75% and 33%, respectively, when the global density increases from 0.46 ped/m to 2.28 ped/m. As headway increases, they first increase and then remain constant once the headway exceeds 1.16 m and 0.91 m, respectively. Step length and frequency under different headways are well described by normal distributions. Relationships between step length and frequency also depend on headway. Step frequency decreases with increasing step length, but the decrease follows two distinct tendencies: when the headway is between about 0.6 m and 1.0 m, the decrease rate of step frequency increases with step length, whereas it decreases when the headway is beyond about 1.0 m or below about 0.6 m. A model is built based on the experimental results, and its simulated fundamental diagrams agree well with those of the experiment. The study can be helpful for understanding pedestrian stepping behavior and designing public facilities.
Step length and individual anaerobic threshold assessment in swimming.
Fernandes, R J; Sousa, M; Machado, L; Vilas-Boas, J P
2011-12-01
Anaerobic threshold is widely used for the diagnosis of swimming aerobic endurance, but the precise step duration of the incremental protocols used for its assessment is controversial. A physiological and biomechanical comparison between intermittent incremental protocols with different step lengths and a maximal lactate steady state (MLSS) test was conducted. 17 swimmers performed 7×200, 300 and 400 m (30 s and 24 h rest between steps and protocols) in front crawl until exhaustion and an MLSS test. The blood lactate concentration values ([La⁻]) at the individual anaerobic threshold were 2.1±0.1, 2.2±0.2 and 1.8±0.1 mmol·l⁻¹ in the 200, 300 and 400 m protocols (with significant differences between the 300 and 400 m tests), and 2.9±1.2 mmol·l⁻¹ at MLSS (higher than in the incremental protocols); all these values are much lower than the traditional 4 mmol·l⁻¹ value. The velocities at the individual anaerobic threshold obtained in the incremental protocols were similar (and highly related) to the MLSS velocity, being considerably lower than the velocity at 4 mmol·l⁻¹. Stroke rate increased and stroke length decreased throughout the different incremental protocols. It was concluded that it is valid to use intermittent incremental protocols of 200 and 300 m step lengths to assess the swimming velocity corresponding to the individual anaerobic threshold, that the progressive protocols tend to underestimate the [La⁻] at the anaerobic threshold assessed by the MLSS test, and that swimmers increase velocity through stroke rate increases. © Georg Thieme Verlag KG Stuttgart · New York.
Linear-scaling generation of potential energy surfaces using a double incremental expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
König, Carolin, E-mail: carolink@kth.se; Christiansen, Ove, E-mail: ove@chem.au.dk
We present a combination of the incremental expansion of potential energy surfaces (PESs), known as n-mode expansion, with the incremental evaluation of the electronic energy in a many-body approach. The application of semi-local coordinates in this context allows the generation of PESs in a very cost-efficient way. For this, we employ the recently introduced flexible adaptation of local coordinates of nuclei (FALCON) coordinates. By introducing an additional transformation step, concerning only a fraction of the vibrational degrees of freedom, we can achieve linear scaling of the accumulated cost of the single point calculations required in the PES generation. Numerical examples of these double incremental approaches for oligo-phenyl examples show fast convergence with respect to the maximum number of simultaneously treated fragments and only a modest error introduced by the additional transformation step. The approach presented here represents a major step towards the applicability of vibrational wave function methods to sizable, covalently bound systems.
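The many-body incremental idea behind this approach can be sketched on a toy energy function: the total "energy" is approximated by one-body terms plus two-body increments, and the truncation error is exactly the neglected higher-order couplings. The energy function below is invented for illustration and has no relation to the actual electronic-structure calculations of the paper.

```python
from itertools import combinations

def total_energy(frags):
    """Toy 'electronic energy' of a fragment list: additive one-body terms,
    pairwise couplings and a tiny three-body coupling (purely illustrative)."""
    E = sum(frags)
    E += sum(0.1 * a * b for a, b in combinations(frags, 2))
    E += sum(0.001 * a * b * c for a, b, c in combinations(frags, 3))
    return E

def incremental_estimate(frags):
    """Many-body incremental expansion truncated at two-body increments:
    E ~ sum_i E_i + sum_{i<j} (E_ij - E_i - E_j)."""
    E = sum(total_energy([f]) for f in frags)
    for a, b in combinations(frags, 2):
        E += total_energy([a, b]) - total_energy([a]) - total_energy([b])
    return E

frags = [1.0, 1.5, 2.0, 2.5]
exact = total_energy(frags)
approx = incremental_estimate(frags)   # error equals the neglected 3-body sum
```

Because only monomer and dimer "calculations" are needed, the number of expensive evaluations grows far more slowly with system size than a single calculation on the full system, which is the source of the cost savings the paper exploits.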
One Step at a Time: SBM as an Incremental Process.
ERIC Educational Resources Information Center
Conrad, Mark
1995-01-01
Discusses incremental SBM budgeting and answers questions regarding resource equity, bookkeeping requirements, accountability, decision-making processes, and purchasing. Approaching site-based management as an incremental process recognizes that every school system engages in some level of site-based decisions. Implementation can be gradual and…
Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir
2016-05-01
Improvement of the efficiency of photovoltaic systems based on new maximum power point tracking (MPPT) algorithms is the most promising solution due to its low cost and easy implementation without equipment updating. Many MPPT methods with fixed step size have been developed. However, when atmospheric conditions change rapidly, the performance of conventional algorithms is reduced. In this paper, a new variable step size Incremental Conductance (IC) MPPT algorithm is proposed. Modeling and simulation of the conventional Incremental Conductance method and the proposed method under different operating conditions are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a flyback converter and a control circuit using a dsPIC30F4011. Both simulation and experimental designs are provided in several aspects. A comparative study between the proposed variable step size and the fixed step size IC MPPT method under similar operating conditions is presented. The obtained results demonstrate the efficiency of the proposed MPPT algorithm in terms of MPP tracking speed and accuracy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
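The abstract does not give the paper's step-size law. The sketch below shows the general shape of a variable-step incremental-conductance update, using the common choice of scaling the step with |dP/dV| (an assumption, not the paper's exact rule) and a toy panel curve invented for illustration.

```python
def ic_mppt_step(v, i, v_prev, i_prev, v_ref, n=0.05):
    """One update of a variable-step incremental-conductance (IC) MPPT.
    The step size scales with |dP/dV|, a common variable-step rule; the
    paper's exact scaling is not reproduced here."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        return v_ref                # no new slope information: hold
    dp_dv = (v * i - v_prev * i_prev) / dv
    step = n * abs(dp_dv)           # large far from MPP, small near it
    if di / dv > -i / v:            # dI/dV > -I/V: operating left of the MPP
        return v_ref + step
    if di / dv < -i / v:            # right of the MPP
        return v_ref - step
    return v_ref                    # exactly at the MPP

# Toy PV panel curve I(V) = 8*(1 - (V/40)**5); MPP at V = 40/6**0.2 ~ 27.95 V
I = lambda v: 8.0 * (1.0 - (v / 40.0) ** 5)
v_prev, i_prev, v_ref = 20.0, I(20.0), 21.0
for _ in range(200):
    v, i = v_ref, I(v_ref)          # assume the converter tracks v_ref
    v_ref = ic_mppt_step(v, i, v_prev, i_prev, v_ref)
    v_prev, i_prev = v, i
```

Because the step shrinks as |dP/dV| approaches zero, the reference converges to the maximum power point without the steady-state oscillation of a fixed-step IC tracker.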
Support vector machine incremental learning triggered by wrongly predicted samples
NASA Astrophysics Data System (ADS)
Tang, Ting-long; Guan, Qiu; Wu, Yi-rong
2018-05-01
According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, a newly added sample that violates the KKT conditions becomes a new support vector (SV) and may migrate old samples between the SV set and the non-support-vector (NSV) set, at which point the learning model should be updated based on the SVs. However, it is not known in advance which of the old samples will move between the SV and NSV sets. Additionally, the learning model may be updated unnecessarily, which does little to increase its accuracy but decreases the training speed. Therefore, how the new SVs are chosen from the old sets during the incremental stages, and when the incremental steps are processed, greatly influences the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed that selects candidate SVs and uses wrongly predicted samples to trigger the incremental processing. Experimental results show that the proposed algorithm achieves good performance with high efficiency, high speed and good accuracy.
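The KKT check that drives this kind of incremental scheme can be sketched for a linear soft-margin SVM; the weights, tolerance, and samples below are hypothetical and serve only to show the three KKT cases.

```python
import numpy as np

def violates_kkt(x, y, w, b, alpha=0.0, C=1.0, tol=1e-3):
    """Check whether one sample violates the KKT conditions of a trained
    soft-margin SVM with weights (w, b); alpha is the sample's Lagrange
    multiplier (0 for a new, unseen sample). A violator must enter the
    support-vector set, triggering an incremental update."""
    m = y * (np.dot(w, x) + b)          # functional margin
    if alpha <= tol:                    # non-SV: requires margin >= 1
        return m < 1 - tol
    if alpha >= C - tol:                # bound SV: requires margin <= 1
        return m > 1 + tol
    return abs(m - 1) > tol             # free SV: requires margin == 1

# Hypothetical trained separator w.x = x1 - x2
w, b = np.array([1.0, -1.0]), 0.0
ok  = violates_kkt(np.array([2.0, -1.0]), +1, w, b)  # margin 3: consistent
bad = violates_kkt(np.array([0.2,  0.1]), +1, w, b)  # margin 0.1: violator
```

In the proposed algorithm, a sample like `bad` (wrongly predicted or inside the margin) is what triggers an incremental step, while samples like `ok` leave the model untouched.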
Effects of protocol step length on biomechanical measures in swimming.
Barbosa, Tiago M; de Jesus, Kelly; Abraldes, J Arturo; Ribeiro, João; Figueiredo, Pedro; Vilas-Boas, João Paulo; Fernandes, Ricardo J
2015-03-01
The assessment of energetic and mechanical parameters in swimming often requires the use of an intermittent incremental protocol, whose step lengths are cornerstones of the efficiency of the evaluation procedures. To analyze changes in swimming kinematics and interlimb coordination behavior in 3 variants, with different step lengths, of an intermittent incremental protocol. Twenty-two male swimmers performed n×di variants of an intermittent and incremental protocol (n≤7; d1=200 m, d2=300 m, and d3=400 m). Swimmers were videotaped in the sagittal plane for 2-dimensional kinematical analysis using a dual-media setup. Video images were digitized with a motion-capture system. Parameters that were assessed included the stroke kinematics, the segmental and anatomical landmark kinematics, and interlimb coordination. Movement efficiency was also estimated. There were no significant variations in any of the selected variables according to the step lengths. The relationship between step lengths was high to very high, the bias was much reduced, and the 95% CI fairly tight. Since there were no meaningful differences between the 3 protocol variants, the one with the shortest step length (ie, 200 m) should be adopted for logistical reasons.
NASA Astrophysics Data System (ADS)
Hamedon, Zamzuri; Kuang, Shea Cheng; Jaafar, Hasnulhadi; Azhari, Azmir
2018-03-01
Incremental sheet forming is a versatile sheet metal forming process in which a sheet is formed into its final shape by a series of localized deformations without a specialised die. However, it still has many shortcomings that need to be overcome, such as geometric accuracy, surface roughness, formability, and forming speed. This project focuses on minimising the surface roughness of aluminium sheet and improving its thickness uniformity in incremental sheet forming via optimisation of wall angle, feed rate, and step size. The effects of wall angle, feed rate, and step size on the surface roughness and thickness uniformity of the aluminium sheet were also investigated. From the results, it was observed that surface roughness and thickness uniformity varied inversely due to the formation of surface waviness. Increasing the feed rate and decreasing the step size produced a lower surface roughness, while a uniform thickness reduction was obtained by reducing the wall angle and step size. Using Taguchi analysis, the optimum parameters for minimum surface roughness and uniform thickness reduction of the aluminium sheet were determined. The findings of this project help to reduce the time needed to optimise surface roughness and thickness uniformity in incremental sheet forming.
Design and construction of a novel rotary magnetostrictive motor
NASA Astrophysics Data System (ADS)
Zhou, Nanjia; Blatchley, Charles C.; Ibeh, Christopher C.
2009-04-01
Magnetostriction can be used to induce linear incremental motion, which is effective in giant magnetostrictive inchworm motors. Such motors possess the advantage of combining small-step incremental motion with large force. However, continuous rotation may be preferred in practical applications. This paper describes a novel magnetostrictive rotary motor using Terfenol-D (Tb0.3Dy0.7Fe1.9) as the driving element. The motor is constructed of two giant magnetostrictive actuators with shell-structured flexure hinges and leaf springs. The two actuators are placed perpendicular to each other to minimize their coupling displacement. The principal design parameters of the actuators and strain amplifiers are optimally determined, and their static analysis is undertaken with finite element analysis software. The small movements of the magnetostrictive actuators are magnified about three times by oval shell-structured amplifiers. When two sinusoidal currents with a 90° phase shift are applied to the actuators, purely rotational movement is produced, tracing the orbit of a Lissajous figure as on an oscilloscope, and this movement is used to drive the rotor of the motor. A prototype has been constructed and tested.
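The quadrature-drive principle can be sketched numerically: two sinusoids 90 degrees out of phase trace a circular Lissajous orbit, which is what the perpendicular actuators impose on the rotor contact point. The amplitude and sample count below are illustrative, not the motor's drive parameters.

```python
import math

# Two drive waveforms 90 degrees out of phase trace a circular Lissajous
# orbit (equal amplitudes assumed; values illustrative)
A, N = 1.0, 360
orbit = [(A * math.sin(2 * math.pi * k / N),                # actuator 1
          A * math.sin(2 * math.pi * k / N + math.pi / 2))  # actuator 2
         for k in range(N)]
radii = [math.hypot(px, py) for px, py in orbit]            # constant radius
```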
Hatoum-Aslan, Asma; Samai, Poulami; Maniv, Inbal; Jiang, Wenyan; Marraffini, Luciano A
2013-09-27
Small RNAs undergo maturation events that precisely determine the length and structure required for their function. CRISPRs (clustered regularly interspaced short palindromic repeats) encode small RNAs (crRNAs) that together with CRISPR-associated (cas) genes constitute a sequence-specific prokaryotic immune system for anti-viral and anti-plasmid defense. crRNAs are subject to multiple processing events during their biogenesis, and little is known about the mechanism of the final maturation step. We show that in the Staphylococcus epidermidis type III CRISPR-Cas system, mature crRNAs are measured in a Cas10·Csm ribonucleoprotein complex to yield discrete lengths that differ by 6-nucleotide increments. We looked for mutants that impact this crRNA size pattern and found that an alanine substitution of a conserved aspartate residue of Csm3 eliminates the 6-nucleotide increments in the length of crRNAs. In vitro, recombinant Csm3 binds RNA molecules at multiple sites, producing gel-shift patterns that suggest that each protein binds 6 nucleotides of substrate. In vivo, changes in the levels of Csm3 modulate the crRNA size distribution without disrupting the 6-nucleotide periodicity. Our data support a model in which multiple Csm3 molecules within the Cas10·Csm complex bind the crRNA with a 6-nucleotide periodicity to function as a ruler that measures the extent of crRNA maturation.
Focus drive mechanism for the IUE scientific instrument
NASA Technical Reports Server (NTRS)
Devine, E. J.; Dennis, T. B., Jr.
1977-01-01
A compact, lightweight mechanism was developed for in-orbit adjustment of the position of the secondary mirror (focusing) of the International Ultraviolet Explorer telescope. This device is a linear drive with small (0.0004 in.) and highly repeatable step increments. Extremely close tolerances are also held in tilt and decentering. The unique mechanization is described with attention to the design details that contribute to positional accuracy. Lubrication, materials, thermal considerations, sealing, detenting against launch loads, and other features peculiar to flight hardware are discussed. The methods employed for mounting the low-expansion quartz mirror with minimum distortion are also given.
Stability analysis of Eulerian-Lagrangian methods for the one-dimensional shallow-water equations
Casulli, V.; Cheng, R.T.
1990-01-01
In this paper stability and error analyses are discussed for some finite difference methods when applied to the one-dimensional shallow-water equations. Two finite difference formulations, which are based on a combined Eulerian-Lagrangian approach, are discussed. In the first part of this paper the results of numerical analyses for an explicit Eulerian-Lagrangian method (ELM) have shown that the method is unconditionally stable. This method, which is a generalized fixed grid method of characteristics, covers the Courant-Isaacson-Rees method as a special case. Some artificial viscosity is introduced by this scheme. However, because the method is unconditionally stable, the artificial viscosity can be brought under control either by reducing the spatial increment or by increasing the size of the time step. The second part of the paper discusses a class of semi-implicit finite difference methods for the one-dimensional shallow-water equations. This method, when the Eulerian-Lagrangian approach is used for the convective terms, is also unconditionally stable and highly accurate for small space increments or large time steps. The semi-implicit methods seem to be more computationally efficient than the explicit ELM; at each time step a single tridiagonal system of linear equations is solved. The combined explicit and implicit ELM is best used in formulating a solution strategy for solving a network of interconnected channels. The explicit ELM is used at channel junctions for each time step. The semi-implicit method is then applied to the interior points in each channel segment. Following this solution strategy, the channel network problem can be reduced to a set of independent one-dimensional open-channel flow problems. Numerical results support properties given by the stability and error analyses.
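The single tridiagonal solve required at each semi-implicit time step is typically done with the Thomas algorithm, sketched below on generic coefficients (not the shallow-water matrices themselves, which the abstract does not give).

```python
def thomas(a, b, c, d):
    """Solve the tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1]
    = d[i] in O(n): the single solve each semi-implicit time step requires.
    a[0] and c[-1] are unused (set to 0)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for k in range(1, n):               # forward elimination
        denom = b[k] - a[k] * cp[k - 1]
        cp[k] = c[k] / denom
        dp[k] = (d[k] - a[k] * dp[k - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for k in range(n - 2, -1, -1):      # back substitution
        x[k] = dp[k] - cp[k] * x[k + 1]
    return x

# Diagonally dominant example system
x = thomas([0.0, -1.0, -1.0, -1.0], [4.0] * 4,
           [-1.0, -1.0, -1.0, 0.0], [5.0] * 4)
```

The O(n) cost of this solve, versus the O(n^3) of a general dense solve, is what makes the semi-implicit scheme competitive with the explicit ELM despite its implicit step.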
NASA Technical Reports Server (NTRS)
Cotton, William B.; Hilb, Robert; Koczo, Stefan, Jr.; Wing, David J.
2016-01-01
A set of five developmental steps building from the NASA TASAR (Traffic Aware Strategic Aircrew Requests) concept are described, each providing incrementally more efficiency and capacity benefits to airspace system users and service providers, culminating in a Full Airborne Trajectory Management capability. For each of these steps, the incremental Operational Hazards and Safety Requirements are identified for later use in future formal safety assessments intended to lead to certification and operational approval of the equipment and the associated procedures. Two established safety assessment methodologies that are compliant with the FAA's Safety Management System were used leading to Failure Effects Classifications (FEC) for each of the steps. The most likely FEC for the first three steps, Basic TASAR, Digital TASAR, and 4D TASAR, is "No effect". For step four, Strategic Airborne Trajectory Management, the likely FEC is "Minor". For Full Airborne Trajectory Management (Step 5), the most likely FEC is "Major".
Competition in Weapon Systems Acquisition: Cost Analyses of Some Issues
1990-09-01
10% increments, also known as the step-ladder bids) submitted by the contractor in the first year of dual source procurement. The triangles represent... savings by subtracting annual incremental government costs, stated in constant dollars, from (3). (5) Estimate nonrecurring start-up costs, stated in... constant dollars, by fiscal year. (6) Estimate incremental logistic support costs, stated in constant dollars, by fiscal year. (7) Calculate a net
Lewis, L K; Rowlands, A V; Gardiner, P A; Standage, M; English, C; Olds, T
2016-03-01
This study aimed to evaluate the preliminary effectiveness and feasibility of a theory-informed program to reduce sitting time in older adults. Pre-experimental (pre-post) study. Thirty non-working adult (≥ 60 years) participants attended a one hour face-to-face intervention session and were guided through: a review of their sitting time; normative feedback on sitting time; and setting goals to reduce total sitting time and bouts of prolonged sitting. Participants chose six goals and integrated one per week incrementally for six weeks. Participants received weekly phone calls. Sitting time and bouts of prolonged sitting (≥ 30 min) were measured objectively for seven days (activPAL3c inclinometer) pre- and post-intervention. During these periods, a 24-h time recall instrument was administered by computer-assisted telephone interview. Participants completed a post-intervention project evaluation questionnaire. Paired t tests with sequential Bonferroni corrections and Cohen's d effect sizes were calculated for all outcomes. Twenty-seven participants completed the assessments (71.7 ± 6.5 years). Post-intervention, objectively-measured total sitting time was significantly reduced by 51.5 min per day (p=0.006; d=-0.58) and number of bouts of prolonged sitting by 0.8 per day (p=0.002; d=-0.70). Objectively-measured standing increased by 39 min per day (p=0.006; d=0.58). Participants self-reported spending 96 min less per day sitting (p<0.001; d=-0.77) and 32 min less per day watching television (p=0.005; d=-0.59). Participants were highly satisfied with the program. The 'Small Steps' program is a feasible and promising avenue for behavioral modification to reduce sitting time in older adults. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Dynamics of catalytic tubular microjet engines: Dependence on geometry and chemical environment
NASA Astrophysics Data System (ADS)
Li
2011-12-01
Strain-engineered tubular microjet engines with various geometric dimensions hold interesting autonomous motions in an aqueous fuel solution when propelled by catalytic decomposition of hydrogen peroxide to oxygen and water. The catalytically-generated oxygen bubbles expelled from microtubular cavities propel the microjet step by step in discrete increments. We focus on the dynamics of our tubular microjets in one step and build up a body deformation model to elucidate the interaction between tubular microjets and the bubbles they produce. The average microjet velocity is calculated analytically based on our model and the obtained results demonstrate that the velocity of the microjet increases linearly with the concentration of hydrogen peroxide. The geometric dimensions of the microjet, such as length and radius, also influence its dynamic characteristics significantly. A close consistency between experimental and calculated results is achieved despite a small deviation due to the existence of an approximation in the model. The results presented in this work improve our understanding regarding catalytic motions of tubular microjets and demonstrate the controllability of the microjet which may have potential applications in drug delivery and biology.
The average microjet velocity is calculated analytically based on our model and the obtained results demonstrate that the velocity of the microjet increases linearly with the concentration of hydrogen peroxide. The geometric dimensions of the microjet, such as length and radius, also influence its dynamic characteristics significantly. A close consistency between experimental and calculated results is achieved despite a small deviation due to the existence of an approximation in the model. The results presented in this work improve our understanding regarding catalytic motions of tubular microjets and demonstrate the controllability of the microjet which may have potential applications in drug delivery and biology. Electronic supplementary information (ESI) available: I. Video of the catalytic motion of a typical microjet moving in a linear way. II. Detailed numerical analyses: Reynolds number calculation, displacement of the microjet and the bubble after separation, and example of experimental velocity calculation. See DOI: 10.1039/c1nr10840a
Studies on the finite element simulation in sheet metal stamping processes
NASA Astrophysics Data System (ADS)
Huang, Ying
The sheet metal stamping process plays an important role in modern industry. With the ever-increasing demand for shape complexity, product quality and new materials, the traditional trial-and-error method for setting up a sheet metal stamping process is no longer efficient. As a result, the Finite Element Modeling (FEM) method is now widely used. From a physical point of view, the formability and the quality of a product are influenced by several factors. The design of the product in the initial stage and the motion of the press during the production stage are two of these crucial factors. This thesis focuses on the numerical simulation of these two factors using FEM. Currently, a number of commercial FEM software systems are available in the market. These systems are based on incremental FEM, which models the sheet metal stamping process in small incremental steps. Even though incremental FEM is accurate, it is not suitable for the initial conceptual design because it requires detailed design parameters and long computation times. As a result, another type of FEM, called the inverse FEM method or one-step FEM method, has been proposed. While it is less accurate than the incremental method, it requires much less computation and hence has great potential. However, it also faces a number of unsolved problems, which limit its application. This motivates the presented research. After a review of the basic theory of the inverse method, a new modified arc-length search method is proposed to find a better initial solution. Methods to deal with vertical walls are also discussed and presented. Then, a generalized multi-step inverse FEM method is proposed.
It overcomes two key obstacles: the first is determining the initial solution of the intermediate three-dimensional configurations, and the second is controlling the movement of nodes so that they slide only on the constraint surfaces during the Newton-Raphson search. The computer implementation of the generalized multi-step inverse FEM is also presented. Its effectiveness is validated by comparison with simulation results from a commercial software system. Beyond the product design, the punch motion (including punch speed and punch trajectory) of the stamping press also has a significant effect on the formability and the quality of the product. In fact, this is one of the major reasons why hydraulic presses and/or servo presses are used for parts that demand high quality. In order to reveal the quantitative correlation between the punch motion and the part quality, the Cowper-Symonds strain-rate constitutive model and the implicit dynamic incremental FEM are combined. The effects of the punch motion on the part quality, especially the plastic strain distribution and the potential springback, are investigated for the deep drawing and bending processes, respectively. A qualitative relationship between the punch motion and the part quality is also derived. The reaction force of the punch motion causes dynamic deformation of the press during stamping, which in turn influences the part quality as well. This dynamic information, in the form of the strain signal, is an important basis for on-line monitoring of the part quality. Using the actual force as the input to the press, the incremental FEM is used to predict the strain of the press. The result is validated by means of experiments and can be used to assist on-line monitoring.
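The Newton-Raphson iteration at the core of the constrained node search can be illustrated in one dimension. A generic sketch (not the thesis's FEM formulation), iterating x ← x − f(x)/f′(x) until the residual is small:

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Solve f(x) = 0 by Newton-Raphson iteration from the initial guess x0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)  # Newton update: linearize f about x and solve
    return x

# Example: root of x**2 - 2 (i.e., sqrt(2)), starting from x0 = 1
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

In the multi-step inverse FEM, the scalar unknown is replaced by the nodal coordinates and the update is projected back onto the constraint surfaces, but the local linearize-and-solve step is the same idea.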
Hatoum-Aslan, Asma; Samai, Poulami; Maniv, Inbal; Jiang, Wenyan; Marraffini, Luciano A.
2013-01-01
Small RNAs undergo maturation events that precisely determine the length and structure required for their function. CRISPRs (clustered regularly interspaced short palindromic repeats) encode small RNAs (crRNAs) that together with CRISPR-associated (cas) genes constitute a sequence-specific prokaryotic immune system for anti-viral and anti-plasmid defense. crRNAs are subject to multiple processing events during their biogenesis, and little is known about the mechanism of the final maturation step. We show that in the Staphylococcus epidermidis type III CRISPR-Cas system, mature crRNAs are measured in a Cas10·Csm ribonucleoprotein complex to yield discrete lengths that differ by 6-nucleotide increments. We looked for mutants that impact this crRNA size pattern and found that an alanine substitution of a conserved aspartate residue of Csm3 eliminates the 6-nucleotide increments in the length of crRNAs. In vitro, recombinant Csm3 binds RNA molecules at multiple sites, producing gel-shift patterns that suggest that each protein binds 6 nucleotides of substrate. In vivo, changes in the levels of Csm3 modulate the crRNA size distribution without disrupting the 6-nucleotide periodicity. Our data support a model in which multiple Csm3 molecules within the Cas10·Csm complex bind the crRNA with a 6-nucleotide periodicity to function as a ruler that measures the extent of crRNA maturation. PMID:23935102
Study on the mechanism of Si-glass-Si two step anodic bonding process
NASA Astrophysics Data System (ADS)
Hu, Lifang; Wang, Hao; Xue, Yongzhi; Shi, Fangrong; Chen, Shaoping
2018-04-01
Si-glass-Si stacks were successfully bonded through a two-step anodic bonding process. The bonding current in each step of the process was investigated and found to be quite different. The first bonding current decreased quickly to a relatively small value, but the second bonding step exhibited two current peaks: the current first decreased, then increased, and then decreased again. The second current peak occurred earlier at higher temperature and voltage. The two-step anodic bonding process was characterized in terms of bonding current. SEM and EDS tests were conducted to investigate the interfacial structure of the Si-glass-Si samples. The two bonding interfaces were almost the same, but after an etching process, transitional layers could be found at the bonding interfaces and a deeper trench, with a thickness of ~1.5 µm, was found at the second bonding interface. Atomic force microscopy mapping results indicated that sodium precipitated from the back of the glass, which roughens the surface. Tensile tests indicated that fracture occurred in the glass substrate and that the bonding strength increased with increasing bonding temperature and voltage, reaching a maximum strength of 6.4 MPa.
Focus drive mechanism for the IUE scientific instrument
NASA Technical Reports Server (NTRS)
Devine, E. J.; Dennis, T. B., Jr.
1977-01-01
A compact, lightweight mechanism was developed for in-orbit adjustment of the position of the secondary mirror (focusing) of the International Ultraviolet Explorer telescope. This device is a linear drive with small and highly repeatable step increments. Extremely close tolerances are also held in tilt and decentering. The unique mechanization is described with attention to the design details that contribute to positional accuracy. Lubrication, materials, thermal considerations, sealing, detenting against launch loads, and other features peculiar to flight hardware are discussed. The methods employed for mounting the low-expansion quartz mirror with minimum distortion are also given. Results of qualification and acceptance testing are included.
NASA Astrophysics Data System (ADS)
Accomazzi, A.
2010-10-01
Over the next decade, we will witness the development of a new infrastructure in support of data-intensive scientific research, which includes Astronomy. This new networked environment will offer both challenges and opportunities to our community and has the potential to transform the way data are described, curated and preserved. Based on the lessons learned during the development and management of the ADS, a case is made for adopting the emerging technologies and practices of the Semantic Web to support the way Astronomy research will be conducted. Examples of how small, incremental steps can, in the aggregate, make a significant difference in the provision and repurposing of astronomical data are provided.
MEG Evidence for Incremental Sentence Composition in the Anterior Temporal Lobe
ERIC Educational Resources Information Center
Brennan, Jonathan R.; Pylkkänen, Liina
2017-01-01
Research investigating the brain basis of language comprehension has associated the left anterior temporal lobe (ATL) with sentence-level combinatorics. Using magnetoencephalography (MEG), we test the parsing strategy implemented in this brain region. The number of incremental parse steps from a predictive left-corner parsing strategy that is…
Numerical simulation of pseudoelastic shape memory alloys using the large time increment method
NASA Astrophysics Data System (ADS)
Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad
2017-04-01
The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation within a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. A 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation with the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.
Automated apparatus for producing gradient gels
Anderson, N.L.
1983-11-10
Apparatus for producing a gradient gel which serves as a standard medium for two-dimensional analysis of proteins, the gel having a density gradient along its height formed by a variation in gel composition. The apparatus includes first and second pumping means, each comprising a plurality of pumps on a common shaft driven by a stepping motor capable of providing small incremental changes in pump outputs for the gel ingredients, the motors being controlled by digital signals from a digital computer; a hollow form or cassette for receiving the gel composition; means for transferring the gel composition, including a filler tube extending near the bottom of the cassette; adjustable horizontal and vertical arms for automatically removing and relocating the filler tube in the next cassette; and a digital computer programmed to automatically control the stepping motors, arm movements, and associated sensing operations involved in the filling operation.
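The pump schedule for a linear density gradient reduces to simple arithmetic. A hypothetical illustration (not the patent's control program), splitting a fixed total flow between a light and a heavy stock solution so that the delivered concentration ramps linearly over N incremental steps:

```python
def gradient_schedule(c_light, c_heavy, q_total, n_steps):
    """Per-step (q_light, q_heavy) flow pairs producing a linear
    concentration ramp from c_light to c_heavy at constant total flow."""
    schedule = []
    for k in range(n_steps):
        c = c_light + (c_heavy - c_light) * k / (n_steps - 1)  # target conc.
        frac = (c - c_light) / (c_heavy - c_light)             # heavy fraction
        schedule.append((q_total * (1.0 - frac), q_total * frac))
    return schedule

# Hypothetical acrylamide concentrations (%) and total flow (mL/min)
steps = gradient_schedule(c_light=5.0, c_heavy=20.0, q_total=2.0, n_steps=5)
```

Each pair would be converted to stepping-motor rates for the two pump banks; the small step increments noted in the abstract are what make the ramp effectively smooth.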
Automated apparatus for producing gradient gels
Anderson, Norman L.
1986-01-01
Apparatus for producing a gradient gel which serves as a standard medium for two-dimensional analysis of proteins, the gel having a density gradient along its height formed by a variation in gel composition. The apparatus includes first and second pumping means, each comprising a plurality of pumps on a common shaft driven by a stepping motor capable of providing small incremental changes in pump outputs for the gel ingredients, the motors being controlled by digital signals from a digital computer; a hollow form or cassette for receiving the gel composition; means for transferring the gel composition, including a filler tube extending near the bottom of the cassette; adjustable horizontal and vertical arms for automatically removing and relocating the filler tube in the next cassette; and a digital computer programmed to automatically control the stepping motors, arm movements, and associated sensing operations involved in the filling operation.
NASA Astrophysics Data System (ADS)
Lee, Ji-Seok; Song, Ki-Won
2015-11-01
The objective of the present study is to systematically elucidate the time-dependent rheological behavior of concentrated xanthan gum systems in complicated step-shear flow fields. Using a strain-controlled rheometer (ARES), the step-shear flow behavior of a concentrated xanthan gum model solution was experimentally investigated in interrupted shear flow fields with various combinations of shear rates, shearing times and rest times, and in step-incremental and step-reductional shear flow fields with various shearing times. The main findings obtained from this study are summarized as follows. (i) In interrupted shear flow fields, the shear stress increases sharply until reaching the maximum stress at an initial stage of shearing, and then a stress decay towards a steady state is observed as the shearing time is increased in both start-up shear flow fields. The shear stress decreases suddenly immediately after the imposed shear rate is stopped, and then slowly decays during the rest time. (ii) As the rest time increases, the difference in the maximum stress values between the two start-up shear flow fields decreases, whereas the shearing time exerts only a slight influence on this behavior. (iii) In step-incremental shear flow fields, after passing through the maximum stress, structural destruction causes a stress decay towards a steady state with increasing shearing time in each step shear flow region. The time needed to reach the maximum stress is shortened at higher step-increased shear rates. (iv) In step-reductional shear flow fields, after passing through the minimum stress, structural recovery induces stress growth towards an equilibrium state with increasing shearing time in each step shear flow region. The time needed to reach the minimum stress is lengthened at lower step-decreased shear rates.
Johnson, T S; Andriacchi, T P; Erdman, A G
2004-01-01
Various uses of the screw or helical axis have previously been reported in the literature in an attempt to quantify the complex displacements and coupled rotations of in vivo human knee kinematics. Multiple methods have been used by previous authors to calculate the axis parameters, and it has been theorized that the mathematical stability and accuracy of the finite helical axis (FHA) is highly dependent on experimental variability and rotation increment spacing between axis calculations. Previous research has not addressed the sensitivity of the FHA for true in vivo data collection, as required for gait laboratory analysis. This research presents a controlled series of experiments simulating continuous data collection as utilized in gait analysis to investigate the sensitivity of the three-dimensional finite screw axis parameters of rotation, displacement, orientation and location with regard to time step increment spacing, utilizing two different methods for spatial location. Six-degree-of-freedom motion parameters are measured for an idealized rigid body knee model that is constrained to a planar motion profile for the purposes of error analysis. The kinematic data are collected using a multicamera optoelectronic system combined with an error minimization algorithm known as the point cluster method. Rotation about the screw axis is seen to be repeatable, accurate and time step increment insensitive. Displacement along the axis is highly dependent on time step increment sizing, with smaller rotation angles between calculations producing more accuracy. Orientation of the axis in space is accurate with only a slight filtering effect noticed during motion reversal. Locating the screw axis by a projected point onto the screw axis from the mid-point of the finite displacement is found to be less sensitive to motion reversal than finding the intersection of the axis with a reference plane. 
A filtering effect of the spatial location parameters was noted for larger time step increments during periods of little or no rotation.
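The finite helical axis parameters can be recovered directly from a finite displacement between two poses. A minimal sketch (independent of the point cluster method) extracting the rotation angle, unit axis, and translation along the axis from a rotation matrix R and translation vector t; it assumes the rotation increment is away from 0 and 180 degrees, the regime where the abstract notes the axis is well conditioned:

```python
import math

def finite_helical_axis(R, t):
    """Rotation angle, unit axis, and translation along the axis for a
    finite displacement given by rotation matrix R and translation t."""
    trace = R[0][0] + R[1][1] + R[2][2]
    theta = math.acos(max(-1.0, min(1.0, (trace - 1.0) / 2.0)))
    s = 2.0 * math.sin(theta)  # valid away from theta = 0 or pi
    n = ((R[2][1] - R[1][2]) / s,
         (R[0][2] - R[2][0]) / s,
         (R[1][0] - R[0][1]) / s)
    d = sum(ni * ti for ni, ti in zip(n, t))  # translation along the axis
    return theta, n, d

# 90-degree rotation about z, with 2 units of translation along the axis
R = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
theta, axis, d = finite_helical_axis(R, (1.0, 0.0, 2.0))
```

The sensitivity findings above follow from the 1/sin(theta) factor: as the rotation increment shrinks, small measurement errors in R are amplified in the axis direction and location, while the angle itself stays well behaved.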
Incremental soil sampling root water uptake, or be great through others
USDA-ARS?s Scientific Manuscript database
Ray Allmaras pursued several research topics in relation to residue and tillage research. He looked for new tools to help explain soil responses to tillage, including disk permeameters and image analysis. The incremental sampler developed by Pikul and Allmaras allowed small-depth increment, volumetr...
Taking the Next Step: Combining Incrementally Valid Indicators to Improve Recidivism Prediction
ERIC Educational Resources Information Center
Walters, Glenn D.
2011-01-01
The possibility of combining indicators to improve recidivism prediction was evaluated in a sample of released federal prisoners randomly divided into a derivation subsample (n = 550) and a cross-validation subsample (n = 551). Five incrementally valid indicators were selected from five domains: demographic (age), historical (prior convictions),…
Effect of infliximab top-down therapy on weight gain in pediatric Crohn's disease.
Kim, Mi Jin; Lee, Woo Yong; Choi, Kyong Eun; Choe, Yon Ho
2012-12-01
This retrospective medical-record review was conducted to evaluate the effect of infliximab therapy, particularly with a top-down strategy, on the nutritional parameters of children with Crohn's disease (CD). Forty-two patients who were diagnosed with CD at the pediatric gastroenterology center of a tertiary care teaching hospital and achieved remission at two months and one year after the beginning of treatment were divided into four subgroups according to treatment regimen: azathioprine group (n = 11), steroid group (n = 11), infliximab top-down group (n = 11) and step-up group (n = 9). Weight, height, and serum albumin were measured at diagnosis, and then at two months and one year after the initiation of treatment. At two months, the Z score increment for weight was highest in the steroid group, followed by the top-down, step-up, and azathioprine groups. At one year, the Z score increment was highest in the top-down group, followed by the steroid, azathioprine, and step-up groups. There were no significant differences between the four groups in Z score increments for height and serum albumin during the study period. The top-down infliximab treatment resulted in superior weight gain compared with the step-up therapy and the other treatment regimens.
NASA Astrophysics Data System (ADS)
Vollrath, Bastian; Hübel, Hartwig
2018-01-01
The Simplified Theory of Plastic Zones (STPZ) may be used to determine post-shakedown quantities such as strain ranges and accumulated strains at plastic or elastic shakedown. The principles of the method are summarized. Its practical applicability is shown by the example of a pipe bend subjected to constant internal pressure along with cyclic in-plane bending or/and cyclic radial temperature gradient. The results are compared with incremental analyses performed step-by-step throughout the entire load history until the state of plastic shakedown is achieved.
Biochemistry and Cell Wall Changes Associated with Noni (Morinda citrifolia L.) Fruit Ripening.
Cárdenas-Coronel, Wendy G; Carrillo-López, Armando; Vélez de la Rocha, Rosabel; Labavitch, John M; Báez-Sañudo, Manuel A; Heredia, José B; Zazueta-Morales, José J; Vega-García, Misael O; Sañudo-Barajas, J Adriana
2016-01-13
Quality and compositional changes were determined in noni fruit harvested at five ripening stages, from dark-green to translucent-grayish. Fruit ripening was accompanied by acidity and soluble solids accumulation and a decrease in pH, whereas the softening profile presented three distinct steps, named early (no significant softening), intermediate (significant softening), and final (dramatic softening). In the early step, the extensive depolymerization of water-soluble pectins and the significant increase in pectinase activities did not correlate with the slight reduction in firmness. The intermediate step showed an increase in pectinase and hemicellulase activities. The final step was accompanied by the most significant reduction in the yield of alcohol-insoluble solids as well as in the content of uronic acids and neutral sugars; pectinases increased their activity and depolymerization of hemicellulosic fractions occurred. Noni ripening is a process driven by the coordinated action of pectinases and hemicellulases that promote the differential disassembly of cell wall polymers.
Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.
Kwak, Nojun
2016-05-20
Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
NASA Astrophysics Data System (ADS)
Goh, C. P.; Ismail, H.; Yen, K. S.; Ratnam, M. M.
2017-01-01
The incremental digital image correlation (DIC) method has been applied in the past to determine strain in large-deformation materials like rubber. This method is, however, prone to cumulative errors, since the total displacement is determined by combining the displacements from numerous stages of the deformation. In this work, a method of mapping large strains in rubber using DIC in a single step, without the need for a series of deformation images, is proposed. The reference subsets were deformed using deformation factors obtained from the experimentally fitted mean stress-axial stretch ratio curve and the theoretical Poisson function. The deformed reference subsets were then correlated with the deformed image after loading. The recently developed scanner-based digital image correlation (SB-DIC) method was applied to dumbbell rubber specimens to obtain the in-plane displacement fields up to 350% axial strain. Comparison of the mean axial strains determined from the single-step SB-DIC method with those from the incremental SB-DIC method showed an average difference of 4.7%. Two rectangular rubber specimens containing circular and square holes were deformed and analysed using the proposed method. The resultant strain maps from the single-step SB-DIC method were compared with the results of finite element modeling (FEM). The comparison shows that the proposed single-step SB-DIC method can be used to map the strain distribution accurately in large-deformation materials like rubber in a much shorter time than the incremental DIC method.
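The single-step deformation of a reference subset can be sketched for a homogeneous stretch. A simplified illustration: the paper's deformation factors come from a fitted stress-stretch curve and a Poisson function, which is assumed here to be the incompressible limit (ν = 0.5), giving a transverse stretch of λ^(-1/2) for an axial stretch λ:

```python
def deform_subset(points, axial_stretch):
    """Map reference subset points (x, y) through a homogeneous stretch.

    Axial (y) coordinates scale by lambda; transverse (x) coordinates by
    lambda**-0.5, the incompressible-limit transverse stretch (assumed).
    """
    lam = axial_stretch
    transverse = lam ** -0.5
    return [(x * transverse, y * lam) for x, y in points]

# A 350% axial strain corresponds to an axial stretch ratio of 4.5
deformed = deform_subset([(1.0, 1.0), (2.0, 0.5)], axial_stretch=4.5)
```

Correlating subsets pre-warped this way against the final deformed image is what lets the method skip the intermediate deformation images that the incremental approach requires.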
21 CFR 874.1070 - Short increment sensitivity index (SISI) adapter.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Short increment sensitivity index (SISI) adapter. 874.1070 Section 874.1070 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... short periodic sound pulses in specific small decibel increments that are intended to be superimposed on...
21 CFR 874.1070 - Short increment sensitivity index (SISI) adapter.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Short increment sensitivity index (SISI) adapter. 874.1070 Section 874.1070 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... short periodic sound pulses in specific small decibel increments that are intended to be superimposed on...
NASA Astrophysics Data System (ADS)
Kim, Sungjun; Park, Byung-Gook
2016-08-01
A study of the bipolar resistive switching of an Ni/SiN/Si-based resistive random-access memory (RRAM) device shows that the reset power and the resistance value of the low-resistance state (LRS) strongly influence the reset-switching transitions. For a low LRS with a large conducting path, sharp reset switching, which requires a high reset power (>7 mW), was observed, whereas for a high LRS with small multiple conducting paths, step-by-step reset switching with a low reset power (<7 mW) was observed. The higher nonlinear current-voltage (I-V) characteristics obtained in the step-by-step reset switching are due to the steep current-increase region of the trap-controlled space-charge-limited current (SCLC) model. A multilevel cell (MLC) operation, in which the reset stop voltage (VSTOP) is used in the DC sweep mode and an incremental pulse amplitude is used in the pulse mode for the step-by-step reset switching, is demonstrated here. The results of the present study suggest that well-controlled conducting paths in a SiN-based RRAM device, neither too strong nor too weak, offer considerable potential for the realization of low-power, high-density crossbar-array applications.
Hollingworth, Andrew; Henderson, John M
2004-07-01
In a change detection paradigm, the global orientation of a natural scene was incrementally changed in 1 degree intervals. In Experiments 1 and 2, participants demonstrated sustained change blindness to incremental rotation, often coming to consider a significantly different scene viewpoint as an unchanged continuation of the original view. Experiment 3 showed that participants who failed to detect the incremental rotation nevertheless reliably detected a single-step rotation back to the initial view. Together, these results demonstrate an important dissociation between explicit change detection and visual memory. Following a change, visual memory is updated to reflect the changed state of the environment, even if the change was not detected.
How to set the stage for a full-fledged clinical trial testing 'incremental haemodialysis'.
Casino, Francesco Gaetano; Basile, Carlo
2017-07-21
Most people who make the transition to maintenance haemodialysis (HD) therapy are treated with a fixed dose of thrice-weekly HD (3HD/week) regimen without consideration of their residual kidney function (RKF). The RKF provides an effective and naturally continuous clearance of both small and middle molecules, plays a major role in metabolic homeostasis, nutritional status and cardiovascular health, and aids in fluid management. The RKF is associated with better patient survival and greater health-related quality of life. Its preservation is instrumental to the prescription of incremental (1HD/week to 2HD/week) HD. The recently heightened interest in incremental HD has been hindered by the current limitations of the urea kinetic model (UKM), which tend to overestimate the needed dialysis dose in the presence of a substantial RKF. A recent paper by Casino and Basile suggested a variable target model (VTM), which gives more clinical weight to the RKF and allows less frequent HD treatments at lower RKF as opposed to the fixed target model, based on the wrong concept of the clinical equivalence between renal and dialysis clearance. A randomized controlled trial (RCT) enrolling incident patients and comparing incremental HD (prescribed according to the VTM) with the standard 3HD/week schedule and focused on hard outcomes, such as survival and health-related quality of life of patients, is urgently needed. The first step in designing such a study is to compute the 'adequacy lines' and the associated fitting equations necessary for the most appropriate allocation of the patients in the two arms and their correct and safe follow-up. In conclusion, the potentially important clinical and financial implications of the incremental HD render it highly promising and warrant RCTs. The UKM is the keystone for conducting such studies. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
Combining Accuracy and Efficiency: An Incremental Focal-Point Method Based on Pair Natural Orbitals.
Fiedler, Benjamin; Schmitz, Gunnar; Hättig, Christof; Friedrich, Joachim
2017-12-12
In this work, we present a new pair natural orbital (PNO)-based incremental scheme to calculate CCSD(T) and CCSD(T0) reaction, interaction, and binding energies. We perform an extensive analysis, which shows small incremental errors similar to previous non-PNO calculations. Furthermore, slight PNO errors are obtained by using T_PNO = T_TNO with appropriate values of 10^-7 to 10^-8 for reactions and 10^-8 for interaction or binding energies. The combination with the efficient MP2 focal-point approach yields chemical accuracy relative to the complete basis-set (CBS) limit. In this method, small basis sets (cc-pVDZ, def2-TZVP) for the CCSD(T) part are sufficient in the case of reactions or interactions, while larger ones (e.g., (aug)-cc-pVTZ) are necessary for molecular clusters. For these larger basis sets, we show the very high efficiency of our scheme. We obtain not only tremendous decreases in wall times (i.e., factors >10^2) due to the parallelization of the increment calculations, and in total times due to the application of PNOs (compared to the normal incremental scheme), but also smaller total times with respect to the standard PNO method. In this way, our new method combines excellent accuracy with very high efficiency and, owing to the separation of the full computation into several small increments, gives access to larger systems.
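The incremental expansion underlying such schemes truncates a many-body series over fragment energies. A toy sketch with made-up fragment energies (illustrative values, not PNO-CCSD(T) results), summing one-body increments and pairwise corrections Δε_ij = ε_ij − ε_i − ε_j:

```python
def incremental_energy(one_body, two_body):
    """Second-order incremental expansion:
    E = sum_i e_i + sum_{i<j} (e_ij - e_i - e_j)."""
    total = sum(one_body.values())
    for (i, j), e_ij in two_body.items():
        total += e_ij - one_body[i] - one_body[j]  # pair increment
    return total

# Hypothetical fragment energies (hartree), for illustration only
one_body = {"A": -1.0, "B": -2.0, "C": -3.0}
two_body = {("A", "B"): -3.2, ("A", "C"): -4.05, ("B", "C"): -5.1}
energy = incremental_energy(one_body, two_body)
```

Each increment is an independent small calculation, which is why the scheme parallelizes so well and why PNO truncations can be applied increment by increment.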
Numerical solution methods for viscoelastic orthotropic materials
NASA Technical Reports Server (NTRS)
Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.
1988-01-01
Numerical solution methods for viscoelastic orthotropic materials, specifically fiber-reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM), which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time-step-size stability, computer solution time, and computer memory storage. The Volterra integral method allowed the implementation of higher-order solution techniques but had difficulties solving singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
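The incremental (internal-variable) time stepping of a Prony series relaxation modulus, the common core of the Prony-series methods compared above, can be sketched as follows. This is a one-dimensional linear illustration with made-up material constants, not the paper's NDEM implementation:

```python
import math

def prony_stress_history(strain, dt, E_inf, terms):
    """Incremental stress update for a 1-D linear viscoelastic material
    whose relaxation modulus is a Prony series:
        E(t) = E_inf + sum_i E_i * exp(-t / tau_i)
    Uses the standard recursive internal-variable update, so each time
    step costs O(number of Prony terms).  `terms` is a list of
    (E_i, tau_i) pairs; all values are illustrative."""
    q = [0.0] * len(terms)      # internal stresses, one per Prony term
    eps_prev = 0.0
    stresses = []
    for eps in strain:
        d_eps = eps - eps_prev
        sigma = E_inf * eps     # long-term (equilibrium) contribution
        for i, (E_i, tau_i) in enumerate(terms):
            a = math.exp(-dt / tau_i)
            b = tau_i / dt * (1.0 - a)   # exact for strain linear over the step
            q[i] = a * q[i] + E_i * b * d_eps
            sigma += q[i]
        stresses.append(sigma)
        eps_prev = eps
    return stresses
```

Because the internal stresses are updated recursively, the cost per time step stays constant instead of growing with the length of the strain history, which is what makes this family of methods attractive next to direct evaluation of the Volterra integral.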
Analysis of progressive damage in thin circular laminates due to static-equivalent impact loads
NASA Technical Reports Server (NTRS)
Shivakumar, K. N.; Elber, W.; Illg, W.
1983-01-01
Clamped circular graphite/epoxy plates (25.4, 38.1, and 50.8 mm radii) with an 8-ply quasi-isotropic layup were analyzed for static-equivalent impact loads using the minimum-total-potential-energy method and the von Karman strain-displacement equations. A step-by-step incremental transverse-displacement procedure was used to calculate plate load and ply stresses. The ply failure region was calculated using the Tsai-Wu criterion. The corresponding failure modes (splitting and fiber failure) were determined using the maximum stress criteria. The first failure mode was splitting, which initiated in the bottom ply. The splitting-failure thresholds were relatively low and tended to be lower for larger plates than for small plates. The splitting-damage region in each ply was elongated in its fiber direction; the bottom ply had the largest damage region. The calculated damage region for the 25.4-mm-radius plate agreed with limited static test results from the literature.
NASA Astrophysics Data System (ADS)
Kuhn, Matthew R.; Daouadji, Ali
2018-05-01
The paper addresses a common assumption of elastoplastic modeling: that the recoverable, elastic strain increment is unaffected by alterations of the elastic moduli that accompany loading. This assumption is found to be false for a granular material, and discrete element method (DEM) simulations demonstrate that granular materials are coupled materials at both micro- and macro-scales. Elasto-plastic coupling at the macro-scale is placed in the context of the thermomechanical framework of Tomasz Hueckel and Hans Ziegler, in which the elastic moduli are altered by irreversible processes during loading. This complex behavior is explored for multi-directional loading probes that follow an initial monotonic loading. An advanced DEM model is used in the study, with non-convex non-spherical particles and two different contact models: a conventional linear-frictional model and an exact implementation of the Hertz-like Cattaneo-Mindlin model. Orthotropic true-triaxial probes were used in the study (i.e., no direct shear strain), with tiny strain increments of 2×10^-6. At the micro-scale, contact movements were monitored during small increments of loading and load-reversal; the results show that these movements are not reversed by a reversal of strain direction, and some contacts that were sliding during a loading increment continue to slide during reversal. The probes show that the coupled part of a strain increment, the difference between the recoverable (elastic) increment and its reversible part, must be considered when partitioning strain increments into elastic and plastic parts. Small increments of irreversible (and plastic) strain, contact slipping, and frictional dissipation occur for all directions of loading, and an elastic domain, if it exists at all, is smaller than the strain increment used in the simulations.
Suggested Best Practice for seismic monitoring and characterization of non-conventional reservoirs
NASA Astrophysics Data System (ADS)
Malin, P. E.; Bohnhoff, M.; terHeege, J. H.; Deflandre, J. P.; Sicking, C.
2017-12-01
High rates of induced seismicity and gas leakage in non-conventional production have become a growing issue of public concern. This has resulted in calls for independent monitoring before, during and after reservoir production. To date no uniform practice for such monitoring exists, and few reservoirs are locally monitored at all. Nonetheless, local seismic monitoring is a prerequisite for detecting small earthquakes, increases of which can foreshadow damaging ones and indicate gas leaks. Appropriately designed networks, including seismic reflection studies, can be used to collect these data as well as Seismic Emission Tomography (SET) data, the latter significantly helping reservoir characterization and exploitation. We suggest a step-by-step procedure for implementing such networks. We describe various field kits, installations, and workflows, all aimed at avoiding damaging seismicity, serving as indicators of well stability, and improving reservoir exploitation. In Step 1, a single downhole seismograph is recommended for establishing baseline seismicity before development. Subsequent Steps are used to decide cost-effective ways of monitoring treatments, production, and abandonment. We include suggestions for monitoring of disposal and underground storage. We also describe how repeated SET observations improve reservoir management as well as regulatory monitoring. Moreover, SET acquisition can be included at incremental cost in active surveys or temporary passive deployments.
Spear, Ashley D.; Hochhalter, Jacob D.; Cerrone, Albert R.; ...
2016-04-27
In an effort to reproduce computationally the observed evolution of microstructurally small fatigue cracks (MSFCs), a method is presented for generating conformal, finite-element (FE), volume meshes from 3D measurements of MSFC propagation. The resulting volume meshes contain traction-free surfaces that conform to incrementally measured 3D crack shapes. Grain morphologies measured using near-field high-energy X-ray diffraction microscopy are also represented within the FE volume meshes. Proof-of-concept simulations are performed to demonstrate the utility of the mesh-generation method. The proof-of-concept simulations employ a crystal-plasticity constitutive model and are performed using the conformal FE meshes corresponding to successive crack-growth increments. Although the simulations for each crack increment are currently independent of one another, they need not be, and transfer of material-state information among successive crack-increment meshes is discussed. The mesh-generation method was developed using post-mortem measurements, yet it is general enough that it can be applied to in-situ measurements of 3D MSFC propagation.
Two-dimensional scanner apparatus. [flaw detector in small flat plates
NASA Technical Reports Server (NTRS)
Kurtz, G. W.; Bankston, B. F. (Inventor)
1984-01-01
An X-Y scanner utilizes an eddy-current or ultrasonic test probe to detect surface defects in small flat plates and the like. The apparatus includes a scanner which travels on a pair of slide tubes in the X-direction. The scanner, carried on a carriage which slides in the Y-direction, is driven by a helix shaft with a closed-loop helix groove in which a follower pin carried by the scanner rides. The carriage is moved incrementally in the Y-direction upon the completion of travel of the scanner back and forth in the X-direction by means of an indexing actuator and an indexing gear. The actuator is in the form of a ratchet which engages a ratchet gear upon return of the scanner to the indexing position. The indexing gear is rotated a predetermined increment along a rack gear to move the carriage incrementally in the Y-direction. Thus, simplified, highly responsive mechanical motion is obtained in a small, lightweight, portable unit for accurate scanning of small areas.
40 CFR 60.1605 - What if I do not meet an increment of progress?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Times for Small Municipal Waste Combustion Units Constructed on or Before August 30, 1999 Model Rule... increment of progress, you must submit a notification to the Administrator postmarked within 10 business...
Ultra-high resolution computed tomography imaging
Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.
2002-01-01
A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high-energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, with an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180°, and after each incremental rotation, repeating the focusing, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.
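The deconvolution step, applying an experimentally determined transfer function to each projection, can be sketched as a regularized inverse filter. This is an illustrative one-dimensional stand-in, not the patented algorithm; the function name and the Wiener-style regularization term are assumptions:

```python
import numpy as np

def deconvolve_projection(projection, psf, eps=1e-3):
    """Correct one projection by dividing out the measured transfer
    function in the frequency domain.  `psf` is the measured
    point-spread function; `eps` damps frequencies where the transfer
    function is near zero, so noise is not blown up by the division."""
    H = np.fft.fft(psf, n=len(projection))   # transfer function
    P = np.fft.fft(projection)
    corrected = np.fft.ifft(P * np.conj(H) / (np.abs(H) ** 2 + eps))
    return np.real(corrected)
```

In the patented workflow this correction would be applied to each 2-D projection before the cone-beam reconstruction; the 1-D version above shows only the filtering idea.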
Rod-cone interaction in light adaptation
Latch, M.; Lennie, P.
1977-01-01
1. The increment-threshold for a small test spot in the peripheral visual field was measured against backgrounds that were red or blue. 2. When the background was a large uniform field, threshold over most of the scotopic range depended exactly upon the background's effect upon rods. This confirms Flamant & Stiles (1948). But when the background was small, threshold was elevated more by a long wave-length than a short wave-length background equated for its effect on rods. 3. The influence of cones was explored in a further experiment. The scotopic increment-threshold was established for a short wave-length test spot on a large, short wave-length background. Then a steady red circular patch, conspicuous to cones, but below the increment-threshold for rod vision, was added to the background. When it was small, but not when it was large, this patch substantially raised the threshold for the test. 4. When a similar experiment was made using, instead of a red patch, a short wave-length one that was conspicuous in rod vision, threshold varied similarly with patch size. These results support the notion that the influence of small backgrounds arises in some size-selective mechanism that is indifferent to the receptor system in which visual signals originate. Two corollaries of this hypothesis were tested in further experiments. 5. A small patch was chosen so as to lift scotopic threshold substantially above its level on a uniform field. This threshold elevation persisted for minutes after extinction of the patch, but only when the patch was small. A large patch made bright enough to elevate threshold by as much as the small one gave rise to no corresponding after-effect. 6. Increment-thresholds for a small red test spot, detected through cones, followed the same course whether a large uniform background was long- or short wave-length. 
When the background was small, threshold upon the short wave-length one began to rise for much lower levels of background illumination, suggesting the influence of rods. This was confirmed by repeating the experiment after a strong bleach when the cones, but not rods, had fully recovered their sensitivity. Increment-thresholds upon small backgrounds of long or short wave-lengths then followed the same course. PMID:894602
Mission and Implementation of an Affordable Lunar Return
NASA Technical Reports Server (NTRS)
Spudis, Paul; Lavoie, Anthony
2010-01-01
We present an architecture that establishes the infrastructure for routine space travel by taking advantage of the Moon's resources, proximity and accessibility. We use robotic assets on the Moon that are teleoperated from Earth to prospect, test, demonstrate and produce water from lunar resources before human arrival. This plan is affordable, flexible and not tied to any specific launch vehicle solution. Individual surface pieces are small, permitting them to be deployed separately on small launchers or combined together on single large launchers. Schedule is our free variable; even under highly constrained budgets, the architecture permits this program to be continuously pursued using small, incremental, cumulative steps. The end stage is a fully functional, human-tended lunar outpost capable of producing 150 metric tonnes of water per year, enough to export water from the Moon and create a transportation system that allows routine access to all of cislunar space. This cost-effective lunar architecture advances technology and builds a sustainable transportation infrastructure. By eliminating the need to launch everything from the surface of the Earth, we fundamentally change the paradigm of spaceflight.
Radial-rotation profile forming: A new processing technology of incremental sheet metal forming
NASA Astrophysics Data System (ADS)
Laue, Robert; Härtel, Sebastian; Awiszus, Birgit
2018-05-01
Incremental forming processes (i.e., spinning) of sheet metal blanks into cylindrical cups are suitable for lower lot sizes. The produced cups are frequently used as preforms for workpieces with additional functions, such as profiled hollow parts, in further forming steps [1]. The incremental forming process radial-rotation profile forming has been developed to enable the production of profiled hollow parts with low sheet thinning and good geometrical accuracy. The two principal forming steps are the production of the preform by rotational swing-folding [2] and the subsequent radial profiling of the hollow part in one clamping position. The rotational swing-folding process is based on a combination of conventional spinning and swing-folding. In this step, a round blank rotates on a profiled mandrel and, due to the swinging motion of a cylindrical forming tool, is formed into a cup with low sheet thinning. In addition, thickening occurs at the edge of the blank and wrinkling occurs. However, the wrinkles are formed into the indentations of the profiled mandrel and can advantageously be reshaped in the second process step, the radial profiling. Due to the rotation and continuous radial feed of a profiled forming tool toward the profiled mandrel, the axial profile is formed in this second step. Because of the minor relative movement in the axial direction between tool and blank, low sheet thinning occurs; this is an advantage of the process principle.
NASA Astrophysics Data System (ADS)
Carette, Yannick; Vanhove, Hans; Duflou, Joost
2018-05-01
Single Point Incremental Forming is a flexible process that is well-suited for small batch production and rapid prototyping of complex sheet metal parts. The distributed nature of the deformation process and the unsupported sheet imply that controlling the final accuracy of the workpiece is challenging. To improve the process limits and the accuracy of SPIF, the use of multiple forming passes has been proposed and discussed by a number of authors. Most methods use multiple intermediate models, where the previous one is strictly smaller than the next one, while gradually increasing the workpieces' wall angles. Another method that can be used is the manufacture of a smoothed-out "base geometry" in the first pass, after which more detailed features can be added in subsequent passes. In both methods, the selection of these intermediate shapes is freely decided by the user. However, their practical implementation in the production of complex freeform parts is not straightforward. The original CAD model can be manually adjusted or completely new CAD models can be created. This paper discusses an automatic method that is able to extract the base geometry from a full STL-based CAD model in an analytical way. Harmonic decomposition is used to express the final geometry as the sum of individual surface harmonics. It is then possible to filter these harmonic contributions to obtain a new CAD model with a desired level of geometric detail. This paper explains the technique and its implementation, as well as its use in the automatic generation of multi-step geometries.
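The idea of extracting a smoothed base geometry by filtering harmonic contributions can be sketched on a height-map representation of the part. A radial low-pass filter in the spatial-frequency domain stands in here for the paper's harmonic decomposition of an STL model; the function name and cutoff parameter are illustrative assumptions:

```python
import numpy as np

def base_geometry(height, cutoff):
    """Keep only low spatial frequencies of a surface height map
    (2-D array), yielding a smoothed 'base geometry' onto which finer
    detail can be formed in later passes.  `cutoff` is a normalized
    spatial frequency; harmonics above it are removed."""
    H = np.fft.fft2(height)
    fy = np.fft.fftfreq(height.shape[0])[:, None]
    fx = np.fft.fftfreq(height.shape[1])[None, :]
    H[np.sqrt(fx**2 + fy**2) > cutoff] = 0.0   # zero high-frequency harmonics
    return np.real(np.fft.ifft2(H))
```

Raising `cutoff` retains more geometric detail in the first pass; the difference between the full geometry and the filtered one is what the subsequent passes would add.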
Small Diameter Bomb Increment II (SDB II)
2015-12-01
Selected Acquisition Report (SAR), RCS: DD-A&T(Q&A)823-439, Small Diameter Bomb Increment II (SDB II), as of the FY 2017 President's Budget. Defense Acquisition Management Information Retrieval (DAMIR), March 23, 2016. UNCLASSIFIED. Abbreviations: OSD - Office of the Secretary of Defense; O&S - Operating and Support; PAUC - Program Acquisition Unit Cost.
A Capsule Look at Zero-Base Budgeting.
ERIC Educational Resources Information Center
Griffin, William A., Jr.
Weaknesses of the traditional incremental budgeting approach are considered as background to indicate the need for a new system of budgeting in educational institutions, and a step-by-step description of zero-based budgeting (ZBB) is presented. Proposed advantages of ZBB include the following: better staff morale due to a budget that is open and…
NASA Astrophysics Data System (ADS)
Farstad, Jan Magnus Granheim; Netland, Øyvind; Welo, Torgeir
2017-10-01
This paper presents the results from a second series of experiments made to study local plastic deformations of a complex, hollow aluminium extrusion formed in roll bending. The first experimental series, utilizing a single-step roll bending sequence, was presented at the ESAFORM 2016 conference by Farstad et al. In this recent experimental series, the same aluminium extrusion was formed in incremental steps. The objective was to investigate local distortions of the deformed cross-section as a result of the different number of steps employed to arrive at the final global shape of the extrusion. Moreover, the results of the two experimental series are compared, focusing on identifying differences in both the desired and the undesired deformations taking place as a result of bending and contact stresses. The profiles formed through multiple passes had fewer undesirable local distortions of the cross-section than the profiles that were formed in a single pass. However, the springback effect was more pronounced, meaning that the released radii of the profiles were larger.
Blood flow patterns during incremental and steady-state aerobic exercise.
Coovert, Daniel; Evans, LeVisa D; Jarrett, Steven; Lima, Carla; Lima, Natalia; Gurovich, Alvaro N
2017-05-30
Endothelial shear stress (ESS) is a physiological stimulus for vascular homeostasis, highly dependent on blood flow patterns. Exercise-induced ESS might be beneficial to vascular health. However, it is unclear what type of ESS aerobic exercise (AX) produces. The aims of this study are to characterize exercise-induced blood flow patterns during incremental and steady-state AX. We expect the blood flow pattern during exercise to be intensity-dependent and bidirectional. Six college-aged students (2 males and 4 females) were recruited to perform two exercise tests on a cycle ergometer. First, an 8-12-min incremental test (Test 1) where oxygen uptake (VO2), heart rate (HR), blood pressure (BP), and blood lactate (La) were measured at rest and after each 2-min step. Then, at least 48 hr after the first test, a 3-step steady-state exercise test (Test 2) was performed, measuring VO2, HR, BP, and La. The three steps were performed at the following exercise intensities according to La: 0-2 mmol/L, 2-4 mmol/L, and 4-6 mmol/L. During both tests, blood flow patterns were determined by high-definition ultrasound and Doppler on the brachial artery. These measurements allowed determination of blood flow velocities and directions during exercise. In Test 1, VO2, HR, BP, La, and antegrade blood flow velocity significantly increased in an intensity-dependent manner (repeated-measures ANOVA, p<0.05). Retrograde blood flow velocity did not significantly change during Test 1. In Test 2, all the previous variables significantly increased in an intensity-dependent manner (repeated-measures ANOVA, p<0.05). These results support the hypothesis that exercise-induced ESS might be increased in an intensity-dependent way and that blood flow patterns during incremental and steady-state exercise include both antegrade and retrograde blood flow.
2010-06-01
Sampling (MIS)?
• Technique of combining many increments of soil from a number of points within an exposure area
• Developed by Enviro Stat (Trademarked…)
• Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous… into a set of decision (exposure) units
• One or several discrete or small-scale composite soil samples collected to represent each decision unit
Measurement of the tensile forces during bone lengthening.
Ohnishi, Isao; Kurokawa, Takahide; Sato, Wakyo; Nakamura, Kozo
2005-05-01
The purpose of this study was to investigate the effects of lengthening frequency on the mechanical environment in limb lengthening. Tensile forces were continuously monitored using a load sensor attached to a unilateral external fixator. Twenty patients were monitored. Ten patients had acquired femoral shortening; five of them underwent quasi-continuous lengthening of 1440 steps per day, and the other five received step lengthening twice a day. The other 10 patients had achondroplasia; five of them underwent the same quasi-continuous lengthening, and the other five received the same step lengthening. The circadian change and the daily course of the tensile forces were assessed and compared between quasi-continuous lengthening and step lengthening. As for the circadian change, an acute increase in the force took place simultaneously with each step of lengthening in the step-lengthening group, but very little change of the baseline force level was seen during quasi-continuous lengthening. As for the daily course of the tensile force, it increased almost linearly in both lengthening-frequency groups in the initial stage of lengthening. No significant difference in the average force increment rate in this phase was recognized between the quasi-continuous and step-lengthening groups, irrespective of the etiologies. The lengthening frequency greatly affected the circadian change of the tensile force, but did not affect the increment rate of the force in the linear phase.
Self-Advancing Step-Tap Drills
NASA Technical Reports Server (NTRS)
Pettit, Donald R.; Camarda, Charles J.; Penner, Ronald K.; Franklin, Larry D.
2007-01-01
Self-advancing tool bits that are hybrids of drills and stepped taps make it possible to form threaded holes wider than about 1/2 in. (about 13 mm) without applying any more axial force than is necessary for forming narrower pilot holes. These self-advancing stepped-tap drills were invented for use by space-suited astronauts performing repairs on reinforced carbon/carbon space-shuttle leading edges during space walks, in which the ability to apply axial drilling forces is severely limited. Self-advancing stepped-tap drills could also be used on Earth for making wide holes without applying large axial forces. A self-advancing stepped-tap drill (see figure) includes several sections having progressively larger diameters, typically in increments between 0.030 and 0.060 in. (between about 0.8 and about 1.5 mm). The tip section, which is the narrowest, is a pilot drill bit that typically has a diameter between 1/8 and 3/16 in. (between about 3.2 and about 4.8 mm). The length of the pilot-drill section is chosen, according to the thickness of the object to be drilled and tapped, so that the pilot hole is completed before engagement of the first tap section. Provided that the cutting-edge geometry of the drill bit is optimized for the material to be drilled, only a relatively small axial force [typically of the order of a few pounds (of the order of 10 newtons)] must be applied during drilling of the pilot hole. Once the first tap section engages the pilot hole, it is no longer necessary for the drill operator to apply axial force: the thread engagement between the tap and the workpiece provides the axial force to advance the tool bit. Like the pilot-drill section, each tap section must be long enough to complete its hole before engagement of the next, slightly wider tap section. 
The precise values of the increments in diameter, the thread pitch, the rake angle of the tap cutting edge, and other geometric parameters of the tap sections must be chosen, in consideration of the workpiece material and thickness, to prevent stripping of threads during the drilling/tapping operation. A stop-lip or shoulder at the shank end of the widest tap section prevents further passage of the tool bit through the hole.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumuluru, Jaya Shankar; McCulloch, Richard Chet James
In this work a new hybrid genetic algorithm was developed which combines a rudimentary adaptive steepest-ascent hill climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent), computational resources are conserved and the solution converges rapidly when compared to either algorithm alone. In genetic algorithms, natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest-ascent algorithm, each variable is perturbed by a small amount and the variable that caused the most improvement is incremented by a small step. If the direction of most benefit is exactly opposite of the previous direction with the most benefit, then the step size is reduced by a factor of 2; thus the step size adapts to the terrain. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features such as bounding the solution space and weighting the objective functions individually are also built into the interface. The algorithm developed was tested to optimize the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature) pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm when compared to MATLAB's native evolutionary optimization tool.
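The adaptive steepest-ascent component described above can be sketched as follows, assuming a maximization problem over a list of real variables; the function name and constants are illustrative, not the MATLAB implementation:

```python
def adaptive_ascent(f, x, step=0.1, iters=200):
    """Rudimentary adaptive steepest-ascent hill climbing, as described:
    perturb each variable by a small amount, move the variable that
    improves the objective f the most, and halve the step size whenever
    the best direction exactly reverses the previous move."""
    prev_move = None                     # (variable index, sign) of last move
    for _ in range(iters):
        fx = f(x)
        best = None                      # (gain, index, sign)
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                gain = f(trial) - fx
                if gain > 0 and (best is None or gain > best[0]):
                    best = (gain, i, sign)
        if best is None:
            break                        # no improving direction at this step size
        _, i, sign = best
        if prev_move == (i, -sign):      # direction reversed -> refine the step
            step /= 2.0
        x[i] += sign * step
        prev_move = (i, sign)
    return x
```

In the hybrid scheme, a routine like this would polish candidates produced by the evolutionary algorithm, which supplies the global exploration that pure hill climbing lacks.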
Adaptive management on public lands in the United States: commitment or rhetoric?
William H. Moir; William M. Block
2001-01-01
Adaptive management (AM) is the process of implementing land management activities in incremental steps and evaluating whether desired outcomes are being achieved at each step. If conditions deviate substantially from predictions, management activities are adjusted to achieve the desired outcomes. Thus, AM is a kind of monitoring, an activity that land management...
Pre-eruptive magmatic processes re-timed using a non-isothermal approach to magma chamber dynamics.
Petrone, Chiara Maria; Bugatti, Giuseppe; Braschi, Eleonora; Tommasini, Simone
2016-10-05
Constraining the timescales of pre-eruptive magmatic processes in active volcanic systems is paramount to understand magma chamber dynamics and the triggers for volcanic eruptions. Temporal information of magmatic processes is locked within the chemical zoning profiles of crystals but can be accessed by means of elemental diffusion chronometry. Mineral compositional zoning testifies to the occurrence of substantial temperature differences within magma chambers, which often bias the estimated timescales in the case of multi-stage zoned minerals. Here we propose a new Non-Isothermal Diffusion Incremental Step model to take into account the non-isothermal nature of pre-eruptive processes, deconstructing the main core-rim diffusion profiles of multi-zoned crystals into different isothermal steps. The Non-Isothermal Diffusion Incremental Step model represents a significant improvement in the reconstruction of crystal lifetime histories. Unravelling stepwise timescales at contrasting temperatures provides a novel approach to constraining pre-eruptive magmatic processes and greatly increases our understanding of magma chamber dynamics.
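The core arithmetic of a stepwise, non-isothermal treatment can be illustrated with an Arrhenius diffusivity and the scaling t ≈ x²/D applied per isothermal step. This is a toy sketch of the idea in the abstract, not the authors' NIDIS code; D0, Ea and the step fractions are placeholder values:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_D(T_kelvin, D0, Ea):
    """Arrhenius temperature dependence of the diffusion coefficient."""
    return D0 * math.exp(-Ea / (R * T_kelvin))

def stepwise_timescale(profile_width, steps, D0, Ea):
    """Deconstruct a core-rim diffusion profile into isothermal steps:
    for each (temperature, fraction) pair, assign that fraction of the
    observed diffusive broadening to the step and return the residence
    time per step from the scaling x^2 ~ D * t."""
    times = []
    for T, fraction in steps:
        x2 = (fraction * profile_width) ** 2
        times.append(x2 / arrhenius_D(T, D0, Ea))
    return times
```

The point the abstract makes follows directly from the exponential in `arrhenius_D`: attributing the same share of a zoning profile to a hotter step yields a much shorter timescale, so ignoring temperature changes biases the recovered crystal residence times.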
A Nonlinear Model for Transient Responses from Light-Adapted Wolf Spider Eyes
DeVoe, Robert D.
1967-01-01
A quantitative model is proposed to test the hypothesis that the dynamics of nonlinearities in retinal action potentials from light-adapted wolf spider eyes may be due to delayed asymmetries in responses of the visual cells. For purposes of calculation, these delayed asymmetries are generated in an analogue by a time-variant resistance. It is first shown that for small incremental stimuli, the linear behavior of such a resistance describes peaking and low frequency phase lead in frequency responses of the eye to sinusoidal modulations of background illumination. It also describes the overshoots in linear step responses. It is next shown that the analogue accounts for nonlinear transient and short term DC responses to large positive and negative step stimuli and for the variations in these responses with changes in degree of light adaptation. Finally, a physiological model is proposed in which the delayed asymmetries in response are attributed to delayed rectification by the visual cell membrane. In this model, cascaded chemical reactions may serve to transduce visual stimuli into membrane resistance changes. PMID:6056011
One Giant Leap for Categorizers: One Small Step for Categorization Theory
Smith, J. David; Ell, Shawn W.
2015-01-01
We explore humans’ rule-based category learning using analytic approaches that highlight their psychological transitions during learning. These approaches confirm that humans show qualitatively sudden psychological transitions during rule learning. These transitions contribute to the theoretical literature contrasting single vs. multiple category-learning systems, because they seem to reveal a distinctive learning process of explicit rule discovery. A complete psychology of categorization must describe this learning process, too. Yet extensive formal-modeling analyses confirm that a wide range of current (gradient-descent) models cannot reproduce these transitions, including influential rule-based models (e.g., COVIS) and exemplar models (e.g., ALCOVE). It is an important theoretical conclusion that existing models cannot explain humans’ rule-based category learning. The problem these models have is the incremental algorithm by which learning is simulated. Humans descend no gradient in rule-based tasks. Very different formal-modeling systems will be required to explain humans’ psychology in these tasks. An important next step will be to build a new generation of models that can do so. PMID:26332587
Mario, F M; Graff, S K; Spritzer, P M
2017-04-01
To examine the effect of habitual physical activity (PA) on the metabolic and hormonal profiles of women with polycystic ovary syndrome. Anthropometric, metabolic and hormonal assessment and determination of habitual PA levels with a digital pedometer were evaluated in 84 women with PCOS and 67 age- and body mass index (BMI)-matched controls. PA status was defined according to number of steps (≥7500 steps, active, or <7500 steps, sedentary). BMI was lower in active women from both groups. Active PCOS women presented lower waist circumference (WC) and lipid accumulation product (LAP) values versus sedentary PCOS women. In the control group, active women also had lower WC, lower values for fasting and 120-min insulin, and lower LAP than sedentary controls. In the PCOS group, androgen levels were lower in active versus sedentary women (p = 0.001). In the control group, free androgen index (FAI) was also lower in active versus sedentary women (p = 0.018). Homeostasis model assessment of insulin resistance and 2000 daily step increments were independent predictors of FAI. Each 2000 daily step increment was associated with a decrease of 1.07 in FAI. Habitual PA was associated with a better anthropometric and androgenic profile in PCOS.
NASA Astrophysics Data System (ADS)
Frey, M. P.; Stamm, C.; Schneider, M. K.; Reichert, P.
2011-12-01
A distributed hydrological model was used to simulate the distribution of fast runoff formation as a proxy for critical source areas for herbicide pollution in a small agricultural catchment in Switzerland. We tested to what degree predictions based on prior knowledge without local measurements could be improved by relying on observed discharge. This learning process consisted of five steps: For the prior prediction (step 1), knowledge of the model parameters was coarse and predictions were fairly uncertain. In the second step, discharge data were used to update the prior parameter distribution. Effects of uncertainty in input data and model structure were accounted for by an autoregressive error model. This step decreased the width of the marginal distributions of parameters describing the lower boundary (percolation rates) but hardly affected soil hydraulic parameters. Residual analysis (step 3) revealed model structure deficits. We modified the model, and in the subsequent Bayesian updating (step 4) the widths of the posterior marginal distributions were reduced for most parameters compared to those of the prior. This incremental procedure led to a strong reduction in the uncertainty of the spatial prediction. Thus, despite using only spatially integrated data (discharge), the improved model structure can be expected to improve the spatially distributed predictions as well. The fifth step consisted of a test with independent spatial data on herbicide losses and revealed ambiguous results. The comparison depended critically on the ratio of event to pre-event water that was discharged. This ratio cannot be estimated from hydrological data only. The results demonstrate that the value of local data is strongly dependent on a correct model structure. An iterative procedure of Bayesian updating, model testing, and model modification is suggested.
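The Bayesian updating at the heart of steps 2 and 4 can be sketched as a minimal grid-based posterior update. The percolation-rate grid, the Gaussian likelihood, and the observed value below are hypothetical stand-ins for the full distributed-model calibration described in the abstract.

```python
import math

def grid_posterior(prior, likelihood, thetas):
    """One Bayesian updating step on a discrete parameter grid.

    `prior` holds weights over the candidate values `thetas`;
    `likelihood(theta)` scores the observed discharge record under theta.
    """
    weights = [p * likelihood(t) for p, t in zip(prior, thetas)]
    z = sum(weights)
    return [w / z for w in weights]

# Hypothetical grid of percolation rates; a single synthetic observation
# narrows the flat prior around the best-supported value.
thetas = [0.1 * i for i in range(1, 11)]
prior = [1.0 / len(thetas)] * len(thetas)
obs = 0.42  # synthetic discharge-derived estimate (illustrative)
likelihood = lambda t: math.exp(-0.5 * ((obs - t) / 0.1) ** 2)
post = grid_posterior(prior, likelihood, thetas)
print(round(max(zip(post, thetas))[1], 1))  # posterior mode: 0.4
```

Repeating this update after each model revision mirrors the iterative procedure of updating, testing, and modification that the study recommends.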
Teren, Andrej; Zachariae, Silke; Beutner, Frank; Ubrich, Romy; Sandri, Marcus; Engel, Christoph; Löffler, Markus; Gielen, Stephan
2016-07-01
Cardiorespiratory fitness is a well-established independent predictor of cardiovascular health. However, the relevance of alternative exercise and non-exercise tests for cardiorespiratory fitness assessment in large cohorts has not been studied in detail. We aimed to evaluate the YMCA-step test and the Veterans Specific Activity Questionnaire (VSAQ) for the estimation of cardiorespiratory fitness in the general population. One hundred and five subjects answered the VSAQ, performed the YMCA-step test and a maximal cardiopulmonary exercise test (CPX), and gave BORG ratings for both exercise tests (BORGSTEP, BORGCPX). Correlations of peak oxygen uptake on a treadmill (VO2_PEAK) with VSAQ, BORGSTEP, one-minute post-exercise heartbeat count, and peak oxygen uptake during the step test (VO2_STEP) were determined. Moreover, the incremental values of the questionnaire and the step test in addition to other fitness-related parameters were evaluated using block-wise hierarchical regression analysis. Eighty-six subjects completed the step test according to the protocol. For completers, correlations of VO2_PEAK with the age- and gender-adjusted VSAQ, heartbeat count and VO2_STEP were 0.67, 0.63 and 0.49, respectively. However, using hierarchical regression analysis, age, gender and body mass index already explained 68.8% of the variance of VO2_PEAK, while the additional benefit of VSAQ was rather low (3.4%). The inclusion of BORGSTEP, heartbeat count and VO2_STEP increased R² by a further 2.2%, 3.3% and 5.6%, respectively, yielding a total R² of 83.3%. Neither VSAQ nor the YMCA-step test contributes sufficiently to the assessment of cardiorespiratory fitness in population-based studies. © The European Society of Cardiology 2015.
Mortelliti, Caroline L; Mortelliti, Anthony J
2016-08-01
To elucidate the relatively large incremental percent change (IPC) in cross-sectional area (CSA) in currently available small endotracheal tubes (ETTs), and to make recommendations for smaller incremental changes in CSA in these smaller ETTs, in order to minimize iatrogenic airway injury. The CSAs of a commercially available line of ETTs were calculated, and the IPC of the CSA between consecutive-size ETTs was calculated and graphed. The average IPC in CSA of the large ETTs was applied to calculate an identical IPC in CSA for a theoretical, smaller ETT series, and the dimensions of a new theoretical series of proposed small ETTs were defined. The IPC of CSA in the larger (5.0-8.0 mm inner diameter (ID)) ETTs was 17.07%, while the IPC of CSA in the smaller ETTs (2.0-4.0 mm ID) was remarkably larger (38.08%). Applying the relatively smaller IPC of CSA from larger ETTs to a theoretical sequence of small ETTs, starting with the 2.5 mm ID ETT, suggests that intermediate sizes of small ETTs (ID 2.745 mm, 3.254 mm, and 3.859 mm) should exist. We recommend manufacturers produce additional small ETT size options at the intuitive intermediate sizes of 2.75 mm, 3.25 mm, and 3.75 mm ID in order to improve airway management for infants and small children. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
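Because CSA scales with the square of the inner diameter, the IPC between consecutive sizes reduces to (ID2/ID1)² − 1. A minimal sketch of that arithmetic over the standard 0.5 mm series (a simple illustration of the trend, not a reproduction of the paper's exact figures):

```python
import math

def csa(id_mm):
    """Cross-sectional area (mm^2) of an ETT lumen of inner diameter id_mm."""
    return math.pi * (id_mm / 2.0) ** 2

def ipc(id_small, id_large):
    """Incremental percent change in CSA between two consecutive ETT sizes."""
    return 100.0 * (csa(id_large) - csa(id_small)) / csa(id_small)

small_sizes = [2.0, 2.5, 3.0, 3.5, 4.0]
large_sizes = [5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0]
for sizes in (small_sizes, large_sizes):
    steps = [round(ipc(a, b), 2) for a, b in zip(sizes, sizes[1:])]
    print(steps)  # the same 0.5 mm step is a far larger jump at small IDs
```

The first printed list starts above 56% per step, while the last step of the large series is under 14%, which is the disparity motivating the proposed intermediate sizes.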
Single-pass incremental force updates for adaptively restrained molecular dynamics.
Singh, Krishna Kant; Redon, Stephane
2018-03-30
Adaptively restrained molecular dynamics (ARMD) allows users to perform more integration steps in wall-clock time by switching positional degrees of freedom on and off. This article presents new single-pass incremental force-update algorithms to efficiently simulate a system using ARMD. We assessed different algorithms for speedup measurements and implemented them in the LAMMPS MD package. We validated the single-pass incremental force-update algorithm on four different benchmarks using diverse pair potentials. The proposed algorithm allows us to perform simulation of a system faster than traditional MD in both NVE and NVT ensembles. Moreover, ARMD using the new single-pass algorithm speeds up the convergence of observables in wall-clock time. © 2017 Wiley Periodicals, Inc.
Testing electroexplosive devices by programmed pulsing techniques
NASA Technical Reports Server (NTRS)
Rosenthal, L. A.; Menichelli, V. J.
1976-01-01
A novel method for testing electroexplosive devices is proposed wherein capacitor discharge pulses, with increasing energy in a step-wise fashion, are delivered to the device under test. The size of the energy increment can be programmed so that firing takes place after many, or after only a few, steps. The testing cycle is automatically terminated upon firing. An energy-firing contour relating the energy required to the programmed step size describes the single-pulse firing energy and the possible sensitization or desensitization of the explosive device.
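The programmed test cycle can be sketched as a loop of capacitor pulses with step-wise increasing energy; `fires_at` is a hypothetical threshold standing in for the device under test.

```python
def programmed_pulse_test(fires_at, energy_step, start=0.0, max_pulses=1000):
    """Deliver pulses of step-wise increasing energy until the device fires.

    `fires_at` is a hypothetical stand-in for the device: the energy at which
    firing occurs.  Returns (number_of_pulses, firing_energy).
    """
    energy = start
    for n in range(1, max_pulses + 1):
        energy += energy_step          # program the next, larger pulse
        if energy >= fires_at:         # device fired: terminate the cycle
            return n, energy
    raise RuntimeError("no fire within max_pulses")

# A small step resolves the firing energy finely but needs many pulses;
# a large step fires after only a few steps.
print(programmed_pulse_test(fires_at=10.0, energy_step=0.5))   # (20, 10.0)
print(programmed_pulse_test(fires_at=10.0, energy_step=4.0))   # (3, 12.0)
```

Sweeping `energy_step` and recording the firing energy traces out the energy-firing contour the abstract describes.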
Incrementally developing a cultural and regulatory infrastructure for reusable launch vehicles
NASA Astrophysics Data System (ADS)
Simberg, Rand
1998-01-01
At this point in time, technology is perhaps the least significant barrier to the development of high-flight-rate, reusable launchers, necessary for low-cost space access. Much more daunting are the issues of regulatory regimes, needed markets, and public/investor perception of their feasibility. The approach currently the focus of the government (X-33) assumes that the necessary conditions will be in place to support a new reusable launch vehicle in the Shuttle class at the end of the X-33 development. For a number of reasons (market size, lack of confidence in the technology, regulations designed for expendable vehicles, difficulties in capital formation) such an approach may prove too rapid a leap for success. More incremental steps, both experimental and operational, could be a higher-probability path to achieving the goal of cheap access through reusables. Such incrementalism, via intermediate vehicles (possibly multi-stage) exploiting suborbital and smaller-payload markets, could provide the gradual acclimatization of the public, regulatory and investment communities to reusable launchers, and build the confidence necessary to go on to subsequent steps to provide truly cheap access, while providing lower-cost access much sooner.
Bradbury, Penelope A; Tu, Dongsheng; Seymour, Lesley; Isogai, Pierre K; Zhu, Liting; Ng, Raymond; Mittmann, Nicole; Tsao, Ming-Sound; Evans, William K; Shepherd, Frances A; Leighl, Natasha B
2010-03-03
The NCIC Clinical Trials Group conducted the BR.21 trial, a randomized placebo-controlled trial of erlotinib (an epidermal growth factor receptor tyrosine kinase inhibitor) in patients with previously treated advanced non-small cell lung cancer. This trial accrued patients between August 14, 2001, and January 31, 2003, and found that overall survival and quality of life were improved in the erlotinib arm compared with the placebo arm. However, funding restrictions limit access to erlotinib in many countries. We undertook an economic analysis of erlotinib treatment in this trial and explored different molecular and clinical predictors of outcome to determine the cost-effectiveness of treating various populations with erlotinib. Resource utilization was determined from individual patient data in the BR.21 trial database. The trial recruited 731 patients (488 in the erlotinib arm and 243 in the placebo arm). Costs arising from erlotinib treatment, diagnostic tests, outpatient visits, acute hospitalization, adverse events, lung cancer-related concomitant medications, transfusions, and radiation therapy were captured. The incremental cost-effectiveness ratio was calculated as the ratio of incremental cost (in 2007 Canadian dollars) to incremental effectiveness (life-years gained). In exploratory analyses, we evaluated the benefits of treatment in selected subgroups to determine the impact on the incremental cost-effectiveness ratio. The incremental cost-effectiveness ratio for erlotinib treatment in the BR.21 trial population was $94,638 per life-year gained (95% confidence interval = $52,359 to $429,148). The major drivers of cost-effectiveness included the magnitude of survival benefit and erlotinib cost. Subgroup analyses revealed that erlotinib may be more cost-effective in never-smokers or patients with high EGFR gene copy number.
With an incremental cost-effectiveness ratio of $94,638 per life-year gained, erlotinib treatment for patients with previously treated advanced non-small cell lung cancer is marginally cost-effective. The use of molecular predictors of benefit for targeted agents may help identify more or less cost-effective subgroups for treatment.
The matrix exponential in transient structural analysis
NASA Technical Reports Server (NTRS)
Minnetyan, Levon
1987-01-01
The primary usefulness of the presented theory is in the ability to represent the effects of high frequency linear response with accuracy, without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure, which truncates the high frequency response, the approximation in the exponential matrix solution is in the time domain. Truncating the series solution to the matrix exponential makes the solution inaccurate after a certain time; yet, up to that time the solution is extremely accurate, including all high frequency effects. By taking finite time increments, the exponential matrix solution can compute the response very accurately. Use of the exponential matrix in structural dynamics is demonstrated by simulating the free vibration response of multi-degree-of-freedom models of cantilever beams.
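A minimal sketch of the idea, assuming an undamped single-degree-of-freedom oscillator: the step operator over a finite time increment is the matrix exponential of A·Δt, approximated here by a truncated Taylor series. The frequency, step size, and truncation order are illustrative choices, not values from the report.

```python
import numpy as np

def expm_series(A, n_terms):
    """Truncated Taylor series for the matrix exponential: sum of A^k / k!."""
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, n_terms):
        term = term @ A / k
        E = E + term
    return E

# Undamped oscillator x'' = -w^2 x in first-order form z = [x, v], z' = A z;
# the exact one-step operator over a finite increment dt is exp(A * dt).
w = 2.0 * np.pi                         # 1 Hz natural frequency (illustrative)
A = np.array([[0.0, 1.0], [-w ** 2, 0.0]])
dt = 0.01
Phi = expm_series(A * dt, n_terms=20)   # series truncated after 20 terms

z = np.array([1.0, 0.0])                # unit displacement, zero velocity
for _ in range(100):                    # march one full period (1 s)
    z = Phi @ z
print(round(float(z[0]), 6))            # back near 1.0 after one period
```

Because the truncation error is controlled by ‖A·Δt‖ rather than by the highest natural frequency, the stepped solution retains the high-frequency content that a modal truncation would discard.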
ERIC Educational Resources Information Center
Cochrane, Andy; Barnes-Holmes, Dermot; Barnes-Holmes, Yvonne
2008-01-01
One hundred twenty female participants, with varying levels of spider fear were asked to complete an automated 8-step perceived-threat behavioral approach test (PT-BAT). The steps involved asking the participants if they were willing to put their hand into a number of opaque jars with an incrementally increasing risk of contact with a spider (none…
Small Group Learning: Do Group Members' Implicit Theories of Ability Make a Difference?
ERIC Educational Resources Information Center
Beckmann, Nadin; Wood, Robert E.; Minbashian, Amirali; Tabernero, Carmen
2012-01-01
We examined the impact of members' implicit theories of ability on group learning and the mediating role of several group process variables, such as goal-setting, effort attributions, and efficacy beliefs. Comparisons were between 15 groups with a strong incremental view on ability (high incremental theory groups), and 15 groups with a weak…
A Roadmap for Using Agile Development in a Traditional Environment
NASA Technical Reports Server (NTRS)
Streiffert, Barbara; Starbird, Thomas; Grenander, Sven
2006-01-01
One of the newer classes of software engineering techniques is called 'Agile Development'. In Agile Development software engineers take small implementation steps and, in some cases, they program in pairs. In addition, they develop automatic tests prior to implementing their small functional piece. Agile Development focuses on rapid turnaround, incremental planning, customer involvement and continuous integration. Agile Development is not the traditional waterfall method or even a rapid prototyping method (although this methodology is closer to Agile Development). At the Jet Propulsion Laboratory (JPL) a few groups have begun Agile Development software implementations. The difficulty with this approach becomes apparent when Agile Development is used in an organization that has specific criteria and requirements handed down for how software development is to be performed. The work at the JPL is performed for the National Aeronautics and Space Agency (NASA). Both organizations have specific requirements, rules and processes for developing software. This paper will discuss some of the initial uses of the Agile Development methodology, the spread of this method and the current status of the successful incorporation into the current JPL development policies and processes.
NASA Astrophysics Data System (ADS)
Kim, Sungjun; Park, Byung-Gook
2017-01-01
In this letter, we compare three different types of reset switching behavior in a bipolar resistive random-access memory (RRAM) system that is housed in a Ni/Si3N4/Si structure. The abrupt, step-like gradual and continuous gradual reset transitions are largely determined by the low-resistance state (LRS). For abrupt reset switching, the large conducting path shows ohmic behavior or exhibits weak nonlinear current-voltage (I-V) characteristics in the LRS. For gradual switching, including both the step-like and continuous reset types, trap-assisted direct tunneling is dominant in the low-voltage regime, while trap-assisted Fowler-Nordheim tunneling is dominant in the high-voltage regime, thus causing nonlinear I-V characteristics. More importantly, we evaluate the multi-level capabilities of the two different gradual switching types, including both step-like and continuous reset behavior, using identical and incremental voltage conditions. Finer control of the conductance level with good uniformity is achieved in continuous gradual reset switching when compared to that in step-like gradual reset switching. For continuous reset switching, a single conducting path, which initially has a tunneling gap, gradually responds to pulses of identical amplitude, while for step-like reset switching, the multiple conducting paths only respond to incremental pulses to obtain effective multi-level states.
Multi-site Stochastic Simulation of Daily Streamflow with Markov Chain and KNN Algorithm
NASA Astrophysics Data System (ADS)
Mathai, J.; Mujumdar, P.
2017-12-01
A key focus of this study is to develop a method which is physically consistent with the hydrologic processes and can capture short-term characteristics of the daily hydrograph as well as the correlation of streamflow in temporal and spatial domains. In complex water resource systems, flow fluctuations at small time intervals require that discretisation be done at small time scales such as daily scales. Also, simultaneous generation of synthetic flows at different sites in the same basin is required. We propose a method to equip water managers with a streamflow generator within a stochastic streamflow simulation framework. The motivation for the proposed method is to generate sequences that extend beyond the variability represented in the historical record of streamflow time series. The method has two steps: In step 1, daily flow is generated independently at each station by a two-state Markov chain, with rising limb increments randomly sampled from a Gamma distribution and the falling limb modelled as exponential recession, and in step 2, the streamflow generated in step 1 is input to a nonparametric K-nearest neighbor (KNN) time series bootstrap resampler. The KNN model, being data driven, does not require assumptions on the dependence structure of the time series. A major limitation of KNN based streamflow generators is that they do not produce new values, but merely reshuffle the historical data to generate realistic streamflow sequences. However, daily flow generated using the Markov chain approach is capable of generating a rich variety of streamflow sequences. Furthermore, the rising and falling limbs of the daily hydrograph represent different physical processes, and hence they need to be modelled individually. Thus, our method combines the strengths of the two approaches. We show the utility of the method and improvement over the traditional KNN by simulating daily streamflow sequences at 7 locations in the Godavari River basin in India.
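Step 1 of the method can be sketched as follows. The transition probabilities, Gamma parameters, and recession constant are illustrative placeholders rather than calibrated values, and the step-2 KNN bootstrap is omitted.

```python
import random

def generate_daily_flow(n_days, p_rise=0.3, p_stay_rising=0.5,
                        gamma_shape=2.0, gamma_scale=5.0,
                        recession_k=0.85, q0=10.0, seed=42):
    """Step-1 sketch: two-state (rising/falling) Markov chain for daily flow.

    Rising-limb increments come from a Gamma distribution; the falling limb
    decays as an exponential recession Q[t+1] = k * Q[t].  All parameter
    values are illustrative, not calibrated to any basin.
    """
    rng = random.Random(seed)
    q, rising, flows = q0, False, []
    for _ in range(n_days):
        # Markov transition between the rising and falling states
        p = p_stay_rising if rising else p_rise
        rising = rng.random() < p
        if rising:
            q += rng.gammavariate(gamma_shape, gamma_scale)  # rising limb
        else:
            q *= recession_k                                 # recession
        flows.append(q)
    return flows

flows = generate_daily_flow(365)
print(len(flows), min(flows) > 0)   # one synthetic year of positive flows
```

In the full method, sequences like this (one per station) would then feed the KNN resampler, which restores the observed spatial and temporal dependence structure.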
Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation
NASA Astrophysics Data System (ADS)
Bedi, Amrit Singh; Rajawat, Ketan
2018-05-01
Stochastic network optimization problems entail finding resource allocation policies that are optimum on an average but must be designed in an online fashion. Such problems are ubiquitous in communication networks, where resources such as energy and bandwidth are divided among nodes to satisfy certain long-term objectives. This paper proposes an asynchronous incremental dual descent resource allocation algorithm that utilizes delayed stochastic gradients for carrying out its updates. The proposed algorithm is well-suited to heterogeneous networks as it allows the computationally-challenged or energy-starved nodes to, at times, postpone the updates. The asymptotic analysis of the proposed algorithm is carried out, establishing dual convergence under both constant and diminishing step sizes. It is also shown that with constant step size, the proposed resource allocation policy is asymptotically near-optimal. An application involving multi-cell coordinated beamforming is detailed, demonstrating the usefulness of the proposed algorithm.
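A toy sketch of the update rule: node gradients arrive a few iterations late, and the dual variable takes projected ascent steps with a constant step size. The two-node problem below (dual optimum at the mean of `a`) is purely illustrative, not the paper's network model.

```python
import random
from collections import deque

def delayed_dual_ascent(a, steps=5000, alpha=0.01, delay=3, seed=1):
    """Incremental dual ascent driven by delayed stochastic gradients.

    Node i supplies g_i(lam) = a[i] - lam + noise, so the dual optimum is
    lam* = mean(a); gradients are applied `delay` iterations late, mimicking
    computationally-challenged nodes that postpone their updates.
    """
    rng = random.Random(seed)
    lam, pending = 0.0, deque()
    for t in range(steps):
        i = t % len(a)                       # incremental visit over nodes
        pending.append(a[i] - lam + rng.gauss(0.0, 0.1))
        if len(pending) > delay:
            g = pending.popleft()            # stale gradient, computed earlier
            lam = max(0.0, lam + alpha * g)  # projected dual ascent step
    return lam

lam_star = delayed_dual_ascent([2.0, 4.0])
print(round(lam_star, 2))   # settles near mean(a) = 3.0
```

With a constant step size the iterate hovers near the optimum rather than converging exactly, which is the near-optimality behavior the analysis establishes.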
Analysis of residual stress state in sheet metal parts processed by single point incremental forming
NASA Astrophysics Data System (ADS)
Maaß, F.; Gies, S.; Dobecki, M.; Brömmelhoff, K.; Tekkaya, A. E.; Reimers, W.
2018-05-01
The mechanical properties of formed metal components are highly affected by the prevailing residual stress state. A selective induction of residual compressive stresses in the component can improve product properties such as the fatigue strength. By means of single point incremental forming (SPIF), the residual stress state can be influenced by adjusting the process parameters during the manufacturing process. To achieve a fundamental understanding of the residual stress formation caused by the SPIF process, a valid numerical process model is essential. Within the scope of this paper the significance of kinematic hardening effects on the determined residual stress state is presented based on numerical simulations. The effect of the unclamping step after the manufacturing process is also analyzed. An average deviation of the residual stress amplitudes in the clamped and unclamped condition of 18% reveals that the unclamping step needs to be considered to reach a high numerical prediction quality.
Dynamic Failure of Materials. Volume 1 - Experiments and Analyses
1998-11-01
… initial increments of voids do not lead to substantial relaxation of stress; in this case, condition (8.1) gives equation (8.10) … enough strain steps to define the process accurately. At each strain step, a combined Newton-Raphson and regula falsi solution technique (multiple trials) … laser surgery. Clinical studies have demonstrated that, for some applications, surgical lasers are superior to conventional surgical procedures …
Planning paths through a spatial hierarchy - Eliminating stair-stepping effects
NASA Technical Reports Server (NTRS)
Slack, Marc G.
1989-01-01
Stair-stepping effects are a result of the loss of spatial continuity resulting from the decomposition of space into a grid. This paper presents a path planning algorithm which eliminates stair-stepping effects induced by the grid-based spatial representation. The algorithm exploits a hierarchical spatial model to efficiently plan paths for a mobile robot operating in dynamic domains. The spatial model and path planning algorithm map to a parallel machine, allowing the system to operate incrementally, thereby accounting for unexpected events in the operating space.
Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.
Zhao, Dongfang; Yang, Li
2009-01-01
Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high dimensional space. These methods often share the first step, which defines neighbor points of every data point by building a connected neighborhood graph so that all data points can be embedded to a single coordinate system. These methods are required to work incrementally for dimensionality reduction in many applications. Because the input data stream may be under-sampled or skewed from time to time, building a connected neighborhood graph is crucial to the success of incremental data embedding using these methods. This paper presents algorithms for updating k-edge-connected and k-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pair shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low dimensional configurations of high dimensional data under various data distributions.
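The core incremental idea — folding one new point into the all-pair shortest distances instead of recomputing them from scratch — can be sketched as below. The matrix representation and the toy graph are illustrative; the paper's algorithms additionally maintain k-connectivity and handle deletions.

```python
def add_point_update_distances(dist, new_edges):
    """Fold one new point into an all-pairs shortest-distance matrix.

    `dist` is the current n x n geodesic-distance matrix on the neighborhood
    graph; `new_edges` maps existing node index -> edge length to the new
    point.  Sketch of the incremental alternative to a full recomputation.
    """
    n = len(dist)
    # Shortest distance from the new node to each old node (one hop plus old
    # shortest paths; dist[u][u] == 0 covers the direct edges themselves).
    d_new = [min(w + dist[u][i] for u, w in new_edges.items())
             for i in range(n)]
    # Old pairs may become shorter by routing through the new node.
    for i in range(n):
        for j in range(n):
            dist[i][j] = min(dist[i][j], d_new[i] + d_new[j])
    # Grow the matrix to (n + 1) x (n + 1).
    for i, row in enumerate(dist):
        row.append(d_new[i])
    dist.append(d_new + [0.0])
    return dist

# Path graph 0-1-2 (unit edges); a new point linked to 0 and 2 shortcuts them.
dist = [[0.0, 1.0, 2.0], [1.0, 0.0, 1.0], [2.0, 1.0, 0.0]]
add_point_update_distances(dist, {0: 0.5, 2: 0.5})
print(dist[0][2])   # 1.0: old pair rerouted through the new point
```

A shortest path touches the new node at most once, so the two relaxation passes above suffice; no full Floyd-Warshall sweep is needed per insertion.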
History Matters: Incremental Ontology Reasoning Using Modules
NASA Astrophysics Data System (ADS)
Cuenca Grau, Bernardo; Halaschek-Wiener, Christian; Kazakov, Yevgeny
The development of ontologies involves continuous but relatively small modifications. Existing ontology reasoners, however, do not take advantage of the similarities between different versions of an ontology. In this paper, we propose a technique for incremental reasoning—that is, reasoning that reuses information obtained from previous versions of an ontology—based on the notion of a module. Our technique does not depend on a particular reasoning calculus and thus can be used in combination with any reasoner. We have applied our results to incremental classification of OWL DL ontologies and found significant improvement over regular classification time on a set of real-world ontologies.
An adaptive scale factor based MPPT algorithm for changing solar irradiation levels in outer space
NASA Astrophysics Data System (ADS)
Kwan, Trevor Hocksun; Wu, Xiaofeng
2017-03-01
Maximum power point tracking (MPPT) techniques are popularly used for maximizing the output of solar panels by continuously tracking the maximum power point (MPP) of their P-V curves, which depends both on the panel temperature and the input insolation. Various MPPT algorithms have been studied in the literature, including perturb and observe (P&O), hill climbing, incremental conductance, fuzzy logic control and neural networks. This paper presents an algorithm which improves the MPP tracking performance by adaptively scaling the DC-DC converter duty cycle. The principle of the proposed algorithm is to detect the oscillation by checking the sign (i.e., direction) of the duty cycle perturbation between the current and previous time steps. If the signs differ, an oscillation is present, and the DC-DC converter duty cycle perturbation is subsequently scaled down by a constant factor. By repeating this process, the steady state oscillations become negligibly small, which subsequently allows for a smooth steady state MPP response. To verify the proposed MPPT algorithm, a simulation involving irradiance levels that are typically encountered in outer space is conducted. Simulation and experimental results show that the proposed algorithm is fast and stable in comparison to not only the conventional fixed step counterparts, but also to previous variable step size algorithms.
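The oscillation-detection rule can be sketched on top of a plain perturb-and-observe loop; the quadratic P-V curve and all numeric settings are illustrative stand-ins for a real panel and converter.

```python
def adaptive_po_mppt(power_of_duty, d0=0.5, step0=0.05, scale=0.5,
                     min_step=1e-4, iterations=60):
    """Adaptive-step perturb-and-observe MPPT (sketch of the scaling idea).

    When the sign of the duty-cycle perturbation flips between consecutive
    steps, an oscillation around the MPP is assumed and the step is scaled
    down by a constant factor.  `power_of_duty` stands in for the measured
    panel power as a function of converter duty cycle.
    """
    d, step = d0, step0
    p_prev = power_of_duty(d)
    direction, prev_direction = 1, 1
    for _ in range(iterations):
        d += direction * step
        p = power_of_duty(d)
        # Standard P&O: keep going if power rose, reverse otherwise.
        if p < p_prev:
            direction = -direction
        # Sign flip between consecutive perturbations => oscillation detected.
        if direction != prev_direction:
            step = max(min_step, step * scale)
        prev_direction, p_prev = direction, p
    return d

# Toy P-V curve with its maximum at d = 0.62 (illustrative only).
mpp = adaptive_po_mppt(lambda d: -(d - 0.62) ** 2)
print(round(mpp, 3))   # settles near 0.62
```

Each detected sign flip halves the perturbation, so the steady-state chatter around the MPP shrinks geometrically instead of persisting at the initial step size.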
DYCAST: A finite element program for the crash analysis of structures
NASA Technical Reports Server (NTRS)
Pifko, A. B.; Winter, R.; Ogilvie, P.
1987-01-01
DYCAST is a nonlinear structural dynamic finite element computer code developed for crash simulation. The element library contains stringers, beams, membrane skin triangles, plate bending triangles and spring elements. Changing stiffnesses in the structure are accounted for by plasticity and very large deflections. Material nonlinearities are accommodated by one of three options: elastic-perfectly plastic, elastic-linear hardening plastic, or elastic-nonlinear hardening plastic of the Ramberg-Osgood type. Geometric nonlinearities are handled in an updated Lagrangian formulation by reforming the structure into its deformed shape after small time increments while accumulating deformations, strains, and forces. The nonlinearities due to combined loadings are maintained, and stiffness variation due to structural failures is computed. Numerical time integrators available are fixed-step central difference, modified Adams, Newmark-beta, and Wilson-theta. The last three have a variable time step capability, which is controlled internally by a solution convergence error measure. Other features include: multiple time-load history tables to subject the structure to time dependent loading; gravity loading; initial pitch, roll, yaw, and translation of the structural model with respect to the global system; a bandwidth optimizer as a pre-processor; and deformed plots and graphics as post-processors.
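The fixed-step central-difference option can be illustrated with a minimal linear, undamped single-degree-of-freedom sketch; DYCAST itself applies the scheme to the full nonlinear, multi-DOF case.

```python
import math

def central_difference(m, k, x0, v0, dt, n_steps):
    """Fixed-step central-difference integration of m*x'' + k*x = 0.

    A linear, undamped sketch of the explicit integrator family; the start-up
    step approximates x(-dt) from the initial conditions.
    """
    x_prev = x0 - v0 * dt + 0.5 * (-k * x0 / m) * dt ** 2  # start-up step
    x = x0
    history = [x0]
    for _ in range(n_steps):
        a = -k * x / m                        # acceleration at the current step
        x_next = 2 * x - x_prev + a * dt ** 2 # central-difference update
        x_prev, x = x, x_next
        history.append(x)
    return history

# One period of a unit oscillator (w = 1, T = 2*pi); dt is far below the
# explicit stability limit dt < 2/w.
xs = central_difference(m=1.0, k=1.0, x0=1.0, v0=0.0,
                        dt=2.0 * math.pi / 1000.0, n_steps=1000)
print(round(xs[-1], 3))   # ~1.0: displacement returns after a full period
```

Being explicit, the scheme needs no stiffness factorization per step, which is why a fixed small step suits the short, violent transients of crash simulation.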
Chen, Bo-Ru; Yeh, An-Chou; Yeh, Jien-Wei
2016-02-29
In this study, the grain boundary evolution of equiatomic CoCrFeMnNi, CoCrFeNi, and FeCoNi alloys after one-step recrystallization was investigated. The special boundary fraction and twin density of these alloys were evaluated by electron backscatter diffraction analysis. Among the three alloys tested, FeCoNi exhibited the highest special boundary fraction and twin density after one-step recrystallization. The special boundary increment after one-step recrystallization was mainly affected by grain boundary velocity, while twin density was mainly affected by average grain boundary energy and twin boundary energy.
NASA Astrophysics Data System (ADS)
D'Archivio, Angelo Antonio; Maggi, Maria Anna; Odoardi, Antonella; Santucci, Sandro; Passacantando, Maurizio
2018-02-01
Multi-walled carbon nanotubes (MWCNTs), because of their small size and large available surface area, are potentially efficient sorbents for the extraction of water solutes. Dispersion of MWCNTs in aqueous medium is suitable for adsorbing organic contaminants from small sample volumes, but the recovery of the suspended sorbent for subsequent re-use is a critical step that makes this method inapplicable in large-scale water-treatment technologies. To overcome this problem, we propose here MWCNTs grown on silicon supports and investigate, on a small-volume scale, their adsorption properties towards triazine herbicides dissolved in water. The adsorption efficiency of the supported MWCNTs has been tested on seven triazine herbicides, which are emerging water contaminants in Europe and the USA because of their massive use, persistence in soils, and potential risks for aquatic organisms and human health. The investigated compounds, in spite of their common molecular skeleton, cover a relatively large property range in terms of both solubility in water and hydrophilicity/hydrophobicity. The functionalisation of MWCNTs carried out by acidic oxidation, apart from increasing the wettability of the material, results in a better adsorption performance. Increasing the functionalisation time from 17 to 60 h progressively increases the extraction of all seven pesticides and produces a moderate increment of selectivity.
Rise and fall of political complexity in island South-East Asia and the Pacific.
Currie, Thomas E; Greenhill, Simon J; Gray, Russell D; Hasegawa, Toshikazu; Mace, Ruth
2010-10-14
There is disagreement about whether human political evolution has proceeded through a sequence of incremental increases in complexity, or whether larger, non-sequential increases have occurred. The extent to which societies have decreased in complexity is also unclear. These debates have continued largely in the absence of rigorous, quantitative tests. We evaluated six competing models of political evolution in Austronesian-speaking societies using phylogenetic methods. Here we show that in the best-fitting model political complexity rises and falls in a sequence of small steps. This is closely followed by another model in which increases are sequential but decreases can be either sequential or in bigger drops. The results indicate that large, non-sequential jumps in political complexity have not occurred during the evolutionary history of these societies. This suggests that, despite the numerous contingent pathways of human history, there are regularities in cultural evolution that can be detected using computational phylogenetic methods.
Determination of Small Animal Long Bone Properties Using Densitometry
NASA Technical Reports Server (NTRS)
Breit, Gregory A.; Goldberg, BethAnn K.; Whalen, Robert T.; Hargens, Alan R. (Technical Monitor)
1996-01-01
Assessment of bone structural property changes due to loading regimens or pharmacological treatment typically requires destructive mechanical testing and sectioning. Our group has accurately and non-destructively estimated three-dimensional cross-sectional areal properties (principal moments of inertia, Imax and Imin, and principal angle, Theta) of human cadaver long bones from pixel-by-pixel analysis of three non-coplanar densitometry scans. Because the scanner beam width is on the order of typical small animal diaphyseal diameters, applying this technique to high-resolution scans of rat long bones necessitates additional processing to minimize errors induced by beam smearing, such as dependence on sample orientation and overestimation of Imax and Imin. We hypothesized that these errors are correctable by digital image processing of the raw scan data. In all cases, four scans, using only the low-energy data (Hologic QDR-1000W, small animal mode), are averaged to increase image signal-to-noise ratio. Raw scans are additionally processed by interpolation, deconvolution by a filter derived from scanner beam characteristics, and masking using a variable threshold based on image dynamic range. To assess accuracy, we scanned an aluminum step phantom at 12 orientations over a range of 180 deg about the longitudinal axis, in 15 deg increments. The phantom dimensions (2.5, 3.1, 3.8 mm x 4.4 mm; Imin/Imax: 0.33-0.74) were comparable to the dimensions of a rat femur, which was also scanned. Cross-sectional properties were determined at 0.25 mm increments along the length of the phantom and femur. The table shows the average error (+/- SD) from theory of Imax, Imin, and Theta over the 12 orientations, calculated from raw and fully processed phantom images, as well as standard deviations about the mean for the femur scans. Processing of phantom scans increased agreement with theory, indicating improved accuracy. 
Smaller standard deviations with processing indicate increased precision and repeatability. Standard deviations for the femur are consistent with those of the phantom. We conclude that in conjunction with digital image enhancement, densitometry scans are suitable for non-destructive determination of areal properties of small animal bones of comparable size to our phantom, allowing prediction of Imax and Imin within 2.5% and Theta within a fraction of a degree. This method represents a considerable extension of current methods of analyzing bone tissue distribution in small animal bones.
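The pixel-based cross-sectional property calculation described above amounts to computing second moments of area and principal axes from the set of filled pixels. A minimal sketch of that generic textbook calculation (not the authors' full processing pipeline, which also handles deconvolution and masking):

```python
import math

def principal_moments(pixels, pixel_area=1.0):
    """Imax, Imin, and principal angle Theta (degrees) from pixel centers."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n          # centroid
    cy = sum(y for _, y in pixels) / n
    ixx = sum((y - cy) ** 2 for _, y in pixels) * pixel_area
    iyy = sum((x - cx) ** 2 for x, _ in pixels) * pixel_area
    ixy = sum((x - cx) * (y - cy) for x, y in pixels) * pixel_area
    avg = 0.5 * (ixx + iyy)
    radius = math.hypot(0.5 * (ixx - iyy), ixy)  # Mohr's circle radius
    theta = 0.5 * math.degrees(math.atan2(2.0 * ixy, ixx - iyy))
    return avg + radius, avg - radius, theta
```

For a 4 x 2 block of unit pixels this returns the expected principal moments of a rectangle (Imax = 10, Imin = 2).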
NASA Astrophysics Data System (ADS)
Deng, J.; Zhou, L.; Dong, Y.; Sanford, R. A.; Shechtman, L. A.; Alcalde, R.; Werth, C. J.; Fouke, B. W.
2017-12-01
Microorganisms in nature have evolved in response to a variety of environmental stresses, including gradients in pH, flow and chemistry. While environmental stresses are generally considered to be the driving force of adaptive evolution, the impact and extent of any specific stress needed to drive such changes has not been well characterized. In this study, a microfluidic diffusion chamber (MDC) and a batch culturing system were used to systematically study the effects of continuous versus step-wise stress increments on adaptation of E. coli to the antibiotic ciprofloxacin. In the MDC, a diffusion gradient of ciprofloxacin was established across a microfluidic well array to microscopically observe changes in Escherichia coli strain 307 replication and migration patterns that would indicate emergence of resistance due to genetic mutations. Cells recovered from the MDC acquired resistance of only 50 times the original minimum inhibitory concentration (MICoriginal) of ciprofloxacin, although minimum exposure concentrations were over 80 × MICoriginal by the end of the experiment. In complementary batch experiments, E. coli 307 was exposed to step-wise daily increases of ciprofloxacin at rates equivalent to 0.1×, 0.2×, 0.4× or 0.8× MICoriginal per day. Over a period of 18 days, E. coli cells were able to acquire resistance of up to 225 × MICoriginal, with exposure to ciprofloxacin concentrations of only up to 14.9 × MICoriginal. The different levels of acquired resistance in the continuous MDC versus step-wise batch increment experiments suggest that the intrinsic rate of E. coli adaptation was exceeded in the MDC, while the step-wise experiments favored adaptation to the highest ciprofloxacin concentrations. Genomic analyses of E. coli DNA extracted from the microfluidic cell and batch cultures indicated four single nucleotide polymorphism (SNP) mutations at amino acids 82, 83 and 87 in the gyrA gene. 
The progression of adaptation under the step-wise increments of ciprofloxacin indicates that the Ser83-Leu mutation gradually becomes dominant over other gyrA mutations as antibiotic resistance increases. Co-existence of the Ser83-Leu and Asp87-Gly mutations appears to provide the greatest level of resistance (i.e., 85 × to 225 × MICoriginal), and emerged only after the whole community acquired the Ser83-Leu mutation.
NASA Astrophysics Data System (ADS)
Wernicke, S.; Dang, T.; Gies, S.; Tekkaya, A. E.
2018-05-01
The trend toward a higher variety of products requires economical manufacturing processes suitable for the production of prototypes and small batches. In the case of complex hollow-shaped parts, single point incremental forming (SPIF) represents a highly flexible process. The flexibility of this process, however, comes at the cost of a very long process time. To decrease the process time, a new incremental forming approach with multiple forming tools is investigated. The influence of two incremental forming tools on the resulting mechanical and geometrical component properties, compared to SPIF, is presented. Sheets made of EN AW-1050A were formed into frustums of a pyramid using different tool-path strategies. Furthermore, several variations of the tool-path strategy are analyzed. A time saving of between 40% and 60% was observed, depending on the tool path and the radii of the forming tools, while the mechanical properties remained unchanged. This knowledge can increase the cost efficiency of incremental forming processes.
NASA Astrophysics Data System (ADS)
Ham, Yoo-Geun; Song, Hyo-Jong; Jung, Jaehee; Lim, Gyu-Ho
2017-04-01
This study introduces an altered version of the incremental analysis update (IAU), called the nonstationary IAU (NIAU) method, to enhance the assimilation accuracy of the IAU while retaining the continuity of the analysis. Like the IAU, the NIAU is designed to add analysis increments at every model time step to improve continuity in intermittent data assimilation. Unlike the IAU, however, the NIAU method applies time-evolved forcing to the model, computed with the forward operator. The NIAU solution is more accurate than that of the IAU, in which the analysis is performed at the start of the time window for adding the IAU forcing, because in linear systems the NIAU solution equals that of an intermittent data assimilation method at the end of the assimilation interval. To give the NIAU a filtering property, the forward operator used to propagate the increment is reconstructed from only the dominant singular vectors. The advantages of the NIAU are illustrated using the simple 40-variable Lorenz model.
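The linear-system equivalence noted above can be demonstrated with a scalar toy model x_{k+1} = a*x_k (my construction for illustration, not the authors' code): propagating the per-step forcing forward with the model before inserting it reproduces the intermittent result at the end of the window, while a constant IAU forcing does not.

```python
def intermittent(x0, inc, a, n):
    # Add the full analysis increment at the window start, then integrate
    x = x0 + inc
    for _ in range(n):
        x = a * x
    return x

def iau(x0, inc, a, n):
    # Classic IAU: constant forcing inc/n added at every step
    x = x0
    for _ in range(n):
        x = a * (x + inc / n)
    return x

def niau(x0, inc, a, n):
    # NIAU-style: per-step forcing time-evolved by the model dynamics
    x = x0
    for k in range(n):
        x = a * x + a ** (k + 1) * inc / n
    return x
```

Summing the propagated forcings gives a^n * (x0 + inc), exactly the intermittent result, whereas the constant forcing leaves a residual that grows with the window length.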
NASA Astrophysics Data System (ADS)
Frohn, Peter; Engel, Bernd; Groth, Sebastian
2018-05-01
Kinematic forming processes shape geometries through their process parameters, enabling more universal process utilization across geometric configurations. The kinematic forming process Incremental Swivel Bending (ISB) bends sheet metal strips or profiles in plane. The sequence for bending an arc increment is composed of the steps clamping, bending, force release, and feed. The bending moment is frictionally engaged by two clamping units in a laterally adjustable bending pivot. A minimum clamping force that prevents the material from slipping through the clamping units is a crucial criterion for achieving a well-defined incremental arc. Therefore, an analytic description of a single bent increment is developed in this paper. The bending moment is calculated from the uniaxial stress distribution over the profile's width, depending on the bending pivot's position. Using a Coulomb-based friction model, the necessary clamping force is described as a function of friction, offset, clamping-tool dimensions, and strip thickness, as well as material parameters. Boundaries for the uniaxial stress calculation are given as functions of friction, tool dimensions, and strip thickness. The results indicate that moving the bending pivot to an eccentric position significantly affects the process bending moment and, hence, the clamping force, which is given as a function of yield stress and hardening exponent. FE simulations validate the model with satisfactory agreement.
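A drastically simplified version of this clamping-force criterion might look as follows. The full plastic moment formula and the two-surface Coulomb friction assumption are textbook simplifications chosen for illustration, not the paper's analytic model (which also accounts for pivot offset, hardening, and tool geometry):

```python
def plastic_bending_moment(sigma_y, width, thickness):
    # Full plastic moment of a rectangular strip cross-section: sigma_y * w * t^2 / 4
    return sigma_y * width * thickness ** 2 / 4.0

def min_clamping_force(moment, mu, lever_arm):
    # Friction acts on both faces of the clamped strip; slipping is prevented
    # when 2 * mu * F * lever_arm >= bending moment
    return moment / (2.0 * mu * lever_arm)
```

For a hypothetical 20 mm x 2 mm strip with 200 MPa yield stress, mu = 0.2, and a 50 mm lever arm, this sketch gives a 4 N·m moment and a 200 N minimum clamping force.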
NASA Technical Reports Server (NTRS)
Keppenne, Christian; Vernieres, Guillaume; Rienecker, Michele; Jacob, Jossy; Kovach, Robin
2011-01-01
Satellite altimetry measurements have provided global, evenly distributed observations of the ocean surface since 1993. However, the difficulties introduced by the presence of model biases and the requirement that data assimilation systems extrapolate the sea surface height (SSH) information to the subsurface in order to estimate the temperature, salinity and currents make it difficult to optimally exploit these measurements. This talk investigates the potential of altimetry data assimilation once the biases are accounted for with an ad hoc bias estimation scheme. Either steady-state or state-dependent multivariate background-error covariances from an ensemble of model integrations are used to address the problem of extrapolating the information to the sub-surface. The GMAO ocean data assimilation system applied to an ensemble of coupled model instances, using the GEOS-5 AGCM coupled to MOM4, is used in the investigation. To model the background error covariances, the system relies on a hybrid ensemble approach in which a small number of dynamically evolved model trajectories is augmented on the one hand with past instances of the state vector along each trajectory and, on the other, with a steady-state ensemble of error estimates from a time series of short-term model forecasts. A state-dependent adaptive error-covariance localization and inflation algorithm controls how the SSH information is extrapolated to the sub-surface. A two-step predictor-corrector approach is used to assimilate future information. Independent (non-assimilated) temperature and salinity observations from Argo floats are used to validate the assimilation. A two-step projection method, in which the system first calculates a SSH increment and then projects this increment vertically onto the temperature, salt and current fields, is found to be most effective in reconstructing the sub-surface information. 
The performance of the system in reconstructing the sub-surface fields is particularly impressive for temperature, but less satisfactory for salinity.
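The second step of the projection method described above is, in essence, an ensemble regression of subsurface variables on SSH. A toy sketch of that idea (an illustration of the general technique, not the GMAO system's implementation):

```python
def regression_coeffs(ssh_members, temp_members):
    """cov(T_z, SSH) / var(SSH) for each depth level z, from an ensemble."""
    n = len(ssh_members)
    mean_ssh = sum(ssh_members) / n
    var_ssh = sum((s - mean_ssh) ** 2 for s in ssh_members) / (n - 1)
    levels = len(temp_members[0])
    coeffs = []
    for z in range(levels):
        mean_t = sum(member[z] for member in temp_members) / n
        cov = sum((member[z] - mean_t) * (s - mean_ssh)
                  for member, s in zip(temp_members, ssh_members)) / (n - 1)
        coeffs.append(cov / var_ssh)
    return coeffs

def project_increment(ssh_increment, coeffs):
    # Step 2: spread the surface increment down the water column
    return [c * ssh_increment for c in coeffs]
```

When the ensemble's subsurface temperatures co-vary linearly with SSH, the regression recovers those slopes and scales the SSH increment accordingly at each level.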
The cultural evolution of democracy: saltational changes in a political regime landscape.
Lindenfors, Patrik; Jansson, Fredrik; Sandberg, Mikael
2011-01-01
Transitions to democracy are most often considered the outcome of historical modernization processes. Socio-economic changes, such as increases in per capita GNP, education levels, urbanization and communication, have traditionally been found to be correlates or 'requisites' of democratic reform. However, transition times and the number of reform steps have not been studied comprehensively. Here we show that historically, transitions to democracy have mainly occurred through rapid leaps rather than slow and incremental transition steps, with a median time from autocracy to democracy of 2.4 years, and overnight in the reverse direction. Our results show that autocracy and democracy have acted as peaks in an evolutionary landscape of possible modes of institutional arrangements. Slow, incremental transitions have been scarce. We discuss our results in relation to the application of phylogenetic comparative methods in cultural evolution and point out that the evolving unit in this system is the institutional arrangement, not the individual country, which is instead better regarded as the 'host' for the political system.
On higher order discrete phase-locked loops.
NASA Technical Reports Server (NTRS)
Gill, G. S.; Gupta, S. C.
1972-01-01
An exact mathematical model is developed for a discrete loop of a general order particularly suitable for digital computation. The deterministic response of the loop to the phase step and the frequency step is investigated. The design of the digital filter for the second-order loop is considered. Use is made of the incremental phase plane to study the phase error behavior of the loop. The model of the noisy loop is derived and the optimization of the loop filter for minimum mean-square error is considered.
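A minimal discrete second-order loop of the kind analyzed above can be sketched as follows; the proportional-integral gains and the input are illustrative choices, not values from the paper. A second-order (type-II) loop drives the steady-state phase error from a frequency step to zero:

```python
def dpll_phase_error(freq_step, kp, ki, steps=200):
    """Phase-error history of a discrete PLL with a PI loop filter and NCO."""
    ref_phase, nco_phase, integrator = 0.0, 0.0, 0.0
    history = []
    for _ in range(steps):
        ref_phase += freq_step                # input advances at a fixed frequency
        error = ref_phase - nco_phase         # phase detector
        integrator += ki * error              # integral path of the loop filter
        nco_phase += kp * error + integrator  # NCO advances by its frequency word
        history.append(error)
    return history
```

The integrator accumulates the frequency offset, so the proportional path no longer needs a residual phase error at steady state; plotting the history in the incremental phase plane shows the damped spiral toward zero.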
Chase, R.L.
1963-05-01
An electronic fast multiplier circuit utilizing a transistor controlled voltage divider network is presented. The multiplier includes a stepped potentiometer in which solid state or transistor switches are substituted for mechanical wipers in order to obtain electronic switching that is extremely fast as compared to the usual servo-driven mechanical wipers. While this multiplier circuit operates as an approximation and in steps to obtain a voltage that is the product of two input voltages, any desired degree of accuracy can be obtained with the proper number of increments and adjustment of parameters. (AEC)
Bergstrom, Paul M.; Daly, Thomas P.; Moses, Edward I.; Patterson, Jr., Ralph W.; Schach von Wittenau, Alexis E.; Garrett, Dewey N.; House, Ronald K.; Hartmann-Siantar, Christine L.; Cox, Lawrence J.; Fujino, Donald H.
2000-01-01
A system and method is disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And, a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to a density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module, multiplies the common volumes by corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And, a dose calculation module, coupled to the common volume calculation module and the radiation transport module, for calculating radiation doses received by the target mass within the dosel volumes.
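The incremental dosel-mass step described above reduces to a weighted sum of common volumes. A minimal sketch with a hypothetical data layout (not the patented implementation):

```python
def dosel_masses(common_volumes, voxel_densities):
    """common_volumes[d][v] is the overlap volume of dosel d with voxel v."""
    masses = []
    for overlaps in common_volumes:
        # incremental dosel mass = common volume * voxel mass density, summed
        masses.append(sum(vol * rho for vol, rho in zip(overlaps, voxel_densities)))
    return masses

def dose(energy_deposited, mass):
    # absorbed dose = deposited energy / mass (Gy when J and kg are used)
    return energy_deposited / mass
```

Each dosel's mass is assembled from the voxel densities it overlaps, and the dose follows by dividing the energy deposited within the dosel by that mass.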
Stopping mechanism for capsule endoscope using electrical stimulus.
Woo, Sang Hyo; Kim, Tae Wan; Cho, Jin Ho
2010-01-01
An ingestible capsule, which has the ability to stop at certain locations in the small intestine, was designed and implemented to monitor intestinal diseases. The proposed capsule can contract the small intestine by using electrical stimuli; this contraction causes the capsule to stop when the maximum static frictional force (MSFF) is larger than the force of natural peristalsis. In vitro experiments were carried out to verify the feasibility of the capsule, and the results showed that the capsule was successfully stopped in the small intestine. Various electrodes and electrical stimulus parameters were determined on the basis of the MSFF. A moderate increment of the MSFF (12.7 +/- 4.6 gf at 5 V, 10 Hz, and 5 ms) and the maximum increment of the MSFF (56.5 +/- 9.77 gf at 20 V, 10 Hz, and 5 ms) were obtained; this force is sufficient to stop the capsule.
Building Program Models Incrementally from Informal Descriptions.
1979-10-01
specified at each step. Since the user controls the interaction, the user may determine the order in which information flows into PMB. Information is received...until only ten years ago the term "automatic programming" referred to the development of the assemblers, macro expanders, and compilers for these
Dynamic Constraint Satisfaction with Reasonable Global Constraints
NASA Technical Reports Server (NTRS)
Frank, Jeremy
2003-01-01
Previously studied theoretical frameworks for dynamic constraint satisfaction problems (DCSPs) employ a small set of primitive operators to modify a problem instance. They do not address the desire to model problems using sophisticated global constraints, and do not address efficiency questions related to incremental constraint enforcement. In this paper, we extend a DCSP framework to incorporate global constraints with flexible scope. A simple approach to incremental propagation after scope modification can be inefficient under some circumstances. We characterize the cases when this inefficiency can occur, and discuss two ways to alleviate this problem: adding rejection variables to the scope of flexible constraints, and adding new features to constraints that permit increased control over incremental propagation.
A simple method for quantitating the propensity for calcium oxalate crystallization in urine
NASA Technical Reports Server (NTRS)
Wabner, C. L.; Pak, C. Y.
1991-01-01
To assess the propensity for spontaneous crystallization of calcium oxalate in urine, the permissible increment in oxalate is calculated. The previous method required visual observation of crystallization upon the addition of oxalate; this necessitated a large volume of urine and sacrificed accuracy in defining differences between small incremental changes of added oxalate. Therefore, this method has been miniaturized, and spontaneous crystallization is detected from the depletion of radioactive oxalate. The new "micro" method demonstrated a marked decrease (p < 0.001) in the permissible increment in oxalate in urine of stone formers versus normal subjects. Moreover, crystallization inhibitors added to urine, in vitro (heparin or diphosphonate) or in vivo (potassium citrate administration), substantially increased the permissible increment in oxalate. Thus, the "micro" method has proven reliable and accurate in discriminating stone-forming from control urine and in distinguishing changes of inhibitory activity.
NASA Technical Reports Server (NTRS)
Smith, Mark S.; Bui, Trong T.; Garcia, Christian A.; Cumming, Stephen B.
2016-01-01
A pair of compliant trailing edge flaps was flown on a modified GIII airplane. Prior to flight test, multiple analysis tools of various levels of complexity were used to predict the aerodynamic effects of the flaps. Vortex lattice, full potential flow, and full Navier-Stokes aerodynamic analysis software programs were used for prediction, in addition to another program that used empirical data. After the flight-test series, lift and pitching moment coefficient increments due to the flaps were estimated from flight data and compared to the results of the predictive tools. The predicted lift increments matched flight data well for all predictive tools for small flap deflections. All tools over-predicted lift increments for large flap deflections. The potential flow and Navier-Stokes programs predicted pitching moment coefficient increments better than the other tools.
Building perceptual color maps for visualizing interval data
NASA Astrophysics Data System (ADS)
Kalvin, Alan D.; Rogowitz, Bernice E.; Pelah, Adar; Cohen, Aron
2000-06-01
In visualization, a 'color map' maps a range of data values onto a scale of colors. However, unless a color map is carefully constructed, visual artifacts can be produced. This problem has stimulated considerable interest in creating perceptually based color maps, that is, color maps where equal steps in data value are perceived as equal steps in the color map [Robertson (1988); Pizer (1981); Green (1992); Lefkowitz and Herman (1992)]. In Rogowitz and Treinish (1996, 1998) and in Bergman, Treinish and Rogowitz (1995), we demonstrated that color maps based on luminance or saturation could be good candidates for satisfying this requirement. This work is based on the seminal work of S.S. Stevens (1966), who measured the perceived magnitude of different magnitudes of physical stimuli. He found that for many physical scales, including luminance (cd/m2) and saturation (the 'redness' of a long-wavelength light source), equal ratios in stimulus value produced equal ratios in perceptual magnitude. He interpreted this as indicating that there exists in human cognition a common scale for representing magnitude, onto which the effects of different physical stimuli are scaled. In Rogowitz, Kalvin, Pelah and Cohen (1999), we used a psychophysical technique to test this hypothesis as it applies to the creation of perceptually uniform color maps. We constructed color maps as trajectories through three color spaces: a common computer graphics standard (uncalibrated HSV), a common perceptually based engineering standard for creating visual stimuli (L*a*b*), and a space commonly used in the graphic arts (Munsell). For each space, we created color scales that varied linearly in hue, saturation, or luminance and measured the detectability of increments in hue, saturation or luminance for each of these color scales. We measured the amplitude of the just-detectable Gaussian increments at 20 different values along the range of each color map. 
For all three color spaces, we found that luminance-based color maps provided the most perceptually uniform representations of the data. The just-detectable increment was constant at all points in the color map, with the exception of the lowest-luminance values, where a larger increment was required. The saturation-based color maps provided less sensitivity than the luminance-based color maps, requiring much larger increments for detection. For the hue-based color maps, the size of the increment required for detection varied across the range. For example, for the standard 'rainbow' color map (uncalibrated HSV, hue-varying map), a step in the 'green' region required an increment 16 times the size of the increment required in the 'cyan' part of the range. That is, the rainbow color map would not successfully represent changes in the data in the 'green' region of this color map. In this paper, we extend this research by studying the detectability of spatially-modulated Gabor targets based on these hue, saturation and luminance scales. Since, in visualization, the user is called upon to detect and identify patterns that vary in their spatial characteristics, it is important to study how different types of color maps represent data with varying spatial properties. To do so, we measured modulation thresholds for low- (0.2 c/deg) and high-spatial-frequency (4.0 c/deg) Gabor patches and compared them with the Gaussian results. As before, we measured increment thresholds for hue, saturation, and luminance modulations. These color scales were constructed as trajectories along the three perceptual dimensions of color (hue, saturation, and luminance) in two color spaces, uncalibrated HSV and calibrated L*a*b*. This allowed us to study how the three perceptual dimensions represent magnitude information for test patterns varying in spatial frequency. 
This design also allowed us to test the hypothesis that the luminance channel best carries high-spatial frequency information while the saturation channel best represents low spatial-frequency information (Mullen 1985; DeValois and DeValois 1988).
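The Stevens power-law reasoning above suggests one way to construct a luminance scale with perceptually equal steps: space the physical luminances so that their perceived magnitudes fall on a linear scale. The exponent below is a rough textbook value for brightness, and the construction is a sketch rather than the authors' calibrated procedure:

```python
def perceptual_luminance_ramp(n, exponent=0.33):
    """Normalized luminances whose perceived-brightness steps are equal.

    Stevens' law: perceived brightness ~ luminance ** exponent, so spacing
    luminance as (i / (n - 1)) ** (1 / exponent) linearizes perception.
    """
    return [(i / (n - 1)) ** (1.0 / exponent) for i in range(n)]
```

A linear ramp in physical luminance would instead compress the perceived steps at the bright end, which is one reason naive gray scales read unevenly.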
MSH3 Promotes Dynamic Behavior of Trinucleotide Repeat Tracts In Vivo
Williams, Gregory M.; Surtees, Jennifer A.
2015-01-01
Trinucleotide repeat (TNR) expansions are the underlying cause of more than 40 neurodegenerative and neuromuscular diseases, including myotonic dystrophy and Huntington’s disease, yet the pathway to expansion remains poorly understood. An important step in expansion is the shift from a stable TNR sequence to an unstable, expanding tract, which is thought to occur once a TNR attains a threshold length. Modeling of human data has indicated that TNR tracts are increasingly likely to expand as they increase in size and to do so in increments that are smaller than the repeat itself, but this has not been tested experimentally. Genetic work has implicated the mismatch repair factor MSH3 in promoting expansions. Using Saccharomyces cerevisiae as a model for CAG and CTG tract dynamics, we examined individual threshold-length TNR tracts in vivo over time in MSH3 and msh3Δ backgrounds. We demonstrate, for the first time, that these TNR tracts are highly dynamic. Furthermore, we establish that once such a tract has expanded by even a few repeat units, it is significantly more likely to expand again. Finally, we show that threshold-length TNR sequences readily accumulate net incremental expansions over time through a series of small expansion and contraction events. Importantly, the tracts were substantially stabilized in the msh3Δ background, with a bias toward contractions, indicating that Msh2-Msh3 plays an important role in shifting the expansion-contraction equilibrium toward expansion in the early stages of TNR tract expansion. PMID:25969461
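The dynamic, biased expansion behavior described above can be caricatured as a biased random walk in repeat units (my simplification for illustration, not the authors' experimental model): increments are small relative to the tract, and the expansion bias kicks in once the tract has grown past its starting length.

```python
import random

def simulate_tnr(start_len, generations, expand_bias, seed=0):
    """Final tract length after small, possibly biased expansion/contraction steps."""
    rng = random.Random(seed)
    length = start_len
    for _ in range(generations):
        # Bias toward expansion only once the tract has already expanded,
        # mimicking "once expanded, more likely to expand again"
        p_expand = expand_bias if length > start_len else 0.5
        delta = rng.choice([1, 2, 3])  # increments smaller than the repeat tract
        length += delta if rng.random() < p_expand else -delta
        length = max(length, 1)
    return length
```

Averaged over many runs, even a modest expansion bias produces the net incremental growth the abstract describes, while an unbiased walk (bias = 0.5) drifts nowhere on average.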
Image Fluctuations in LED Electromechanical 3D-Display
NASA Astrophysics Data System (ADS)
Klyuev, Alexey V.; Yakimov, Arkady V.
Fluctuations in the parameters of a light-emitting diode (LED) electromechanical 3D-display are investigated. It is shown that there are two types of fluctuations in the rotating 3D-display. The first is caused by a small increment in the rotation angle, which tends to increase; it occurs in the form of a "drift" without periodic changes of the angle. The second is a change in small linear increments of the angle, which occurs as undamped harmonic oscillations with constant amplitude. This shows the stability of the investigated steady state, because there is no tendency for the amplitude of the considered parameter regime to increase. In conclusion, we give some recommendations on how to improve the synchronization of the system.
Zheng, Wenjun; Brooks, Bernard R
2006-06-15
Recently we have developed a normal-modes-based algorithm that predicts the direction of protein conformational changes given the initial state crystal structure together with a small number of pairwise distance constraints for the end state. Here we significantly extend this method to accurately model both the direction and amplitude of protein conformational changes. The new protocol implements a multistep search in the conformational space that is driven by iteratively minimizing the error of fitting the given distance constraints while simultaneously enforcing the restraint of low elastic energy. At each step, an incremental structural displacement is computed as a linear combination of the lowest 10 normal modes derived from an elastic network model, whose eigenvectors are reoriented to correct for the distortions caused by the structural displacements in the previous steps. We test this method on a list of 16 pairs of protein structures for which relatively large conformational changes are observed (root mean square deviation >3 angstroms), using up to 10 pairwise distance constraints selected by a fluctuation analysis of the initial state structures. This method has achieved near-optimal performance in almost all cases, and in many cases the final structural models lie within a root mean square deviation of 1-2 angstroms from the native end state structures.
Analysis of In-Canyon Flow Characteristics in step-up street canyons
NASA Astrophysics Data System (ADS)
PARK, S.; Kim, J.; Choi, W.; Pardyjak, E.
2017-12-01
Flow characteristics in step-up street canyons were investigated, focusing on the in-canyon region. To examine the effects of building geometry, two building height ratios [ratio of the upwind (Hu) to downwind building heights (Hd) = 0.33, 0.6] were considered and eight building length ratios [ratio of the cross-wind building length (L) to street-canyon width (S), from 0.5 to 4 in increments of 0.5] were systematically varied. For model validation, the simulated results were compared with wind-tunnel data measured for Hu/Hd = 0.33, 0.6 and L/S = 1, 2, 3, and 4. In the CFD model simulations, the corner vortices at the downwind side near ground level and the recirculation zones above the downwind buildings had relatively small extents compared with those in the wind-tunnel experiments. However, the CFD model reasonably reproduced the main flow features observed in the wind tunnel, such as the street-canyon vortices, the circulations above the building roofs, and the positions of the stagnation points on the downwind building walls. By further analyzing the three-dimensional flow structures in the simulated step-up street canyons, we schematically characterized the flow for the different building-height and building-length ratios.
Harkey, Jane; Sortedahl, Charlotte; Crook, Michelle M; Sminkey, Patrice V
The purpose of this discussion is to explore the role of the case manager in empowering and motivating clients, especially those who appear "stuck" or resistant to change. Drawing upon the experiences of case managers across many different practice settings, the article addresses how case managers can tap into an individual's underlying and sometimes deep-seated desires in order to foster buy-in for taking even small steps toward achieving their health goals. The article also addresses how motivational interviewing can be an effective tool for case managers to uncover the blocks and barriers that prevent clients from making changes in their health or lifestyle habits. This discussion applies to case management practices and work settings across the full continuum of health care. The implication for case managers is a deeper understanding of the importance of motivation in helping clients take positive steps toward their health goals. This understanding is especially important in advocating for clients who appear unmotivated or ambivalent but who are actually "stuck" in engrained behaviors and habits because of a variety of factors, including past failures. Without judgment and by establishing rapport, case managers can tap into clients' desires to help them make incremental progress toward their health goals.
Concordance cosmology without dark energy
NASA Astrophysics Data System (ADS)
Rácz, Gábor; Dobos, László; Beck, Róbert; Szapudi, István; Csabai, István
2017-07-01
According to the separate universe conjecture, spherically symmetric sub-regions in an isotropic universe behave like mini-universes with their own cosmological parameters. This is an excellent approximation in both Newtonian and general relativistic theories. We estimate local expansion rates for a large number of such regions, and use a scale parameter calculated from the volume-averaged increments of local scale parameters at each time step in an otherwise standard cosmological N-body simulation. The particle mass, corresponding to a coarse graining scale, is an adjustable parameter. This mean field approximation neglects tidal forces and boundary effects, but it is the first step towards a non-perturbative statistical estimation of the effect of non-linear evolution of structure on the expansion rate. Using our algorithm, a simulation with an initial Ωm = 1 Einstein-de Sitter setting closely tracks the expansion and structure growth history of the Λ cold dark matter (ΛCDM) cosmology. Due to small but characteristic differences, our model can be distinguished from the ΛCDM model by future precision observations. Moreover, our model can resolve the emerging tension between local Hubble constant measurements and the Planck best-fitting cosmology. Further improvements to the simulation are necessary to investigate light propagation and confirm full consistency with cosmic microwave background observations.
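The volume-weighted averaging of local scale increments can be sketched as follows; the rate law used here is a toy assumption for illustration, not the paper's general relativistic treatment or its N-body machinery:

```python
import numpy as np

def averaged_scale_step(a, local_densities, volumes, da=1e-3):
    """Toy sketch of the averaging idea: each spherical sub-region
    evolves its own scale increment from its local density (denser
    mini-universes expand more slowly under this assumed rate law),
    and the global scale parameter advances by the volume-weighted
    mean of the local increments."""
    local_rates = 1.0 / np.sqrt(local_densities)   # assumed toy rate law
    local_increments = da * local_rates
    weights = volumes / volumes.sum()
    return a + float(np.sum(weights * local_increments))
```

In a homogeneous universe every region contributes the same increment and the standard expansion is recovered; inhomogeneity skews the average, which is the effect the paper quantifies.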
Fate and Transport of Tungsten at Camp Edwards Small Arms Ranges
2007-08-01
area into the lower berm and/or trough. A similar approach was used in the lower berm area with samples collected from soil sloughing from the...bucket auger to collect samples beneath the bullet pockets and the trough. A multi-increment, subsurface soil sample was made by combining the...range. From these soil profiles, a total of 72 multi-increment subsurface soil samples was collected (Table 2). The auger was cleaned between holes
Molecular Volumes and the Stokes-Einstein Equation
ERIC Educational Resources Information Center
Edward, John T.
1970-01-01
Examines the limitations of the Stokes-Einstein equation as it applies to small solute molecules. Discusses molecular volume determinations by atomic increments, molecular models, molar volumes of solids and liquids, and molal volumes. Presents an empirical correction factor for the equation which applies to molecular radii as small as 2 angstrom…
Critical Race Theory and the Whiteness of Teacher Education
ERIC Educational Resources Information Center
Sleeter, Christine E.
2017-01-01
This article uses three tenets of critical race theory to critique the common pattern of teacher education focusing on preparing predominantly White cohorts of teacher candidates for racially and ethnically diverse students. The tenet of interest convergence asks how White interests are served through incremental steps. The tenet of color…
Grammar and the Lexicon. Working Papers in Linguistics 16.
ERIC Educational Resources Information Center
University of Trondheim Working Papers in Linguistics, 1993
1993-01-01
In this volume, five working papers are presented. "Minimal Signs and Grammar" (Lars Hellan) proposes that a significant part of the "production" of grammar is incremental, building larger and larger constructs, with lexical objects called minimal signs as the first steps. It also suggests that the basic lexical information in…
Infant Attachment and Separation: The Foundations for Social/Emotional Growth.
ERIC Educational Resources Information Center
Orion, Judi
2002-01-01
Traces encounters between mother and child that occur around nursing and feeding, which result in a powerful attachment. Identifies approaching solid foods and subsequent weaning as the place where detachment begins. Discusses locomotion as another way incremental steps toward independence are reached: crawling, walking, and pulling up with hands…
36 CFR 1194.23 - Telecommunications products.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) For transmitted voice signals, telecommunications products shall provide a gain adjustable up to a minimum of 20 dB. For incremental volume control, at least one intermediate step of 12 dB of gain shall be... access or shall restore it upon delivery. (k) Products which have mechanically operated controls or keys...
Nasir, Hina; Javaid, Nadeem; Sher, Muhammad; Qasim, Umar; Khan, Zahoor Ali; Alrajeh, Nabil; Niaz, Iftikhar Azim
2016-01-01
This paper makes a two-fold contribution to Underwater Wireless Sensor Networks (UWSNs): a performance analysis of incremental relaying in terms of outage and error probability, and, based on that analysis, the proposition of two new cooperative routing protocols. For the first contribution, a three-step procedure is carried out: a system model is presented, the number of available relays is determined, and, based on the cooperative incremental retransmission methodology, closed-form expressions for outage and error probability are derived. For the second contribution, Adaptive Cooperation in Energy (ACE) efficient depth-based routing and Enhanced-ACE (E-ACE) are presented. In the proposed model, a feedback mechanism indicates the success or failure of data transmission. If the direct transmission is successful, there is no need for relaying by cooperative relay nodes. In case of failure, the available relays retransmit the data one by one until the desired signal quality is achieved at the destination. Simulation results show that ACE and E-ACE significantly improve network performance, i.e., throughput, when compared with other incremental relaying protocols such as Cooperative Automatic Repeat reQuest (CARQ). E-ACE and ACE achieve 69% and 63% more throughput, respectively, than CARQ in a harsh underwater environment. PMID:27420061
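The incremental retransmission logic lends itself to a quick Monte Carlo check; the link success probabilities and their independence below are illustrative assumptions, not the paper's fading model:

```python
import random

def outage_probability(p_direct, p_relay, n_relays, trials=100_000, seed=1):
    """Monte Carlo sketch of cooperative incremental relaying: the
    destination first tries the direct link (success prob. p_direct);
    on failure, the available relays retransmit one by one, and an
    outage is declared only if every attempt fails."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        if rng.random() < p_direct:
            continue                      # direct link succeeded
        if any(rng.random() < p_relay for _ in range(n_relays)):
            continue                      # some relay got through
        outages += 1
    return outages / trials
```

Under independent links this should converge to (1 - p_direct) * (1 - p_relay)**n_relays, the closed-form outage for this toy model.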
Pornographic image recognition and filtering using incremental learning in compressed domain
NASA Astrophysics Data System (ADS)
Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao
2015-11-01
With the rapid development and popularity of networks, their openness, anonymity, and interactivity have led to the spread and proliferation of pornographic images on the Internet, which do great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images; (2) visual words are created from the LR image to represent the pornographic image; and (3) after the covering algorithm is used to train and recognize the visual words and build the initial classification model, incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples. The experimental results show that the proposed method achieves a higher recognition rate while requiring less recognition time in the compressed domain.
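The incremental adjustment in step (3) can be sketched with a simple running-centroid classifier updated one sample at a time; this stands in for the covering algorithm the paper actually uses, and the feature vectors are hypothetical visual-word histograms:

```python
import numpy as np

class IncrementalCentroid:
    """Minimal sketch of incremental class updating (not the paper's
    covering algorithm): a per-class running mean of feature vectors,
    updated one sample at a time, classifying by nearest centroid."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def partial_fit(self, x, label):
        x = np.asarray(x, dtype=float)
        if label not in self.sums:
            self.sums[label] = np.zeros_like(x)
            self.counts[label] = 0
        self.sums[label] += x          # new samples shift the centroid
        self.counts[label] += 1

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        return min(self.sums,
                   key=lambda c: np.linalg.norm(x - self.sums[c] / self.counts[c]))
```

The point of the incremental design is the same as in the paper: new samples refine the decision rule without retraining on the full history.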
Joore, Manuela; Brunenberg, Danielle; Nelemans, Patricia; Wouters, Emiel; Kuijpers, Petra; Honig, Adriaan; Willems, Danielle; de Leeuw, Peter; Severens, Johan; Boonen, Annelies
2010-01-01
This article investigates whether differences in utility scores based on the EQ-5D and the SF-6D affect the incremental cost-utility ratios in five distinct patient groups. We used five empirical data sets of trial-based cost-utility studies that included patients with different disease conditions and severity (musculoskeletal disease, cardiovascular-pulmonary disease, and psychological disorders) to calculate differences in quality-adjusted life-years (QALYs) based on EQ-5D and SF-6D utility scores. We compared incremental QALYs, incremental cost-utility ratios, and the probability that the incremental cost-utility ratio was acceptable within and across the data sets. We observed small differences in incremental QALYs, but large differences in the incremental cost-utility ratios and in the probability that these ratios were acceptable at a given threshold, in the majority of the presented cost-utility analyses. More specifically, in the patient groups with relatively mild health conditions the probability of acceptance of the incremental cost-utility ratio was considerably larger when using the EQ-5D to estimate utility, while in the patient groups with worse health conditions it was considerably larger when using the SF-6D. Much of the appeal of using QALYs as the measure of effectiveness in economic evaluations lies in their comparability across conditions and interventions. The incomparability of the results of cost-utility analyses using different instruments to estimate a single index value for health severely undermines this aspect and reduces the credibility of incremental cost-utility ratios for decision-making.
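The sensitivity of the ratio to the utility instrument follows directly from the ICER arithmetic; the sketch below uses invented cost and utility numbers (not the study's data) to show how a small shift in the QALY denominator swings the ratio:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-utility ratio: extra cost per extra QALY.
    Small differences in the QALY denominator (e.g. EQ-5D vs SF-6D
    utilities) can swing this ratio substantially."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# same incremental cost, slightly different hypothetical utility gains
ratio_eq5d = icer(12_000, 10_000, 0.80, 0.70)   # delta QALY = 0.10
ratio_sf6d = icer(12_000, 10_000, 0.76, 0.72)   # delta QALY = 0.04
```

With identical incremental costs, a 0.06-QALY disagreement between instruments moves the ratio from 20,000 to 50,000 per QALY, enough to cross most acceptance thresholds.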
Moriwaki, K; Mouri, M; Hagino, H
2017-06-01
Model-based economic evaluation was performed to assess the cost-effectiveness of zoledronic acid. Although zoledronic acid was dominated by alendronate, the incremental quality-adjusted life year (QALY) difference was quite small. Considering the advantage of the once-yearly injection of zoledronic acid in terms of persistence, zoledronic acid might be a cost-effective treatment option compared to once-weekly oral alendronate. The purpose of this study was to estimate the cost-effectiveness of once-yearly injection of zoledronic acid for the treatment of osteoporosis in Japan. A patient-level state-transition model was developed to predict the outcomes of patients with osteoporosis who have experienced a previous vertebral fracture. The efficacy of zoledronic acid was derived from a published network meta-analysis. Lifetime cost and QALYs were estimated for patients who had received zoledronic acid, alendronate, or basic treatment alone. The incremental cost-effectiveness ratio (ICER) of zoledronic acid was estimated. For patients 70 years of age, zoledronic acid was dominated by alendronate, with an incremental QALY of -0.004 to -0.000 and an incremental cost of 430 USD to 493 USD. Deterministic sensitivity analysis indicated that the relative risk of hip fracture and the drug cost strongly affected the cost-effectiveness of zoledronic acid compared to alendronate. Scenario analysis considering treatment persistence showed that the ICER of zoledronic acid compared to alendronate was 47,435 USD, 27,018 USD, and 10,749 USD per QALY gained for patients with a T-score of -2.0, -2.5, or -3.0, respectively. Although zoledronic acid is dominated by alendronate, the incremental QALY difference is quite small. Considering the advantage of annual zoledronic acid treatment in terms of compliance and persistence, zoledronic acid may be a cost-effective treatment option compared to alendronate.
Incremental k-core decomposition: Algorithms and evaluation
Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-SIlva, Gabriela; ...
2016-02-01
A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times those of the non-incremental algorithms.
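For reference, the static decomposition that the incremental algorithms avoid recomputing from scratch can be sketched with the classic minimum-degree peeling; this is a minimal illustration of core numbers, not the paper's incremental algorithm:

```python
def core_numbers(adj):
    """Peeling sketch: repeatedly remove a minimum-degree vertex; the
    core number of v is its degree at removal time, made monotone by
    tracking the running maximum. `adj` maps vertex -> set of
    neighbours (undirected)."""
    degree = {v: len(ns) for v, ns in adj.items()}
    core, remaining, k = {}, set(adj), 0
    while remaining:
        v = min(remaining, key=degree.get)   # cheapest vertex left
        k = max(k, degree[v])
        core[v] = k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                degree[u] -= 1               # v's removal lowers u's degree
    return core
```

After an edge insertion or deletion, only a small subgraph's core numbers can change, which is exactly what the paper's algorithms exploit instead of re-running this full peeling.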
Dor, Avi; Luo, Qian; Gerstein, Maya Tuchman; Malveaux, Floyd; Mitchell, Herman; Markus, Anne Rossier
We present an incremental cost-effectiveness analysis of an evidence-based childhood asthma intervention (Community Healthcare for Asthma Management and Prevention of Symptoms [CHAMPS]) relative to usual management of childhood asthma in community health centers. Data used in the analysis include household surveys, Medicaid insurance claims, and community health center expenditure reports. We combined our incremental cost-effectiveness analysis with a difference-in-differences multivariate regression framework. We found that CHAMPS reduced symptom days by 29.75 days per child-year and was cost-effective (incremental cost-effectiveness ratio: $28.76 per symptom-free day). Most of the benefits were due to reductions in direct medical costs. Indirect benefits from increased household productivity were relatively small.
The Application of Quantity Discounts in Army Procurements (Field Test).
1980-04-01
Work Directive (PWD). d. The amended PWD is forwarded to the Procurement and Production (PP) control where quantity increments and delivery schedules are...counts on 97 Army Stock Fund small purchases (less than $10,000) and received cost effective discounts on 46, or 47.4%, of...discount but the computed annualized cost for the QD increment was larger than the computed annualized cost for the EOQ, this was not a cost effective
2013-06-01
lenses of unconsolidated sand and rounded river gravel overlain by as much as 5 m of silt. Gravel consists mostly of quartz and metamorphic rock with...Figure 1. Example of multi-increment sampling using a systematic-random sampling design for collecting two separate...The small arms firing Range 16 Record berms at Fort Wainwright. Figure 9. Location of berms sampled using ISM and grab
Stokes parameters modulator for birefringent filters
NASA Technical Reports Server (NTRS)
Dollfus, A.
1985-01-01
The Solar Birefringent Filter (Filtre Polarisant Solaire Sélectif, FPSS) of Meudon Observatory is presently located at the focus of a solar refractor with a 28 cm lens directly pointed at the Sun. It produces a diffraction-limited image without instrumental polarization and with a spectral resolution of 46,000 in a field of 6 arc min diameter. The instrument is calibrated for absolute Doppler velocity measurements and is presently used for quantitative imagery of the radial velocity motions in the photosphere. The short-period oscillations are recorded. Work on adapting the instrument for imagery of the solar surface in the Stokes parameters is discussed. The first polarizer of the birefringent filter, with a reference position angle of 0 deg, is associated with a fixed quarter wave plate at +45 deg. A rotating quarter wave plate is set at 0 deg and can be turned in increments of exactly +45 deg. Another quarter wave plate, also initially set at 0 deg, is simultaneously incremented by -45 deg but only on each even step of the first plate. A complete cycle of increments produces images for each of the six parameters I ± Q, I ± U and I ± V. These images are then subtracted in pairs to produce a full image in the three Stokes parameters Q, U and V. With proper retardation tolerance and positioning accuracy of the quarter wave plates, the cross talk between the Stokes parameters was calculated and checked to be minimal.
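The pairwise subtraction step can be sketched directly; the function below assumes the six images from one increment cycle are available (the keys and the half factor, which accounts for each pair summing to 2I, are illustrative bookkeeping, and the arithmetic works elementwise on scalars or image arrays alike):

```python
def stokes_from_pairs(pairs):
    """Recover the Stokes parameters by pairwise subtraction of the six
    intensity images produced by one cycle of quarter-wave-plate
    increments. `pairs` holds keys 'I+Q', 'I-Q', 'I+U', 'I-U',
    'I+V', 'I-V'."""
    q = (pairs['I+Q'] - pairs['I-Q']) / 2   # difference isolates Q
    u = (pairs['I+U'] - pairs['I-U']) / 2
    v = (pairs['I+V'] - pairs['I-V']) / 2
    i = (pairs['I+Q'] + pairs['I-Q']) / 2   # either pair's sum gives I
    return i, q, u, v
```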
In silico evolution of biochemical networks
NASA Astrophysics Data System (ADS)
Francois, Paul
2010-03-01
We use computational evolution to select models of genetic networks that can be built from a predefined set of parts to achieve a given behavior. Selection is guided by a fitness function that defines the biological function in a quantitative way. This fitness has to be specific to a process, yet general enough to capture processes common to many species. Computational evolution favors models that can be built by incremental improvements in fitness rather than via multiple neutral steps or transitions through less fit intermediates. With the help of these simulations, we propose a kinetic view of evolution, in which networks are rapidly selected along a fitness gradient. This mathematics recapitulates Darwin's original insight that small changes in fitness can rapidly lead to the evolution of complex structures such as the eye, and explains the phenomenon of convergent/parallel evolution of similar structures in independent lineages. We illustrate these ideas with networks implicated in embryonic development and patterning of vertebrates and primitive insects.
Nelson, Travis M; Sheller, Barbara; Friedman, Clive S; Bernier, Raphael
2015-01-01
Autism Spectrum Disorder (ASD) is a condition which most dentists will encounter in their practices. Contemporary educational and behavioral approaches may facilitate successful dental care. A literature review was conducted for relevant information on dental care for children with ASD. Educational principles used for children with ASD can be applied in the dental setting. Examples include: parent involvement in identifying strengths, sensitivities, and goal setting; using stories or video modeling in advance of the appointment; dividing dental treatment into sequential components; and modification of the environment to minimize sensory triggers. Patients with ASD are more capable of tolerating procedures that they are familiar with, and therefore should be exposed to new environments and stimuli in small incremental steps. By taking time to understand children with ASD as individuals and employing principles of learning, clinicians can provide high quality dental care for the majority of patients with ASD. © 2014 Special Care Dentistry Association and Wiley Periodicals, Inc.
A 3-D enlarged cell technique (ECT) for elastic wave modelling of a curved free surface
NASA Astrophysics Data System (ADS)
Wei, Songlin; Zhou, Jianyang; Zhuang, Mingwei; Liu, Qing Huo
2016-09-01
The conventional finite-difference time-domain (FDTD) method for elastic waves suffers from staircasing error when applied to a curved free surface because of its structured grid. In this work, an improved, stable and accurate 3-D FDTD method for elastic wave modelling on a curved free surface is developed based on the finite volume method and the enlarged cell technique (ECT). To achieve sufficient accuracy, a finite volume scheme is applied at the curved free surface to remove the staircasing error; meanwhile, to achieve the same stability as the FDTD method without reducing the time step increment, the ECT is introduced to preserve solution stability by enlarging small irregular cells into adjacent cells under the condition of conservation of force. The method is verified by several 3-D numerical examples. Results show that it is stable at the Courant stability limit for a regular FDTD grid and has much higher accuracy than the conventional FDTD method.
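The stability bound the ECT is designed to preserve can be sketched as the standard Courant limit for a second-order 3-D FDTD scheme on a cubic grid (assumed form; the paper's exact criterion may differ):

```python
import math

def cfl_time_step(h, vp, courant=1.0):
    """Standard 3-D FDTD stability bound for a cubic grid of spacing
    `h` and maximum P-wave speed `vp`: dt <= h / (vp * sqrt(3)).
    Conformal methods normally shrink dt when cut cells become small;
    the ECT instead merges small cells so this full step survives."""
    return courant * h / (vp * math.sqrt(3))
```

For example, a 10 m grid with a 5000 m/s P-wave speed allows a time step of about 1.15 ms at the Courant limit.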
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, D.; Fertitta, E.; Paulus, B.
Due to the importance of both static and dynamical correlation in the bond formation, low-dimensional beryllium systems constitute interesting case studies to test correlation methods. Aiming to describe the whole dissociation curve of extended Be systems we chose to apply the method of increments (MoI) in its multireference (MR) formalism. To gain insight into the main characteristics of the wave function, we started by focusing on the description of small Be chains using standard quantum chemical methods. In a next step we applied the MoI to larger beryllium systems, starting from the Be{sub 6} ring. The complete active space formalism was employed and the results were used as reference for local MR calculations of the whole dissociation curve. Although this is a well-established approach for systems with limited multireference character, its application regarding the description of whole dissociation curves requires further testing. Subsequent to the discussion of the role of the basis set, the method was finally applied to larger rings and extrapolated to an infinite chain.
Multi-Objective Control Optimization for Greenhouse Environment Using Evolutionary Algorithms
Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun
2011-01-01
This paper investigates tuning the Proportional-Integral-Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures, such as good static-dynamic performance specifications and smoothness of the control process. A model of the nonlinear thermodynamic relations between the numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment (or rate) in a simulation experiment. The results show that by tuning the gain parameters the controllers achieve good step-response performance: small overshoot, fast settling time, and reduced rise time and steady-state error. Moreover, the scheme can be applied to tuning systems with different properties, such as strong interactions among variables, nonlinearities, and conflicting performance criteria. The results indicate that multi-objective optimization algorithms provide an effective and promising tuning method for complex greenhouse production. PMID:22163927
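As a toy illustration of the ITSE objective an EA would minimize, the sketch below closes a discrete PID loop around a hypothetical first-order plant (not the paper's greenhouse thermodynamic model):

```python
def itse_of_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    """Integrated time-square error of a discrete PID loop driving a
    toy first-order plant dy/dt = -y + u toward a step setpoint; an
    evolutionary tuner would treat this value as (part of) the fitness
    to minimize over (kp, ki, kd)."""
    y, integral, prev_err, itse = 0.0, 0.0, setpoint, 0.0
    for n in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        y += (-y + u) * dt                          # Euler step of the plant
        itse += (n * dt) * err ** 2 * dt            # time-weighted squared error
        prev_err = err
    return itse
```

Well-tuned gains yield a far smaller ITSE than weak proportional-only control, which is the gradient the EA exploits.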
NASA Astrophysics Data System (ADS)
Baldi, Alfonso; Jacquot, Pierre
2003-05-01
Graphite-epoxy laminates are subjected to the "incremental hole-drilling" technique in order to investigate the residual stresses acting within each layer of the composite samples. In-plane speckle interferometry is used to measure the displacement field created by each drilling increment around the hole. Our approach has two particularities: (1) we rely on the precise repositioning of the samples in the optical set-up after each new boring step, performed by means of a high-precision, numerically controlled milling machine in the workshop; and (2) for each increment, we acquire three displacement fields, along the length and the width of the samples and at 45°, using a single symmetrical double-beam illumination and a rotary stage holding the specimens. The experimental protocol is described in detail and the experimental results are presented, including a comparison with strain gages. Speckle interferometry appears to be a suitable method to meet the increasing demand for residual stress determination in composite samples.
SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.
Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P
2013-12-01
Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.
A Decade Revisited and a Step toward the Future: Incremental but Quintessential Progress
ERIC Educational Resources Information Center
Jin, D.-S.; Kim, M. H.; Park, D.
2014-01-01
This study scrutinizes the "Asia Pacific Education Review" ("APER"), which has weathered a 13-year journey since its inception. Numerically, 504 peer-reviewed articles have been published so far, and an impact factor of 0.5 has been achieved. This article recollects the history of "APER," overhauls the accomplishments and…
The Cultural Evolution of Democracy: Saltational Changes in A Political Regime Landscape
Lindenfors, Patrik; Jansson, Fredrik; Sandberg, Mikael
2011-01-01
Transitions to democracy are most often considered the outcome of historical modernization processes. Socio-economic changes, such as increases in per capita GNP, education levels, urbanization and communication, have traditionally been found to be correlates or ‘requisites’ of democratic reform. However, transition times and the number of reform steps have not been studied comprehensively. Here we show that historically, transitions to democracy have mainly occurred through rapid leaps rather than slow and incremental transition steps, with a median time from autocracy to democracy of 2.4 years, and overnight in the reverse direction. Our results show that autocracy and democracy have acted as peaks in an evolutionary landscape of possible modes of institutional arrangements. Only scarcely have there been slow incremental transitions. We discuss our results in relation to the application of phylogenetic comparative methods in cultural evolution and point out that the evolving unit in this system is the institutional arrangement, not the individual country which is instead better regarded as the ‘host’ for the political system. PMID:22140565
Accounting for estimated IQ in neuropsychological test performance with regression-based techniques.
Testa, S Marc; Winicki, Jessica M; Pearlson, Godfrey D; Gordon, Barry; Schretlen, David J
2009-11-01
Regression-based normative techniques account for variability in test performance associated with multiple predictor variables and generate expected scores based on algebraic equations. Using this approach, we show that estimated IQ, based on oral word reading, accounts for 1-9% of the variability beyond that explained by individual differences in age, sex, race, and years of education for most cognitive measures. These results confirm that adding estimated "premorbid" IQ to demographic predictors in multiple regression models can incrementally improve the accuracy with which regression-based norms (RBNs) benchmark expected neuropsychological test performance in healthy adults. It remains to be seen whether the incremental variance in test performance explained by estimated "premorbid" IQ translates to improved diagnostic accuracy in patient samples. We describe these methods, and illustrate the step-by-step application of RBNs with two cases. We also discuss the rationale, assumptions, and caveats of this approach. More broadly, we note that adjusting test scores for age and other characteristics might actually decrease the accuracy with which test performance predicts absolute criteria, such as the ability to drive or live independently.
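The regression-based-norm arithmetic can be sketched in a few lines; the coefficient values and predictors below are invented for illustration, not the article's normative equations:

```python
def rbn_z_score(observed, coeffs, predictors, resid_sd):
    """Regression-based norm: compute the expected score from the
    normative regression equation, then standardize the examinee's
    residual by the equation's residual standard deviation.
    `coeffs` = (intercept, [slopes]); `predictors` would include age,
    education, estimated premorbid IQ, etc."""
    intercept, slopes = coeffs
    expected = intercept + sum(b * x for b, x in zip(slopes, predictors))
    return (observed - expected) / resid_sd
```

Adding estimated premorbid IQ to the predictor list changes only the expected score; the benchmarking step itself stays this simple.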
Precision Linear Actuator for Space Interferometry Mission (SIM) Siderostat Pointing
NASA Technical Reports Server (NTRS)
Cook, Brant; Braun, David; Hankins, Steve; Koenig, John; Moore, Don
2008-01-01
'SIM PlanetQuest will exploit the classical measuring tool of astrometry (interferometry) with unprecedented precision to make dramatic advances in many areas of astronomy and astrophysics'(1). In order to obtain interferometric data two large steerable mirrors, or Siderostats, are used to direct starlight into the interferometer. A gimbaled mechanism actuated by linear actuators is chosen to meet the unprecedented pointing and angle tracking requirements of SIM. A group of JPL engineers designed, built, and tested a linear ballscrew actuator capable of performing submicron incremental steps for 10 years of continuous operation. Precise, zero-backlash, closed-loop pointing control requirements led the team to implement a ballscrew actuator with a direct drive DC motor and a precision piezo brake. Motor control commutation using feedback from a precision linear encoder on the ballscrew output produced an unexpectedly small incremental step size of 20 nm over a range of 120 mm, yielding a dynamic range of 6,000,000:1. The results prove that nanometer-scale linear positioning requires no gears, levers, or hydraulic converters. Along the way many lessons were learned and will subsequently be shared.
The Crucial Role of Error Correlation for Uncertainty Modeling of CFD-Based Aerodynamics Increments
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Walker, Eric L.
2011-01-01
The Ares I ascent aerodynamics database for Design Cycle 3 (DAC-3) was built from wind-tunnel test results and CFD solutions. The wind tunnel results were used to build the baseline response surfaces for wind-tunnel Reynolds numbers at power-off conditions. The CFD solutions were used to build increments to account for Reynolds number effects. We calculate the validation errors for the primary CFD code results at wind tunnel Reynolds number power-off conditions and would like to be able to use those errors to predict the validation errors for the CFD increments. However, the validation errors are large compared to the increments. We suggest a way forward that is consistent with common practice in wind tunnel testing which is to assume that systematic errors in the measurement process and/or the environment will subtract out when increments are calculated, thus making increments more reliable with smaller uncertainty than absolute values of the aerodynamic coefficients. A similar practice has arisen for the use of CFD to generate aerodynamic database increments. The basis of this practice is the assumption of strong correlation of the systematic errors inherent in each of the results used to generate an increment. The assumption of strong correlation is the inferential link between the observed validation uncertainties at wind-tunnel Reynolds numbers and the uncertainties to be predicted for flight. In this paper, we suggest a way to estimate the correlation coefficient and demonstrate the approach using code-to-code differences that were obtained for quality control purposes during the Ares I CFD campaign. Finally, since we can expect the increments to be relatively small compared to the baseline response surface and to be typically of the order of the baseline uncertainty, we find that it is necessary to be able to show that the correlation coefficients are close to unity to avoid overinflating the overall database uncertainty with the addition of the increments.
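The correlation argument above can be made concrete: for two results with standard uncertainties σ1 and σ2 whose systematic errors are correlated with coefficient ρ, the increment uncertainty is σΔ = √(σ1² + σ2² − 2ρσ1σ2), which collapses as ρ approaches 1. A small illustrative sketch (numbers invented):

```python
import math

def increment_stddev(sigma1, sigma2, rho):
    """Standard uncertainty of an increment (difference of two results)
    whose systematic errors are correlated with coefficient rho."""
    return math.sqrt(sigma1**2 + sigma2**2 - 2.0 * rho * sigma1 * sigma2)

# Strongly correlated errors nearly cancel in the increment; uncorrelated
# errors add in quadrature instead:
print(round(increment_stddev(1.0, 1.0, 0.95), 3))  # 0.316
print(round(increment_stddev(1.0, 1.0, 0.0), 3))   # 1.414
```

This is why the paper stresses showing ρ is close to unity: otherwise the increments inherit nearly the full absolute uncertainty of each CFD solution.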
How many steps/day are enough? For adults.
Tudor-Locke, Catrine; Craig, Cora L; Brown, Wendy J; Clemes, Stacy A; De Cocker, Katrien; Giles-Corti, Billie; Hatano, Yoshiro; Inoue, Shigeru; Matsudo, Sandra M; Mutrie, Nanette; Oppert, Jean-Michel; Rowe, David A; Schmidt, Michael D; Schofield, Grant M; Spence, John C; Teixeira, Pedro J; Tully, Mark A; Blair, Steven N
2011-07-28
Physical activity guidelines from around the world are typically expressed in terms of frequency, duration, and intensity parameters. Objective monitoring using pedometers and accelerometers offers a new opportunity to measure and communicate physical activity in terms of steps/day. Various step-based versions or translations of physical activity guidelines are emerging, reflecting public interest in such guidance. However, there appears to be a wide discrepancy in the exact values that are being communicated. It makes sense that step-based recommendations should be harmonious with existing evidence-based public health guidelines that recognize that "some physical activity is better than none" while maintaining a focus on time spent in moderate-to-vigorous physical activity (MVPA). Thus, the purpose of this review was to update our existing knowledge of "How many steps/day are enough?", and to inform step-based recommendations consistent with current physical activity guidelines. Normative data indicate that healthy adults typically take between 4,000 and 18,000 steps/day, and that 10,000 steps/day is reasonable for this population, although there are notable "low active populations." Interventions demonstrate incremental increases on the order of 2,000-2,500 steps/day. The results of seven different controlled studies demonstrate that there is a strong relationship between cadence and intensity. Further, despite some inter-individual variation, 100 steps/minute represents a reasonable floor value indicative of moderate intensity walking. Multiplying this cadence by 30 minutes (i.e., typical of a daily recommendation) produces a minimum of 3,000 steps that is best used as a heuristic (i.e., guiding) value, but these steps must be taken over and above habitual activity levels to be a true expression of free-living steps/day that also includes recommendations for minimal amounts of time in MVPA. 
Computed steps/day translations of time in MVPA that also include estimates of habitual activity levels equate to 7,100 to 11,000 steps/day. A direct estimate of minimal amounts of MVPA accumulated in the course of objectively monitored free-living behaviour is 7,000-8,000 steps/day. A scale that spans a wide range of incremental increases in steps/day and is congruent with public health recognition that "some physical activity is better than none," yet still incorporates step-based translations of recommended amounts of time in MVPA may be useful in research and practice. The full range of users (researchers to practitioners to the general public) of objective monitoring instruments that provide step-based outputs require good reference data and evidence-based recommendations to be able to design effective health messages congruent with public health physical activity guidelines, guide behaviour change, and ultimately measure, track, and interpret steps/day.
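The review's heuristic arithmetic can be reproduced directly; the habitual-activity values below are back-computed from the cited 7,100-11,000 steps/day range, not reported data:

```python
MODERATE_CADENCE = 100   # steps/minute floor for moderate-intensity walking
DAILY_MVPA_MIN = 30      # minutes/day typical of guidelines

mvpa_steps = MODERATE_CADENCE * DAILY_MVPA_MIN
print(mvpa_steps)  # 3000 steps taken in MVPA

# Adding plausible habitual (non-MVPA) background activity reproduces the
# review's 7,100-11,000 steps/day translations:
for habitual in (4_100, 8_000):
    print(habitual + mvpa_steps)
```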
Cost-Effectiveness Analysis of Regorafenib for Metastatic Colorectal Cancer
Goldstein, Daniel A.; Ahmad, Bilal B.; Chen, Qiushi; Ayer, Turgay; Howard, David H.; Lipscomb, Joseph; El-Rayes, Bassel F.; Flowers, Christopher R.
2015-01-01
Purpose: Regorafenib is a standard-care option for treatment-refractory metastatic colorectal cancer that increases median overall survival by 6 weeks compared with placebo. Given this small incremental clinical benefit, we evaluated the cost-effectiveness of regorafenib in the third-line setting for patients with metastatic colorectal cancer from the US payer perspective. Methods: We developed a Markov model to compare the cost and effectiveness of regorafenib with those of placebo in the third-line treatment of metastatic colorectal cancer. Health outcomes were measured in life-years and quality-adjusted life-years (QALYs). Drug costs were based on Medicare reimbursement rates in 2014. Model robustness was addressed in univariable and probabilistic sensitivity analyses. Results: Regorafenib provided an additional 0.04 QALYs (0.13 life-years) at a cost of $40,000, resulting in an incremental cost-effectiveness ratio of $900,000 per QALY. The incremental cost-effectiveness ratio for regorafenib was > $550,000 per QALY in all of our univariable and probabilistic sensitivity analyses. Conclusion: Regorafenib provides minimal incremental benefit at high incremental cost per QALY in the third-line management of metastatic colorectal cancer. The cost-effectiveness of regorafenib could be improved by the use of value-based pricing. PMID:26304904
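The headline figure follows from the standard ICER definition. A sketch using the rounded numbers quoted above; the reported ~$900,000/QALY presumably reflects less-rounded inputs:

```python
def icer(delta_cost_usd, delta_qaly):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY
    gained by the new treatment relative to the comparator."""
    return delta_cost_usd / delta_qaly

# Rounded abstract figures: $40,000 extra cost for 0.04 extra QALYs
print(icer(40_000, 0.04))  # 1000000.0
```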
Support System for Solar Receivers
NASA Technical Reports Server (NTRS)
Kiceniuk, T.
1985-01-01
Hinged split-ring mounts ensure safe support of heavy receivers. In addition to safer operation and damage-free mounting, the system provides more accurate focusing, and small incremental adjustments of the ring are more easily made.
Gilbert, Jack A; O'Dor, Ronald; King, Nicholas; Vogel, Timothy M
2011-06-14
Scientific discovery is incremental. The Merriam-Webster definition of 'Scientific Method' is "principles and procedures for the systematic pursuit of knowledge involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses". Scientists are taught to be excellent observers, as observations create questions, which in turn generate hypotheses. After centuries of science we tend to assume that we have enough observations to drive science, and enable the small steps and giant leaps which lead to theories and subsequent testable hypotheses. One excellent example of this is Charles Darwin's Voyage of the Beagle, which was essentially an opportunistic survey of biodiversity. Today, obtaining funding for even small-scale surveys of life on Earth is difficult; but few argue the importance of the theory that was generated by Darwin from his observations made during this epic journey. However, these observations, even combined with the parallel work of Alfred Russel Wallace at around the same time, have still not generated an indisputable 'law of biology'. The fact that evolution remains a 'theory', at least to the general public, suggests that surveys for new data need to be taken to a new level.
On the statistics of increments in strong Alfvenic turbulence
NASA Astrophysics Data System (ADS)
Palacios, J. C.; Perez, J. C.
2017-12-01
In-situ measurements have shown that the solar wind is dominated by non-compressive Alfvén-like fluctuations of plasma velocity and magnetic field over a broad range of scales. In this work, we present recent progress in understanding intermittency in Alfvenic turbulence by investigating the statistics of Elsasser increments from simulations of steadily driven Reduced MHD with numerical resolutions up to 2048^3. The nature of these statistics bears a close relation to the fundamental properties of the small-scale structures in which the turbulence is ultimately dissipated, and therefore has profound implications for the possible contribution of turbulence to the heating of the solar wind. We extensively investigate the properties and three-dimensional structure of probability density functions (PDFs) of increments and compare with recent phenomenological models of intermittency in MHD turbulence.
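The increment statistics described above can be sketched on a synthetic signal: for a Gaussian stand-in, the flatness (fourth standardized moment) of the increment PDF stays near 3 at every lag, whereas an intermittent turbulent field would show flatness growing at small lags. The signal below is illustrative, not simulation data:

```python
import random
import statistics

random.seed(1)
# Synthetic 1-D Gaussian signal standing in for an Elsasser variable z+(x);
# a real RMHD field is intermittent, this stand-in is not.
z = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def increments(field, lag):
    """delta z(x; l) = z(x + l) - z(x), the basic statistic of intermittency."""
    return [field[i + lag] - field[i] for i in range(len(field) - lag)]

def flatness(vals):
    """Fourth standardized moment of the increment PDF: 3 for Gaussian
    statistics, rising at small lags when the field is intermittent."""
    m = statistics.fmean(vals)
    s2 = statistics.fmean([(v - m) ** 2 for v in vals])
    m4 = statistics.fmean([(v - m) ** 4 for v in vals])
    return m4 / s2 ** 2

for lag in (1, 10, 100):
    print(lag, round(flatness(increments(z, lag)), 2))  # all near 3.0 here
```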
Line roughness improvements on self-aligned quadruple patterning by wafer stress engineering
NASA Astrophysics Data System (ADS)
Liu, Eric; Ko, Akiteru; Biolsi, Peter; Chae, Soo Doo; Hsieh, Chia-Yun; Kagaya, Munehito; Lee, Choongman; Moriya, Tsuyoshi; Tsujikawa, Shimpei; Suzuki, Yusuke; Okubo, Kazuya; Imai, Kiyotaka
2018-04-01
In integrated circuit and memory devices, size shrinkage has been the most effective method to reduce production cost and enable the steady increase in the number of transistors per unit area over the past few decades. In order to reduce die and feature sizes, it is necessary to miniaturize pattern features in advanced node development. At sub-10 nm nodes, extreme ultraviolet lithography (EUV) and multi-patterning solutions based on 193 nm immersion lithography are the two most common options to achieve the size requirement. In such small line and space patterns, line width roughness (LWR) and line edge roughness (LER) contribute a significant amount of process variation that impacts both physical and electrical performance. In this paper, we focus on optimizing line roughness performance by using wafer stress engineering on a 30 nm pitch line and space pattern. This pattern is generated by a self-aligned quadruple patterning (SAQP) technique for the potential application of fin formation. Our investigation starts by comparing film materials and stress levels in various processing steps and material selections in the SAQP integration scheme. From the cross-matrix comparison, we are able to determine the best film stack and stress combination to achieve the lowest line roughness while maintaining pattern validity after fin etch. This stack is also used to study step-by-step line roughness performance from SAQP to fin etch. Finally, we show successful patterning of a 30 nm pitch line and space SAQP scheme with 1 nm line roughness performance.
Multibeam collimator uses prism stack
NASA Technical Reports Server (NTRS)
Minott, P. O.
1981-01-01
Optical instrument creates many divergent light beams for surveying and machine element alignment applications. Angles and refractive indices of stack of prisms are selected to divert incoming laser beam by small increments, different for each prism. Angles of emerging beams thus differ by small, precisely-controlled amounts. Instrument is nearly immune to vibration, changes in gravitational force, temperature variations, and mechanical distortion.
The validity of the ActiPed for physical activity monitoring.
Brown, D K; Grimwade, D; Martinez-Bussion, D; Taylor, M J D; Gladwell, V F
2013-05-01
The ActiPed (FitLinxx) is a uniaxial accelerometer, which objectively measures physical activity, uploads the data wirelessly to a website, allowing participants and researchers to view activity levels remotely. The aim was to validate ActiPed's step count, distance travelled and activity time against direct observation. Further, to compare against pedometer (YAMAX), accelerometer (ActiGraph) and manufacturer's guidelines. 22 participants, aged 28±7 years, undertook 4 protocols, including walking on different surfaces and incremental running protocol (from 2 mph to 8 mph). Bland-Altman plots allowed comparison of direct observation against ActiPed estimates. For step count, the ActiPed showed a low % bias in all protocols: walking on a treadmill (-1.30%), incremental treadmill protocol (-1.98%), walking over grass (-1.67%), and walking over concrete (-0.93%). When differentiating between walking and running step count the ActiPed showed a % bias of 4.10% and -6.30%, respectively. The ActiPed showed >95% accuracy for distance and duration estimations overall, although underestimated distance (p<0.01) for walking over grass and concrete. Overall, the ActiPed showed acceptable levels of accuracy comparable to previous validated pedometers and accelerometers. The accuracy combined with the simple and informative remote gathering of data, suggests that the ActiPed could be a useful tool in objective physical activity monitoring. © Georg Thieme Verlag KG Stuttgart · New York.
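The % bias figures above are Bland-Altman-style mean paired differences relative to direct observation. A minimal sketch with invented paired counts (not study data):

```python
import statistics

def percent_bias(device, observed):
    """Mean paired difference relative to direct observation, expressed
    as a percentage (the bias term of a Bland-Altman comparison)."""
    return statistics.fmean((d - o) / o * 100.0 for d, o in zip(device, observed))

# Hypothetical paired step counts: direct observation vs. device estimate
observed = [500, 520, 480, 510]
device = [495, 512, 470, 508]
print(round(percent_bias(device, observed), 2))  # -1.25 (device undercounts)
```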
Vieira, Marcus Fraga; de Sá E Souza, Gustavo Souto; Lehnen, Georgia Cristina; Rodrigues, Fábio Barbosa; Andrade, Adriano O
2016-10-01
The purpose of this study was to determine whether general fatigue induced by an incremental maximal exercise test (IMET) affects gait stability and variability in healthy subjects. Twenty-two young healthy male subjects walked on a treadmill at preferred walking speed for 4min prior to the test (PreT), which was followed by three series of 4min of walking with 4min of rest among them. Gait variability was assessed using walk ratio (WR), calculated as step length normalized by step frequency, root mean square (RMSratio) of trunk acceleration, standard deviation of medial-lateral trunk acceleration between strides (VARML), and coefficients of variation of step frequency (SFCV), length (SLCV) and width (SWCV). Gait stability was assessed using margin of stability (MoS) and local dynamic stability (λs). VARML, SFCV, SLCV and SWCV increased after the test, indicating an increase in gait variability. MoS decreased and λs increased after the test, indicating a decrease in gait stability. All variables showed a trend to return to PreT values, but the 20-min post-test interval appears not to be enough for a complete recovery. The results showed that general fatigue induced by IMET negatively alters gait, and an interval of at least 20min should be considered for injury prevention in tasks with similar demands. Copyright © 2016 Elsevier Ltd. All rights reserved.
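Two of the variability measures above are simple to compute; a sketch with invented stride data (values are placeholders, not study measurements):

```python
import statistics

def walk_ratio(step_length_m, step_freq_hz):
    """Walk ratio (WR): step length normalized by step frequency."""
    return step_length_m / step_freq_hz

def cv_percent(values):
    """Coefficient of variation, the per-stride variability index used
    above for step frequency, length and width."""
    return statistics.stdev(values) / statistics.fmean(values) * 100.0

step_lengths = [0.70, 0.72, 0.69, 0.71, 0.70]   # invented strides, metres
print(round(walk_ratio(0.70, 1.8), 3))           # 0.389 (m per step/s)
print(round(cv_percent(step_lengths), 2))        # 1.62 (% variability)
```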
Model based design introduction: modeling game controllers to microprocessor architectures
NASA Astrophysics Data System (ADS)
Jungwirth, Patrick; Badawy, Abdel-Hameed
2017-04-01
We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. Model based design is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is: to solve a problem - a step at a time. The approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can be simulated with the real world sensor data. The output from the simulated digital control system can then be compared to the old analog based control system. Model based design can be compared to Agile software development. The Agile software development goal is to develop working software in incremental steps. Progress is measured in completed and tested code units. Progress is measured in model based design by completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on the RISC-V.
Groom, Madeleine J; Young, Zoe; Hall, Charlotte L; Gillott, Alinda; Hollis, Chris
2016-09-30
There is a clinical need for objective evidence-based measures that are sensitive and specific to ADHD when compared with other neurodevelopmental disorders. This study evaluated the incremental validity of adding an objective measure of activity and computerised cognitive assessment to clinical rating scales to differentiate adult ADHD from Autism spectrum disorders (ASD). Adults with ADHD (n=33) or ASD (n=25) performed the QbTest, comprising a Continuous Performance Test with motion-tracker to record physical activity. QbTest parameters measuring inattention, impulsivity and hyperactivity were combined to provide a summary score ('QbTotal'). Binary stepwise logistic regression measured the probability of assignment to the ADHD or ASD group based on scores on the Conners Adult ADHD Rating Scale-subscale E (CAARS-E) and Autism Quotient (AQ10) in the first step and then QbTotal added in the second step. The model fit was significant at step 1 (CAARS-E, AQ10) with good group classification accuracy. These predictors were retained and QbTotal was added, resulting in a significant improvement in model fit and group classification accuracy. All predictors were significant. ROC curves indicated superior specificity of QbTotal. The findings present preliminary evidence that adding QbTest to clinical rating scales may improve the differentiation of ADHD and ASD in adults. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Mandavia, Amar D; Bonanno, George A
2018-04-29
To determine whether there were incremental mental health impacts, specifically on depression trajectories, as a result of the 2008 economic crisis (the Great Recession) and subsequent Hurricane Sandy. Using latent growth mixture modeling and the ORANJ BOWL dataset, we examined prospective trajectories of depression among older adults (mean age, 60.67; SD, 6.86) who were exposed to the 2 events. We also collected community economic and criminal justice data to examine their impact upon depression trajectories. Participants (N=1172) were assessed at 3 times for affect, successful aging, and symptoms of depression. We additionally assessed posttraumatic stress disorder (PTSD) symptomology after Hurricane Sandy. We identified 3 prospective trajectories of depression. The majority (83.6%) had no significant change in depression from before to after these events (resilience), while 7.2% of the sample increased in depression incrementally after each event (incremental depression). A third group (9.2%) went from high to low depression symptomology following the 2 events (depressive-improving). Only those in the incremental depression group had significant PTSD symptoms following Hurricane Sandy. We identified a small group of individuals for whom the experience of multiple stressful events had an incremental negative effect on mental health outcomes. These results highlight the importance of understanding the perseveration of depression symptomology from one event to another. (Disaster Med Public Health Preparedness. 2018;page 1 of 10).
Using Hand Grip Force as a Correlate of Longitudinal Acceleration Comfort for Rapid Transit Trains
Guo, Beiyuan; Gan, Weide; Fang, Weining
2015-01-01
Longitudinal acceleration comfort is one of the essential metrics used to evaluate the ride comfort of a train. The aim of this study was to investigate the effectiveness of using hand grip force as a correlate of the longitudinal acceleration comfort of rapid transit trains. In the paper, a motion simulation system was set up and a two-stage experiment was designed to investigate the role of grip force in the longitudinal comfort of rapid transit trains. The results of the experiment show that the incremental grip force was linearly correlated with the longitudinal acceleration value, while the incremental grip force had no correlation with the direction of the longitudinal acceleration vector. The results also show that the effects of incremental grip force and acceleration duration on the longitudinal comfort of rapid transit trains were significant. Based on multiple regression analysis, a step function model was established to predict the longitudinal comfort of rapid transit trains using the incremental grip force and the acceleration duration. The feasibility and practicability of the model were verified by a field test. Furthermore, a comparative analysis shows that the motion simulation system and the grip-force-based model are valid tools to support laboratory studies on the longitudinal comfort of rapid transit trains. PMID:26147730
MSH3 Promotes Dynamic Behavior of Trinucleotide Repeat Tracts In Vivo.
Williams, Gregory M; Surtees, Jennifer A
2015-07-01
Trinucleotide repeat (TNR) expansions are the underlying cause of more than 40 neurodegenerative and neuromuscular diseases, including myotonic dystrophy and Huntington's disease, yet the pathway to expansion remains poorly understood. An important step in expansion is the shift from a stable TNR sequence to an unstable, expanding tract, which is thought to occur once a TNR attains a threshold length. Modeling of human data has indicated that TNR tracts are increasingly likely to expand as they increase in size and to do so in increments that are smaller than the repeat itself, but this has not been tested experimentally. Genetic work has implicated the mismatch repair factor MSH3 in promoting expansions. Using Saccharomyces cerevisiae as a model for CAG and CTG tract dynamics, we examined individual threshold-length TNR tracts in vivo over time in MSH3 and msh3Δ backgrounds. We demonstrate, for the first time, that these TNR tracts are highly dynamic. Furthermore, we establish that once such a tract has expanded by even a few repeat units, it is significantly more likely to expand again. Finally, we show that threshold-length TNR sequences readily accumulate net incremental expansions over time through a series of small expansion and contraction events. Importantly, the tracts were substantially stabilized in the msh3Δ background, with a bias toward contractions, indicating that Msh2-Msh3 plays an important role in shifting the expansion-contraction equilibrium toward expansion in the early stages of TNR tract expansion. Copyright © 2015 by the Genetics Society of America.
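The expansion-contraction dynamics described above can be caricatured as a biased random walk in small increments. This toy model and all of its parameters are illustrative assumptions, not the authors' analysis:

```python
import random

def simulate_tract(start_len, steps, p_expand, max_step=3, seed=0):
    """Toy random walk: the tract gains or loses a few repeat units per
    event; p_expand > 0.5 mimics the Msh2-Msh3 expansion bias, while
    p_expand < 0.5 mimics the contraction bias seen in msh3-delta cells."""
    rng = random.Random(seed)
    length = start_len
    for _ in range(steps):
        delta = rng.randint(1, max_step)   # increments smaller than the tract
        length += delta if rng.random() < p_expand else -delta
        length = max(length, 0)
    return length

print(simulate_tract(25, 200, p_expand=0.6))   # net incremental expansion
print(simulate_tract(25, 200, p_expand=0.4))   # net contraction
```

Even a modest bias per event accumulates into a large net expansion over many small steps, which is the qualitative behavior the paper reports.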
Plan-Do-Check-Act and the Management of Institutional Research. AIR 1992 Annual Forum Paper.
ERIC Educational Resources Information Center
McLaughlin, Gerald W.; Snyder, Julie K.
This paper describes the application of a Total Quality Management strategy called Plan-Do-Check-Act (PDCA) to the projects and activities of an institutional research office at the Virginia Polytechnic Institute and State University. PDCA is a cycle designed to facilitate incremental continual improvement through change. The specific steps are…
Review of Models of Beam-Noise Statistics
1977-11-01
depth. Rays are traced according to Snell's Law from the receiver depth in 10° vertical-angle steps for one cycle. If the 10° increments are not...
A methodology for evaluation of a markup-based specification of clinical guidelines.
Shalom, Erez; Shahar, Yuval; Taieb-Maimon, Meirav; Lunenfeld, Eitan
2008-11-06
We introduce a three-phase, nine-step methodology for specification of clinical guidelines (GLs) by expert physicians, clinical editors, and knowledge engineers, and for quantitative evaluation of the specification's quality. We applied this methodology to a particular framework for incremental GL structuring (mark-up) and to GLs in three clinical domains with encouraging results.
Incremental checking of Master Data Management model based on contextual graphs
NASA Astrophysics Data System (ADS)
Lamolle, Myriam; Menet, Ludovic; Le Duc, Chan
2015-10-01
The validation of models is a crucial step in distributed heterogeneous systems. In this paper, an incremental validation method is proposed in the scope of a Model Driven Engineering (MDE) approach, which is used to develop a Master Data Management (MDM) field represented by XML Schema models. The MDE approach presented in this paper is based on the definition of an abstraction layer using UML class diagrams. The validation method aims to minimise the model errors and to optimise the process of model checking. Therefore, the notion of validation contexts is introduced, allowing the verification of data model views. Description logics specify constraints that the models have to check. An experimentation of the approach is presented through an application developed in the ArgoUML IDE.
40 CFR 60.1590 - When must I complete each increment of progress?
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion Units Constructed on or Before August 30, 1999 Model Rule...
Mehand, Massinissa Si; Srinivasan, Bala; De Crescenzo, Gregory
2015-01-01
Surface plasmon resonance-based biosensors have been successfully applied to the study of the interactions between macromolecules and small molecular weight compounds. In an effort to increase the throughput of these SPR-based experiments, we have already proposed to inject multiple compounds simultaneously over the same surface. When specifically applied to small molecular weight compounds, such a strategy would however require prior knowledge of the refractive index increment of each compound in order to correctly interpret the recorded signal. An additional experiment is typically required to obtain this information. In this manuscript, we show that through the introduction of an additional global parameter corresponding to the ratio of the saturating signals associated with each molecule, the kinetic parameters could be identified with similar confidence intervals without any other experimentation. PMID:26515024
Erkens, C G M; Dinmohamed, A G; Kamphorst, M; Toumanian, S; van Nispen-Dobrescu, R; Alink, M; Oudshoorn, N; Mensen, M; van den Hof, S; Borgdorff, M; Verver, S
2014-04-01
Interferon-gamma release assays (IGRAs) are reported to be more specific for the diagnosis of latent tuberculous infection (LTBI) than the tuberculin skin test (TST). The two-step procedure, TST followed by an IGRA, is reported to be cost-effective in high-income countries, but it requires more financial resources. To assess the added value of IGRA compared to TST alone in the Netherlands. Test results and background data on persons tested with an IGRA were recorded by the Public Municipal Health Services in a web-based database. The number of persons diagnosed with LTBI using different screening algorithms was calculated. In those tested with an IGRA, at least 60% of persons who would have been diagnosed with LTBI based on TST alone had a negative IGRA. Among those with a TST reaction below the cut-off for the diagnosis of LTBI, 13% had a positive IGRA. For 41% of persons tested with an IGRA after TST, the IGRA influenced whether or not an LTBI diagnosis would be made. With the IGRA as reference standard, a high proportion of persons in low-prevalence settings are treated unnecessarily for LTBI if tested with TST alone, while a small proportion eligible for preventive treatment are missed. Incremental costs of the two-step strategy seem to be balanced by the improved targeting of preventive treatment.
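The treatment-sparing effect of the two-step algorithm is simple arithmetic. Only the "at least 60% IGRA-negative" figure below comes from the abstract; the cohort size is invented for illustration:

```python
# Hypothetical cohort of 1,000 persons who would be diagnosed with LTBI
# on TST alone; per the abstract, at least 60% of them test IGRA-negative.
tst_positive = 1_000
igra_negative_fraction = 0.60

treated_tst_only = tst_positive
treated_two_step = round(tst_positive * (1 - igra_negative_fraction))
print(treated_two_step)                       # 400 still treated
print(treated_tst_only - treated_two_step)    # 600 preventive treatments avoided
```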
NASA Astrophysics Data System (ADS)
Blain, Matthew G.; Riter, Leah S.; Cruz, Dolores; Austin, Daniel E.; Wu, Guangxiang; Plass, Wolfgang R.; Cooks, R. Graham
2004-08-01
Breakthrough improvements in simplicity and reductions in the size of mass spectrometers are needed for high-consequence fieldable applications, including error-free detection of chemical/biological warfare agents, medical diagnoses, and explosives and contraband discovery. These improvements are most likely to be realized with the reconceptualization of the mass spectrometer, rather than by incremental steps towards miniaturization. Microfabricated arrays of mass analyzers represent such a conceptual advance. A massively parallel array of micrometer-scaled mass analyzers on a chip has the potential to set the performance standard for hand-held sensors due to the inherent selectivity, sensitivity, and universal applicability of mass spectrometry as an analytical method. While the effort to develop a complete micro-MS system must include innovations in ultra-small-scale sample introduction, ion sources, mass analyzers, detectors, and vacuum and power subsystems, the first step towards radical miniaturization lies in the design, fabrication, and characterization of the mass analyzer itself. In this paper we discuss design considerations and results from simulations of ion trapping behavior for a micrometer-scale cylindrical ion trap (CIT) mass analyzer (internal radius r0 = 1 μm). We also present a description of the design and microfabrication of a 0.25 cm² array of 10^6 one-micrometer CITs, including integrated ion detectors, constructed in tungsten on a silicon substrate.
NASA Astrophysics Data System (ADS)
Störkle, Denis Daniel; Seim, Patrick; Thyssen, Lars; Kuhlenkötter, Bernd
2016-10-01
This article describes new developments in an incremental, robot-based sheet metal forming process (`Roboforming') for the production of sheet metal components in small lot sizes and prototypes. The dieless kinematic-based generation of the shape is implemented by means of two industrial robots, which are interconnected to form a cooperating robot system. Compared to other incremental sheet metal forming (ISF) machines, this system offers high geometrical form flexibility without the need for part-dependent tools. The industrial application of ISF is still limited by certain constraints, e.g. low geometrical accuracy. Responding to these constraints, the authors present the influence of the part orientation and the forming sequence on the geometric accuracy. This influence is illustrated by various experimental results shown and interpreted within this article.
Thermomechanical simulations and experimental validation for high speed incremental forming
NASA Astrophysics Data System (ADS)
Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia
2016-10-01
Incremental sheet forming (ISF) consists of deforming only a small region of the workpiece through a punch driven by a NC machine. The drawback of this process is its slowness. In this study, a high speed variant has been investigated from both numerical and experimental points of view. The aim has been the design of a FEM model able to reproduce the material behavior during the high speed process by defining a thermomechanical model. An experimental campaign has been performed on a CNC lathe at high speed to test process feasibility. The first results have shown that the material presents the same performance as in conventional speed ISF and, in some cases, better material behavior due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high speed process, substantially confirming the experimental evidence.
Transport of Internetwork Magnetic Flux Elements in the Solar Photosphere
NASA Astrophysics Data System (ADS)
Agrawal, Piyush; Rast, Mark P.; Gošić, Milan; Bellot Rubio, Luis R.; Rempel, Matthias
2018-02-01
The motions of small-scale magnetic flux elements in the solar photosphere can provide some measure of the Lagrangian properties of the convective flow. Measurements of these motions have been critical in estimating the turbulent diffusion coefficient in flux-transport dynamo models and in determining the Alfvén wave excitation spectrum for coronal heating models. We examine the motions of internetwork flux elements in Hinode/Narrowband Filter Imager magnetograms and study the scaling of their mean squared displacement and the shape of their displacement probability distribution as a function of time. We find that the mean squared displacement scales super-diffusively with a slope of about 1.48. Super-diffusive scaling has been observed in other studies for temporal increments as small as 5 s, increments over which ballistic scaling would be expected. Using high-cadence MURaM simulations, we show that the observed super-diffusive scaling at short increments is a consequence of random changes in barycenter positions due to flux evolution. We also find that for long temporal increments, beyond granular lifetimes, the observed displacement distribution deviates from that expected for a diffusive process, evolving from Rayleigh to Gaussian. This change in distribution can be modeled analytically by accounting for supergranular advection along with granular motions. These results complicate the interpretation of magnetic element motions as strictly advective or diffusive on short and long timescales and suggest that measurements of magnetic element motions must be used with caution in turbulent diffusion or wave excitation models. We propose that passive tracer motions in measured photospheric flows may yield more robust transport statistics.
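The mean-squared-displacement scaling described above can be estimated directly from tracked element positions. A minimal sketch of that estimator (assuming tracks are stored as a NumPy array of x, y positions per element; the function name and layout are hypothetical, not from the paper):

```python
import numpy as np

def msd_exponent(tracks, dt=1.0):
    """Estimate the scaling exponent of the mean squared displacement.

    tracks: array of shape (n_tracks, n_steps, 2) holding x, y positions.
    Returns the log-log slope of MSD(tau) vs. tau: ~1 for diffusive motion,
    ~2 for ballistic motion, between the two for super-diffusive transport.
    """
    n_tracks, n_steps, _ = tracks.shape
    taus = np.arange(1, n_steps // 2)
    # Average squared displacement over all tracks and all start times.
    msd = np.array([
        np.mean(np.sum((tracks[:, tau:, :] - tracks[:, :-tau, :]) ** 2, axis=-1))
        for tau in taus
    ])
    slope, _ = np.polyfit(np.log(taus * dt), np.log(msd), 1)
    return slope
```

A slope near 1.48, as reported above, would sit between the diffusive and ballistic limits.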
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodenough, D; Olafsdottir, H; Olafsson, I
Purpose: To automatically quantify the amount of missing tissue in a digital breast tomosynthesis system using four stair-stepped chest wall missing tissue gauges in the Tomophan™ from the Phantom Laboratory and image processing from Image Owl. Methods: The Tomophan™ phantom incorporates four stair-stepped missing tissue gauges by the chest wall, allowing measurement of missing chest wall in two different locations along the chest wall at two different heights. Each of the four gauges has 12 steps in 0.5 mm increments rising from the chest wall. An image processing algorithm was developed by Image Owl that first finds the two slices containing the steps, then finds the signal through the highest step in all four gauges. Using the signal drop at the beginning of each gauge, the distance to the end of the image gives the length of the missing tissue gauge in millimeters. Results: The Tomophan™ was imaged in digital breast tomosynthesis (DBT) systems from various vendors, resulting in 46 cases used for testing. The results showed that on average 1.9 mm of the 6 mm gauges is visible. A small focus group was asked to count the number of visible steps for each case, which resulted in good agreement between observer counts and computed data. Conclusion: First, the results indicate that the amount of missing chest wall can differ between vendors. Second, it was shown that an automated method to estimate the amount of missing chest wall gauges agreed well with observer assessments. This finding indicates that consistency testing may be simplified using the Tomophan™ phantom and analysis by an automated image processing method named Tomo QA. In general, the missing chest wall may be a function of the beam profile at the chest wall as DBT projects through the angular sampling. Research supported by Image Owl, Inc., The Phantom Laboratory, Inc. and Raforninn ehf; Mallozzi and Healy are employed by The Phantom Laboratory, Inc.; Goodenough is a consultant to The Phantom Laboratory, Inc.; Fredriksson, Kristbjornsson, Olafsson, Oskarsdottir and Olafsdottir are employed by Raforninn, Ehf.
NASA Astrophysics Data System (ADS)
Minotti, Luca; Savaré, Giuseppe
2018-02-01
We propose the new notion of Visco-Energetic solutions to rate-independent systems (X, E, d) driven by a time-dependent energy E and a dissipation quasi-distance d in a general metric-topological space X. As for the classic Energetic approach, solutions can be obtained by solving a modified time Incremental Minimization Scheme, where at each step the dissipation quasi-distance d is incremented by a viscous correction δ (for example proportional to the square of the distance d), which penalizes far-distance jumps by inducing a localized version of the stability condition. We prove a general convergence result and a typical characterization by Stability and Energy Balance in a setting comparable to the standard energetic one, thus capable of covering a wide range of applications. The new refined Energy Balance condition compensates for the localized stability and provides a careful description of the jump behavior: at every jump the solution follows an optimal transition, which resembles in a suitable variational sense the discrete scheme that has been implemented for the whole construction.
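As a hedged sketch (notation adapted here, not the authors' exact statement), the viscously corrected Incremental Minimization Scheme selects each discrete state by

```latex
X^{n}_{\tau} \in \operatorname*{arg\,min}_{x \in X}
  \Big( E(t^{n}_{\tau}, x) \;+\; d(X^{n-1}_{\tau}, x)
        \;+\; \delta\big(d(X^{n-1}_{\tau}, x)\big) \Big),
\qquad \text{e.g. } \delta(r) = \mu\, r^{2},
```

so that the viscous correction δ penalizes large jumps; the classical Energetic scheme is recovered when δ ≡ 0.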
Variational formulation for dissipative continua and an incremental J-integral
NASA Astrophysics Data System (ADS)
Rahaman, Md. Masiur; Dhas, Bensingh; Roy, D.; Reddy, J. N.
2018-01-01
Our aim is to rationally formulate a proper variational principle for dissipative (viscoplastic) solids in the presence of inertia forces. As a first step, a consistent linearization of the governing nonlinear partial differential equations (PDEs) is carried out. An additional set of complementary (adjoint) equations is then formed to recover an underlying variational structure for the augmented system of linearized balance laws. This makes it possible to introduce an incremental Lagrangian such that the linearized PDEs, including the complementary equations, become the Euler-Lagrange equations. Continuous groups of symmetries of the linearized PDEs are computed and an analysis is undertaken to identify the variational groups of symmetries of the linearized dissipative system. Application of Noether's theorem leads to the conservation laws (conserved currents) of motion corresponding to the variational symmetries. As a specific outcome, we exploit translational symmetries of the functional in the material space and recover, via Noether's theorem, an incremental J-integral for viscoplastic solids in the presence of inertia forces. Numerical demonstrations are provided through a two-dimensional plane strain numerical simulation of a compact tension specimen of annealed mild steel under dynamic loading.
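For orientation, the classical quasi-static J-integral that the incremental version generalizes has the familiar contour form (Rice's definition; the symbols here are the standard ones, not taken from the paper):

```latex
J \;=\; \int_{\Gamma} \left( W \,\mathrm{d}y
      \;-\; T_{i}\,\frac{\partial u_{i}}{\partial x}\,\mathrm{d}s \right),
```

where W is the strain energy density, T_i the traction on the contour Γ surrounding the crack tip, and u_i the displacement field; the paper derives an analogous incremental quantity for viscoplastic solids with inertia via Noether's theorem.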
Two models of minimalist, incremental syntactic analysis.
Stabler, Edward P
2013-07-01
Minimalist grammars (MGs) and multiple context-free grammars (MCFGs) are weakly equivalent in the sense that they define the same languages, a large mildly context-sensitive class that properly includes context-free languages. But in addition, for each MG, there is an MCFG which is strongly equivalent in the sense that it defines the same language with isomorphic derivations. However, the structure-building rules of MGs but not MCFGs are defined in a way that generalizes across categories. Consequently, MGs can be exponentially more succinct than their MCFG equivalents, and this difference shows in parsing models too. An incremental, top-down beam parser for MGs is defined here, sound and complete for all MGs, and hence also capable of parsing all MCFG languages. But since the parser represents its grammar transparently, the relative succinctness of MGs is again evident. Although the determinants of MG structure are narrowly and discretely defined, probabilistic influences from a much broader domain can influence even the earliest analytic steps, allowing frequency and context effects to come early and from almost anywhere, as expected in incremental models. Copyright © 2013 Cognitive Science Society, Inc.
"Small Steps, Big Rewards": Preventing Type 2 Diabetes
These are the plain facts in "Small Steps. Big Rewards: Prevent Type 2 Diabetes," an education campaign.
Climate Change and Forests of the Future: Managing in the Face of Uncertainty
Constance Millar; Nathan L. Stephenson; Scott L. Stephens
2007-01-01
We offer a conceptual framework for managing forested ecosystems under an assumption that future environments will be different from present but that we cannot be certain about the specifics of change. We encourage flexible approaches that promote reversible and incremental steps, and that favor ongoing learning and capacity to modify direction as situations change. We...
Energy Based Multiscale Modeling with Non-Periodic Boundary Conditions
2013-05-13
below in Figure 8. At each incremental step in the analysis, the user-defined material subroutine (UMAT) was utilized to perform the communication... initiation and modeling using XFEM. Appropriate localization schemes will be developed to allow for deformations conducive to crack opening...
Glaser, John P
2008-01-01
Partners Healthcare, and its affiliated hospitals, have a long track record of accomplishments in clinical information systems implementations and research. Seven ideas have shaped the information systems strategies and tactics at Partners: centrality of processes, organizational partnerships, progressive incrementalism, agility, architecture, embedded research, and engage the field. This article reviews the ideas and discusses the rationale and steps taken to put the ideas into practice. PMID:18308978
Effects of Age on Maximal Work Capacity in Women Aged 18-48 Years.
ERIC Educational Resources Information Center
Hartung, G. Harley; And Others
Fifty-six healthy nontrained women aged 18 to 48 were tested for maximal work capacity on a bicycle ergometer. The women were divided into three age groups. A continuous step-increment bicycle ergometer work test was administered with the workload starting at 150 kpm/min (kilopond-meters per minute) and 50 pedal rpm (revolutions per minute). The workload…
Extending a Powerful Idea. Artificial Intelligence Memo No. 590.
ERIC Educational Resources Information Center
Lawler, Robert W.
This document focuses on the use of a computer and the LOGO programing language by an eight-year-old boy. The stepping of variables, which is the development and incrementally changing of one of several variables, is an idea that is followed in one child's mind as he effectively directs himself in a freely-chosen problem-solving situation. The…
Analysis of Trajectory Parameters for Probe and Round-Trip Missions to Venus
NASA Technical Reports Server (NTRS)
Dugan, James F., Jr.; Simsic, Carl R.
1960-01-01
For one-way transfers between Earth and Venus, charts are obtained that show velocity, time, and angle parameters as functions of the eccentricity and semilatus rectum of the Sun-focused vehicle conic. From these curves, others are obtained that are useful in planning one-way and round-trip missions to Venus. The analysis is characterized by circular coplanar planetary orbits, successive two-body approximations, impulsive velocity changes, and circular parking orbits at 1.1 planet radii. For round trips the mission time considered ranges from 65 to 788 days, while wait time spent in the parking orbit at Venus ranges from 0 to 467 days. Individual velocity increments, one-way travel times, and departure dates are presented for round trips requiring the minimum total velocity increment. For both single-pass and orbiting Venusian probes, the time span available for launch becomes appreciable with only a small increase in velocity-increment capability above the minimum requirement. Velocity-increment increases are much more effective in reducing travel time for single-pass probes than they are for orbiting probes. Round trips composed of a direct route along an ellipse tangent to Earth's orbit and an aphelion route result in the minimum total velocity increment for wait times less than 100 days and mission times ranging from 145 to 612 days. Minimum-total-velocity-increment trips may be taken along perihelion-perihelion routes for wait times ranging from 300 to 467 days. These wait times occur during missions lasting from 640 to 759 days.
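Under the abstract's assumptions (circular coplanar planetary orbits, impulsive velocity changes), the minimum one-way heliocentric velocity increments correspond to a Hohmann-style transfer ellipse tangent to both orbits. A sketch of that calculation (heliocentric legs only, ignoring planetary gravity and the 1.1-radii parking orbits; constants are approximate):

```python
import math

MU_SUN = 1.32712440018e11  # km^3/s^2, solar gravitational parameter
R_EARTH = 1.496e8          # km, Earth orbit radius (approx.)
R_VENUS = 1.082e8          # km, Venus orbit radius (approx.)

def hohmann_dv(r1, r2, mu=MU_SUN):
    """Heliocentric delta-v pair for a transfer ellipse tangent to two circular orbits."""
    a = 0.5 * (r1 + r2)                            # transfer semi-major axis
    v1_circ = math.sqrt(mu / r1)                   # circular speed at departure
    v2_circ = math.sqrt(mu / r2)                   # circular speed at arrival
    v1_tr = math.sqrt(mu * (2.0 / r1 - 1.0 / a))   # transfer-ellipse speed at r1
    v2_tr = math.sqrt(mu * (2.0 / r2 - 1.0 / a))   # transfer-ellipse speed at r2
    return abs(v1_tr - v1_circ), abs(v2_tr - v2_circ)

dv_depart, dv_arrive = hohmann_dv(R_EARTH, R_VENUS)
```

For Earth-to-Venus this gives roughly 2.5 km/s at departure and 2.7 km/s at arrival; the paper's actual velocity increments differ because they include escape from and capture into the parking orbits.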
Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora
NASA Astrophysics Data System (ADS)
Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke
The task of inducing grammar structures has received a great deal of attention. Researchers' motivations differ: to use grammar induction as the first stage in building large treebanks, or to build better language models. However, grammar induction has inherent computational complexity. To overcome it, some grammar induction algorithms add new production rules incrementally. They refine the grammar while keeping their computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms which learn a grammar incrementally, our algorithm uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report results of learning experiments in terms of learning speed. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, it reduces the time required for learning. This constant-time learning considerably affects learning time for larger grammars. We also report results of evaluating criteria for choosing nonterminals. Our algorithm refines a grammar based on a nonterminal in each step. Since there can be several criteria for deciding which nonterminal is best, we evaluate them by learning experiments.
NASA Astrophysics Data System (ADS)
Afandi, M. I.; Adinanta, H.; Setiono, A.; Qomaruddin; Widiyatmoko, B.
2018-03-01
There are many ways to measure landslide displacement using sensors such as multi-turn potentiometers, fiber optic strain sensors, GPS, geodetic measurement, ground penetrating radar, etc. The proposed approach uses an optical encoder that produces a pulse signal with high stability of measurement resolution despite voltage source instability. Landslide measurement using an extensometer based on an optical encoder offers high resolution over a wide measurement range and for a long period of time. An incremental optical encoder provides information about the pulse count and direction of a rotating shaft by producing one quadrature square wave cycle per increment of shaft movement. Measurement results using an optical encoder with 2,000 pulses per revolution have been obtained. The resolution of the extensometer is 36 μm with a speed limit of about 3.6 cm/s. A system test in a landslide hazard area has been carried out with good reliability for small landslide displacement monitoring.
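The quadrature output of an incremental encoder can be decoded in software by tracking valid (A, B) state transitions. A minimal sketch of that decoding (the 36 μm-per-count resolution is taken from the abstract; the function names and state convention are hypothetical):

```python
# Valid quadrature transitions for the Gray cycle 00 -> 01 -> 11 -> 10 -> 00.
# Forward transitions count +1, reverse transitions count -1; anything else
# (a repeated or skipped state) contributes 0.
TRANSITIONS = {
    ((0, 0), (0, 1)): +1, ((0, 1), (1, 1)): +1,
    ((1, 1), (1, 0)): +1, ((1, 0), (0, 0)): +1,
    ((0, 1), (0, 0)): -1, ((1, 1), (0, 1)): -1,
    ((1, 0), (1, 1)): -1, ((0, 0), (1, 0)): -1,
}

def decode_counts(samples):
    """Net signed count from a sequence of sampled (A, B) channel states."""
    return sum(TRANSITIONS.get((prev, cur), 0)
               for prev, cur in zip(samples, samples[1:]))

def displacement_m(samples, metres_per_count=36e-6):
    """Convert decoded counts to displacement, using the abstract's resolution."""
    return decode_counts(samples) * metres_per_count
```

Because only the transition pattern matters, the decoded count is insensitive to amplitude drift in the supply voltage, which is the stability property the abstract highlights.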
"Small Steps, Big Rewards": You Can Prevent Type 2 Diabetes
Those are the basic facts of "Small Steps. Big Rewards: Prevent Type 2 Diabetes," created by ...
Springback effects during single point incremental forming: Optimization of the tool path
NASA Astrophysics Data System (ADS)
Giraud-Moreau, Laurence; Belchior, Jérémy; Lafon, Pascal; Lotoing, Lionel; Cherouat, Abel; Courtielle, Eric; Guines, Dominique; Maurine, Patrick
2018-05-01
Incremental sheet forming is an emerging process to manufacture sheet metal parts. This process is more flexible than conventional ones and well suited for small batch production or prototyping. During the process, the sheet metal blank is clamped by a blank-holder and a small smooth-end hemispherical tool moves along a user-specified path to deform the sheet incrementally. Classical three-axis CNC milling machines, dedicated structures or serial robots can be used to perform the forming operation. Whatever machine is considered, large deviations between the theoretical shape and the real shape can be observed after the part is unclamped. These deviations are due both to the lack of stiffness of the machine and to residual stresses in the part at the end of the forming stage. In this paper, an optimization strategy for the tool path is proposed in order to minimize the elastic springback induced by residual stresses after unclamping. A finite element model of the SPIF process allowing shape prediction of the formed part with good accuracy is defined. This model, based on appropriate assumptions, leads to calculation times that remain compatible with an optimization procedure. The proposed optimization method is based on an iterative correction of the tool path. The efficiency of the method is shown by an improvement of the final shape.
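An iterative tool-path correction of the kind described above can be sketched as a fixed-point loop that mirrors the residual shape deviation back onto the path. This is a generic displacement-adjustment sketch, not the authors' exact algorithm; `simulate` stands in for the finite element model:

```python
import numpy as np

def correct_tool_path(target, simulate, alpha=1.0, iters=5):
    """Iteratively offset the tool path by the predicted springback deviation.

    target: desired final geometry (array of heights/coordinates)
    simulate: callable returning the predicted formed geometry for a path
    alpha: relaxation factor on the correction
    """
    path = np.asarray(target, dtype=float).copy()
    for _ in range(iters):
        formed = simulate(path)                  # FE prediction incl. springback
        path = path + alpha * (target - formed)  # push the path against the error
    return path
```

With a springback response that is roughly proportional to the commanded depth, the loop converges geometrically: each iteration over-forms the sheet just enough that the relaxed part lands on the target shape.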
NASA Technical Reports Server (NTRS)
Monta, W. J.
1980-01-01
The effects of conventional and square stores on the longitudinal aerodynamic characteristics of a fighter aircraft configuration at Mach numbers of 1.6, 1.8, and 2.0 were investigated. Five conventional store configurations and six arrangements of a square store configuration were studied. All store configurations produced small, positive increments in the pitching moment throughout the angle-of-attack range, but the configuration with area-ruled wing tanks also showed a slight decrease in stability at the higher angles of attack. There were some small changes in lift coefficient due to the addition of the stores, causing the drag increment to vary with the lift coefficient. As a result, there were corresponding changes in the increments of the maximum lift-drag ratios. The store drag coefficient based on the cross-sectional area of the stores ranged from a maximum of 1.1 for the configuration with three Maverick missiles to a minimum of about 0.40 for the two MK-84 bombs and the arrangements with four square stores touching or two square stores in tandem. Square stores located side by side yielded about 0.50 in the aft position compared to 0.74 in the forward position.
A sustainable woody biomass biorefinery.
Liu, Shijie; Lu, Houfang; Hu, Ruofei; Shupe, Alan; Lin, Lu; Liang, Bin
2012-01-01
Woody biomass is renewable only if sustainable production is imposed. An optimum and sustainable biomass stand production rate is found to be one with the incremental growth rate at harvest equal to the average overall growth rate. Utilization of woody biomass leads to a sustainable economy. Woody biomass is comprised of at least four components: extractives, hemicellulose, lignin and cellulose. While extractives and hemicellulose are least resistant to chemical and thermal degradation, cellulose is most resistant to chemical, thermal, and biological attack. This difference, or heterogeneity, in reactivity leads to the recalcitrance of woody biomass at conversion. A selection of processes is presented together as a biorefinery based on incremental sequential deconstruction and fractionation/conversion of woody biomass to achieve efficient separation of major components. Preference is given to a biorefinery without pretreatment and detoxification processes that produce waste byproducts. While numerous biorefinery approaches are known, a focused review of integrated studies of water-based biorefinery processes is presented. Hot-water extraction is the first process step to extract value from woody biomass while improving the quality of the remaining solid material. This first step removes the extractives and hemicellulose fractions from woody biomass. While extractives and hemicellulose are largely removed in the extraction liquor, cellulose and lignin largely remain in the residual woody structure. Xylo-oligomers, aromatics and acetic acid in the hardwood extract are the major components with the greatest potential value for development. Higher temperature and longer residence time lead to higher mass removal. While high temperature (>200°C) can lead to nearly total dissolution, the amount of sugars present in the extraction liquor decreases rapidly with temperature. Dilute acid hydrolysis of concentrated wood extracts converts the wood extract into monomeric sugars.
At higher acid concentration and higher temperature the hydrolysis produced more xylose monomers in a comparatively shorter period of reaction time. Xylose is the most abundant monomeric sugar in the hydrolysate. The other comparatively small amounts of monomeric sugars include arabinose, glucose, rhamnose, mannose and galactose. Acetic acid, formic acid, furfural, HMF and other byproducts are inevitably generated during the acid hydrolysis process. Short reaction time is preferred for the hydrolysis of hot-water wood extracts. Acid hydrolysis presents a perfect opportunity for the removal or separation of aromatic materials from the wood extract/hydrolysate. The hot-water wood extract hydrolysate, after solid-removal, can be purified by Nano-membrane filtration to yield a fermentable sugar stream. Fermentation products such as ethanol can be produced from the sugar stream without a detoxification step. Copyright © 2012 Elsevier Inc. All rights reserved.
Thomas Shelton
2013-01-01
A small-plot field trial was conducted to examine the area of influence of fipronil at incremental distances away from treated plots on the Harrison Experimental Forest near Saucier, MS. Small treated (water and fipronil) plots were surrounded by untreated wooden boards in an eight-point radial pattern, and examined for evidence of termite feeding every 60 d for 1 yr...
Contact stresses in meshing spur gear teeth: Use of an incremental finite element procedure
NASA Technical Reports Server (NTRS)
Hsieh, Chih-Ming; Huston, Ronald L.; Oswald, Fred B.
1992-01-01
Contact stresses in meshing spur gear teeth are examined. The analysis is based upon an incremental finite element procedure that simultaneously determines the stresses in the contact region between the meshing teeth. The teeth themselves are modeled by two-dimensional plane strain elements. Friction effects are included, with the friction forces assumed to obey Coulomb's law. The analysis assumes that the displacements are small and that the tooth materials are linearly elastic. The analysis procedure is validated by comparing its results with the classical Hertz solution for two contacting semi-cylinders. Agreement is excellent.
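The Hertz reference solution used for validation can be written down directly. A sketch of the maximum contact pressure between two parallel elastic cylinders in line contact (standard Hertz formulas; the material values below are illustrative, not from the paper):

```python
import math

def hertz_line_contact_pmax(force, length, r1, r2, e1, nu1, e2, nu2):
    """Maximum Hertzian pressure for two parallel elastic cylinders.

    force: normal load (N); length: contact length (m);
    r1, r2: cylinder radii (m); e1, e2: Young's moduli (Pa);
    nu1, nu2: Poisson ratios.
    """
    r_eff = 1.0 / (1.0 / r1 + 1.0 / r2)                    # effective radius
    e_eff = 1.0 / ((1 - nu1**2) / e1 + (1 - nu2**2) / e2)  # effective modulus
    half_width = math.sqrt(4 * force * r_eff / (math.pi * length * e_eff))
    return 2 * force / (math.pi * half_width * length)     # peak pressure

# Illustrative case: 1 kN over 10 mm between two 20 mm radius steel cylinders.
p_max = hertz_line_contact_pmax(1e3, 0.01, 0.02, 0.02, 210e9, 0.3, 210e9, 0.3)
```

Eliminating the half-width gives the equivalent closed form p_max = sqrt(F E* / (π L R*)), which is a convenient cross-check against finite element contact stresses.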
Process Parameters Optimization in Single Point Incremental Forming
NASA Astrophysics Data System (ADS)
Gulati, Vishal; Aryal, Ashmin; Katyal, Puneet; Goswami, Amitesh
2016-04-01
This work aims to optimize the formability and surface roughness of parts formed by the single-point incremental forming process for an Aluminium-6063 alloy. The tests are based on Taguchi's L18 orthogonal array, selected on the basis of the degrees of freedom. The tests have been carried out on a vertical machining center (DMC70V) using CAD/CAM software (SolidWorks V5/MasterCAM). Two levels of tool radius and three levels of sheet thickness, step size, tool rotational speed, feed rate and lubrication have been considered as the input process parameters. Wall angle and surface roughness have been considered as process responses. The influential process parameters for formability and surface roughness have been identified with the help of statistical tools (response tables, main effect plots and ANOVA). The parameter with the greatest influence on both formability and surface roughness is lubrication. For formability, lubrication followed by tool rotational speed, feed rate, sheet thickness, step size and tool radius have influence in descending order; for surface roughness, lubrication followed by feed rate, step size, tool radius, sheet thickness and tool rotational speed have influence in descending order. The predicted optimal values for the wall angle and surface roughness are found to be 88.29° and 1.03225 µm. The confirmation experiments were conducted thrice, and the wall angle and surface roughness were found to be 85.76° and 1.15 µm respectively.
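The Taguchi analysis behind such a response table is driven by signal-to-noise (S/N) ratios. A sketch of the two standard criteria that would apply here (assuming "larger-the-better" for wall angle and "smaller-the-better" for roughness; the data passed in would be the measured responses, not values from the paper):

```python
import math

def sn_larger_better(values):
    """Taguchi S/N ratio (dB) for responses to maximize, e.g. wall angle."""
    return -10.0 * math.log10(sum(1.0 / v**2 for v in values) / len(values))

def sn_smaller_better(values):
    """Taguchi S/N ratio (dB) for responses to minimize, e.g. surface roughness."""
    return -10.0 * math.log10(sum(v**2 for v in values) / len(values))
```

A factor level's main effect is the mean S/N over the runs at that level, and the factor with the largest level-to-level range ranks highest, which is how lubrication emerges as the dominant parameter in the abstract.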
The Effect of Intensity on 3-Dimensional Kinematics and Coordination in Front-Crawl Swimming.
de Jesus, Kelly; Sanders, Ross; de Jesus, Karla; Ribeiro, João; Figueiredo, Pedro; Vilas-Boas, João P; Fernandes, Ricardo J
2016-09-01
Coaches are often challenged to optimize swimmers' technique at different training and competition intensities, but 3-dimensional (3D) analysis has not been conducted for a wide range of training zones. To analyze front-crawl 3D kinematics and interlimb coordination from low to severe swimming intensities. Ten male swimmers performed a 200-m front crawl at 7 incrementally increasing paces until exhaustion (0.05-m/s increments and 30-s intervals), with images from 2 cycles in each step (at the 25- and 175-m laps) being recorded by 2 surface and 4 underwater video cameras. Metabolic anaerobic threshold (AnT) was also assessed using the lactate-concentration-velocity curve-modeling method. Stroke frequency increased, stroke length decreased, hand and foot speed increased, and the index of interlimb coordination increased (within a catch-up mode) from low to severe intensities (P ≤ .05) and within the 200-m steps performed above the AnT (at or closer to the 4th step; P ≤ .05). Concurrently, intracyclic velocity variations and propelling efficiency remained similar between and within swimming intensities (P > .05). Swimming intensity has a significant impact on swimmers' segmental kinematics and interlimb coordination, with modifications being more evident after the point when AnT is reached. As competitive swimming events are conducted at high intensities (in which anaerobic metabolism becomes more prevalent), coaches should implement specific training series that lead swimmers to adapt their technique to the task constraints that exist in nonhomeostatic race conditions.
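The index of coordination used above is conventionally computed as the time lag between the propulsive phases of the two arms, expressed as a percentage of stroke duration. A minimal sketch of that convention (a standard definition from the swimming literature, not code from the paper):

```python
def index_of_coordination(end_prop_arm1, start_prop_arm2, stroke_duration):
    """Time lag between one arm's propulsion end and the other arm's
    propulsion start, as a percentage of stroke duration.

    Negative values indicate catch-up coordination (a gap between
    propulsive phases), ~0 opposition, and positive superposition.
    """
    return 100.0 * (end_prop_arm1 - start_prop_arm2) / stroke_duration
```

An increasing (less negative) index at higher intensities, while remaining below zero, matches the abstract's finding that coordination shifts within a catch-up mode.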
Analysis of High Order Difference Methods for Multiscale Complex Compressible Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, H. C.; Tang, Harry (Technical Monitor)
2002-01-01
Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach to the control of numerical dissipation in high order schemes was initiated through incremental studies. Here we further refine the analysis and improve the understanding of the adaptive numerical dissipation control strategy. Basically, the development of these schemes focuses on high order nondissipative schemes and takes advantage of the progress that has been made over the last 30 years in numerical methods for conservation laws, such as techniques for imposing boundary conditions, techniques for stability at shock waves, and techniques for stable and accurate long-time integration. We concentrate on high order centered spatial discretizations and a fourth-order Runge-Kutta temporal discretization as the base scheme. Near the boundaries, the base scheme has stable boundary difference operators. To further enhance stability, the split form of the inviscid flux derivatives is frequently used for smooth flow problems. To enhance nonlinear stability, linear high order numerical dissipations are employed away from discontinuities, and nonlinear filters are employed after each time step in order to suppress spurious oscillations near discontinuities and minimize the smearing of turbulent fluctuations. Although these schemes are built from many components, each of which is well-known, it is not entirely obvious how the different components are best connected. For example, the nonlinear filter could instead have been built into the spatial discretization, so that it would have been activated at each stage of the Runge-Kutta time stepping.
We could also think of a mechanism that activates the split form of the equations only in some parts of the domain. Another issue is how to define good sensors for determining in which parts of the computational domain a certain feature should be filtered by the appropriate numerical dissipation. For the present study we employ a wavelet technique, introduced in earlier work, as the sensor. Here, the method is briefly described, with selected numerical experiments.
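As an illustrative aside, the base-scheme structure this abstract describes (centered spatial differences, a fourth-order Runge-Kutta time integrator, and a dissipative filter applied after each full time step rather than at each stage) can be sketched for linear advection on a periodic grid. The fourth-order stencil, the simple second-difference filter, and all parameters here are illustrative choices, not the schemes from the paper:

```python
import numpy as np

def rhs(u, a, dx):
    # fourth-order centered difference for -a * du/dx on a periodic grid
    return -a * (8 * (np.roll(u, -1) - np.roll(u, 1))
                 - (np.roll(u, -2) - np.roll(u, 2))) / (12 * dx)

def filter_step(u, eps=0.01):
    # simple linear dissipation: damp grid-scale oscillations
    return u + eps * (np.roll(u, -1) - 2 * u + np.roll(u, 1))

def rk4_step(u, dt, a, dx):
    k1 = rhs(u, a, dx)
    k2 = rhs(u + 0.5 * dt * k1, a, dx)
    k3 = rhs(u + 0.5 * dt * k2, a, dx)
    k4 = rhs(u + dt * k3, a, dx)
    # filter applied after the full time step, not inside each RK stage
    return filter_step(u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4))

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)
for _ in range(100):
    u = rk4_step(u, dt=0.4 * dx, a=1.0, dx=dx)
```

Moving the `filter_step` call into `rhs` would instead apply the dissipation at every Runge-Kutta stage, which is exactly the design alternative the abstract raises.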
Plutons: Simmer between 350° and 500°C for 10 million years, then serve cold (Invited)
NASA Astrophysics Data System (ADS)
Coleman, D. S.; Davis, J.
2009-12-01
The growing recognition that continental plutons are assembled incrementally over millions of years requires reexamination of the thermal histories of intrusive rocks. With the exception of the suggestion that pluton magma chambers can be revitalized by mafic input at their deepest structural levels, most aspects of modern pluton petrology are built on the underlying assumption that silicic plutons intrude as discrete thermal packages that undergo subsequent monotonic decay back to a steady-state geothermal gradient. The recognition that homogeneous silicic plutons are constructed over timescales too great to be single events necessitates rethinking pluton intrusion mechanisms, textures, thermochronology, chemical evolution and links to volcanic rocks. Three-dimensional thermal modeling of sheeted (horizontal and vertical) incremental pluton assembly (using HEAT3D by Wohletz, 2007) yields several results that are largely independent of intrusive geometry and may help understand bothersome field and laboratory results from plutonic rocks. 1) All increments cool quickly below hornblende closure temperature. However, late increments are emplaced into walls warmed by earlier increments, and they cycle between hornblende and biotite closure temperatures, a range in which fluid-rich melts are likely to be present. These conditions persist until the increments are far from the region of new magma flux, or the addition of increments stops. These observations are supported by Ar thermochronology and may explain why heterogeneous early marginal intrusive phases often grade into younger homogeneous interior map units. 2) Early increments become the contact metamorphic wall rocks of later increments. This observation suggests that much of the contact metamorphism associated with a given volume of plutonic rock is “lost” via textural modification of early increments during intrusion of later increments. 
Johnson and Glazner (CMP, in press) argue that mappable variations in pluton texture can result from textural modification during thermal cycling associated with incremental assembly. 3) The thermal structure of the model pluton evolves toward roughly spheroidal isotherms even though the pluton is assembled from thin tabular sheets. The zone of melt-bearing rock and the shape of intrapluton contact metamorphic isograds bear little resemblance to the increments from which the pluton was built. Consequently, pluton contacts mapped by variations in texture that reflect the thermal cycling inherent to incremental assembly will inevitably be “blob” or diapir-like, but will yield little insight into magma intrusion geometry. 4) Although models yield large regions of melt-bearing rock, the melt fraction is low and the melt-bearing volume at any time is small compared to the total volume of the pluton. This observation raises doubts about the connections between zoned silicic plutons and large ignimbrite eruptions.
Thermal elastoplastic structural analysis of non-metallic thermal protection systems
NASA Technical Reports Server (NTRS)
Chung, T. J.; Yagawa, G.
1972-01-01
An incremental theory and numerical procedure to analyze a three-dimensional thermoelastoplastic structure subjected to high temperature, surface heat flux, and volume heat supply as well as mechanical loadings are presented. Heat conduction equations and equilibrium equations are derived by assuming a specific form of incremental free energy, entropy, stresses and heat flux, together with the first and second laws of thermodynamics, the von Mises yield criterion and the Prandtl-Reuss flow rule. The finite element discretization, using the linear isotropic three-dimensional element for the space domain and a difference operator corresponding to a linear variation of temperature within a small time increment for the time domain, leads to systematic solutions of the temperature distribution and the displacement and stress fields. Various boundary conditions, such as insulated surfaces and convection through an uninsulated surface, can easily be treated. To demonstrate the effectiveness of the present formulation, a number of example problems are presented.
Willan, Andrew R; Eckermann, Simon
2012-10-01
Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and the reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal even where current evidence would be considered sufficient assuming no between-study variation. However, despite the increase in expected net gain, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further, percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.
Stop the escalators: using the built environment to increase usual daily activity.
Westfall, John M; Fernald, Doug H
2010-01-01
Obesity is an epidemic in the United States. Two-thirds of the population is overweight and does not get enough exercise. Eastern cities are full of escalators that transport obese Americans to and from the subway. Walking stairs is a moderate activity requiring 3-6 metabolic equivalent tasks (METS) and burning 3.5-7 kcal/min. We determined the caloric expenditure and potential weight change of the population of one eastern city if all the subway riders walked the stairs rather than ride the escalators. There are 5,000,000 daily journeys made on the New York City Subway. Subway entrances include a stairway or escalator of approximately 25 steps. Each step up requires 0.11-0.15 kcals; each step down requires 0.05 kcals. To lose one pound requires burning 3500 kcals. We assumed each rider made a round trip so about 2.5 million individual people ride the subway each day. By walking stairs rather than riding escalators, the riders of the New York Subway would lose more than 2.6 million pounds per year. The average subway rider would lose about one pound per year. While this may sound insignificant, in one decade the average subway rider would lose 10 pounds, effectively reversing the trend in the United States of gaining 10 pounds per decade. This conservative estimate of the number of stairs ascended daily means that subway riders might lose even more weight. We believe that this novel approach might lead to other public and private efforts to increase physical activity such as elevators that only stop on even numbered floors, making stairwells more attractive and well lit, and stopping moving sidewalks. The built environment may support small, incremental changes in usual daily physical activity that can have significant impact on populations and individuals.
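The abstract's arithmetic can be reproduced under the assumption that each round trip involves one stair descent at entry and one ascent at exit (so two of each daily), using the upper estimate for ascent; a quick check with the figures from the text:

```python
# Reproduction of the abstract's arithmetic (values taken from the text;
# the round-trip assumption of two ascents and two descents per day is ours).
riders = 2_500_000                 # individual daily round-trip riders
steps = 25                         # steps per subway stairway
kcal_up, kcal_down = 0.15, 0.05    # kcal per step (upper ascent estimate)
kcal_per_lb = 3500                 # kcal burned to lose one pound

daily_kcal = 2 * steps * (kcal_up + kcal_down)      # two flights up, two down
lb_per_rider_per_year = daily_kcal * 365 / kcal_per_lb
total_lb = lb_per_rider_per_year * riders
print(round(lb_per_rider_per_year, 2))   # ≈ 1.04 lb per rider per year
print(round(total_lb / 1e6, 1))          # ≈ 2.6 million lb city-wide
```

These values match the abstract's "about one pound per year" per rider and "more than 2.6 million pounds per year" city-wide.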
Effect of Aspiration and Mean Gain on the Emergence of Cooperation in Unidirectional Pedestrian Flow
NASA Astrophysics Data System (ADS)
Wang, Zi-Yang; Ma, Jian; Zhao, Hui; Qin, Yong; Zhu, Wei; Jia, Li-Min
2013-03-01
When more than one pedestrian wants to move to the same site, a conflict appears, and the pedestrians involved play a motion game. In order to describe the emergence of cooperation during the conflict-resolving process, an evolutionary cellular automaton model is established considering the effect of aspiration and mean gain. In each game, a pedestrian may be a gentle cooperator or an aggressive defector. We propose a set of win-stay-lose-shift (WSLS) like rules for updating a pedestrian's strategy. These rules prescribe that if the mean gain of the current strategy over some given number of steps is larger than the aspiration level, the strategy is kept; otherwise, the strategy is changed. The simulation results show that a high aspiration level leads to more cooperation. As the statistic length increases, pedestrians become more rational in decision making. It is also found that when the aspiration level is small enough and the statistic length is large enough, all pedestrians turn to defectors. We use the prisoner's dilemma model to explain this. Finally, we discuss the effect of aspiration on the fundamental diagram.
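A minimal sketch of the win-stay-lose-shift style update rule described above, with cooperators "C" and defectors "D"; the function name and the payoff values are hypothetical, and the gain statistics stand in for the model's "statistic length" window:

```python
def update_strategy(strategy, recent_gains, aspiration):
    """WSLS-like rule sketched from the abstract: keep the current
    strategy if its mean gain over the recent steps meets the aspiration
    level, otherwise switch (cooperator <-> defector)."""
    mean_gain = sum(recent_gains) / len(recent_gains)
    if mean_gain >= aspiration:
        return strategy                       # win-stay
    return "D" if strategy == "C" else "C"    # lose-shift

# illustrative: a cooperator whose recent payoffs fall short of aspiration
print(update_strategy("C", [0.2, 0.1, 0.3], aspiration=0.5))  # -> D
```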
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-23
... avoid the use of special characters, any form of encryption, and be free of any defects or viruses. For... reaction rate constant (known as kOH) with the hydroxyl radical (OH); (ii) the maximum incremental... with the OH radical in the air. This reaction is typically the first step in a series of chemical...
OBLIQUE VIEW OF SECOND STORY PORTION OF SOUTHWEST WING OF ...
OBLIQUE VIEW OF SECOND STORY PORTION OF SOUTHWEST WING OF RECREATION CENTER WITH GRADUATED SCALE IN 1' INCREMENTS. NOTE THE STEPS UP FROM THE ENTRANCE TERRACE TO THE LANDING AND DOORWAY TO THE SECOND FLOOR (RIGHT). VIEW FACING NORTH - U.S. Naval Base, Pearl Harbor, Bloch Recreation Center & Arena, Between Center Drive & North Road near Nimitz Gate, Pearl City, Honolulu County, HI
Incremental Upgrade of Legacy Systems (IULS)
2001-04-01
analysis task employed SEI's Feature-Oriented Domain Analysis methodology (see FODA reference) and included several phases: • Context Analysis • Establish...Legacy, new Host and upgrade system and software. The Feature-Oriented Domain Analysis approach (FODA, see SUM References) was used for this step...Feature-Oriented Domain Analysis (FODA) Feasibility Study (CMU/SEI-90-TR-21, ESD-90-TR-222); Software Engineering Institute, Carnegie Mellon University
ERIC Educational Resources Information Center
Knowles, Timothy
2013-01-01
This paper outlines a set of ideas for improving teacher quality in America's schools. In it, the author proposes a combination of incremental steps and ambitious ones, designed to stimulate policymakers, practitioners, and the public to accelerate efforts to develop high-quality teachers. The paper has four main sections. First, the author…
The realization of temperature controller for small resistance measurement system
NASA Astrophysics Data System (ADS)
Sobecki, Jakub; Walendziuk, Wojciech; Idzkowski, Adam
2017-08-01
This paper concerns the construction and experimental testing of a temperature stabilization system for circuits that measure small resistance increments. After the system is switched on, the PCB board heats up and the long-term temperature drift alters the measurement result. The aim of this work is to reduce the time the measurement system needs to reach a constant nominal temperature, which would shorten the time required for measurements in the steady state. Moreover, the influence of temperatures higher than nominal on the measurement results was tested and the heating curve was obtained. During operation, the circuit spontaneously heats up to about 32 °C, and its time to reach steady state is about 1200 s. Implementing a USART terminal on the PC together with an NI USB-6341 data acquisition card makes it easier to record the temperature and resistance data in digital form and to process it further; it also enables changing the values of the regulator settings. This paper presents sample measurement results for several temperature values and the characteristics of the temperature and resistance changes in time, as well as their comparison with the output values. The object identification is accomplished by means of the Ziegler-Nichols method. The algorithm for determining the step-response characteristic parameters and example computations of the regulator settings are included, together with example regulation characteristics of the object.
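For illustration, the Ziegler-Nichols open-loop (step-response) tuning the abstract refers to computes PID settings from the process gain, dead time, and time constant read off a recorded heating curve. The heater parameters below are hypothetical, not taken from the paper:

```python
def zn_pid_from_step_response(K, L, T):
    """Ziegler-Nichols open-loop (reaction curve) PID tuning.
    K: process gain, L: dead time [s], T: time constant [s],
    all identified from the step (heating) response.
    Returns (Kp, Ti, Td): proportional gain, integral time, derivative time."""
    Kp = 1.2 * T / (K * L)
    Ti = 2.0 * L
    Td = 0.5 * L
    return Kp, Ti, Td

# hypothetical heater: gain 2 degC per unit input, 30 s dead time, 300 s time constant
Kp, Ti, Td = zn_pid_from_step_response(K=2.0, L=30.0, T=300.0)
print(Kp, Ti, Td)  # -> 6.0 60.0 15.0
```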
NASA Technical Reports Server (NTRS)
Abell, Paul; Mazanek, Dan; Reeves, Dan; Chodas, Paul; Gates, Michele; Johnson, Lindley; Ticker, Ronald
2016-01-01
To achieve its long-term goal of sending humans to Mars, the National Aeronautics and Space Administration (NASA) plans to proceed in a series of incrementally more complex human space flight missions. Today, human flight experience extends only to Low-Earth Orbit (LEO), and should problems arise during a mission, the crew can return to Earth in a matter of minutes to hours. The next logical step for human space flight is to gain flight experience in the vicinity of the Moon. These cis-lunar missions provide a "proving ground" for the testing of systems and operations while still accommodating an emergency return path to the Earth that would last only several days. Cis-lunar mission experience will be essential for more ambitious human missions beyond the Earth-Moon system, which will require weeks, months, or even years of transit time. In addition, NASA has been given a Grand Challenge to find all asteroid threats to human populations and know what to do about them. Obtaining knowledge of asteroid physical properties combined with performing technology demonstrations for planetary defense provides much needed information to address the issue of future asteroid impacts on Earth. Hence the combined objectives of human exploration and planetary defense give a rationale for the Asteroid Redirect Mission (ARM).
Lakdawalla, Darius N; Chou, Jacquelyn W; Linthicum, Mark T; MacEwan, Joanna P; Zhang, Jie; Goldman, Dana P
2015-05-01
Surrogate end points may be used as a proxy for more robust clinical end points. One prominent example is the use of progression-free survival (PFS) as a surrogate for overall survival (OS) in trials of oncologic treatments. Decisions based on surrogate end points may expedite regulatory approval but may not accurately reflect drug efficacy. Payers and clinicians must balance the potential benefits of earlier treatment access based on surrogate end points against the risks of clinical uncertainty. We present a framework for evaluating the expected net benefit or cost of providing early access to new treatments on the basis of evidence of PFS benefits before OS results are available, using non-small-cell lung cancer (NSCLC) as an example. A probabilistic decision model was used to estimate the expected incremental social value of the decision to grant access to a new treatment on the basis of PFS evidence. The model analyzed a hypothetical population of patients with NSCLC who could be treated during the period between PFS and OS evidence publication. Estimates for the delay in publication of OS evidence following publication of PFS evidence, expected OS benefit given PFS benefit, incremental cost of new treatment, and other parameters were drawn from the literature on treatment of NSCLC. The outcome measure was the incremental social value of early access for each additional patient per month (in 2014 US dollars). For "medium-value" model parameters, early reimbursement of drugs with any PFS benefit yields an incremental social cost of more than $170,000 per newly treated patient per month. In contrast, granting early access on the basis of PFS benefit between 1 and 3.5 months produces more than $73,000 in incremental social value.
Across the full range of model parameter values, granting access for drugs with PFS benefit between 3 and 3.5 months is robustly beneficial, generating incremental social value ranging from $38,000 to more than $1 million per newly treated patient per month, whereas access for all drugs with any PFS benefit is usually not beneficial. The value of providing access to new treatments on the basis of surrogate end points, and PFS in particular, likely varies considerably. Payers and clinicians should carefully consider how to use PFS data in balancing potential benefits against costs in each particular disease.
Incremental online learning in high dimensions.
Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan
2005-12-01
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of possibly redundant inputs, as shown in various empirical evaluations with up to 90 dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
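The incremental, sufficient-statistics flavor of LWPR's univariate building block can be sketched as follows. This is a single unweighted univariate regression updated one sample at a time, not the full LWPR algorithm (no locality weighting, no projection directions, no cross validation):

```python
class IncrementalUnivariateRegression:
    """Online simple linear regression via running sufficient statistics,
    illustrating the 'update one sample at a time without memorizing
    training data' idea behind incremental learners such as LWPR."""

    def __init__(self):
        self.n = self.sx = self.sy = self.sxx = self.sxy = 0.0

    def update(self, x, y):
        # accumulate sufficient statistics; no training data is stored
        self.n += 1
        self.sx += x
        self.sy += y
        self.sxx += x * x
        self.sxy += x * y

    def predict(self, x):
        denom = self.n * self.sxx - self.sx ** 2
        slope = (self.n * self.sxy - self.sx * self.sy) / denom
        intercept = (self.sy - slope * self.sx) / self.n
        return slope * x + intercept

model = IncrementalUnivariateRegression()
for x in range(10):
    model.update(float(x), 2.0 * x + 1.0)   # learn y = 2x + 1 online
print(model.predict(20.0))                  # -> 41.0
```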
High mobility high efficiency organic films based on pure organic materials
Salzman, Rhonda F [Ann Arbor, MI; Forrest, Stephen R [Ann Arbor, MI
2009-01-27
A method of purifying small molecule organic material, performed as a series of operations beginning with a first sample of the organic small molecule material. The first step is to purify the organic small molecule material by thermal gradient sublimation. The second step is to test the purity of at least one sample from the purified organic small molecule material by spectroscopy. The third step is to repeat the first through third steps on the purified small molecule material if the spectroscopic testing reveals any peaks exceeding a threshold percentage of a magnitude of a characteristic peak of a target organic small molecule. The steps are performed at least twice. The threshold percentage is at most 10%. Preferably the threshold percentage is 5% and more preferably 2%. The threshold percentage may be selected based on the spectra of past samples that achieved target performance characteristics in finished devices.
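The claimed loop (sublime, test by spectroscopy, repeat while any impurity peak exceeds a threshold fraction of the target's characteristic peak, with at least two passes) can be sketched as below; `sublimate` and `measure_spectrum` are hypothetical callables standing in for the physical steps:

```python
def purify_until_clean(sample, sublimate, measure_spectrum, threshold=0.10):
    """Sketch of the iterative purification protocol in the claim:
    purify by thermal gradient sublimation, test purity by spectroscopy,
    and repeat while any impurity peak exceeds `threshold` times the
    magnitude of the target molecule's characteristic peak."""
    for _ in range(2):                       # steps performed at least twice
        sample = sublimate(sample)
    target_peak, impurity_peaks = measure_spectrum(sample)
    while any(p > threshold * target_peak for p in impurity_peaks):
        sample = sublimate(sample)
        target_peak, impurity_peaks = measure_spectrum(sample)
    return sample
```

The claim's preferred thresholds (10%, 5%, 2%) map directly onto the `threshold` parameter.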
Quasi-static responses and variational principles in gradient plasticity
NASA Astrophysics Data System (ADS)
Nguyen, Quoc-Son
2016-12-01
Gradient models have been much discussed in the literature for the study of time-dependent or time-independent processes such as visco-plasticity, plasticity and damage. This paper is devoted to the theory of Standard Gradient Plasticity at small strain. A general and consistent mathematical description available for common time-independent behaviours is presented. Our attention is focussed on the derivation of general results such as the description of the governing equations for the global response and the derivation of related variational principles in terms of the energy and the dissipation potentials. It is shown that the quasi-static response under a loading path is a solution of an evolution variational inequality as in classical plasticity. The rate problem and the rate minimum principle are revisited. A time-discretization by the implicit scheme of the evolution equation leads to the increment problem. An increment of the response associated with a load increment is a solution of a variational inequality and also satisfies a minimum principle if the energy potential is convex. The increment minimum principle deals with stable solutions of the variational inequality. Some numerical methods are discussed in view of the numerical simulation of the quasi-static response.
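The time-discretized increment problem described above can be sketched in generic energetic notation (the symbols $E$, $D$, $u_n$ are assumptions of this sketch, not taken from the paper): with a convex energy potential $E$ and a convex, positively homogeneous dissipation potential $D$, the implicit scheme selects the increment as a minimizer,

$$\Delta u \in \arg\min_{v \ \text{admissible}} \;\bigl\{\, E(u_n + v,\; t_{n+1}) + D(v) \,\bigr\},$$

whose optimality condition is the variational inequality

$$E'(u_n + \Delta u,\; t_{n+1})\,(v - \Delta u) + D(v) - D(\Delta u) \;\ge\; 0 \quad \text{for all admissible } v,$$

mirroring the quasi-static evolution variational inequality of classical plasticity mentioned in the abstract.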
Realizable optimal control for a remotely piloted research vehicle. [stability augmentation
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1980-01-01
The design of a control system using the linear-quadratic regulator (LQR) control law theory for time invariant systems in conjunction with an incremental gradient procedure is presented. The incremental gradient technique reduces the full-state feedback controller design, generated by the LQR algorithm, to a realizable design. With a realizable controller, the feedback gains are based only on the available system outputs instead of being based on the full-state outputs. The design is for a remotely piloted research vehicle (RPRV) stability augmentation system. The design includes methods for accounting for noisy measurements, discrete controls with zero-order-hold outputs, and computational delay errors. Results from simulation studies of the response of the RPRV to a step in the elevator and frequency analysis techniques are included to illustrate these abnormalities and their influence on the controller design.
An efficient numerical algorithm for transverse impact problems
NASA Technical Reports Server (NTRS)
Sankar, B. V.; Sun, C. T.
1985-01-01
Transverse impact problems in which the elastic and plastic indentation effects are considered, involve a nonlinear integral equation for the contact force, which, in practice, is usually solved by an iterative scheme with small increments in time. In this paper, a numerical method is proposed wherein the iterations of the nonlinear problem are separated from the structural response computations. This makes the numerical procedures much simpler and also efficient. The proposed method is applied to some impact problems for which solutions are available, and they are found to be in good agreement. The effect of the magnitude of time increment on the results is also discussed.
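The per-time-step iteration for the nonlinear contact force can be sketched with a Hertz-type indentation law $F = k\,\alpha^{3/2}$. The structural response is reduced here to a single compliance constant, so this is only a schematic of the fixed-point iteration, not the paper's method; all numerical values are illustrative:

```python
def contact_force(alpha0, c, k=1e9, n=1.5, iters=200, relax=0.5):
    """Relaxed fixed-point iteration for a nonlinear contact law:
    F = k * alpha**n, with local indentation alpha = alpha0 - c * F,
    i.e. the approach alpha0 reduced by the structure's compliant
    response c * F. Illustrative stand-in for the iterative schemes
    the abstract refers to."""
    F = 0.0
    for _ in range(iters):
        alpha = max(alpha0 - c * F, 0.0)   # no tensile contact
        F = (1 - relax) * F + relax * k * alpha ** n
    return F

# approach of 0.1 mm against a support with compliance 1e-8 m/N
F = contact_force(alpha0=1e-4, c=1e-8)
```

At convergence, `F` satisfies the contact law and the compliance relation simultaneously, which is the self-consistency the iteration enforces at each time increment.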
Continuous microbial cultures maintained by electronically-controlled device
NASA Technical Reports Server (NTRS)
Eisler, W. J., Jr.; Webb, R. B.
1967-01-01
Photocell-controlled instrument maintains microbial culture. It uses commercially available chemostat glassware, provides adequate aeration through bubbling of the culture, maintains the population size and density, continuously records growth rates over small increments of time, and contains a simple, sterilizable nutrient control mechanism.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-13
... transfer of defense articles, including technical data, and defense services for the manufacture of Small Diameter Bomb Increment I (SDB I) Weapon System in Italy for end-use by the Italian Air Force. The United...
Holographic evaluation of fatigue cracks by a compressive stress (HYSTERESIS) technique
NASA Technical Reports Server (NTRS)
Freska, S. A.; Rummel, W. D.
1974-01-01
Holographic interferometry compares unknown field of optical waves with known one. Differences are displayed as interference bands or fringes. Technique was evaluated on fatigue-cracked 2219-T87 aluminum-alloy panels. Small cracks were detected when specimen was incrementally unloaded.
Omelyan, Igor; Kovalenko, Andriy
2015-04-14
We developed a generalized solvation force extrapolation (GSFE) approach to speed up multiple time step molecular dynamics (MTS-MD) of biomolecules steered with mean solvation forces obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model with the Kovalenko-Hirata closure). GSFE is based on a set of techniques including the non-Eckart-like transformation of coordinate space separately for each solute atom, extension of the force-coordinate pair basis set followed by selection of the best subset, balancing the normal equations by modified least-squares minimization of deviations, and incremental increase of outer time step in motion integration. Mean solvation forces acting on the biomolecule atoms in conformations at successive inner time steps are extrapolated using a relatively small number of best (closest) solute atomic coordinates and corresponding mean solvation forces obtained at previous outer time steps by converging the 3D-RISM-KH integral equations. The MTS-MD evolution steered with GSFE of 3D-RISM-KH mean solvation forces is efficiently stabilized with our optimized isokinetic Nosé-Hoover chain (OIN) thermostat. We validated the hybrid MTS-MD/OIN/GSFE/3D-RISM-KH integrator on solvated organic and biomolecules of different stiffness and complexity: asphaltene dimer in toluene solvent, hydrated alanine dipeptide, miniprotein 1L2Y, and protein G. The GSFE accuracy and the OIN efficiency allowed us to enlarge outer time steps up to huge values of 1-4 ps while accurately reproducing conformational properties. Quasidynamics steered with 3D-RISM-KH mean solvation forces achieves time scale compression of conformational changes coupled with solvent exchange, resulting in further significant acceleration of protein conformational sampling with respect to real time dynamics. 
Overall, this provided a 50- to 1000-fold effective speedup of conformational sampling for these systems, compared to conventional MD with explicit solvent. We have been able to fold the miniprotein from a fully denatured, extended state in about 60 ns of quasidynamics steered with 3D-RISM-KH mean solvation forces, compared to the average physical folding time of 4-9 μs observed in experiment.
Analysis of Lung Tissue Using Ion Beams
NASA Astrophysics Data System (ADS)
Alvarez, J. L.; Barrera, R.; Miranda, J.
2002-08-01
In this work a comparative study is presented of the metal content of lung tissue from healthy patients and from patients with lung cancer, using two analytical techniques: Particle Induced X-ray Emission (PIXE) and Rutherford Backscattering Spectrometry (RBS). The samples of cancerous tissue were taken from 26 autopsies performed on individuals who died in the National Institute of Respiratory Disease (INER), 22 of cancer and 4 of other, non-cancerous causes. Analyzing the samples as a whole, the cancerous tissues showed increases in the concentrations of S (4%), K (635%), Co (85%) and Cu (13%), and decreases in the concentrations of Cl (59%), Ca (6%), Fe (26%) and Zn (7%). P, Ca, Ti, V, Cr, Mn, Ni, Br and Sr appeared only in the cancerous tissues. The tissue samples were classified according to cancer type (adenocarcinoma, epidermoid and small cell carcinoma), personal habits (smoking and alcohol use), genetic predisposition and place of residence. There was a remarkable decrease in the concentration of Ca and a marked increase in Cu in the epidermoid tissue samples relative to those of adenocarcinoma or small cell cancer. Decreases in K and increases in Fe, Co and Cu were also detected in the samples belonging to people who resided in Mexico City relative to those who resided in the State of Mexico.
Incremental Parallelization of Non-Data-Parallel Programs Using the Charon Message-Passing Library
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.
2000-01-01
Message passing is among the most popular techniques for parallelizing scientific programs on distributed-memory architectures. The reasons for its success are wide availability (MPI), efficiency, and full tuning control provided to the programmer. A major drawback, however, is that incremental parallelization, as offered by compiler directives, is not generally possible, because all data structures have to be changed throughout the program simultaneously. Charon remedies this situation through mappings between distributed and non-distributed data. It allows breaking up the parallelization into small steps, guaranteeing correctness at every stage. Several tools are available to help convert legacy codes into high-performance message-passing programs. They usually target data-parallel applications, whose loops carrying most of the work can be distributed among all processors without much dependency analysis. Others do a full dependency analysis and then convert the code virtually automatically. Even more toolkits are available that aid construction from scratch of message passing programs. None, however, allows piecemeal translation of codes with complex data dependencies (i.e. non-data-parallel programs) into message passing codes. The Charon library (available in both C and Fortran) provides incremental parallelization capabilities by linking legacy code arrays with distributed arrays. During the conversion process, non-distributed and distributed arrays exist side by side, and simple mapping functions allow the programmer to switch between the two in any location in the program. Charon also provides wrapper functions that leave the structure of the legacy code intact, but that allow execution on truly distributed data. 
Finally, the library provides a rich set of communication functions that support virtually all patterns of remote data demands in realistic structured grid scientific programs, including transposition, nearest-neighbor communication, pipelining, gather/scatter, and redistribution. At the end of the conversion process most intermediate Charon function calls will have been removed, the non-distributed arrays will have been deleted, and virtually the only remaining Charon function calls are the high-level, highly optimized communications. Distribution of the data is under complete control of the programmer, although a wide range of useful distributions is easily available through predefined functions. A crucial aspect of the library is that it does not allocate space for distributed arrays, but accepts programmer-specified memory. This has two major consequences. First, codes parallelized using Charon do not suffer from encapsulation; user data is always directly accessible. This provides high efficiency, and also retains the possibility of using message passing directly for highly irregular communications. Second, non-distributed arrays can be interpreted as (trivial) distributions in the Charon sense, which allows them to be mapped to truly distributed arrays, and vice versa. This is the mechanism that enables incremental parallelization. In this paper we provide a brief introduction of the library and then focus on the actual steps in the parallelization process, using some representative examples from, among others, the NAS Parallel Benchmarks. We show how a complicated two-dimensional pipeline, the prototypical non-data-parallel algorithm, can be constructed with ease. To demonstrate the flexibility of the library, we give examples of the stepwise, efficient parallel implementation of nonlocal boundary conditions common in aircraft simulations, as well as the construction of the sequence of grids required for multigrid.
Lautt, W W; Legare, D J; Greenway, C V
1987-11-01
In dogs anesthetized with pentobarbital, central vena caval pressure (CVP), portal venous pressure (PVP), and intrahepatic lobar venous pressure (proximal to the hepatic venous sphincters) were measured. The objective was to determine some characteristics of the intrahepatic vascular resistance sites (proximal and distal to the hepatic venous sphincters), including testing predictions made using a recent mathematical model of distensible hepatic venous resistance. The stimulus used was a brief rise in CVP produced by transient occlusion of the thoracic vena cava in the control state and when vascular resistance was elevated by infusions of norepinephrine or histamine, or by nerve stimulation. The percent transmission of the downstream pressure rise to upstream sites past areas of vascular resistance was calculated. Even small increments in CVP are partially transmitted upstream. The data are incompatible with the vascular waterfall phenomenon, which predicts that venous pressure increments are not transmitted upstream until a critical pressure is overcome and that further increments would then be 100% transmitted. The hepatic sphincters show the following characteristics. First, small rises in CVP are transmitted less than large elevations; as the CVP rises, the sphincters passively distend and allow a greater percent transmission upstream, thus a large rise in CVP is more fully transmitted than a small rise in CVP. Second, the amount of pressure transmission upstream is determined by the vascular resistance across which the pressure is transmitted. As nerves, norepinephrine, or histamine cause the hepatic sphincters to contract, the percent transmission becomes less and the distensibility of the sphincters is reduced. Similar characteristics are shown for the "presinusoidal" vascular resistance and the hepatic venous sphincter resistance.(ABSTRACT TRUNCATED AT 250 WORDS)
2016-11-01
1.3 Analyzing massive concrete pile-founded structures... 1.4 Pile...at the impact deck for the Lock and Dam 3 structural system at each incremental analysis step with C33=0.55... Table 4.2. Axial force, pile cap moment, and mudline moment for the three piles in the Lock and Dam 3 structural system at each
Progressive mechanical indentation of large-format Li-ion cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hsin; Kumar, Abhishek; Simunovic, Srdjan
We used large format Li-ion cells to study the mechanical responses of single cells of thickness 6.5 mm and stacks of three cells under compressive loading. We carried out various sequences of increasing depth indentations using a 1.0 inch (25.4 mm) diameter steel ball with a steel plate as a rigid support surface. The indentation depths were between 0.025″ and 0.250″, with main indentation increments of 0.025″. Increment steps of 0.100″ and 0.005″ were used to pinpoint the onset of internal short, which occurred between 0.245″ and 0.250″. The indented cells were disassembled and inspected for internal damage. Load vs. time curves were compared with the developed computer models. Separator thinning leading to the short circuit was simulated using both isotropic and anisotropic mechanical properties. This study shows that separators behave differently when tested as a single layer vs. a stack in a typical pouch cell. The collective responses of the multiple layers must be taken into account in failure analysis. A model that resolves the details of the individual internal cell components was able to simulate the internal deformation of the large format cells and the onset of failure, assumed to coincide with the onset of internal short circuit.
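The indentation schedule described above can be sketched as a small helper; the function name is hypothetical, but the step sizes (0.025″ main sweep, 0.005″ refinement near the short) come from the abstract.

```python
# Hypothetical sketch of the indentation schedule from the abstract:
# a coarse sweep in 0.025" increments up to 0.250", plus a fine 0.005"
# sweep to bracket the internal short between 0.245" and 0.250".

def depth_schedule(start, stop, step):
    """Return indentation depths (inches) from start to stop inclusive."""
    n = round((stop - start) / step)
    return [round(start + i * step, 3) for i in range(n + 1)]

main = depth_schedule(0.025, 0.250, 0.025)   # coarse sweep, 10 depths
fine = depth_schedule(0.245, 0.250, 0.005)   # bracket short-circuit onset
```

Refining the step size near the failure point is what lets the onset of internal short be localized to a 0.005″ window.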
Progressive mechanical indentation of large-format Li-ion cells
NASA Astrophysics Data System (ADS)
Wang, Hsin; Kumar, Abhishek; Simunovic, Srdjan; Allu, Srikanth; Kalnaus, Sergiy; Turner, John A.; Helmers, Jacob C.; Rules, Evan T.; Winchester, Clinton S.; Gorney, Philip
2017-02-01
Large format Li-ion cells were used to study the mechanical responses of single cells of thickness 6.5 mm and stacks of three cells under compressive loading. Various sequences of increasing depth indentations were carried out using a 1.0 inch (25.4 mm) diameter steel ball with a steel plate as a rigid support surface. The indentation depths were between 0.025″ and 0.250″, with main indentation increments of 0.025″. Increment steps of 0.100″ and 0.005″ were used to pinpoint the onset of internal short, which occurred between 0.245″ and 0.250″. The indented cells were disassembled and inspected for internal damage. Load vs. time curves were compared with the developed computer models. Separator thinning leading to the short circuit was simulated using both isotropic and anisotropic mechanical properties. Our study shows that separators behave differently when tested as a single layer vs. a stack in a typical pouch cell. The collective responses of the multiple layers must be taken into account in failure analysis. A model that resolves the details of the individual internal cell components was able to simulate the internal deformation of the large format cells and the onset of failure, assumed to coincide with the onset of internal short circuit.
Progressive mechanical indentation of large-format Li-ion cells
Wang, Hsin; Kumar, Abhishek; Simunovic, Srdjan; ...
2016-12-07
We used large format Li-ion cells to study the mechanical responses of single cells of thickness 6.5 mm and stacks of three cells under compressive loading. We carried out various sequences of increasing depth indentations using a 1.0 inch (25.4 mm) diameter steel ball with a steel plate as a rigid support surface. The indentation depths were between 0.025″ and 0.250″, with main indentation increments of 0.025″. Increment steps of 0.100″ and 0.005″ were used to pinpoint the onset of internal short, which occurred between 0.245″ and 0.250″. The indented cells were disassembled and inspected for internal damage. Load vs. time curves were compared with the developed computer models. Separator thinning leading to the short circuit was simulated using both isotropic and anisotropic mechanical properties. This study shows that separators behave differently when tested as a single layer vs. a stack in a typical pouch cell. The collective responses of the multiple layers must be taken into account in failure analysis. A model that resolves the details of the individual internal cell components was able to simulate the internal deformation of the large format cells and the onset of failure, assumed to coincide with the onset of internal short circuit.
Rakesh Minocha; Walter C. Shortle
1993-01-01
Two simple and fast methods for the extraction of major inorganic cations (Ca, Mg, Mn, K) from small quantities of stemwood and needles of woody plants were developed. A 3.2- or 6.4-mm cobalt drill bit was used to shave samples from disks and increment cores of stemwood. For ion extraction, wood (ground or shavings) or needles were either homogenized using a Tekmar...
A Platform for Developing Autonomy Technologies for Small Military Robots
2008-12-01
angular increments around the disk so described. A line scanner oriented so the plane of detected points is horizontal (e.g., the axis about which...behaviors can be implemented. Thus it will contain the custom scripts, executables, and data that compose the actual behavior of the robot. Currently, the...operating system was constructed to be relatively small and boot fast. Debian GNU/Linux, however, provides an installation script that downloads a
Tapping Transaction Costs to Forecast Acquisition Cost Breaches
2016-01-01
Diameter Bomb Increment II (SDB II) Space Based Infrared System (SBIRS) High Program Standard Missile (SM) - 2 Block IV Stryker Family of Vehicles...some limitations. First, the activities included in this category will vary somewhat from contractor to contractor. As a result, a small portion of...contract may vary over time as new costs are defined by the contractor as being related to SE/PM. This could explain a small portion of the increase in
Research on optimal DEM cell size for 3D visualization of loess terraces
NASA Astrophysics Data System (ADS)
Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei
2009-10-01
In order to represent complex artificial terrains like the loess terraces of Shanxi Province in northwest China, a new 3D visual method, the Terraces Elevation Incremental Visual Method (TEIVM), is put forth by the authors. A total of 406 elevation points and 14 enclosed constrained lines were sampled according to the TIN-based Sampling Method (TSM) and the DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines are used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. In order to visualize the loess terraces well with an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs are converted to a grid-based DEM (G-DEM) using different combinations of cell size and EIV with a linear interpolation scheme, the Bilinear Interpolation Method (BIM). Our case study shows that the new visual method can visualize the loess terrace steps very well when the combination of cell size and EIV is reasonable. The optimal combination is a cell size of 1 m and an EIV of 6 m. Results of the case study also show that the cell size should be at least smaller than half of both the terraces' average width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the terraces' average height. The TEIVM and the results above are of great significance to the highly refined visualization of artificial terrains like loess terraces.
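The bilinear interpolation (BIM) step used in the TIN-to-grid conversion can be sketched as follows; the corner elevations and query point are made-up values, not data from the study.

```python
# Minimal sketch of bilinear interpolation within one grid cell, the
# kernel of the TIN-to-grid (G-DEM) conversion step. Corner elevations
# and the query point are hypothetical.

def bilinear(z00, z10, z01, z11, tx, ty):
    """Interpolate elevation inside a grid cell.
    z00..z11 are the four corner elevations; tx, ty in [0, 1] are the
    normalized coordinates of the query point within the cell."""
    top = z00 * (1 - tx) + z10 * tx    # interpolate along one edge pair
    bot = z01 * (1 - tx) + z11 * tx    # and along the opposite pair
    return top * (1 - ty) + bot * ty   # then blend between them

z = bilinear(0.0, 1.0, 0.0, 1.0, 0.5, 0.5)   # midpoint of a tilted cell
```

Because each output elevation is a blend of four corners, a cell size larger than half the terrace width smears the step edges, which is why the study bounds the cell size from above.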
Incremental Refinement of Façade Models with Attribute Grammar from 3D Point Clouds
NASA Astrophysics Data System (ADS)
Dehbi, Y.; Staat, C.; Mandtler, L.; Plümer, L.
2016-06-01
Data acquisition using unmanned aerial vehicles (UAVs) has received increasing attention in recent years. Especially in the field of building reconstruction, the incremental interpretation of such data is a demanding task. In this context formal grammars play an important role for the top-down identification and reconstruction of building objects. Up to now, the available approaches expect offline data in order to parse an a priori known grammar. For mapping on demand, an on-the-fly reconstruction based on UAV data is required, and an incremental interpretation of the data stream is inevitable. This paper presents an incremental parser of grammar rules for automatic 3D building reconstruction. The parser enables model refinement based on new observations with respect to a weighted attribute context-free grammar (WACFG). The falsification or rejection of hypotheses is supported as well. The parser can deal with and adapt available parse trees acquired from previous interpretations or predictions. Parse trees derived so far are updated in an iterative way using transformation rules. A diagnostic step searches for mismatches between current and new nodes. Prior knowledge on façades is incorporated; it is given by probability densities as well as architectural patterns. Since normal distributions cannot always be assumed, the derivation of location and shape parameters of building objects is based on kernel density estimation (KDE). While the level of detail is continuously improved, geometrical, semantic and topological consistency is ensured.
NASA Astrophysics Data System (ADS)
Most, S.; Nowak, W.; Bijeljic, B.
2014-12-01
Transport processes in porous media are frequently simulated as particle movement, which can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances one can observe: (1) a statistical dependence between increments that can be modelled as an order-k Markov process reducing to order 1 (this would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start); (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); (3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
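A toy version of the kind of increment process at issue is an order-1 Markov (AR(1)-style) chain, as opposed to the classical assumption of independent increments. This is a simple Gaussian illustration, not the authors' copula-based model; `rho` is a hypothetical one-step correlation coefficient.

```python
import random

# Toy sketch: particle position increments generated as an order-1
# Markov (AR(1)-like) process. Setting rho = 0 recovers the classical
# assumption of independent Gaussian increments.

def correlated_increments(n, rho, sigma=1.0, seed=0):
    """Generate n increments with lag-1 correlation rho and
    stationary standard deviation sigma."""
    rng = random.Random(seed)
    incs, prev = [], 0.0
    for _ in range(n):
        prev = rho * prev + sigma * (1 - rho ** 2) ** 0.5 * rng.gauss(0, 1)
        incs.append(prev)
    return incs

incs = correlated_increments(20000, 0.8)   # strongly dependent increments
```

Testing after what distance an empirical increment series becomes indistinguishable from the rho = 0 case (or from an order-1 chain when the true order is higher) is exactly the validity question posed above.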
Reynolds Number Effects on the Performance of Ailerons and Spoilers (Invited)
NASA Technical Reports Server (NTRS)
Mineck, R. E.
2001-01-01
The influence of Reynolds number on the performance of outboard spoilers and ailerons was investigated on a generic subsonic transport configuration in the National Transonic Facility over a chord Reynolds number range from 3 to 30 million and a Mach number range from 0.70 to 0.94. Spoiler deflection angles of 0, 10, and 20 degrees and aileron deflection angles of -10, 0, and 10 degrees were tested. Aeroelastic effects were minimized by testing at constant normalized dynamic pressure conditions over intermediate Reynolds number ranges. Results indicated that the increment in rolling moment due to spoiler deflection generally becomes more negative as the Reynolds number increases from 3 × 10^6 to 22 × 10^6, with only small changes between Reynolds numbers of 22 × 10^6 and 30 × 10^6. The change in the increment in rolling moment coefficient with Reynolds number for the aileron deflected configuration is generally small, with a general trend of increasing magnitude with increasing Reynolds number.
Stepped frequency ground penetrating radar
Vadnais, Kenneth G.; Bashforth, Michael B.; Lewallen, Tricia S.; Nammath, Sharyn R.
1994-01-01
A stepped frequency ground penetrating radar system is described comprising an RF signal generating section capable of producing stepped frequency signals in spaced and equal increments of time and frequency over a preselected bandwidth which serves as a common RF signal source for both a transmit portion and a receive portion of the system. In the transmit portion of the system the signal is processed into in-phase and quadrature signals which are then amplified and then transmitted toward a target. The reflected signals from the target are then received by a receive antenna and mixed with a reference signal from the common RF signal source in a mixer whose output is then fed through a low pass filter. The DC output, after amplification and demodulation, is digitized and converted into a frequency domain signal by a Fast Fourier Transform. A plot of the frequency domain signals from all of the stepped frequencies broadcast toward and received from the target yields information concerning the range (distance) and cross section (size) of the target.
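The range-recovery principle behind "a plot of the frequency domain signals from all of the stepped frequencies" can be sketched numerically: the echo phase advances linearly with frequency in proportion to the two-way delay, so an inverse FFT across the frequency steps peaks at the target's range bin. All parameters below (step size, step count, target range) are illustrative, not from the patent.

```python
import numpy as np

# Toy sketch of stepped-frequency range processing: simulate the phase
# of echoes from a single point target across N equally spaced frequency
# steps, then IFFT across the steps to recover the range bin.

c = 3e8                          # speed of light, m/s
N, df = 100, 1.5e6               # 100 steps of 1.5 MHz -> 150 MHz bandwidth
R = 42.0                         # true target range, metres (hypothetical)

f = np.arange(N) * df            # stepped frequencies
echo = np.exp(-1j * 4 * np.pi * f * R / c)    # two-way phase per step
profile = np.abs(np.fft.ifft(echo))           # coarse range profile
bin_size = c / (2 * N * df)                   # metres per range bin
est_range = np.argmax(profile) * bin_size
```

The bin spacing c/(2·N·Δf) shows the patent's trade-off directly: total bandwidth N·Δf sets range resolution, while the individual step Δf sets the unambiguous range c/(2·Δf).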
NASA Astrophysics Data System (ADS)
Yang, Jun; Wang, Ze-Xin; Lu, Sheng; Lv, Wei-gang; Jiang, Xi-zhi; Sun, Lei
2017-03-01
The micro-arc oxidation (MAO) process was conducted on ZK60 Mg alloy under two- and three-step voltage-increasing modes using a DC pulse electrical source. The effect of each mode on the current-time response during the MAO process and on the coating characteristics was analysed and discussed systematically. The microstructure, thickness and corrosion resistance of the MAO coatings were evaluated by scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), a microscope with super depth of field and electrochemical impedance spectroscopy (EIS). The results indicate that the two- and three-step voltage-increasing modes can improve the weak spark discharges with insufficient breakdown strength in the later period of the MAO process. Due to the higher voltage and voltage increment, the coating formed under the two-step voltage-increasing mode, with a maximum thickness of about 20.20 μm, shows the best corrosion resistance. In addition, the coating fabricated under the three-step voltage-increasing mode is smoother, with better corrosion resistance, due to the lower amplitude of voltage increase.
Investigating the incremental validity of cognitive variables in early mathematics screening.
Clarke, Ben; Shanley, Lina; Kosty, Derek; Baker, Scott K; Cary, Mari Strand; Fien, Hank; Smolkowski, Keith
2018-03-26
The purpose of this study was to investigate the incremental validity of a set of domain general cognitive measures added to a traditional screening battery of early numeracy measures. The sample consisted of 458 kindergarten students of whom 285 were designated as severely at-risk for mathematics difficulty. Hierarchical multiple regression results indicated that Wechsler Abbreviated Scales of Intelligence (WASI) Matrix Reasoning and Vocabulary subtests, and Digit Span Forward and Backward measures explained a small, but unique portion of the variance in kindergarten students' mathematics performance on the Test of Early Mathematics Ability-Third Edition (TEMA-3) when controlling for Early Numeracy Curriculum Based Measurement (EN-CBM) screening measures (R² change = .01). Furthermore, the incremental validity of the domain general cognitive measures was relatively stronger for the severely at-risk sample. We discuss results from the study in light of instructional decision-making and note the findings do not justify adding domain general cognitive assessments to mathematics screening batteries.
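The hierarchical-regression logic behind "incremental validity" is the R-squared change when a second block of predictors is added on top of the first. A sketch with synthetic data (not the study's measures; block names are placeholders):

```python
import numpy as np

# Sketch of hierarchical regression R-squared change: block 1 stands in
# for screening measures, block 2 for domain-general cognitive measures.
# All data are synthetic.

def r_squared(X, y):
    """Ordinary least squares R-squared with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.dot(resid) / ((y - y.mean()) ** 2).sum()

rng = np.random.default_rng(0)
n = 400
screen = rng.normal(size=(n, 2))            # block 1 predictors
cog = rng.normal(size=(n, 2))               # block 2 predictors
# outcome driven mostly by block 1, with a small block-2 contribution
y = screen @ [0.6, 0.4] + 0.1 * cog[:, 0] + rng.normal(size=n)

r2_block1 = r_squared(screen, y)
r2_block2 = r_squared(np.hstack([screen, cog]), y)
r2_change = r2_block2 - r2_block1           # the incremental validity
```

A small `r2_change`, as in the study's .01, is why the authors conclude the extra block does not earn its place in a screening battery.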
NASA Astrophysics Data System (ADS)
Lee, Byungjin; Lee, Young Jae; Sung, Sangkyung
2018-05-01
A novel attitude determination method is investigated that is computationally efficient and implementable on low-cost sensor and embedded platforms. A recent result on attitude reference system design is adapted to further develop a three-dimensional attitude determination algorithm based on relative velocity incremental measurements. For this, velocity increment vectors, computed respectively from the INS and GPS at different update rates, are compared to generate the filter measurement for attitude estimation. In the quaternion-based Kalman filter configuration, an Euler-like attitude perturbation angle is introduced for reducing the filter states and simplifying the propagation processes. Furthermore, assuming a small angle approximation between attitude update periods, it is shown that the reduced-order filter greatly simplifies the propagation processes. For performance verification, both simulation and experimental studies are completed. A low-cost MEMS IMU and GPS receiver are employed for system integration, and comparison with the true trajectory or a high-grade navigation system demonstrates the performance of the proposed algorithm.
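The core measurement formation, comparing a velocity increment integrated from INS accelerations with the increment between consecutive GPS velocity fixes over the same interval, can be sketched as follows. This is an illustrative fragment with hypothetical sample rates, not the paper's full quaternion filter.

```python
import numpy as np

# Illustrative sketch: form the Kalman filter measurement as the
# difference between the INS-integrated velocity increment and the
# GPS-derived velocity increment over one GPS update interval.

def velocity_increment_residual(accel_samples, dt_ins, v_gps_t0, v_gps_t1):
    """accel_samples: (n, 3) INS accelerations spanning one GPS interval;
    dt_ins: INS sample period; v_gps_*: GPS velocity fixes (3-vectors)."""
    dv_ins = np.sum(accel_samples * dt_ins, axis=0)      # integrate INS
    dv_gps = np.asarray(v_gps_t1) - np.asarray(v_gps_t0)
    return dv_ins - dv_gps                               # filter measurement

# e.g. constant 1 m/s^2 along x over a 1 s GPS interval sampled at 100 Hz
res = velocity_increment_residual(np.tile([1.0, 0.0, 0.0], (100, 1)), 0.01,
                                  [0.0, 0.0, 0.0], [1.0, 0.0, 0.0])
```

When the INS attitude is correct the residual is near zero; an attitude error rotates the integrated increment, and that mismatch is what the filter uses to estimate the perturbation angle.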
Two Different Views on the World Around Us: The World of Uniformity versus Diversity.
Kwon, JaeHwan; Nayakankuppam, Dhananjay
2016-01-01
We propose that when individuals believe in fixed traits of personality (entity theorists), they are likely to expect a world of "uniformity." As such, they easily infer a population statistic from a small sample of data with confidence. In contrast, individuals who believe in malleable traits of personality (incremental theorists) are likely to presume a world of "diversity," such that they "hesitate" to infer a population statistic from a similarly sized sample. In four laboratory experiments, we found that compared to incremental theorists, entity theorists estimated a population mean from a sample with a greater level of confidence (Studies 1a and 1b), expected more homogeneity among the entities within a population (Study 2), and perceived an extreme value to be more indicative of an outlier (Study 3). These results suggest that individuals are likely to use their implicit self-theory orientations (entity theory versus incremental theory) to see a population in general as consisting of either homogeneous or heterogeneous entities.
Code of Federal Regulations, 2010 CFR
2010-07-01
... describing the devices for air pollution control and process changes that you will use to comply with the emission limits and other requirements of this subpart. If you plan to reduce your small municipal waste...
Warfighter Information Network-Tactical Increment 3 (WIN-T Inc 3)
2013-12-01
T vehicles employed at BCT, Fires, AVN, BfSB, and select force...passengers and crew from small arms fire, mines, IED and other anti-vehicle/personnel threats. AVN, BfSB, and select force pooled assets operating within the
Spatial and velocity statistics of inertial particles in turbulent flows
NASA Astrophysics Data System (ADS)
Bec, J.; Biferale, L.; Cencini, M.; Lanotte, A. S.; Toschi, F.
2011-12-01
Spatial and velocity statistics of heavy point-like particles in incompressible, homogeneous, and isotropic three-dimensional turbulence are studied by means of direct numerical simulations at two values of the Taylor-scale Reynolds number, Reλ ~ 200 and Reλ ~ 400, corresponding to resolutions of 512^3 and 2048^3 grid points, respectively. Particle Stokes numbers range from St ≈ 0.2 to 70. The stationary small-scale particle distribution is shown to display a singular (multifractal) measure, characterized by a set of generalized fractal dimensions with a strong sensitivity to the Stokes number and a possible, small Reynolds number dependency. Velocity increments between two inertial particles depend on the relative weight between smooth events, where the particle velocity is approximately the same as the fluid velocity, and caustic contributions, when two close particles have very different velocities. The latter events lead to a non-differentiable small-scale behaviour for the relative velocity. The relative weight of these two contributions changes as the importance of inertia varies. We show that moments of the velocity difference display a quasi-bi-fractal behavior and that the scaling properties of velocity increments for not too small Stokes numbers are in good agreement with a recent theoretical prediction made by K. Gustavsson and B. Mehlig, arXiv:1012.1789v1 [physics.flu-dyn], connecting the saturation of velocity scaling exponents with the fractal dimension of particle clustering.
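Moments of velocity increments of the kind analyzed here are estimated as structure functions, S_p(r) = ⟨|u(x + r) − u(x)|^p⟩. A toy sketch of the estimator on a synthetic 1D velocity record (not DNS data):

```python
import numpy as np

# Toy sketch of estimating moments of velocity increments (structure
# functions) from a sampled 1D velocity record. The linear "velocity"
# field below is synthetic, chosen so the answer is exact.

def structure_function(u, lag, p):
    """p-th order structure function of u at the given sample lag."""
    du = u[lag:] - u[:-lag]          # velocity increments at separation lag
    return np.mean(np.abs(du) ** p)

u = np.arange(10.0)                  # du over lag 2 is exactly 2 everywhere
s2 = structure_function(u, 2, 2)     # second-order moment: 4.0
```

Plotting S_p against the separation r and fitting power laws S_p ~ r^ζ(p) gives the scaling exponents whose saturation the cited prediction ties to the fractal dimension of particle clustering.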
A near-optimum procedure for selecting stations in a streamgaging network
Lanfear, Kenneth J.
2005-01-01
Two questions are fundamental to Federal government goals for the network of streamgages operated by the U.S. Geological Survey: (1) how well does the present network of streamgaging stations meet defined Federal goals, and (2) what is the optimum set of stations to add or reactivate to support the remaining goals? The solution involves an incremental-stepping procedure based on Basic Feasible Incremental Solutions (BFISs), where each BFIS satisfies at least one Federal streamgaging goal. A set of minimum Federal goals for streamgaging is defined to include water measurements for legal compacts and decrees, flooding, water budgets, regionalization of streamflow characteristics, and water quality. Fully satisfying all these goals under the assumptions outlined in this paper would require adding 887 new streamgaging stations to the U.S. Geological Survey network and reactivating an additional 857 stations that are currently inactive.
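The incremental-stepping idea, repeatedly adding the station that satisfies the most still-unmet goals, resembles a greedy set-cover heuristic. A hypothetical sketch (station names and goal sets are made up, and this is only one plausible reading of the procedure):

```python
# Hypothetical greedy sketch of incremental station selection: at each
# step, add the candidate station whose goal set covers the most
# still-unmet goals. Station/goal data are invented for illustration.

def select_stations(candidates, goals):
    """candidates: dict station -> set of goals that station satisfies.
    Returns (chosen stations in order, goals left unmet)."""
    unmet, chosen = set(goals), []
    while unmet:
        best = max(candidates, key=lambda s: len(candidates[s] & unmet))
        if not candidates[best] & unmet:
            break                    # no candidate helps; stop
        chosen.append(best)
        unmet -= candidates[best]
    return chosen, unmet
```

Each chosen station plays the role of a BFIS: a feasible increment that satisfies at least one outstanding goal, with the loop terminating either when all goals are met or when the remaining goals cannot be covered.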
NASA Astrophysics Data System (ADS)
Liu, Fang; Luo, Qingming; Xu, Guodong; Li, Pengcheng
2003-12-01
Near-infrared spectroscopy (NIRS) has been developed as a non-invasive method to assess O2 delivery, O2 consumption and blood flow in diverse local muscle groups at rest and during exercise. The aim of this study was to investigate local O2 consumption in exercising muscle by use of NIRS. Ten elite athletes from different sport disciplines were tested at rest and during step-incremental load exercise. Local variations of the quadriceps muscles were investigated with our wireless NIRS blood oxygen monitor system. The results show that the changes in blood oxygen depend on the sport discipline, type of muscle, kinetic capacity, and other factors. These results indicate that NIRS is a potentially useful tool to detect local muscle oxygenation and blood flow profiles; therefore it might be easily applied for evaluating the effect of athletes' training.
The effects of age and step length on joint kinematics and kinetics of large out-and-back steps.
Schulz, Brian W; Ashton-Miller, James A; Alexander, Neil B
2008-06-01
Maximum step length (MSL) is a clinical test that has been shown to correlate with age, various measures of fall risk, and knee and hip joint extension speed, strength, and power capacities, but little is known about the kinematics and kinetics of the large out-and-back step utilized. Body motions and ground reaction forces were recorded for 11 unimpaired younger and 10 older women while attaining maximum step length. Joint kinematics and kinetics were calculated using inverse dynamics. The effects of age group and step length on the biomechanics of these large out-and-back steps were determined. Maximum step length was 40% greater in the younger than in the older women (P<0.0001). Peak knee and hip, but not ankle, angle, velocity, moment, and power were generally greater for younger women and longer steps. After controlling for age group, step length generally explained significant additional variance in hip and torso kinematics and kinetics (incremental R2=0.09-0.37). The young reached their peak knee extension moment immediately after landing of the step out, while the old reached their peak knee extension moment just before the return step liftoff (P=0.03). Maximum step length is strongly associated with hip kinematics and kinetics. Delays in peak knee extension moment that appear to be unrelated to step length, may indicate a reduced ability of older women to rapidly apply force to the ground with the stepping leg and thus arrest the momentum of a fall.
The effects of age and step length on joint kinematics and kinetics of large out-and-back steps
Schulz, Brian W.; Ashton-Miller, James A.; Alexander, Neil B.
2008-01-01
Background Maximum Step Length is a clinical test that has been shown to correlate with age, various measures of fall risk, and knee and hip joint extension speed, strength, and power capacities, but little is known about the kinematics and kinetics of the large out-and-back step utilized. Methods Body motions and ground reaction forces were recorded for 11 unimpaired younger and 10 older women while attaining Maximum Step Length. Joint kinematics and kinetics were calculated using inverse dynamics. The effects of age group and step length on the biomechanics of these large out-and-back steps were determined. Findings Maximum Step Length was 40% greater in the younger than in the older women (p<0.0001). Peak knee and hip, but not ankle, angle, velocity, moment, and power were generally greater for younger women and longer steps. After controlling for age group, step length generally explained significant additional variance in hip and torso kinematics and kinetics (incremental R2=0.09–0.37). The young reached their peak knee extension moment immediately after landing of the step out, while the old reached their peak knee extension moment just before the return step lift off (p=0.03). Interpretation Maximum Step Length is strongly associated with hip kinematics and kinetics. Delays in peak knee extension moment that appear to be unrelated to step length, may indicate a reduced ability of older women to rapidly apply force to the ground with the stepping leg and thus arrest the momentum of a fall. PMID:18308435
ERIC Educational Resources Information Center
Bonafiglia, Jacob T.; Sawula, Laura J.; Gurd, Brendon J.
2018-01-01
The purpose of this study was to determine if the counting talk test can be used to discern whether an individual is exercising above or at/below maximal lactate steady state. Twenty-two participants completed VO2peak and counting talk test incremental step tests followed by an endurance test at 65% of work rate at VO2peak…
ERIC Educational Resources Information Center
Grant, Kathryn E.; Farahmand, Farahnaz; Meyerson, David A.; Dubois, David L.; Tolan, Patrick H.; Gaylord-Harden, Noni K.; Barnett, Alexandra; Horwath, Jordan; Doxie, Jackie; Tyler, Donald; Harrison, Aubrey; Johnson, Sarah; Duffy, Sophia
2014-01-01
This manuscript summarizes an iterative process used to develop a new intervention for low-income urban youth at risk for negative academic outcomes (e.g., disengagement, failure, drop-out). A series of seven steps, building incrementally one upon the other, are described: 1) identify targets of the intervention; 2) develop logic model; 3)…
NASA Astrophysics Data System (ADS)
Liu, C.; Macdonald, F. A.; Raub, T.; Wang, Z.; Evans, D. A.
2012-12-01
We report Mg isotope profiles of two cap-carbonates: the Nuccaleena Formation from South Australia (mostly dolostones) and the Tsagaan Oloom Formation from southwest Mongolia (including dolostones, aragonite crystal fans, and lime-mudstones). These data provide additional constraints on the chemical evolution of Neoproterozoic oceans after the Marinoan deglaciation. An incremental leaching technique using ammonium acetate and various concentrations of acetic acid and hydrochloric acid was applied to separate metals in various forms from the cap-carbonates (including surface-adsorbed phases, calcite, dolomite and clay minerals). The leachates were then passed through chromatographic columns to extract pure Mg and Sr, which were analyzed for their isotopic compositions by MC-ICP-MS (Neptune) at Yale University. Elemental ratios (Mg/Ca and Sr/Ca) in each leaching step were also measured. Our results show that only small variations of δ26MgDSM3 with leaching step were observed in most dolostone samples when secondary calcite is absent. In contrast, large Mg isotope variations (up to 1.5 per mil) were seen in the leaching steps of limestone and crystal fans. The primary δ26MgDSM3 value of each sample was chosen from the leachate with the lowest 87Sr/86Sr ratio. The δ26MgDSM3 value of the Nuccaleena dolostone increases from -2.2‰ at the basal part of the section to -1.7‰ in the middle, and then returns to -2.0‰ at the top, with a positive correlation between 26Mg/24Mg and 87Sr/86Sr ratios, implying that the high δ26MgDSM3 values may be caused by alteration or inherited from continental-derived fluids. In contrast, small δ26MgDSM3 variations were exhibited in Tsagaan Oloom dolostones across different leaching steps and across the section (~-1.7‰), with high 87Sr/86Sr ratios (~0.7090), resembling cap dolostones from the middle part of the Nuccaleena dolostone and implying that they formed in a similar environment.
However, the δ26MgDSM3 values of the upper lime-mudstones and crystal fans from the Tsagaan Oloom Formation oscillate between -3.4‰ and -1.9‰, with a negative correlation between δ26MgDSM3 and Ca/Mg and no correlation with 87Sr/86Sr ratios, indicating mixing of co-precipitated multiple carbonate phases with different mineralogy.
Conversion of type of quantum well structure
NASA Technical Reports Server (NTRS)
Ning, Cun-Zheng (Inventor)
2007-01-01
A method for converting a Type 2 quantum well semiconductor material to a Type 1 material. A second layer of undoped material is placed between first and third layers of selectively doped material, which are separated from the second layer by undoped layers having small widths. Doping profiles are chosen so that a first electrical potential increment across a first layer-second layer interface is equal to a first selected value and/or a second electrical potential increment across a second layer-third layer interface is equal to a second selected value. The semiconductor structure thus produced is useful as a laser material and as an incident light detector material in various wavelength regions, such as a mid-infrared region.
Conversion of Type of Quantum Well Structure
NASA Technical Reports Server (NTRS)
Ning, Cun-Zheng (Inventor)
2007-01-01
A method for converting a Type 2 quantum well semiconductor material to a Type 1 material. A second layer of undoped material is placed between first and third layers of selectively doped material, which are separated from the second layer by undoped layers having small widths. Doping profiles are chosen so that a first electrical potential increment across a first layer-second layer interface is equal to a first selected value and/or a second electrical potential increment across a second layer-third layer interface is equal to a second selected value. The semiconductor structure thus produced is useful as a laser material and as an incident light detector material in various wavelength regions, such as a mid-infrared region.
Innovations in Mission Architectures for Human and Robotic Exploration Beyond Low Earth Orbit
NASA Technical Reports Server (NTRS)
Cooke, Douglas R.; Joosten, B. Kent; Lo, Martin W.; Ford, Ken; Hansen, Jack
2002-01-01
Through the application of advanced technologies, mission concepts, and new ideas in combining capabilities, architectures for missions beyond Earth orbit have been dramatically simplified. These concepts enable a stepping-stone approach to discovery-driven, technology-enabled exploration. The numbers and masses of vehicles required are greatly reduced, yet the architectures enable the pursuit of a broader range of objectives. The scope of missions addressed ranges from the assembly and maintenance of arrays of telescopes for emplacement at the Earth-Sun L2 point, to human missions to asteroids, the Moon, and Mars. Vehicle designs are developed for proof of concept, to validate mission approaches and understand the value of new technologies. The stepping-stone approach employs an incremental buildup of capabilities, allowing for decision points on exploration objectives. It enables testing of technologies to achieve greater reliability and understanding of costs for the next steps in exploration.
Averós, Xavier; Lorea, Areta; Beltrán de Heredia, Ignacia; Arranz, Josune; Ruiz, Roberto; Estevez, Inma
2014-01-01
Space availability is essential to ensure the welfare of animals. To determine the effect of space availability on movement and space use in pregnant ewes (Ovis aries), 54 individuals were studied during the last 11 weeks of gestation. Three treatments were tested (1, 2, and 3 m2/ewe; 6 ewes/group). Ewes' positions were collected for 15 minutes using continuous scan sampling two days/week. Total and net distance, net/total distance ratio, maximum and minimum step length, movement activity, angular dispersion, nearest, furthest and mean neighbour distance, peripheral location ratio, and corrected peripheral location ratio were calculated. Restriction in space availability resulted in smaller total travelled distance, net/total distance ratio, maximum step length, and angular dispersion, but higher movement activity, at 1 m2/ewe as compared to 2 and 3 m2/ewe (P<0.01). On the other hand, nearest and furthest neighbour distances increased from 1 to 3 m2/ewe (P<0.001). The largest total distance, maximum and minimum step length, and movement activity, as well as the lowest net/total distance ratio and angular dispersion, were observed during the first weeks (P<0.05), while inter-individual distances increased through gestation. Results indicate that movement patterns and space use in ewes were clearly restricted by limiting space availability to 1 m2/ewe. This was reflected in shorter, more sinuous trajectories composed of shorter steps, lower inter-individual distances, and higher movement activity potentially linked with higher restlessness levels. On the contrary, the small differences between 2 and 3 m2/ewe for most variables indicate that increasing space availability from 2 to 3 m2/ewe would have limited benefits, reflected mostly in a further increment in inter-individual distances among group members.
No major variations in spatial requirements were detected through gestation, except for slight increments in inter-individual distances and an initial adaptation period, with ewes being restless and highly motivated to explore their new environment.
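Several of the trajectory metrics named above can be computed directly from scan-sampled positions. The following is a minimal sketch, assuming 2-D coordinates ordered in time; the function name and the angular-dispersion formulation (mean resultant length of step headings) are illustrative, not taken from the study.

```python
import math

def movement_metrics(positions):
    """Compute basic trajectory metrics from a sequence of (x, y)
    positions ordered by scan-sampling time (an illustrative sketch
    of metrics named in the abstract)."""
    steps = [(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(positions, positions[1:])]
    step_lengths = [math.hypot(dx, dy) for dx, dy in steps]
    total = sum(step_lengths)                        # total travelled distance
    net = math.hypot(positions[-1][0] - positions[0][0],
                     positions[-1][1] - positions[0][1])  # net displacement
    ratio = net / total if total > 0 else 0.0        # net/total distance ratio
    # Angular dispersion as the mean resultant length of step headings:
    # 1 = perfectly straight path, 0 = fully dispersed headings.
    headings = [math.atan2(dy, dx) for dx, dy in steps if (dx, dy) != (0, 0)]
    if headings:
        mc = sum(math.cos(a) for a in headings) / len(headings)
        ms = sum(math.sin(a) for a in headings) / len(headings)
        dispersion = math.hypot(mc, ms)
    else:
        dispersion = 0.0
    return {"total": total, "net": net, "net_total_ratio": ratio,
            "max_step": max(step_lengths), "min_step": min(step_lengths),
            "angular_dispersion": dispersion}

# A straight path: net/total ratio and angular dispersion are both 1.
print(movement_metrics([(0, 0), (1, 0), (2, 0), (3, 0)]))
```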
Improve Math Teaching with Incremental Improvements
ERIC Educational Resources Information Center
Star, Jon R.
2016-01-01
Past educational reforms have failed because they didn't meet teachers where they were. They expected major changes in practices that may have been unrealistic for many teachers even under ideal professional learning conditions. Instead of promoting broad-scale changes, improvement may be more likely if reforms are composed of small yet powerful…
76 FR 10082 - Office of International Trade; State Trade and Export Promotion (STEP) Grant Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-23
... (STEP) Grant Program AGENCY: U.S. Small Business Administration (SBA). ACTION: Notice of grant... Program Announcement No. OIT-STEP-2011- 01 to invite the States, the District of Columbia and the U.S. Territories to apply for a STEP grant to carry out export promotion programs that assist eligible small...
Westerhout, K Y; Verheggen, B G; Schreder, C H; Augustin, M
2012-01-01
An economic evaluation was conducted to assess the outcomes and costs as well as cost-effectiveness of the following grass-pollen immunotherapies: OA (Oralair; Stallergenes S.A., Antony, France) vs GRZ (Grazax; ALK-Abelló, Hørsholm, Denmark), and ALD (Alk Depot SQ; ALK-Abelló) (immunotherapy agents alongside symptomatic medication) and symptomatic treatment alone for grass pollen allergic rhinoconjunctivitis. The costs and outcomes of 3-year treatment were assessed for a period of 9 years using a Markov model. Treatment efficacy was estimated using an indirect comparison of available clinical trials with placebo as a common comparator. Estimates for immunotherapy discontinuation, occurrence of asthma, health state utilities, drug costs, resource use, and healthcare costs were derived from published sources. The analysis was conducted from the insurant's perspective including public and private health insurance payments and co-payments by insurants. Outcomes were reported as quality-adjusted life years (QALYs) and symptom-free days. The uncertainty around incremental model results was tested by means of extensive deterministic univariate and probabilistic multivariate sensitivity analyses. In the base case analysis the model predicted a cost-utility ratio of OA vs symptomatic treatment of €14,728 per QALY; incremental costs were €1356 (95%CI: €1230; €1484) and incremental QALYs 0.092 (95%CI: 0.052; 0.140). OA was the dominant strategy compared to GRZ and ALD, with estimated incremental costs of -€1142 (95%CI: -€1255; -€1038) and -€54 (95%CI: -€188; €85) and incremental QALYs of 0.015 (95%CI: -0.025; 0.056) and 0.027 (95%CI: -0.022; 0.075), respectively. At a willingness-to-pay threshold of €20,000, the probability of OA being the most cost-effective treatment was predicted to be 79%. Univariate sensitivity analyses show that incremental outcomes were moderately sensitive to changes in efficacy estimates. 
The main study limitation was the requirement of an indirect comparison involving several steps to assess relative treatment effects. The analysis suggests OA to be cost-effective compared to GRZ and ALD, and a symptomatic treatment. Sensitivity analyses showed that uncertainty surrounding treatment efficacy estimates affected the model outcomes.
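The headline figure can be sanity-checked with standard cost-utility arithmetic. This is a generic sketch of the incremental cost-effectiveness ratio, not the study's Markov model; the small gap to the reported €14,728/QALY presumably reflects rounding in the published increments.

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

# Reported OA vs symptomatic-treatment increments from the abstract:
# incremental cost EUR 1356, incremental QALYs 0.092.
print(round(icer(1356, 0.092)))  # ~EUR 14,739/QALY, close to the reported EUR 14,728
```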
Wagh, Mihir S; Montane, Roberto
2012-02-01
The upper GI tract and the colon are readily accessible endoscopically, but the small intestine is relatively difficult to evaluate. The aims were to demonstrate the feasibility of using suction as a means of locomotion and to assess the initial design of a suction enteroscope. Feasibility study. Animal laboratory. Various prototype suction devices designed in our laboratory were tested in swine small intestine in a force test station. For in vivo experiments in live anesthetized animals, two suction devices (1 fixed tip and 1 movable tip) were attached to the outside of the endoscope. By creating suction in the fixed tip, the endoscope was anchored while the movable tip was advanced. Suction was then applied to the extended tip to attach it to the distal bowel. Suction on the fixed tip was then released and the movable tip with suction pulled back, resulting in advancement of the endoscope. These steps were sequentially repeated. Intestinal segments were sent for pathologic assessment after testing. Force generated ranged from 0.278 to 4.74 N with 64.3 to 88 kPa vacuum pressure. A linear relationship was seen between the pull force and vacuum pressure and tip surface area. During in vivo experiments, the endoscope was advanced in 25-cm segmental increments with sequential suction-and-release maneuvers. No significant bowel trauma was seen on pathology and necropsy. The enteroscopy system requires further refinement. A novel suction enteroscope was designed and tested. Suction tip characteristics played a critical role in the functionality of this enteroscopy system. Copyright © 2012 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
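The linear force-pressure-area relationship reported above can be sketched as follows; the circular tip geometry and the 8 mm diameter are hypothetical, chosen only to land inside the reported force range.

```python
import math

def suction_force(vacuum_pressure_pa, tip_diameter_m):
    """Pull force from suction ~ pressure differential x tip contact area
    (the linear relationship reported in the abstract). The circular
    cross-section assumed here is a hypothetical tip geometry."""
    area = math.pi * (tip_diameter_m / 2) ** 2
    return vacuum_pressure_pa * area

# e.g. 88 kPa vacuum on a hypothetical 8 mm diameter tip:
f = suction_force(88e3, 0.008)
print(f"{f:.2f} N")  # within the 0.278-4.74 N range reported
```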
Gao, Yaozong; Zhan, Yiqiang
2015-01-01
Image-guided radiotherapy (IGRT) requires fast and accurate localization of the prostate in 3-D treatment-guided radiotherapy, which is challenging due to low tissue contrast and large anatomical variation across patients. On the other hand, the IGRT workflow involves collecting a series of computed tomography (CT) images from the same patient under treatment. These images contain valuable patient-specific information yet are often neglected by previous works. In this paper, we propose a novel learning framework, namely incremental learning with selective memory (ILSM), to effectively learn the patient-specific appearance characteristics from these patient-specific images. Specifically, starting with a population-based discriminative appearance model, ILSM aims to “personalize” the model to fit patient-specific appearance characteristics. The model is personalized with two steps: backward pruning that discards obsolete population-based knowledge and forward learning that incorporates patient-specific characteristics. By effectively combining the patient-specific characteristics with the general population statistics, the incrementally learned appearance model can localize the prostate of a specific patient much more accurately. This work has three contributions: 1) the proposed incremental learning framework can capture patient-specific characteristics more effectively, compared to traditional learning schemes, such as pure patient-specific learning, population-based learning, and mixture learning with patient-specific and population data; 2) this learning framework does not have any parametric model assumption, hence, allowing the adoption of any discriminative classifier; and 3) using ILSM, we can localize the prostate in treatment CTs accurately (DSC ∼0.89) and fast (∼4 s), which satisfies the real-world clinical requirements of IGRT. PMID:24495983
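The two-step personalization (backward pruning, then forward learning) can be illustrated with a deliberately tiny sketch on scalar features and a 1-nearest-neighbour classifier; the real ILSM operates on discriminative appearance classifiers over CT patches, and the pruning rule used here is a stand-in assumption.

```python
def personalize(population, patient, prune_radius=0.5):
    """Toy sketch of incremental learning with selective memory (ILSM):
    'backward pruning' discards population samples that lie close to a
    patient sample of the opposite class (obsolete knowledge), then
    'forward learning' adds the patient samples. Data are (feature, label)
    pairs with scalar features; radius and rule are illustrative."""
    pruned = [(x, y) for x, y in population
              if not any(abs(x - px) <= prune_radius and y != py
                         for px, py in patient)]
    return pruned + list(patient)  # forward learning step

def predict(model, x):
    """1-nearest-neighbour prediction over the personalized sample set."""
    return min(model, key=lambda s: abs(s[0] - x))[1]

population = [(0.0, 0), (1.0, 0), (4.0, 1), (5.0, 1)]
patient = [(4.2, 0)]   # patient-specific data contradict the population here
model = personalize(population, patient)
print(predict(model, 4.1))  # prints 0: the patient-specific label wins near 4.1
```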
Speeding up nuclear magnetic resonance spectroscopy by the use of SMAll Recovery Times - SMART NMR
NASA Astrophysics Data System (ADS)
Vitorge, Bruno; Bodenhausen, Geoffrey; Pelupessy, Philippe
2010-11-01
A drastic reduction of the time required for two-dimensional NMR experiments can be achieved by reducing or skipping the recovery delay between successive experiments. Novel SMAll Recovery Times (SMART) methods use orthogonal pulsed field gradients in three spatial directions to select the desired pathways and suppress interference effects. Two-dimensional spectra of dilute amino acids with concentrations as low as 2 mM can be recorded in about 0.1 s per increment in the indirect domain.
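The practical gain is simple arithmetic over the per-increment time. A sketch, assuming a hypothetical 256-increment experiment and a conventional ~2 s recovery-plus-acquisition period per increment (the 2 s figure is an assumption; only the ~0.1 s SMART figure comes from the abstract):

```python
def experiment_time(n_increments, time_per_increment_s):
    """Total acquisition time for the indirect dimension of a 2-D NMR
    experiment: one transient per t1 increment (scans and phase cycling
    are ignored in this sketch)."""
    return n_increments * time_per_increment_s

n = 256                                  # hypothetical number of t1 increments
conventional = experiment_time(n, 2.0)   # assumed ~2 s recovery + acquisition
smart = experiment_time(n, 0.1)          # ~0.1 s per increment, as reported
print(f"{conventional:.0f} s vs {smart:.0f} s")  # 512 s vs 26 s
```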
Lacour, Jean-René; Messonnier, Laurent; Bourdin, Muriel
2007-09-01
To assess whether the ability to demonstrate a plateau in oxygen consumption (VO2) could be related to adaptation to exercise, the data obtained over a period of 10 years on 94 elite oarsmen who had participated in annual testing were re-evaluated. The test consisted of an incremental step protocol until volitional exhaustion. VO2, heart rate (HR), blood lactate ([La]b) and respiratory exchange ratio (RER) were measured at each step. The maximal oxygen consumption (VO2max), the power corresponding to VO2max (Pamax), and the maximal power achieved (Ppeak) were recorded. Thirty-eight oarsmen achieved a VO2 plateau and were designated as Pla; 56 did not and were designated as N-Pla. The Pla and N-Pla VO2max, Pamax and maximal HR values were similar. In comparison with N-Pla, the Pla group displayed a rightward shift of the [La]b versus power curve, accounted for by both the increased percentage of VO2max corresponding to 4 mmol l(-1) and the decreased value of [La]b corresponding to Pamax (P<0.05). Pla oarsmen attained a higher Ppeak expressed as % of Pamax (P<0.05) and also showed better ergometer performance (P<0.05). In a sub-group of 53 oarsmen constituted on the basis of Pamax values close to 400 W, for a given power output, the Pla subjects had significantly lower HR, RER, and [La]b values at each sub-maximal stage of the test. These results suggest that achieving a VO2 plateau during completion of an incremental step protocol reflects a greater muscle ability to maintain homeostasis during exercise. These differences give the oarsmen an advantage in rowing competitions.
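Plateau detection in such step protocols is commonly operationalized as a sub-threshold rise in VO2 over the final workload increment. A sketch, with the 0.15 L/min criterion as a conventional assumption not stated in the abstract:

```python
def vo2_plateau(vo2_per_step, threshold=0.15):
    """Flag a VO2 plateau in an incremental step test: the rise in VO2
    over the final workload step falls below a threshold (L/min).
    The 0.15 L/min criterion is a common convention, assumed here."""
    if len(vo2_per_step) < 2:
        return False
    return (vo2_per_step[-1] - vo2_per_step[-2]) < threshold

print(vo2_plateau([2.1, 2.8, 3.4, 3.9, 4.3, 4.35]))  # True: last step adds only 0.05
print(vo2_plateau([2.1, 2.8, 3.4, 3.9]))             # False: still rising by 0.5
```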
Dominici, Nadia; Daprati, Elena; Nico, Daniele; Cappellini, Germana; Ivanenko, Yuri P; Lacquaniti, Francesco
2009-03-01
When walking, step length provides critical information on traveled distance along the ongoing path. Little is known about the role that knowledge of body dimensions plays within this process. Here we directly addressed this question by evaluating whether changes in body proportions interfere with computation of traveled distance for targets located outside the reaching space. We studied locomotion and distance estimation in an achondroplastic child (ACH, 11 yr) before and after surgical elongation of the shank segments of both lower limbs, and in healthy adults walking on stilts designed to mimic shank-segment elongation. Kinematic analysis of gait revealed that dynamic coupling of the thigh, shank, and foot segments changed substantially as a result of elongation. Step length remained unvaried, in spite of the significant increase in total limb length (approximately 1.5-fold). These relatively shorter strides resulted from smaller oscillations of the shank segment, as would be predicted by proportional increments in limb size and not by asymmetrical segmental increment as in the present case (the length of the thighs was not modified). Distance estimation was measured by walking with eyes closed toward a memorized target. Before surgery, the behavior of the ACH was comparable to that of typically developing participants. In contrast, following shank elongation, the ACH walked significantly shorter distances when aiming at the same targets. Comparable changes in limb kinematics, stride length, and estimation of traveled distance were found in adults walking on stilts, suggesting that path integration errors in both cases were related to alterations in the intersegmental coordination of the walking limbs. The results are consistent with a dynamic locomotor body schema used for controlling step length and path estimation, based on inherent relationships between gait parameters and body proportions.
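The proposed locomotor body schema implies a simple step-integration model of traveled distance that reproduces the undershoot qualitatively. All numbers below are illustrative, not taken from the study:

```python
def steps_to_target(target_m, assumed_step_m):
    """Steps the walker takes before judging the target reached, if
    distance is integrated as steps x assumed step length."""
    return target_m / assumed_step_m

def produced_distance(target_m, assumed_step_m, actual_step_m):
    """Distance actually covered when the internal body schema assumes
    a different step length than the one actually produced."""
    return steps_to_target(target_m, assumed_step_m) * actual_step_m

# If the updated body schema expects longer steps (0.6 m) after limb
# elongation but actual steps stay at 0.5 m, the walk stops short:
print(produced_distance(6.0, 0.6, 0.5))  # 5.0 m instead of 6.0 m
```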
A Revised Approach to the ULDB Design
NASA Technical Reports Server (NTRS)
Smith, Michael; Cathey, H. M., Jr.
2004-01-01
The National Aeronautics and Space Administration Balloon Program has experienced problems in the scaling up of the proposed Ultra Long Duration Balloon. Full deployment of the balloon envelope has been the issue for the larger balloons. There are a number of factors that contribute to this phenomenon. Analytical treatments of the deployment issue are currently underway. It has also been acknowledged that the current fabrication approach using foreshortening is costly, labor intensive, and requires significant handling during production, thereby increasing the chances of inducing damage to the envelope. Raven Industries has proposed a new design and fabrication approach that should increase the probability of balloon deployment, does not require foreshortening, and will reduce the handling, production labor, and final balloon cost. This paper will present a description of the logic and approach used to develop this innovation. This development consists of a serial set of steps with decision points that build upon the results of the previous steps. The first steps include limited material development and testing. This will be followed by load testing of bi-axially reinforced cylinders to determine the effect of eliminating the foreshortening. This series of tests has the goal of measuring the strain in the material as it is bi-axially loaded in a condition that closely replicates the application in the full-scale balloon. Constant lobe radius pumpkin-shaped test structures will be designed and analyzed. This matrix of model tests, in conjunction with the deployment analyses, will help develop a curve that should clearly present the deployment relationship for this kind of design. This will allow the "design space" for this type of balloon to be initially determined. The materials used, analyses, and ground testing results of both cylinders and small pumpkin structures will be presented.
Following ground testing, a series of test flights, staged in increments of increasing suspended load and balloon volume, will be conducted. The first small-scale test flight has been proposed for early spring 2004. Results of this test flight of this new design and approach will be presented. Two additional domestic test flights from Ft. Sumner, New Mexico, and Palestine, Texas, and one circumglobal test flight from Australia are planned as part of this development. Future plans for both ground testing and test flights will also be presented.
Bonjour, Timothy J; Charny, Grigory; Thaxton, Robert E
2016-11-01
Rapid, effective trauma resuscitations (TRs) decrease patient morbidity and mortality. Few studies have evaluated TR care times. Effective time goals and superior human patient simulator (HPS) training can improve patient survivability. The purpose of this study was to compare live TR to HPS resuscitation times to determine mean incremental resuscitation times and to ascertain whether simulation was educationally equivalent. The study was conducted at San Antonio Military Medical Center, a Department of Defense Level I trauma center. This was a prospective observational study measuring incremental step times by trauma teams during trauma and simulation patient resuscitations. The trauma and simulation patient arms each included 60 patients for statistical significance. Participants included Emergency Medicine residents and Physician Assistant residents as the trauma team leader. The trauma patient arm revealed a mean evaluation time of 10:33 and the simulation arm 10:23. Comparable time characteristics were seen in the airway, intravenous access, blood sample collection, and blood pressure data subsets. TR mean times were similar to the HPS arm subsets, demonstrating simulation as an effective educational tool. Effective stepwise approaches, incremental time goals, and superior HPS training can improve patient survivability and departmental productivity using TR teams. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.
Modifications to Improve Data Acquisition and Analysis for Camouflage Design
1983-01-01
terrains into facsimiles of the original scenes in 3, 4, or 5 colors in CIELAB notation. Tasks that were addressed included optimization of the...a histogram algorithm (HIST) was used as a first step in the clustering of the CIELAB values of the scene pixels. This algorithm is highly efficient...however, an optimal process and the CIELAB coordinates of the final color domains can be influenced by the color coordinate increments used in the
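The histogram-based first clustering step can be sketched as a coarse quantization of CIELAB pixel values, with the dominant bins taken as candidate color domains; the bin width plays the role of the color-coordinate increment that, as noted above, influences the final domains. Function names and data are illustrative.

```python
from collections import Counter

def hist_clusters(lab_pixels, increment=10.0, k=4):
    """First-pass histogram clustering (HIST) sketch: quantize CIELAB
    pixel values into bins of width 'increment' per coordinate and keep
    the k most populated bins as candidate color domains. The result
    depends on the coordinate increment chosen."""
    def bin_of(p):
        return tuple(int(c // increment) for c in p)
    counts = Counter(bin_of(p) for p in lab_pixels)
    # Return the bin centres of the k dominant bins.
    return [tuple((i + 0.5) * increment for i in b)
            for b, _ in counts.most_common(k)]

# Hypothetical (L, a, b) pixels forming two rough color groups:
pixels = [(52, 3, 8), (55, 1, 6), (31, -20, 14), (33, -22, 12), (30, -18, 15)]
print(hist_clusters(pixels, increment=10.0, k=2))
```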
Attrill, D C; Davies, R M; King, T A; Dickinson, M R; Blinkhorn, A S
2004-01-01
To quantify the temperature increments in a simulated dental pulp following irradiation with an Er:YAG laser, and to compare those increments when the laser is applied with and without water spray. Two cavities were prepared on either the buccal or lingual aspect of sound extracted teeth using the laser. One cavity was prepared with water spray, the other without, and the order of preparation was randomised. Identical preparation parameters were used for both cavities. Temperature increments were measured in the pulp chamber using a calibrated thermocouple and a novel pulp simulant. Maximum increments were 4.0 degrees C (water) and 24.7 degrees C (no water). Water was shown to be highly significant in reducing the overall temperature increments in all cases (p<0.001; paired t-test). None of the samples prepared with water spray, up to a maximum of 135 J cumulative energy, exceeded the threshold at which pulpal damage can be considered to occur. Only 25% of those prepared without water spray remained below this threshold. Extrapolation of the figures suggests probable tolerable limits of continuous laser irradiation with water in excess of 160 J. With the incorporation of the small breaks in the continuity of laser irradiation that occur in the in vivo situation, the cumulative energy dose tolerated by the pulp should far exceed these figures. The Er:YAG laser must be used in conjunction with water during cavity preparation. As such, it should be considered an effective tool for clinical use based on predicted pulpal responses to thermal stimuli.
Zapata-Vázquez, Rita Esther; Álvarez-Cervera, Fernando José; Alonzo-Vázquez, Felipe Manuel; García-Lira, José Ramón; Granados-García, Víctor; Pérez-Herrera, Norma Elena; Medina-Moreno, Manuel
2017-12-01
To conduct an economic evaluation of intracranial pressure (ICP) monitoring on the basis of current evidence from pediatric patients with severe traumatic brain injury, through a statistical model. The statistical model is a decision tree, whose branches take into account the severity of the lesion, the hospitalization costs, and the quality-adjusted life-year for the first 6 months post-trauma. The inputs consist of probability distributions calculated from a sample of 33 surviving children with severe traumatic brain injury, divided into two groups: with ICP monitoring (monitoring group) and without ICP monitoring (control group). The uncertainty of the parameters from the sample was quantified through a probabilistic sensitivity analysis using the Monte-Carlo simulation method. The model overcomes the drawbacks of small sample sizes, unequal groups, and the ethical difficulty in randomly assigning patients to a control group (without monitoring). The incremental cost in the monitoring group was Mex$3,934 (Mexican pesos), with an increase in quality-adjusted life-year of 0.05. The incremental cost-effectiveness ratio was Mex$81,062. The cost-effectiveness acceptability curve had a maximum at 54% of the cost effective iterations. The incremental net health benefit for a willingness to pay equal to 1 time the per capita gross domestic product for Mexico was 0.03, and the incremental net monetary benefit was Mex$5,358. The results of the model suggest that ICP monitoring is cost effective because there was a monetary gain in terms of the incremental net monetary benefit. Copyright © 2017. Published by Elsevier Inc.
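The decision rule behind the reported incremental net monetary benefit is a one-line formula: INMB = (willingness-to-pay × ΔQALY) − Δcost, with the intervention deemed cost effective when this is positive. A sketch using the abstract's increments; the willingness-to-pay value below is a hypothetical round figure standing in for 1× Mexico's per-capita GDP, not a value from the study.

```python
def incremental_net_monetary_benefit(delta_cost, delta_qaly, wtp):
    """INMB = (willingness-to-pay x incremental QALYs) - incremental cost;
    monitoring is deemed cost effective when this is positive."""
    return wtp * delta_qaly - delta_cost

# Abstract values: incremental cost Mex$3,934, incremental QALYs 0.05.
# The WTP threshold below (Mex$185,000/QALY) is a hypothetical figure.
inmb = incremental_net_monetary_benefit(3934, 0.05, 185_000)
print(inmb > 0)  # True: positive, so cost effective at this threshold
```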
Barlow, R. B.; Ramtoola, S.
1980-01-01
1 From measurements of the affinity constants of hydratropyltropine and its methiodide for muscarine-sensitive acetylcholine receptors in the guinea-pig ileum, the increment in log K for the hydroxyl group in atropine is 2.06 and in the methiodide it is 2.16. These effects are slightly bigger than any so far recorded with these receptors. 2 The estimate of the increment in apparent molal volume for the hydroxyl group is 1.1 cm3/mol in atropine and 1.0 cm3/mol in the methobromide. 3 The large effect of the group on affinity may be linked to its small apparent size in water as suggested in the previous paper. PMID:7470742
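The affinity increments above can be converted into binding free-energy increments via ΔΔG = −RT ln(10) × Δlog K. A sketch at an assumed near-physiological temperature (the temperature choice is ours, not the paper's):

```python
import math

R = 8.314   # gas constant, J/(mol K)
T = 310.0   # K, an assumed near-physiological temperature

def delta_g_kj(delta_log_k):
    """Binding free-energy change (kJ/mol) corresponding to an increment
    in log K: ddG = -RT ln(10) x dlogK."""
    return -R * T * math.log(10) * delta_log_k / 1000.0

# The reported hydroxyl-group increment of 2.06 log units in atropine:
print(f"{delta_g_kj(2.06):.1f} kJ/mol")  # about -12.2 kJ/mol
```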
NASA Technical Reports Server (NTRS)
Bridges, P. G.; Cross, E. J., Jr.; Boatwright, D. W.
1977-01-01
The overall drag of the aircraft is expressed in terms of the measured increment of power required to overcome a corresponding known increment of drag, which is generated by a towed drogue. The simplest form of the governing equation, D = ΔD × SHP/ΔSHP, is such that all of the parameters on the right side of the equation can be measured in flight. An evaluation of the governing equation has been performed using data generated by flight test of a Beechcraft T-34B. The simplicity of this technique and its proven applicability to sailplanes and small aircraft is well known. However, the method fails to account for airframe-propulsion system.
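The governing equation lends itself to a one-line calculation. A sketch with purely illustrative numbers (drogue drag and power levels are hypothetical; power can be in any unit since only its ratio enters):

```python
def total_drag(delta_drag_n, shp, delta_shp):
    """Governing equation of the incremental-drag flight-test method:
    D = dD x SHP / dSHP, where dD is the known drogue drag increment
    and dSHP the measured increment in shaft horsepower needed to hold
    the same flight condition."""
    return delta_drag_n * shp / delta_shp

# Illustrative only: towing a drogue of 150 N known drag raises required
# power from 180 hp to 195 hp, so the aircraft's total drag is:
print(total_drag(150.0, 180.0, 195.0 - 180.0))  # 1800.0 N
```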
Reducing voluntary, avoidable turnover through selection.
Barrick, Murray R; Zimmerman, Ryan D
2005-01-01
The authors investigated the efficacy of several variables used to predict voluntary, organizationally avoidable turnover even before the employee is hired. Analyses conducted on applicant data collected in 2 separate organizations (N = 445) confirmed that biodata, clear-purpose attitudes and intentions, and disguised-purpose dispositional retention scales predicted voluntary, avoidable turnover (rs ranged from -.16 to -.22, R = .37, adjusted R = .33). Results also revealed that biodata scales and disguised-purpose retention scales added incremental validity, whereas clear-purpose retention scales did not explain significant incremental variance in turnover beyond what was explained by biodata and disguised-purpose scales. Furthermore, disparate impact (subgroup differences on race, sex, and age) was consistently small (average d = 0.12 when the majority group scored higher than the minority group).
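Incremental validity of the kind reported can be computed from a hierarchical regression as the gain in R² when a scale is added to the model. A sketch for two standardized predictors using the closed-form squared multiple correlation; the data are hypothetical, for illustration only.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def incremental_r2(x1, x2, y):
    """Incremental validity of predictor x2 over x1: the gain in squared
    multiple correlation when x2 joins the model (closed form for two
    predictors). A sketch of the logic, not the study's analysis."""
    r1, r2, r12 = pearson(x1, y), pearson(x2, y), pearson(x1, x2)
    r2_full = (r1 ** 2 + r2 ** 2 - 2 * r1 * r2 * r12) / (1 - r12 ** 2)
    return r2_full - r1 ** 2

# Hypothetical data in which x2 adds predictive information beyond x1:
x1 = [1, 2, 3, 4, 5, 6]
x2 = [2, 1, 4, 3, 6, 5]
y = [1.5, 1.6, 3.4, 3.5, 5.6, 5.4]
print(round(incremental_r2(x1, x2, y), 3))
```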
USDA-ARS?s Scientific Manuscript database
There are few experimental data available on how herbicide sorption coefficients change across small increments within soil profiles. Soil profiles were obtained from three landform elements (eroded upper slope, deposition zone, and eroded waterway) in a strongly eroded agricultural field and segmen...
ERIC Educational Resources Information Center
Schmitt, Neal; Billington, Abigail; Keeney, Jessica; Reeder, Matthew; Pleskac, Timothy J.; Sinha, Ruchi; Zorzie, Mark
2011-01-01
Noncognitive attributes as the researchers have measured them do correlate with college GPA, but the incremental validity associated with these measures is relatively small. The noncognitive measures are correlated with other valued dimensions of student performance beyond the achievement reflected in college grades. There were much smaller…
Energy Conservation Programs | Climate Neutral Research Campuses | NREL
…Recognize accomplishments. One common theme is that successful programs check in often with the target… very small scale with one building or one department. The success and savings from that effort can then be used to grow incrementally. Harvard University adopted this approach, where investment in one…
A Note on the Incremental Validity of Aggregate Predictors.
ERIC Educational Resources Information Center
Day, H. D.; Marshall, David
Three computer simulations were conducted to show that very high aggregate predictive validity coefficients can occur when the across-case variability in absolute score stability occurring in both the predictor and criterion matrices is quite small. In light of the increase in internal consistency reliability achieved by the method of aggregation…
Fluorescence microscopy for measuring fibril angles in pine tracheids
Ralph O. Marts
1955-01-01
Observation and measurement of fibril angles in increment cores or similar small samples from living pine trees was facilitated by the use of fluorescence microscopy. Although some autofluorescence was present, brighter images could be obtained by staining the specimens with a 0.1% aqueous solution of a fluorochrome (Calcozine flavine TG extra concentrated, Calcozine...
The Year-Two Decline: Exploring the Incremental Experiences of a 1:1 Technology Initiative
ERIC Educational Resources Information Center
Swallow, Meredith
2015-01-01
Reports on one-to-one (1:1) technology initiatives emphasize overall favorable results; however, comprehensive multiyear studies understate the progressive experiences of teachers and students. A small body of research suggested that the second year of 1:1 technology programs manifested difficulties and struggles which significantly…
Reynolds Number Effects on the Performance of Lateral Control Devices
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.
2000-01-01
The influence of Reynolds number on the performance of outboard spoilers and ailerons was investigated on a generic subsonic transport configuration in the National Transonic Facility over a chord Reynolds number range from 3×10^6 to 30×10^6 and a Mach number range from 0.50 to 0.94. Spoiler deflection angles of 0, 10, 15, and 20 deg and aileron deflection angles of -10, 0, and 10 deg were tested. Aeroelastic effects were minimized by testing at constant normalized dynamic pressure conditions over intermediate Reynolds number ranges. Results indicated that the increment in rolling moment due to spoiler deflection generally becomes more negative as the Reynolds number increases from 3×10^6 to 22×10^6, with only small changes between Reynolds numbers of 22×10^6 and 30×10^6. The change in the increment in rolling moment coefficient with Reynolds number for the aileron-deflected configuration is generally small, with a general trend of increasing magnitude with increasing Reynolds number.
Gait Coordination in Parkinson Disease: Effects of Step Length and Cadence Manipulations
Williams, April J.; Peterson, Daniel S.; Earhart, Gammon M.
2013-01-01
Background Gait impairments are well documented in those with Parkinson disease (PD). Prior studies suggest that gait impairments may be worse and ongoing in those with PD who demonstrate freezing of gait (FOG) compared to those with PD who do not. Purpose Our aim was to determine the effects of manipulating step length and cadence individually, and together, on gait coordination in those with PD who experience FOG, those with PD who do not experience FOG, healthy older adults, and healthy young adults. Methods Eleven participants with PD and FOG, 16 with PD and no FOG, 18 healthy older, and 19 healthy young adults walked across a GAITRite walkway under four conditions: Natural, Fast (+50% of preferred cadence), Small (−50% of preferred step length), and SmallFast (+50% cadence and −50% step length). Coordination (i.e., phase coordination index) was measured for each participant during each condition and analyzed using mixed-model repeated-measures ANOVAs. Results FOG was not elicited. Decreasing step length, or decreasing step length and increasing cadence together, affected coordination. Small steps combined with fast cadence resulted in poorer coordination in both groups with PD compared to healthy young adults, and in those with PD and FOG compared to healthy older adults. Conclusions Coordination deficits can be identified in those with PD by having them walk with small steps combined with fast cadence. Short steps produced at a high rate elicit worse coordination than short steps or fast steps alone. PMID:23333356
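The phase coordination index quantifies both the accuracy and the consistency of anti-phase stepping. A sketch following one published formulation (sum of the phase coefficient of variation and the mean phase inaccuracy as a percentage of 180°); the exact formulation used in the study is not given in the abstract.

```python
import math

def phase_coordination_index(phases_deg):
    """Phase coordination index over a series of step phases (left
    heel-strike expressed as a phase of the right stride, ideally 180 deg).
    PCI = phase variability (CV, %) + phase inaccuracy (mean |phi - 180|
    as % of 180), following one published formulation; lower values
    indicate better coordination."""
    n = len(phases_deg)
    mean = sum(phases_deg) / n
    sd = math.sqrt(sum((p - mean) ** 2 for p in phases_deg) / n)
    cv_pct = 100.0 * sd / mean
    inaccuracy_pct = 100.0 * (sum(abs(p - 180.0) for p in phases_deg) / n) / 180.0
    return cv_pct + inaccuracy_pct

well = [178, 181, 180, 179, 182]   # near-antiphase, consistent stepping
poor = [160, 195, 170, 200, 150]   # inaccurate and variable stepping
print(phase_coordination_index(well) < phase_coordination_index(poor))  # True
```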
Kunz, Wolfgang G; Hunink, M G Myriam; Sommer, Wieland H; Beyer, Sebastian E; Meinel, Felix G; Dorn, Franziska; Wirth, Stefan; Reiser, Maximilian F; Ertl-Wagner, Birgit; Thierfelder, Kolja M
2016-11-01
Endovascular therapy in addition to standard care (EVT+SC) has been demonstrated to be more effective than SC in acute ischemic large vessel occlusion stroke. Our aim was to determine the cost-effectiveness of EVT+SC depending on patients' initial National Institutes of Health Stroke Scale (NIHSS) score, time from symptom onset, Alberta Stroke Program Early CT Score (ASPECTS), and occlusion location. A decision model based on Markov simulations estimated lifetime costs and quality-adjusted life years (QALYs) associated with both strategies applied in a US setting. Model input parameters were obtained from the literature, including recently pooled outcome data of 5 randomized controlled trials (ESCAPE [Endovascular Treatment for Small Core and Proximal Occlusion Ischemic Stroke], EXTEND-IA [Extending the Time for Thrombolysis in Emergency Neurological Deficits-Intra-Arterial], MR CLEAN [Multicenter Randomized Clinical Trial of Endovascular Treatment for Acute Ischemic Stroke in the Netherlands], REVASCAT [Randomized Trial of Revascularization With Solitaire FR Device Versus Best Medical Therapy in the Treatment of Acute Stroke Due to Anterior Circulation Large Vessel Occlusion Presenting Within 8 Hours of Symptom Onset], and SWIFT PRIME [Solitaire With the Intention for Thrombectomy as Primary Endovascular Treatment]). Probabilistic sensitivity analysis was performed to estimate uncertainty of the model results. Net monetary benefits, incremental costs, incremental effectiveness, and incremental cost-effectiveness ratios were derived from the probabilistic sensitivity analysis. The willingness-to-pay was set to $50 000/QALY. Overall, EVT+SC was cost-effective compared with SC (incremental cost: $4938, incremental effectiveness: 1.59 QALYs, and incremental cost-effectiveness ratio: $3110/QALY) in 100% of simulations. In all patient subgroups, EVT+SC led to gained QALYs (range: 0.47-2.12), and mean incremental cost-effectiveness ratios were considered cost-effective. 
However, subgroups with ASPECTS ≤5 or with M2 occlusions showed considerably higher incremental cost-effectiveness ratios ($14 273/QALY and $28 812/QALY, respectively) and only reached suboptimal acceptability in the probabilistic sensitivity analysis (75.5% and 59.4%, respectively). All other subgroups had acceptability rates of 90% to 100%. EVT+SC is cost-effective in most subgroups. In patients with ASPECTS ≤5 or with M2 occlusions, cost-effectiveness remains uncertain based on current data. © 2016 American Heart Association, Inc.
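The headline figures follow from simple arithmetic; the sketch below reproduces the base-case ratio from the abstract's own numbers (the small gap to the reported $3110/QALY is consistent with rounding of intermediate values):

```python
def icer(delta_cost, delta_qaly):
    # Incremental cost-effectiveness ratio: extra dollars per extra QALY gained.
    return delta_cost / delta_qaly

ratio = icer(4938.0, 1.59)        # ~3106 $/QALY vs. the reported $3110/QALY
wtp = 50_000.0                    # willingness-to-pay threshold used in the study
cost_effective = ratio < wtp      # EVT+SC falls well under the threshold
```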
NASA Astrophysics Data System (ADS)
DeMarco, Adam Ward
The turbulent motions within the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, Δu and ΔT, respectively, this work investigates their multiscale behavior to uncover unique traits that have yet to be thoroughly studied. Drawing on diverse datasets, including idealized wind tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small- and larger-scale components of the increment probability density functions. This comprehensive analysis also utilizes a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increments' probability density functions (pdfs) across multiscale atmospheric motions. An approach is proposed for estimating these pdfs utilizing the maximum likelihood estimation (MLE) technique, which has not previously been applied to atmospheric data. Using this technique, we reveal the ability to estimate higher-order moments accurately with a limited sample size, which has been a persistent concern for atmospheric turbulence research. With the use of robust goodness-of-fit (GoF) metrics, we quantitatively assess the accuracy of the distributions against the diverse datasets. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate for estimating the increment pdf fields. Therefore, using the NIG model and its parameters, we display the variations in the increments over a range of scales, revealing some unique scale-dependent qualities under various stability and flow conditions.
This novel approach provides a method of characterizing increment fields using only four pdf parameters. We also investigate the capability of current state-of-the-art mesoscale atmospheric models to predict these features and highlight their potential for future model development. With the knowledge gained in this study, a number of applications can benefit from our methodology, including the wind energy and optical wave propagation fields.
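The MLE-based NIG fitting described above can be sketched with SciPy (synthetic data standing in for measured increments; the parameter values below are illustrative, not from the study):

```python
import numpy as np
from scipy import stats

# Draw a synthetic heavy-tailed "increment" sample from a known NIG law.
rng = np.random.default_rng(0)
sample = stats.norminvgauss(a=1.5, b=0.0, loc=0.0, scale=1.0).rvs(
    size=5000, random_state=rng)

# Maximum likelihood fit of the four NIG parameters (a, b, loc, scale);
# these four numbers then summarize the whole increment pdf.
a_hat, b_hat, loc_hat, scale_hat = stats.norminvgauss.fit(sample)
fitted = stats.norminvgauss(a_hat, b_hat, loc=loc_hat, scale=scale_hat)

# Higher-order moments follow directly from the fitted distribution,
# sidestepping the sample-size sensitivity of empirical moment estimates.
mean, var, skew, kurt = fitted.stats(moments='mvsk')
```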
NASA Technical Reports Server (NTRS)
Wong, R. C.; Owen, H. A., Jr.; Wilson, T. G.
1981-01-01
Small-signal models are derived for the power stage of the voltage step-up (boost) and the current step-up (buck) converters. The modeling covers operation in both the continuous-mmf mode and the discontinuous-mmf mode. The power stage in the regulated current step-up converter on board the Dynamics Explorer Satellite is used as an example to illustrate the procedures in obtaining the small-signal functions characterizing a regulated converter.
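As a concrete illustration of the kind of small-signal quantities such models yield, here are the standard averaged-model results for the voltage step-up (boost) stage in continuous conduction, a textbook form rather than this paper's specific equations:

```python
import math

def boost_small_signal(Vg, D, L, C, R):
    """Standard CCM boost duty-to-output small-signal quantities
    (dc gain, RHP zero, resonance, Q) from the averaged model."""
    Dp = 1.0 - D                     # complementary duty ratio D'
    V = Vg / Dp                      # dc output voltage in CCM
    Gd0 = V / Dp                     # dc gain of the duty-to-output transfer function
    wz = Dp**2 * R / L               # right-half-plane zero (rad/s)
    w0 = Dp / math.sqrt(L * C)       # resonant frequency (rad/s)
    Q = Dp * R * math.sqrt(C / L)    # quality factor
    return Gd0, wz, w0, Q

# Example: 12 V input, 50% duty, 100 uH, 100 uF, 10 ohm load.
Gd0, wz, w0, Q = boost_small_signal(12.0, 0.5, 100e-6, 100e-6, 10.0)
```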
Ziaimatin, Hasti; Groza, Tudor; Tudorache, Tania; Hunter, Jane
2016-12-01
Collaboration platforms provide a dynamic environment where the content is subject to ongoing evolution through expert contributions. The knowledge embedded in such platforms is not static as it evolves through incremental refinements - or micro-contributions. Such refinements provide vast resources of tacit knowledge and experience. In our previous work, we proposed and evaluated a Semantic and Time-dependent Expertise Profiling (STEP) approach for capturing expertise from micro-contributions. In this paper we extend our investigation to structured micro-contributions that emerge from an ontology engineering environment, such as the one built for developing the International Classification of Diseases (ICD) revision 11. We take advantage of the semantically related nature of these structured micro-contributions to showcase two major aspects: (i) a novel semantic similarity metric, in addition to an approach for creating bottom-up baseline expertise profiles using expertise centroids; and (ii) the application of STEP in this new environment combined with the use of the same semantic similarity measure to both compare STEP against baseline profiles, as well as to investigate the coverage of these baseline profiles by STEP.
Ouyang, Er-Ming; Wang, Wei; Long, Neng; Li, Huai
2009-04-15
A startup experiment was conducted for a thermophilic anaerobic sequencing batch reactor (ASBR) treating thermal-hydrolyzed sewage sludge using a step-wise temperature-increment strategy: 35 degrees C-->40 degrees C-->47 degrees C-->53 degrees C. The results showed that the first step increase (from 35 degrees C to 40 degrees C) and the final step increase (from 47 degrees C to 53 degrees C) had only a slight effect on the digestion process. The second step increase (from 40 degrees C to 47 degrees C) resulted in a severe disturbance: biogas production, methane content, effluent COD and the microbial community were all strongly disturbed. At the steady stage of the thermophilic ASBR treating thermal-hydrolyzed sewage sludge, the average daily gas production, methane content, specific methane production (CH4 per influent COD), TCOD removal rate and SCOD removal rate were 2.038 L/d, 72.0%, 188.8 mL/g, 63.8% and 83.3%, respectively. The results of SEM and DGGE indicated that the dominant species differed markedly between the early stage and the steady stage.
Tian, Zhe; Zhang, Yu; Li, Yuyou; Chi, Yongzhi; Yang, Min
2015-02-01
The purpose of this study was to explore how fast a thermophilic anaerobic microbial community could be established during the one-step startup of thermophilic anaerobic digestion from a mesophilic digester. Stable thermophilic anaerobic digestion was achieved within 20 days from a mesophilic digester treating sewage sludge by adopting the one-step startup strategy. The succession of archaeal and bacterial populations over a period of 60 days after the temperature increase was tracked using 454-pyrosequencing and quantitative PCR. After the increase in temperature, a thermophilic methanogenic community was established within 11 days, characterized by the fast colonization of Methanosarcina thermophila and two hydrogenotrophic methanogens (Methanothermobacter spp. and Methanoculleus spp.). At the same time, the bacterial community was dominated by Fervidobacterium, whose relative abundance rapidly increased from 0 to 28.52% in 18 days, followed by other potentially thermophilic genera, such as Clostridium, Coprothermobacter, Anaerobaculum and EM3. These results demonstrate that the one-step startup strategy allows rapid establishment of a thermophilic anaerobic microbial community. Copyright © 2014 Elsevier Ltd. All rights reserved.
Two Different Views on the World Around Us: The World of Uniformity versus Diversity
Nayakankuppam, Dhananjay
2016-01-01
We propose that when individuals believe in fixed traits of personality (entity theorists), they are likely to expect a world of “uniformity.” As such, they readily infer a population statistic from a small sample of data with confidence. In contrast, individuals who believe in malleable traits of personality (incremental theorists) are likely to presume a world of “diversity,” such that they “hesitate” to infer a population statistic from a similarly sized sample. In four laboratory experiments, we found that compared to incremental theorists, entity theorists estimated a population mean from a sample with a greater level of confidence (Studies 1a and 1b), expected more homogeneity among the entities within a population (Study 2), and perceived an extreme value to be more indicative of an outlier (Study 3). These results suggest that individuals use their implicit self-theory orientations (entity theory versus incremental theory) to see a population as consisting of either homogeneous or heterogeneous entities. PMID:27977788
Enlargement and contracture of C2-ceramide channels.
Siskind, Leah J; Davoody, Amirparviz; Lewin, Naomi; Marshall, Stephanie; Colombini, Marco
2003-09-01
Ceramides are known to play a major regulatory role in apoptosis by inducing cytochrome c release from mitochondria. We have previously reported that ceramide, but not dihydroceramide, forms large and stable channels in phospholipid membranes and outer membranes of isolated mitochondria. C(2)-ceramide channel formation is characterized by conductance increments ranging from <1 to >200 nS. These conductance increments often represent the enlargement and contracture of channels rather than the opening and closure of independent channels. Enlargement is supported by the observation that many small conductance increments can lead to a large decrement. Also the initial conductances favor cations, but this selectivity drops dramatically with increasing total conductance. La(+3) causes rapid ceramide channel disassembly in a manner indicative of large conducting structures. These channels have a propensity to contract by a defined size (often multiples of 4 nS) indicating the formation of cylindrical channels with preferred diameters rather than a continuum of sizes. The results are consistent with ceramides forming barrel-stave channels whose size can change by loss or insertion of multiple ceramide columns.
Stop the escalators: using the built environment to increase usual daily activity
Westfall, John M; Fernald, Doug H
2010-01-01
Background Obesity is an epidemic in the United States. Two-thirds of the population is overweight and does not get enough exercise. Eastern cities are full of escalators that transport obese Americans to and from the subway. Walking stairs is a moderate activity requiring 3–6 metabolic equivalent tasks (METS) and burning 3.5–7 kcal/min. We determined the caloric expenditure and potential weight change of the population of one eastern city if all the subway riders walked the stairs rather than ride the escalators. Methods There are 5,000,000 daily journeys made on the New York City Subway. Subway entrances include a stairway or escalator of approximately 25 steps. Each step up requires 0.11–0.15 kcals; each step down requires 0.05 kcals. To lose one pound requires burning 3500 kcals. We assumed each rider made a round trip so about 2.5 million individual people ride the subway each day. Results By walking stairs rather than riding escalators, the riders of the New York Subway would lose more than 2.6 million pounds per year. Discussion The average subway rider would lose about one pound per year. While this may sound insignificant, in one decade the average subway rider would lose 10 pounds, effectively reversing the trend in the United States of gaining 10 pounds per decade. This conservative estimate of the number of stairs ascended daily means that subway riders might lose even more weight. We believe that this novel approach might lead to other public and private efforts to increase physical activity such as elevators that only stop on even numbered floors, making stairwells more attractive and well lit, and stopping moving sidewalks. The built environment may support small, incremental changes in usual daily physical activity that can have significant impact on populations and individuals. PMID:27774003
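The abstract's estimate is reproducible under one plausible reading of its assumptions (each one-way journey includes one ~25-step descent and one ~25-step ascent, with the ascent cost taken at the upper end of the quoted 0.11-0.15 kcal range):

```python
RIDERS = 2_500_000                 # individual round-trip riders per day
STEPS = 25                         # steps per stairway
KCAL_UP, KCAL_DOWN = 0.15, 0.05    # kcal per step up / down
KCAL_PER_POUND = 3500.0

# A round trip is two journeys; each journey has one ascent and one descent.
kcal_per_rider_day = 2 * STEPS * (KCAL_UP + KCAL_DOWN)             # 10 kcal/day
pounds_per_rider_year = kcal_per_rider_day * 365 / KCAL_PER_POUND  # ~1 lb/year
total_pounds_per_year = pounds_per_rider_year * RIDERS             # ~2.6 million lb
```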
NASA Astrophysics Data System (ADS)
Alam, Md Jahangir; Goodall, Jonathan L.
2012-04-01
The goal of this research was to quantify the relative impact of hydrologic and nitrogen source changes on incremental nitrogen yield in the contiguous United States. Using nitrogen source estimates from various federal databases, remotely sensed land use data from the National Land Cover Data program, and observed instream loadings from the United States Geological Survey National Stream Quality Accounting Network program, we calibrated and applied the spatially referenced regression model SPARROW to estimate incremental nitrogen yield for the contiguous United States. We ran different model scenarios to separate the effects of changes in source contributions from hydrologic changes for the years 1992 and 2001, assuming that only state conditions changed and that model coefficients describing the stream water-quality response to changes in state conditions remained constant between 1992 and 2001. Model results show a decrease of 8.2% in the median incremental nitrogen yield over the period of analysis, with the vast majority of this decrease due to changes in hydrologic conditions rather than decreases in nitrogen sources. For example, when we changed the 1992 version of the model to have nitrogen source data from 2001, the model results showed only a small increase in median incremental nitrogen yield (0.12%). However, when we changed the 1992 version of the model to have hydrologic conditions from 2001, model results showed a decrease of approximately 8.7% in median incremental nitrogen yield. We did, however, find notable differences in incremental yield estimates for different sources of nitrogen after controlling for hydrologic changes, particularly for population-related sources. For example, the median incremental yield for population-related sources increased by 8.4% after controlling for hydrologic changes. This is in contrast to a 2.8% decrease in population-related sources when hydrologic changes are included in the analysis.
Likewise, we found that the median incremental yield from urban watersheds increased by 6.8% after controlling for hydrologic changes, in contrast to the median incremental nitrogen yield from cropland watersheds, which decreased by 2.1% over the same time period. These results suggest that, after accounting for hydrologic changes, population-related sources became a more significant contributor of nitrogen yield to streams in the contiguous United States over the period of analysis. However, this study was not able to account for the influence of human management practices, such as improvements in wastewater treatment plants or Best Management Practices, that likely improved water quality, due to a lack of data for quantifying the impact of these practices in the study area.
Modeling CANDU-6 liquid zone controllers for effects of thorium-based fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
St-Aubin, E.; Marleau, G.
2012-07-01
We use the DRAGON code to model the CANDU-6 liquid zone controllers and evaluate the effects of thorium-based fuels on their incremental cross sections and reactivity worth. We optimize both the numerical quadrature and the spatial discretization for 2D cell models in order to provide accurate fuel properties for 3D liquid zone controller supercell models. We propose a low-computational-cost, parameterized, pseudo-exact 3D cluster geometry modeling approach that avoids tracking issues on small external surfaces. This methodology provides consistent incremental cross sections and reactivity worths when the thickness of the buffer region is reduced. When compared with an approximate annular geometry representation of the fuel and coolant region, we observe that the cluster description of fuel bundles in the supercell models does not considerably increase the precision of the results while substantially increasing the CPU time. In addition, this comparison shows that it is imperative to finely describe the liquid zone controller geometry, since it has a strong impact on the incremental cross sections. This paper also shows that liquid zone controller reactivity worth is greatly decreased in the presence of thorium-based fuels compared to the reference natural uranium fuel, since the fission and fast-to-thermal scattering incremental cross sections are higher for the new fuels. (authors)
Mosaly, Prithima R; Mazur, Lukasz M; Marks, Lawrence B
2017-10-01
The methods employed to quantify the baseline pupil size and task-evoked pupillary response (TEPR) may affect the overall study results. To test this hypothesis, the objective of this study was to assess variability in baseline pupil size and TEPR during two basic working memory tasks: a constant load of 3-letter memorisation-recall (10 trials), and incremental-load memorisation-recall (two trials at each load level), using two commonly used methods: (1) change from a trial/load-specific baseline, and (2) change from a constant baseline. Results indicated that there was a significant shift in baseline between the trials for the constant load, and between the load levels for the incremental load. The TEPR was independent of shifts in baseline using method 1 only for the constant load, and method 2 only for higher levels of the incremental load condition. These important findings suggest that the assessment of both the baseline and the methods used to quantify TEPR are critical in ergonomics applications, especially in studies with a small number of trials per subject per condition. Practitioner Summary: Quantification of TEPR can be affected by shifts in baseline pupil size that are most likely driven by non-cognitive factors when other external factors are kept constant. Therefore, the quantification methods employed to compute both baseline and TEPR are critical in understanding human information processing in practical ergonomics settings.
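The two quantification methods differ only in which baseline is subtracted from the task-period signal; a minimal sketch (array layout, variable names and the pre-task window are assumptions, not the study's code):

```python
import numpy as np

def tepr(trials, pre, constant_baseline=None):
    """Task-evoked pupillary response per trial.

    Method 1 (constant_baseline is None): subtract each trial's own
    pre-task mean. Method 2: subtract a single fixed baseline value."""
    trials = np.asarray(trials, dtype=float)
    if constant_baseline is None:
        base = trials[:, :pre].mean(axis=1, keepdims=True)  # per-trial baseline
    else:
        base = constant_baseline                            # shared baseline
    return trials - base

# Two toy trials whose baselines drift between trials (1.0 vs 2.0):
trials = [[1.0, 1.0, 3.0, 3.0],
          [2.0, 2.0, 4.0, 4.0]]
m1 = tepr(trials, pre=2)                        # method 1: drift removed
m2 = tepr(trials, pre=2, constant_baseline=1.0) # method 2: drift leaks into TEPR
```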
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueda, Yoshihiro, E-mail: ueda-yo@mc.pref.osaka.jp; Miyazaki, Masayoshi; Nishiyama, Kinji
2012-07-01
Purpose: To evaluate setup error and interfractional changes in tumor motion magnitude using an electronic portal imaging device in cine mode (EPID cine) during the course of stereotactic body radiation therapy (SBRT) for non-small-cell lung cancer (NSCLC) and to calculate margins to compensate for these variations. Materials and Methods: Subjects were 28 patients with Stage I NSCLC who underwent SBRT. Respiratory-correlated four-dimensional computed tomography (4D-CT) at simulation was binned into 10 respiratory phases, which provided average intensity projection CT data sets (AIP). On 4D-CT, peak-to-peak motion of the tumor (M-4DCT) in the craniocaudal direction was assessed and the tumor center (mean tumor position [MTP]) of the AIP (MTP-4DCT) was determined. At treatment, the tumor on cone beam CT was registered to that on AIP for patient setup. During three sessions of irradiation, peak-to-peak motion of the tumor (M-cine) and the mean tumor position (MTP-cine) were obtained using EPID cine and in-house software. Based on changes in tumor motion magnitude (ΔM) and patient setup error (ΔMTP), defined as the differences between M-4DCT and M-cine and between MTP-4DCT and MTP-cine, a margin to compensate for these variations was calculated with Stroom's formula. Results: The means (±standard deviation, SD) of M-4DCT and M-cine were 3.1 (±3.4) and 4.0 (±3.6) mm, respectively. The means (±SD) of ΔM and ΔMTP were 0.9 (±1.3) and 0.2 (±2.4) mm, respectively. Internal target volume-planning target volume (ITV-PTV) margins to compensate for ΔM, ΔMTP, and both combined were 3.7, 5.2, and 6.4 mm, respectively. Conclusion: EPID cine is a useful modality for assessing interfractional variations of tumor motion. The ITV-PTV margins to compensate for these variations can be calculated.
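Stroom's formula combines systematic (Σ) and random (σ) error components as 2.0Σ + 0.7σ; the abstract's combined margin is also consistent with adding the two per-component margins in quadrature (that combination rule is our reading, not stated explicitly in the abstract):

```python
import math

def stroom_margin(systematic_sd, random_sd):
    # Stroom's formula: margin = 2.0*Sigma + 0.7*sigma (all in mm).
    return 2.0 * systematic_sd + 0.7 * random_sd

# Reported per-component ITV-PTV margins (mm); their quadrature sum
# lands on the reported combined margin of 6.4 mm.
m_dM, m_dMTP = 3.7, 5.2
combined = math.hypot(m_dM, m_dMTP)
```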
Atmospheric response to Saharan dust deduced from ECMWF reanalysis (ERA) temperature increments
NASA Astrophysics Data System (ADS)
Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.
2003-09-01
This study focuses on the atmospheric temperature response to dust deduced from a new source of data: the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert, the lack of the dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas with high positive correlation (>0.5), low correlation and high negative correlation (<-0.5). The innermost positive correlation area (PCA) is a large area near the center of the Sahara desert. For some local maxima inside this area the correlation even exceeds 0.8. The outermost negative correlation area (NCA) is not uniform. It consists of some areas over the eastern and western parts of North Africa with a relatively small amount of dust. Inside those areas both positive and negative high correlations exist at pressure levels ranging from 850 to 700 hPa, with peak values near 775 hPa. Dust-forced heating (cooling) inside the PCA (NCA) is accompanied by changes in the static instability of the atmosphere above the dust layer. The reanalysis data of the European Centre for Medium-Range Weather Forecasts (ECMWF) suggest that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity and downward (upward) airflow.
These findings are associated with the interaction between dust-forced heating/cooling and atmospheric circulation. This paper contributes to a better understanding of dust radiative processes missed in the model.
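The core diagnostic, a gridpoint-by-gridpoint correlation between the April increment and AI time series, can be sketched as follows (the (year, lat, lon) array layout and the ±0.5 thresholds for the PCA/NCA masks follow our reading of the text):

```python
import numpy as np

def correlation_map(incr, ai):
    """Pearson correlation at each grid point over the year axis.

    incr, ai: arrays of shape (n_years, n_lat, n_lon)."""
    a = incr - incr.mean(axis=0)
    b = ai - ai.mean(axis=0)
    return (a * b).sum(axis=0) / np.sqrt(
        (a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))

# Synthetic check: a linear relation yields r = 1 at every grid point.
rng = np.random.default_rng(1)
incr = rng.normal(size=(15, 3, 4))
r = correlation_map(incr, 2.0 * incr + 3.0)
pca_mask, nca_mask = r > 0.5, r < -0.5   # positive / negative correlation areas
```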
Metsaranta, Juha M.; Lieffers, Victor J.
2008-01-01
Background and Aims Changes in size inequality in tree populations are often attributed to changes in the mode of competition over time. The mode of competition may also fluctuate annually in response to variation in growing conditions. Factors causing growth rate to vary can also influence competition processes, and thus influence how size hierarchies develop. Methods Detailed data obtained by tree-ring reconstruction were used to study annual changes in size and size increment inequality in several even-aged, fire-origin jack pine (Pinus banksiana) stands in the boreal shield and boreal plains ecozones in Saskatchewan and Manitoba, Canada, by using the Gini and Lorenz asymmetry coefficients. Key Results The inequality of size was related to variables reflecting long-term stand dynamics (e.g. stand density, mean tree size and average competition, as quantified using a distance-weighted absolute size index). The inequality of size increment was greater and more variable than the inequality of size. Inequality of size increment was significantly related to annual growth rate at the stand level, and was higher when growth rate was low. Inequality of size increment was usually due primarily to large numbers of trees with low growth rates, except during years with low growth rate when it was often due to small numbers of trees with high growth rates. The amount of competition to which individual trees were subject was not strongly related to the inequality of size increment. Conclusions Differences in growth rate among trees during years of poor growth may form the basis for development of size hierarchies on which asymmetric competition can act. A complete understanding of the dynamics of these forests requires further evaluation of the way in which factors that influence variation in annual growth rate also affect the mode of competition and the development of size hierarchies. PMID:18089583
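Both inequality measures used in the Methods are straightforward to compute; the sketch below follows the standard Gini closed form and our reading of Damgaard and Weiner's Lorenz asymmetry coefficient (S = 1 for a symmetric Lorenz curve), not the authors' own code:

```python
import numpy as np

def gini(x):
    """Gini coefficient via the sorted closed form
    G = (2*sum(i*x_i) - (n+1)*sum(x)) / (n*sum(x)), i = 1..n."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return (2.0 * np.sum(i * x) - (n + 1) * x.sum()) / (n * x.sum())

def lorenz_asymmetry(x):
    """Damgaard-Weiner Lorenz asymmetry coefficient S = F(mu) + L(mu)."""
    x = np.sort(np.asarray(x, dtype=float))
    n, mu = x.size, x.mean()
    m = int(np.searchsorted(x, mu))          # count of values strictly below the mean
    delta = (mu - x[m - 1]) / (x[m] - x[m - 1])
    F = (m + delta) / n                      # population share below the mean
    L = (x[:m].sum() + delta * x[m]) / x.sum()  # size share below the mean
    return F + L

g = gini([1, 2, 3, 4, 5])              # 4/15 for this sample
s = lorenz_asymmetry([1, 2, 3, 4, 5])  # symmetric sample: S = 1
```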
NASA Capabilities That Could Impact Terrestrial Smart Grids of the Future
NASA Technical Reports Server (NTRS)
Beach, Raymond F.
2015-01-01
Incremental steps to steadily build, test, refine, and qualify capabilities that lead to affordable flight elements and a deep space capability. Potential Deep Space Vehicle power system characteristics: power 10 kilowatts average; two independent power channels with multi-level cross-strapping; solar array power 24 plus kilowatts; multi-junction arrays; lithium-ion battery storage 200 plus ampere-hours; sized for deep space or low lunar orbit operation; distribution 120 volts secondary (SAE AS 5698); 2 kilowatt power transfer between vehicles.
Code of Federal Regulations, 2011 CFR
2011-07-01
... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...
Code of Federal Regulations, 2013 CFR
2013-07-01
... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...
Code of Federal Regulations, 2012 CFR
2012-07-01
... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...
Code of Federal Regulations, 2010 CFR
2010-07-01
... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...
The cost-effectiveness of air bags by seating position.
Graham, J D; Thompson, K M; Goldie, S J; Segui-Gomez, M; Weinstein, M C
1997-11-05
Motor vehicle crashes continue to cause significant mortality and morbidity in the United States. Installation of air bags in new passenger vehicles is a major initiative in the field of injury prevention. To assess the net health consequences and cost-effectiveness of driver's side and front passenger air bags from a societal perspective, taking into account the increased risk to children who occupy the front passenger seat and the diminished effectiveness for older adults. A deterministic state transition model tracked a hypothetical cohort of new vehicles over a 20-year period for 3 strategies: (1) installation of safety belts, (2) installation of driver's side air bags in addition to safety belts, and (3) installation of front passenger air bags in addition to safety belts and driver's side air bags. Changes in health outcomes, valued in terms of quality-adjusted life-years (QALYs) and costs (in 1993 dollars), were projected following the recommendations of the Panel on Cost-effectiveness in Health and Medicine. US population-based and convenience sample data were used. Incremental cost-effectiveness ratios. Safety belts are cost saving, even at 50% use. The addition of driver's side air bags to safety belts results in net health benefits at an incremental cost of $24000 per QALY saved. The further addition of front passenger air bags results in an incremental net benefit at a higher incremental cost of $61000 per QALY saved. Results were sensitive to the unit cost of air bag systems, their effectiveness, baseline fatality rates, the ratio of injuries to fatalities, and the real discount rate. Both air bag systems save life-years at costs that are comparable to many medical and public health practices. Immediate steps can be taken to enhance the cost-effectiveness of front passenger air bags, such as moving children to the rear seat.
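The deterministic state-transition (cohort) logic can be sketched in a few lines; the transition probabilities below are purely hypothetical placeholders, not the study's calibrated inputs:

```python
def cohort_life_years(annual_fatality_risk, years=20, cohort=100_000):
    """Deterministic two-state (alive/dead) cohort model: accumulate
    person-years lived over the 20-year vehicle tracking horizon."""
    alive, life_years = float(cohort), 0.0
    for _ in range(years):
        alive *= (1.0 - annual_fatality_risk)  # survivors after this year
        life_years += alive
    return life_years

# Hypothetical risks: adding a driver's side air bag slightly lowers the
# annual crash-fatality risk relative to safety belts alone.
ly_belts = cohort_life_years(1.0e-4)
ly_belts_bag = cohort_life_years(0.9e-4)
life_years_gained = ly_belts_bag - ly_belts    # small positive gain
```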
The impact of therapeutic reference pricing on innovation in cardiovascular medicine.
Sheridan, Desmond; Attridge, Jim
2006-12-01
Therapeutic reference pricing (TRP) places medicines to treat the same medical condition into groups or 'clusters' with a single common reimbursed price. Underpinning this economic measure is an implicit assumption that the products included in the cluster have an equivalent effect on a typical patient with this disease. 'Truly innovative' products can be exempt from inclusion in the cluster. This increasingly common approach to cost containment allocates products into one of two categories - truly innovative or therapeutically equivalent. This study examines the implications of TRP against the step-wise evolution of drugs for cardiovascular conditions over the past 50 years. It illustrates the complex interactions between advances in understanding of cellular and molecular disease mechanisms, diagnostic techniques, treatment concepts, and the synthesis, testing and commercialisation of products. It confirms the highly unpredictable and incremental nature of the innovation process. Medical progress in terms of improvement in patient outcomes over the long-term depends on the cumulative effect of year after year of painstaking incremental advances. It shows that the parallel processes of advances in scientific knowledge and the industrial 'investment-innovative cycle' involve highly developed sets of complementary capabilities and resources. A framework is developed to assess the impact of TRP upon research and development investment decisions and the development of therapeutic classes. We conclude that a simple categorisation of products as either 'truly innovative' or 'therapeutically equivalent' is inconsistent with the incremental processes of innovation and the resulting differentiated product streams revealed by our analysis. 
Widespread introduction of TRP would probably have prematurely curtailed development of many incremental innovations that became the preferred 'product of choice' by physicians for some indications and patients in managing the incidence of cardiovascular disease.
Monsalves-Alvarez, Matias; Castro-Sepulveda, Mauricio; Zapata-Lamana, Rafael; Rosales-Soto, Giovanni; Salazar, Gabriela
2015-10-01
Childhood obesity is a worldwide health concern. To address it, different interventions have been planned to increase physical activity and reduce excess weight in children, with limited or no success. The aim of this study was to evaluate the results of a pilot intervention consisting of three 15-minute activity breaks conducted by educators and supervised by physical education teachers, assessing motor skills and nutritional status in preschool children. The sample was 70 preschool children (32 boys and 38 girls), age 4 ± 0.6 years. Physical activity classes were performed three times a week, 45 minutes daily, distributed in three 15-minute breaks. The circuits were planned to include jumps, sprints, carrying medicine balls, gallops and crawling. The motor skill tests performed were the standing long jump (SLJ) and the twelve-meter run. With the intervention, no significant differences in nutritional status were found on mean Z score (boys p = 0.49, girls p = 0.77). An increment in weight and height was found after the intervention (p < 0.0001). Regarding the 12-meter run test, we found significant changes after the intervention when normalizing by weight in boys (p = 0.002) and girls (p < 0.0001). Our results show that boys significantly increased their SLJ and SLJ normalized by weight (p < 0.0001); a similar result was found in girls after the intervention (p < 0.0001), suggesting an increment in power independent of weight gain. In conclusion, this pilot study found that an intervention with more intense activities in small breaks (15 minutes), guided by the educators, could improve essential motor skills (running and jumping) in preschool children of a semi-rural sector, independent of nutritional status. This gain in motor skills is a first step toward increasing physical activity levels in preschool children. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Wildi, Karin; Zellweger, Christa; Twerenbold, Raphael; Jaeger, Cedric; Reichlin, Tobias; Haaf, Philip; Faoro, Jonathan; Giménez, Maria Rubini; Fischer, Andreas; Nelles, Berit; Druey, Sophie; Krivoshei, Lian; Hillinger, Petra; Puelacher, Christian; Herrmann, Thomas; Campodarve, Isabel; Rentsch, Katharina; Steuer, Stephan; Osswald, Stefan; Mueller, Christian
2015-01-01
The incremental value of copeptin, a novel marker of endogenous stress, for rapid rule-out of non-ST-elevation myocardial infarction (NSTEMI) is unclear when sensitive or even high-sensitivity cardiac troponin (cTn, hs-cTn) assays are used. In an international multicenter study we evaluated 1929 consecutive patients with symptoms suggestive of acute myocardial infarction (AMI). Measurements of copeptin and of three sensitive and three hs-cTn assays were performed at presentation in a blinded fashion. The final diagnosis was adjudicated by two independent cardiologists using all clinical information, including coronary angiography and levels of hs-cTnT. The incremental value in the diagnosis of NSTEMI was quantified using four outcome measures: area under the receiver-operating characteristic curve (AUC), integrated discrimination improvement (IDI), sensitivity and negative predictive value (NPV). Early presenters (<4 h since chest pain onset) were a pre-defined subgroup. NSTEMI was the adjudicated final diagnosis in 358 (18.6%) patients. As compared to the use of cTn alone, copeptin significantly increased the AUC for two of the six cTn assays (33%), and the IDI (between 0.010 and 0.041; all p < 0.01), sensitivity and NPV for all six cTn assays (100%); NPV reached 96-99% when the 99th percentile of the respective cTnI assay was combined with a copeptin level of 9 pmol/l (all p < 0.01). The incremental value in early presenters was similar to that of the overall cohort. When used for rapid rule-out of NSTEMI in combination with sensitive or hs-cTnI assays, copeptin provides a numerically small, but statistically and likely also clinically significant incremental value. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Phelan, Brian R.; Ranney, Kenneth I.; Ressler, Marc A.; Clark, John T.; Sherbondy, Kelly D.; Kirose, Getachew A.; Harrison, Arthur C.; Galanos, Daniel T.; Saponaro, Philip J.; Treible, Wayne R.; Narayanan, Ram M.
2017-05-01
The U.S. Army Research Laboratory has developed the Spectrally Agile Frequency-Incrementing Reconfigurable (SAFIRE) radar, which is capable of imaging concealed/buried targets using forward- and side-looking configurations. The SAFIRE radar is vehicle-mounted and operates from 300 MHz-2 GHz; the step size can be adjusted in multiples of 1 MHz. It is also spectrally agile and capable of excising frequency bands, which makes it ideal for operation in congested and/or contested radio frequency (RF) environments. Furthermore, the SAFIRE radar receiver has a super-heterodyne architecture, which was designed so that intermodulation products caused by interfering signals could be easily filtered from the desired received signal. The SAFIRE system also includes electro-optical (EO) and infrared (IR) cameras, which can be fused with radar data and displayed in a stereoscopic augmented reality user interface. In this paper, recent upgrades to the SAFIRE system are discussed and results from the SAFIRE's initial field tests are presented.
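The frequency-incrementing and band-excision behavior described above can be illustrated with a small sketch. This is assumed logic for illustration only, not ARL's actual SAFIRE control software; the interferer band is hypothetical.

```python
# Sketch: build a frequency-incrementing sweep from 300 MHz to 2 GHz with
# an adjustable step (a multiple of 1 MHz), excising notched bands for
# spectral agility in congested RF environments.

def sweep_frequencies(start_hz=300e6, stop_hz=2e9, step_hz=1e6,
                      excised_bands=()):
    """Return sweep frequencies in Hz, skipping any excised (lo, hi) bands."""
    if step_hz % 1e6 != 0:
        raise ValueError("step must be a multiple of 1 MHz")
    freqs = []
    f = start_hz
    while f <= stop_hz:
        if not any(lo <= f <= hi for lo, hi in excised_bands):
            freqs.append(f)
        f += step_hz
    return freqs

# skip a hypothetical interferer at 900-930 MHz, stepping in 10 MHz
fs = sweep_frequencies(step_hz=10e6, excised_bands=[(900e6, 930e6)])
```

With a 10 MHz step, the 300 MHz to 2 GHz band yields 171 candidate frequencies, of which the four falling inside the excised band are dropped.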
Variable path length spectrophotometric probe
O'Rourke, Patrick E.; McCarty, Jerry E.; Haggard, Ricky A.
1992-01-01
A compact, variable pathlength, fiber optic probe for spectrophotometric measurements of fluids in situ. The probe comprises a probe body with a shaft having a polished end penetrating one side of the probe, a pair of optic fibers, parallel and coterminous, entering the probe opposite the reflecting shaft, and a collimating lens to direct light from one of the fibers to the reflecting surface of the shaft and to direct the reflected light to the second optic fiber. The probe body has an inlet and an outlet port to allow the liquid to enter the probe body and pass between the lens and the reflecting surface of the shaft. A linear stepper motor is connected to the shaft to cause the shaft to advance toward or away from the lens in increments so that absorption measurements can be made at each of the incremental steps. The shaft is sealed to the probe body by a bellows seal to allow freedom of movement of the shaft and yet avoid leakage from the interior of the probe.
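A variable path length enables a slope-based measurement: by the Beer-Lambert law, absorbance grows as A = A0 + eps*c*l, so stepping the shaft and regressing absorbance on path length isolates eps*c from any path-independent offset A0. A minimal numeric sketch with invented readings (not measurements from this probe):

```python
# Sketch: recover the Beer-Lambert slope (epsilon * concentration) from
# absorbance readings taken at incremental shaft positions; the readings
# are synthetic, generated with offset 0.08 and slope 0.42.
steps_cm = [0.1, 0.2, 0.3, 0.4, 0.5]                # shaft positions
absorbance = [0.08 + 0.42 * l for l in steps_cm]    # offset + eps*c*l

n = len(steps_cm)
mx = sum(steps_cm) / n
my = sum(absorbance) / n
# least-squares slope of absorbance vs. path length -> eps * c
slope = sum((x - mx) * (y - my) for x, y in zip(steps_cm, absorbance)) \
      / sum((x - mx) ** 2 for x in steps_cm)
```

Because the offset cancels in the slope, the measurement is insensitive to window fouling or source drift that affects all path lengths equally, which is one motivation for an in-situ variable-path design.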
A Method for Optimal Load Dispatch of a Multi-zone Power System with Zonal Exchange Constraints
NASA Astrophysics Data System (ADS)
Hazarika, Durlav; Das, Ranjay
2018-04-01
This paper presents a method for economic generation scheduling of a multi-zone power system with inter-zonal operational constraints. For this purpose, generator rescheduling for a multi-area power system with inter-zonal operational constraints is represented as a two-step optimal generation scheduling problem. First, optimal generation scheduling is carried out for each zone having surplus or deficient generation, with proper spinning reserve, using the co-ordination equation. The power exchange required for the deficit zones and for zones having no generation is estimated based on the load demand and generation of each zone. Incremental transmission loss formulas for the transmission lines participating in the power transfer among the zones are then formulated. Using these incremental transmission loss expressions in the co-ordination equation, the optimal generation scheduling for the zonal exchange is determined. Simulation is carried out on the IEEE 118-bus test system to examine the applicability and validity of the method.
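The co-ordination equation step (equal incremental cost within one zone, transmission losses neglected in this toy version) can be sketched as a bisection on the system incremental cost lambda. The quadratic generator data and demand below are invented for illustration, not taken from the IEEE 118-bus case.

```python
# Sketch: classic equal-incremental-cost dispatch via lambda bisection.
# Quadratic cost C_i(P) = a_i + b_i*P + c_i*P^2, so dC_i/dP = b_i + 2*c_i*P.

def dispatch(gens, demand, tol=1e-6):
    """Bisect on system lambda until total generation meets demand."""
    lo, hi = 0.0, 1e4
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        total = 0.0
        for b, c, pmin, pmax in gens:
            p = (lam - b) / (2 * c)          # from b + 2*c*P = lambda
            total += min(max(p, pmin), pmax) # respect unit limits
        if total < demand:
            lo = lam
        else:
            hi = lam
    return lam

gens = [(8.0, 0.004, 0, 500), (6.0, 0.006, 0, 400)]  # (b, c, Pmin, Pmax)
lam = dispatch(gens, demand=600.0)   # converges to lambda = 10.08 $/MWh
```

In the paper's second step, incremental transmission loss terms would enter this same co-ordination equation via penalty factors; the sketch omits them.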
NASA Astrophysics Data System (ADS)
Blinov, N. A.; Zolotkov, V. N.; Lezin, A. Yu; Cheburkin, N. V.
1990-04-01
An analysis is made of transient stimulated scattering in a vibrationally nonequilibrium gas excited by a non-self-sustained discharge. A stability theory approach is used to describe the behavior of perturbation wave packets, yielding asymptotic expressions for the maximal increments of an instability of stimulated small-angle scattering by entropic and acoustic modes.
Dambreville, Charline; Labarre, Audrey; Thibaudier, Yann; Hurteau, Marie-France
2015-01-01
When speed changes during locomotion, both temporal and spatial parameters of the pattern must adjust. Moreover, at slow speeds the step-to-step pattern becomes increasingly variable. The objectives of the present study were to assess if the spinal locomotor network adjusts both temporal and spatial parameters from slow to moderate stepping speeds and to determine if it contributes to step-to-step variability in left-right symmetry observed at slow speeds. To determine the role of the spinal locomotor network, the spinal cord of 6 adult cats was transected (spinalized) at low thoracic levels and the cats were trained to recover hindlimb locomotion. Cats were implanted with electrodes to chronically record electromyography (EMG) in several hindlimb muscles. Experiments began once a stable hindlimb locomotor pattern emerged. During experiments, EMG and bilateral video recordings were made during treadmill locomotion from 0.1 to 0.4 m/s in 0.05 m/s increments. Cycle and stance durations significantly decreased with increasing speed, whereas swing duration remained unaffected. Extensor burst duration significantly decreased with increasing speed, whereas sartorius burst duration remained unchanged. Stride length, step length, and the relative distance of the paw at stance offset significantly increased with increasing speed, whereas the relative distance at stance onset and both the temporal and spatial phasing between hindlimbs were unaffected. Both temporal and spatial step-to-step left-right asymmetry decreased with increasing speed. Therefore, the spinal cord is capable of adjusting both temporal and spatial parameters during treadmill locomotion, and it is responsible, at least in part, for the step-to-step variability in left-right symmetry observed at slow speeds. PMID:26084910
Innovations in mission architectures for exploration beyond low Earth orbit
NASA Technical Reports Server (NTRS)
Cooke, D. R.; Joosten, B. J.; Lo, M. W.; Ford, K. M.; Hansen, R. J.
2003-01-01
Through the application of advanced technologies and mission concepts, architectures for missions beyond Earth orbit have been dramatically simplified. These concepts enable a stepping-stone approach to science-driven, technology-enabled human and robotic exploration. The numbers and masses of vehicles required are greatly reduced, yet the pursuit of a broader range of science objectives is enabled. The scope of human missions considered ranges from the assembly and maintenance of large-aperture telescopes for emplacement at the Sun-Earth libration point L2 to human missions to asteroids, the Moon and Mars. The vehicle designs are developed for proof of concept, to validate mission approaches and understand the value of new technologies. The stepping-stone approach employs an incremental buildup of capabilities, which allows for future decision points on exploration objectives. It enables testing of technologies to achieve greater reliability and understanding of costs for the next steps in exploration. © 2003 American Institute of Aeronautics and Astronautics. Published by Elsevier Science Ltd. All rights reserved.
STEP UP for American Small Businesses Act
Sen. Cantwell, Maria [D-WA
2014-12-09
Senate - 12/09/2014: Read twice and referred to the Committee on Small Business and Entrepreneurship. Status of legislation: Introduced.
Helicity statistics in homogeneous and isotropic turbulence and turbulence models
NASA Astrophysics Data System (ADS)
Sahoo, Ganapati; De Pietro, Massimo; Biferale, Luca
2017-02-01
We study the statistical properties of helicity in direct numerical simulations of fully developed homogeneous and isotropic turbulence and in a class of turbulence shell models. We consider correlation functions based on combinations of vorticity and velocity increments that are not invariant under mirror symmetry. We also study the scaling properties of high-order structure functions based on the moments of the velocity increments projected on a subset of modes with either positive or negative helicity (chirality). We show that mirror symmetry is recovered at small scales, i.e., chiral terms are subleading and they are well captured by a dimensional argument plus anomalous corrections. These findings are also supported by a high Reynolds numbers study of helical shell models with the same chiral symmetry of Navier-Stokes equations.
Testing the Fraunhofer line discriminator by sensing fluorescent dye
NASA Technical Reports Server (NTRS)
Stoertz, G. E.
1969-01-01
The experimental Fraunhofer Line Discriminator (FLD) has detected increments of Rhodamine WT dye as small as 1 ppb in 1/2 meter depths. It can be inferred that increments considerably smaller than 1 ppb will be detectable in depths considerably greater than 1/2 meter. Turbidity of the water drastically reduces luminescence or even completely blocks the transmission of detectable luminescence to the FLD. Attenuation of light within the water by turbidity and by the dye itself are the major factors to be considered in interpreting FLD records and in relating luminescence coefficient to dye concentration. An airborne test in an H-19 helicopter established feasibility of operating the FLD from the aircraft power supply, and established that the rotor blades do not visibly affect the monitoring of incident solar radiation.
Uncertainties in derived temperature-height profiles
NASA Technical Reports Server (NTRS)
Minzner, R. A.
1974-01-01
Nomographs were developed for relating uncertainty in temperature T to uncertainty in the observed height profiles of both pressure p and density rho. The relative uncertainty delta T/T is seen to depend not only upon the relative uncertainties delta p/p or delta rho/rho, and to a small extent upon the value of T or H, but primarily upon the sampling-height increment Delta h, the height increment between successive observations of p or rho. For a fixed value of delta p/p, the value of delta T/T varies inversely with Delta h. No limit exists in the fineness of usable height resolution of T which may be derived from densities, while a fine height resolution in pressure-height data leads to temperatures with unacceptably large uncertainties.
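The inverse dependence of delta T/T on Delta h can be illustrated numerically. This is a toy Monte Carlo under an assumed isothermal atmosphere, not the paper's nomographs: the layer-mean temperature recovered from the hypsometric equation becomes less uncertain as the sampling-height increment grows, for fixed delta p/p.

```python
# Sketch: layer-mean T from noisy pressures at heights 0 and dh via the
# hypsometric equation T = g*dh / (R * ln(p_lower/p_upper)); the spread of
# the recovered T shrinks roughly as 1/dh for fixed relative pressure noise.
import math, random

G, R, H = 9.80665, 287.05, 8000.0   # gravity, gas constant, scale height (m)
T_TRUE = G * H / R                  # ~273 K, isothermal and consistent with H

def derived_T_sd(dh, dp_over_p, n=20000, seed=0):
    """Std. dev. of the layer-mean T recovered from noisy pressure pairs."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        p0 = 101325.0 * (1 + rng.gauss(0, dp_over_p))
        p1 = 101325.0 * math.exp(-dh / H) * (1 + rng.gauss(0, dp_over_p))
        samples.append(G * dh / (R * math.log(p0 / p1)))
    mean = sum(samples) / n
    return math.sqrt(sum((t - mean) ** 2 for t in samples) / n)

sd_1km = derived_T_sd(1000.0, 0.002)
sd_5km = derived_T_sd(5000.0, 0.002)
# the 5x coarser height increment gives roughly 5x smaller T uncertainty
```

To first order, delta T/T is about sqrt(2) * (delta p/p) * H/Delta h here, so quintupling Delta h quintuples the precision of the derived temperature, matching the inverse relationship the abstract describes.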
The Improvement Cycle: Analyzing Our Experience
NASA Technical Reports Server (NTRS)
Pajerski, Rose; Waligora, Sharon
1996-01-01
NASA's Software Engineering Laboratory (SEL), one of the earliest pioneers in the areas of software process improvement and measurement, has had a significant impact on the software business at NASA Goddard. At the heart of the SEL's improvement program is a belief that software products can be improved by optimizing the software engineering process used to develop them and a long-term improvement strategy that facilitates small incremental improvements that accumulate into significant gains. As a result of its efforts, the SEL has incrementally reduced development costs by 60%, decreased error rates by 85%, and reduced cycle time by 25%. In this paper, we analyze the SEL's experiences on three major improvement initiatives to better understand the cyclic nature of the improvement process and to understand why some improvements take much longer than others.
Computing Risk to West Coast Intertidal Rocky Habitat due to ...
Compared to marshes, little information is available on the potential for rocky intertidal habitats to migrate upward in response to sea level rise (SLR). To address this gap, we utilized topobathy LiDAR digital elevation models (DEMs) downloaded from NOAA’s Digital Coast GIS data repository to estimate percent change in the area of rocky intertidal habitat in 10 cm increments with eustatic sea level rise. The analysis was conducted at the scale of the four Marine Ecoregions of the World (MEOW) ecoregions located along the continental west coast of the United States (CONUS). Environmental Sensitivity Index (ESI) map data were used to identify rocky shoreline. Such stretches of shoreline were extracted for each of the four ecoregions and buffered by 100 m to include the intertidal and evaluate the potential area for upland habitat migration. All available LiDAR topobathy DEMs from Digital Coast were extracted using the resulting polygons and two rasters were synthesized from the results: a 10 cm increment zone raster and a non-planimetric surface area raster for zonal summation. Current rocky intertidal non-planimetric surface areas for each ecoregion were computed between Mean Higher High Water (MHHW) and Mean Lower Low Water (MLLW) levels established from published datum sheets for tidal stations central to each MEOW ecoregion. Percent change in non-planimetric surface area for the same relative ranges were calculated in 10 cm incremental steps of eustatic SLR.
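The 10 cm increment zone raster can be sketched as an elevation-binning step. This is a toy version of the assumed workflow, not the authors' GIS scripts: a real analysis would operate on full LiDAR DEM grids via rasterio/GDAL and use non-planimetric surface areas, while this sketch reports simple planimetric cell counts.

```python
# Sketch: bin DEM cells into 10 cm elevation increments between MLLW and
# MHHW and report planimetric area per increment (toy 2x3 DEM, 1 m^2 cells).
import numpy as np

def area_by_increment(dem, cell_area, mllw, mhhw, step=0.10):
    """Planimetric area (m^2) of intertidal cells per elevation bin."""
    edges = np.arange(mllw, mhhw + step, step)
    in_zone = (dem >= mllw) & (dem < mhhw)     # restrict to the intertidal
    counts, _ = np.histogram(dem[in_zone], bins=edges)
    return counts * cell_area

dem = np.array([[0.05, 0.15, 0.25],
                [0.35, 0.45, 1.20]])           # elevations in metres
areas = area_by_increment(dem, cell_area=1.0, mllw=0.0, mhhw=0.5)
```

Shifting the [MLLW, MHHW] window upward by one 10 cm step and re-binning would then give the habitat area under each SLR increment, from which percent change follows.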
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.
1993-01-01
A preliminary assessment of the impact of the ERS 1 scatterometer wind data on the current European Centre for Medium-Range Weather Forecasts analysis and forecast system has been carried out. Although the scatterometer data results in changes to the analyses and forecasts, there is no consistent improvement or degradation. Our results are based on comparing analyses and forecasts from assimilation cycles. The two sets of analyses are very similar except for the low level wind fields over the ocean. Impacts on the analyzed wind fields are greater over the southern ocean, where other data are scarce. For the most part the mass field increments are too small to balance the wind increments. The effect of the nonlinear normal mode initialization on the analysis differences is quite small, but we observe that the differences tend to wash out in the subsequent 6-hour forecast. In the Northern Hemisphere, analysis differences are very small, except directly at the scatterometer locations. Forecast comparisons reveal large differences in the Southern Hemisphere after 72 hours. Notable differences in the Northern Hemisphere do not appear until late in the forecast. Overall, however, the Southern Hemisphere impacts are neutral. The experiments described are preliminary in several respects. We expect these data to ultimately prove useful for global data assimilation.
Aluminum and Alzheimer's disease: after a century of controversy, is there a plausible link?
Tomljenovic, Lucija
2011-01-01
The brain is a highly compartmentalized organ exceptionally susceptible to accumulation of metabolic errors. Alzheimer's disease (AD) is the most prevalent neurodegenerative disease of the elderly and is characterized by regional specificity of neural aberrations associated with higher cognitive functions. Aluminum (Al) is the most abundant neurotoxic metal on earth, widely bioavailable to humans and repeatedly shown to accumulate in AD-susceptible neuronal foci. In spite of this, the role of Al in AD has been heavily disputed based on the following claims: 1) bioavailable Al cannot enter the brain in sufficient amounts to cause damage, 2) excess Al is efficiently excreted from the body, and 3) Al accumulation in neurons is a consequence rather than a cause of neuronal loss. Research, however, reveals that: 1) very small amounts of Al are needed to produce neurotoxicity and this criterion is satisfied through dietary Al intake, 2) Al sequesters different transport mechanisms to actively traverse brain barriers, 3) incremental acquisition of small amounts of Al over a lifetime favors its selective accumulation in brain tissues, and 4) since 1911, experimental evidence has repeatedly demonstrated that chronic Al intoxication reproduces neuropathological hallmarks of AD. Misconceptions about Al bioavailability may have misled scientists regarding the significance of Al in the pathogenesis of AD. The hypothesis that Al significantly contributes to AD is built upon very solid experimental evidence and should not be dismissed. Immediate steps should be taken to lessen human exposure to Al, which may be the single most aggravating and avoidable factor related to AD.
Abutment design for implant-supported indirect composite molar crowns: reliability and fractography.
Bonfante, Estevam Augusto; Suzuki, Marcelo; Lubelski, William; Thompson, Van P; de Carvalho, Ricardo Marins; Witek, Lukasz; Coelho, Paulo G
2012-12-01
To investigate the reliability of titanium abutments veneered with indirect composites for implant-supported crowns and the possibility to trace back the fracture origin by qualitative fractographic analysis. Large base (LB) (6.4-mm diameter base, with a 4-mm high cone in the center for composite retention), small base (SB-4) (5.2-mm base, 4-mm high cone), and small base with cone shortened to 2 mm (SB-2) Ti abutments were used. Each abutment received incremental layers of indirect resin composite until completing the anatomy of a maxillary molar crown. Step-stress accelerated-life fatigue testing (n = 18 each) was performed in water. Weibull curves with use stress of 200 N for 50,000 and 100,000 cycles were calculated. Probability Weibull plots examined the differences between groups. Specimens were inspected in light-polarized and scanning electron microscopes for fractographic analysis. Use level probability Weibull plots showed Beta values of 0.27 for LB, 0.32 for SB-4, and 0.26 for SB-2, indicating that failures were not influenced by fatigue and damage accumulation. The data replotted as Weibull distribution showed no significant difference in the characteristic strengths between LB (794 N) and SB-4 abutments (836 N), which were both significantly higher than SB-2 (601 N). Failure mode was cohesive within the composite for all groups. Fractographic markings showed that failures initiated at the indentation area and propagated toward the margins of cohesively failed composite. Reliability was not influenced by abutment design. Qualitative fractographic analysis of the failed indirect composite was feasible. © 2012 by the American College of Prosthodontists.
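The Weibull analysis behind reported Beta values and characteristic strengths can be sketched by median-rank regression for a single group. The failure loads below are invented for illustration, not the study's data, and step-stress acceleration is ignored.

```python
# Sketch: two-parameter Weibull fit by median-rank regression.
# Linearized model: ln(-ln(1-F)) = beta * (ln(load) - ln(eta)).
import math

def weibull_fit(failures):
    """Return (beta, eta) via least squares on the linearized Weibull CDF."""
    loads = sorted(failures)
    n = len(loads)
    xs, ys = [], []
    for i, load in enumerate(loads, start=1):
        f = (i - 0.3) / (n + 0.4)               # Benard's median rank
        xs.append(math.log(load))
        ys.append(math.log(-math.log(1.0 - f)))
    mx, my = sum(xs) / n, sum(ys) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
         / sum((x - mx) ** 2 for x in xs)
    eta = math.exp(mx - my / beta)              # characteristic strength
    return beta, eta

beta, eta = weibull_fit([520, 610, 660, 700, 735, 770, 810, 860, 930])
```

Here beta (the Weibull modulus) captures scatter, and eta is the load at which roughly 63.2% of specimens have failed; overlap of confidence bounds on such plots is how groups like LB and SB-4 are compared.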
Sakharov, Dmitry A; Maltseva, Diana V; Riabenko, Evgeniy A; Shkurnikov, Maxim U; Northoff, Hinnak; Tonevitsky, Alexander G; Grigoriev, Anatoly I
2012-03-01
High and moderate intensity endurance exercise alters gene expression in human white blood cells (WBCs), but the understanding of how this effect occurs is limited. To increase our knowledge of the nature of this process, we investigated the effects of passing the anaerobic threshold (AnT) on the gene expression profile in WBCs of athletes. Nineteen highly trained skiers participated in a treadmill test with an incremental step protocol until exhaustion (ramp test to exhaustion, RTE). The average total time to exhaustion was 14:40 min and time after AnT was 4:50 min. Two weeks later, seven of these skiers participated in a moderate treadmill test (MT) at 80% peak O(2) uptake for 30 min, which was slightly below their AnTs. Blood samples were obtained before and immediately after both tests. RTE was associated with substantially greater leukocytosis and acidosis than MT. Gene expression in WBCs was measured using whole genome microarray expression analysis before and immediately after each test. A total of 310 upregulated genes were found after RTE, and 69 genes after MT of which 64 were identical to RTE. Both tests influenced a variety of known gene pathways related to inflammation, stress response, signal transduction and apoptosis. A large group of differentially expressed previously unknown small nucleolar RNA and small Cajal body RNA was found. In conclusion, a 15-min test to exhaustion was associated with substantially greater changes of gene expression than a 30-min test just below the AnT.
Space Availability in Confined Sheep during Pregnancy, Effects in Movement Patterns and Use of Space
Averós, Xavier; Lorea, Areta; Beltrán de Heredia, Ignacia; Arranz, Josune; Ruiz, Roberto; Estevez, Inma
2014-01-01
Space availability is essential to ensure the welfare of animals. To determine the effect of space availability on movement and space use in pregnant ewes (Ovis aries), 54 individuals were studied during the last 11 weeks of gestation. Three treatments were tested (1, 2, and 3 m2/ewe; 6 ewes/group). Ewes' positions were collected for 15 minutes using continuous scan samplings two days/week. Total and net distance, net/total distance ratio, maximum and minimum step length, movement activity, angular dispersion, nearest, furthest and mean neighbour distance, peripheral location ratio, and corrected peripheral location ratio were calculated. Restriction in space availability resulted in smaller total travelled distance, net to total distance ratio, maximum step length, and angular dispersion but higher movement activity at 1 m2/ewe as compared to 2 and 3 m2/ewe (P<0.01). On the other hand, nearest and furthest neighbour distances increased from 1 to 3 m2/ewe (P<0.001). Largest total distance, maximum and minimum step length, and movement activity, as well as lowest net/total distance ratio and angular dispersion, were observed during the first weeks (P<0.05), while inter-individual distances increased through gestation. Results indicate that movement patterns and space use in ewes were clearly restricted by limitations of space availability to 1 m2/ewe. This was reflected in shorter, more sinuous trajectories composed of shorter steps, lower inter-individual distances and higher movement activity, potentially linked with higher restlessness levels. In contrast, differences between 2 and 3 m2/ewe for most variables indicate that increasing space availability from 2 to 3 m2/ewe would appear to have limited benefits, reflected mostly in a further increment in the inter-individual distances among group members.
No major variations in spatial requirements were detected through gestation, except for slight increments in inter-individual distances and an initial adaptation period, with ewes being restless and highly motivated to explore their new environment. PMID:24733027
Mundt, Marlon P; Parthasarathy, Sujaya; Chi, Felicia W; Sterling, Stacy; Campbell, Cynthia I
2012-11-01
Adolescents who attend 12-step groups following alcohol and other drug (AOD) treatment are more likely to remain abstinent and to avoid relapse post-treatment. We examined whether 12-step attendance is also associated with a corresponding reduction in health care use and costs. We used difference-in-difference analysis to compare changes in seven-year follow-up health care use and costs by changes in 12-step participation. Four Kaiser Permanente Northern California AOD treatment programs enrolled 403 adolescents, 13-18-years old, into a longitudinal cohort study upon AOD treatment entry. Participants self-reported 12-step meeting attendance at six-month, one-year, three-year, and five-year follow-up. Outcomes included counts of hospital inpatient days, emergency room (ER) visits, primary care visits, psychiatric visits, AOD treatment costs and total medical care costs. Each additional 12-step meeting attended was associated with an incremental medical cost reduction of 4.7% during seven-year follow-up. The medical cost offset was largely due to reductions in hospital inpatient days, psychiatric visits, and AOD treatment costs. We estimate total medical use cost savings at $145 per year (in 2010 U.S. dollars) per additional 12-step meeting attended. The findings suggest that 12-step participation conveys medical cost offsets for youth who undergo AOD treatment. Reduced costs may be related to improved AOD outcomes due to 12-step participation, improved general health due to changes in social network following 12-step participation, or better compliance to both AOD treatment and 12-step meetings. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
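Under a parallel-trends assumption, the difference-in-difference comparison reduces to four cell means; the dollar figures below are hypothetical, not the study's estimates.

```python
# Sketch: the basic difference-in-difference estimator, which nets out the
# background cost trend shared by attenders and non-attenders.

def did(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Outcome change attributable to treatment under parallel trends."""
    return (post_treat - pre_treat) - (post_ctrl - pre_ctrl)

# hypothetical mean annual medical costs: 12-step attenders vs. non-attenders
effect = did(pre_treat=3200.0, post_treat=2500.0,
             pre_ctrl=3100.0, post_ctrl=3000.0)   # -> -600.0 per year
```

The study's regression version generalizes this idea to a continuous dose (meetings attended) and multiple follow-up waves, yielding the per-meeting cost offset reported.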
An office-place stepping device to promote workplace physical activity.
McAlpine, David A; Manohar, Chinmay U; McCrady, Shelly K; Hensrud, Donald; Levine, James A
2007-12-01
It was proposed that an office-place stepping device is associated with significant and substantial increases in energy expenditure compared to sitting energy expenditure. The objective was to assess the effect of using an office-place stepping device on the energy expenditure of lean and obese office workers. The office-place stepping device is an inexpensive, near-silent, low-impact device that can be housed under a standard desk and plugged into an office PC for self-monitoring. Energy expenditure was measured in lean and obese subjects using the stepping device and during rest, sitting and walking. 19 subjects (27+/-9 years, 85+/-23 kg): 9 lean (BMI<25 kg/m2) and 10 obese (BMI>29 kg/m2) attended the experimental office facility. Energy expenditure was measured at rest, while seated in an office chair, standing, walking on a treadmill and while using the office-place stepping device. The office-place stepping device was associated with an increase in energy expenditure above sitting in an office chair by 289+/-102 kcal/hour (p<0.001). The increase in energy expenditure was greater for obese (335+/-99 kcal/hour) than for lean subjects (235+/-80 kcal/hour; p = 0.03). The increments in energy expenditure were similar to exercise-style walking. The office-place stepping device could be an approach for office workers to increase their energy expenditure. If the stepping device was used to replace sitting by 2 hours per day and if other components of energy balance were constant, weight loss of 20 kg/year could occur.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogson, EM; Liverpool and Macarthur Cancer Therapy Centres, Liverpool, NSW; Ingham Institute for Applied Medical Research, Sydney, NSW
Purpose: To assess the robustness of different treatment techniques with respect to simulated linac errors in the dose distribution to the target volume and organs at risk, for step-and-shoot IMRT (ssIMRT), VMAT and Autoplan-generated VMAT nasopharynx plans. Methods: A nasopharynx patient dataset was retrospectively replanned with three different techniques: 7-beam ssIMRT, one-arc manually generated VMAT and one-arc automatically generated VMAT. Simulated treatment uncertainties (gantry angle, collimator angle, MLC field size and MLC shifts) were introduced into these plans at increments of 5, 2, 1, −1, −2 and −5 (degrees or mm) and recalculated in Pinnacle. The mean and maximum doses were calculated for the high-dose PTV, parotids, brainstem and spinal cord, and then compared to the original baseline plan. Results: Simulated gantry angle errors have <1% effect on the PTV; ssIMRT is the most sensitive. The small collimator errors (±1 and ±2 degrees) affected the mean PTV dose by <2% for all techniques; however, for the ±5 degree errors the mean target dose varied by up to 7% for the Autoplan VMAT, and the maximum dose to the spinal cord and brainstem varied by up to 10% in all techniques. The simulated MLC shifts introduced the largest errors for the Autoplan VMAT, with its larger MLC modulation presumably being the cause. The most critical error observed was the MLC field size error, where even small errors of 1 mm caused significant changes to both the PTV and the OARs. The ssIMRT is the least sensitive and the Autoplan the most sensitive, with target over- and under-dosage of up to 20% observed. Conclusion: For a nasopharynx patient, plan robustness is highest for the ssIMRT plan and lowest for the Autoplan-generated VMAT plan. This could be caused by the more complex MLC modulation seen for the VMAT plans. This project is supported by a grant from NSW Cancer Council.
Chopra, Sameer; de Castro Abreu, Andre Luis; Berger, Andre K; Sehgal, Shuchi; Gill, Inderbir; Aron, Monish; Desai, Mihir M
2017-01-01
To describe our step-by-step technique for robotic intracorporeal neobladder formation. The main surgical steps in forming the intracorporeal orthotopic ileal neobladder are: isolation of 65 cm of small bowel; small bowel anastomosis; bowel detubularisation; suturing of the posterior wall of the neobladder; neobladder-urethral anastomosis and cross-folding of the pouch; and uretero-enteral anastomosis. Improvements have been made to these steps to enhance time efficiency without compromising neobladder configuration. Our technical improvements have reduced operative time from 450 to 360 min. We describe an updated step-by-step technique of robot-assisted intracorporeal orthotopic ileal neobladder formation. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.
Socioeconomic Indicators for Small Towns. Small Town Strategy.
ERIC Educational Resources Information Center
Oregon State Univ., Corvallis. Cooperative Extension Service.
Prepared to help small towns assess community population and economic trends, this publication provides a step-by-step guide for establishing an on-going local data collection system, which is based on four local indicators and will provide accurate, up-to-date estimates of population, family income, and gross sales within a town's trade area. The…
Validity of the alcohol purchase task: a meta-analysis.
Kiselica, Andrew M; Webber, Troy A; Bornovalova, Marina A
2016-05-01
Behavioral economists assess alcohol consumption as a function of unit price. This method allows construction of demand curves and demand indices, which are thought to provide precise numerical estimates of risk for alcohol problems. One of the more commonly used behavioral economic measures is the Alcohol Purchase Task (APT). Although the APT has shown promise as a measure of risk for alcohol problems, the construct validity and incremental utility of the APT remain unclear. This paper presents a meta-analysis of the APT literature. Sixteen studies were included in the meta-analysis. Studies were gathered via searches of the PsycInfo, PubMed, Web of Science and EconLit research databases. Random-effects meta-analyses with inverse variance weighting were used to calculate summary effect sizes for each demand index-drinking outcome relationship. Moderation of these effects by drinking status (regular versus heavy drinkers) was examined. Additionally, tests of the incremental utility of the APT indices in predicting drinking problems above and beyond measuring alcohol consumption were performed. The APT indices were correlated in the expected directions with drinking outcomes, although many effects were small in size. These effects were typically not moderated by the drinking status of the samples. Additionally, the intensity metric demonstrated incremental utility in predicting alcohol use disorder symptoms beyond measuring drinking. The Alcohol Purchase Task appears to have good construct validity, but limited incremental utility in estimating risk for alcohol problems. © 2015 Society for the Study of Addiction.
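The inverse-variance-weighted random-effects pooling described in this abstract can be sketched as below. The DerSimonian-Laird estimator of the between-study variance is assumed here (the abstract does not name a specific estimator), and the effect sizes and variances are invented for illustration.

```python
import numpy as np

# Random-effects meta-analysis with inverse-variance weighting
# (DerSimonian-Laird between-study variance; illustrative inputs).
def random_effects(effects, variances):
    effects = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                        # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)         # fixed-effect pooled estimate
    q = np.sum(w * (effects - theta_fe) ** 2)          # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)      # between-study variance
    w_re = 1.0 / (v + tau2)                            # random-effects weights
    theta_re = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return theta_re, se, tau2

theta, se, tau2 = random_effects([0.30, 0.10, 0.25, 0.18],
                                 [0.01, 0.02, 0.015, 0.012])
```

When the heterogeneity statistic is below its degrees of freedom, `tau2` truncates to zero and the pooled estimate coincides with the fixed-effect one.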
Identification of platelet refractoriness in oncohematologic patients
Ferreira, Aline Aparecida; Zulli, Roberto; Soares, Sheila; de Castro, Vagner; Moraes-Souza, Helio
2011-01-01
OBJECTIVES: To identify the occurrence and the causes of platelet refractoriness in oncohematologic patients. INTRODUCTION: Platelet refractoriness (unsatisfactory post-transfusion platelet increment) is a severe problem that impairs the treatment of oncohematologic patients and is not routinely investigated in most Brazilian services. METHODS: Forty-four episodes of platelet concentrate transfusion were evaluated in 16 patients according to the following parameters: corrected count increment, clinical conditions and detection of anti-platelet antibodies by the platelet immunofluorescence test (PIFT) and panel reactive antibodies against human leukocyte antigen class I (PRA-HLA). RESULTS: Of the 16 patients evaluated (median age: 53 years), nine (56%) were women, seven of them with a history of pregnancy. An unsatisfactory increment was observed in 43% of the transfusion events, being more frequent in transfusions of random platelet concentrates (54%). Platelet refractoriness was confirmed in three patients (19%), who presented immunologic and non-immunologic causes. Alloantibodies were identified in eight patients (50%) by the PIFT and in three (19%) by the PRA-HLA. Among alloimmunized patients, nine (64%) had a history of transfusion, and three as a result of pregnancy (43%). Of the former, two were refractory (29%). No significant differences were observed, probably as a result of the small sample size. CONCLUSION: The high rate of unsatisfactory platelet increment, refractoriness and alloimmunization observed support the need to set up protocols for the investigation of this complication in all chronically transfused patients, a fundamental requirement for the guarantee of adequate management. PMID:21437433
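The corrected count increment used in this study has a standard definition: the platelet count rise normalized by body surface area and platelet dose. The sketch below uses hypothetical transfusion numbers, and the ~7,500 cutoff mentioned in the comment is one common working threshold for a satisfactory 1-hour increment, not necessarily the paper's criterion.

```python
def corrected_count_increment(pre, post, bsa_m2, platelet_dose_1e11):
    """CCI = (post - pre platelet count, per microliter) * body surface area (m^2)
    / number of platelets transfused (in units of 1e11)."""
    return (post - pre) * bsa_m2 / platelet_dose_1e11

# Hypothetical transfusion: count rises from 10,000 to 30,000/uL,
# BSA 1.8 m^2, dose 3.0e11 platelets.
cci = corrected_count_increment(10_000, 30_000, 1.8, 3.0)
# cci is approximately 12,000 -- well above the ~7,500 often taken as a
# satisfactory 1-hour increment; persistently low CCI suggests refractoriness.
```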
Three-dimensional architecture of macrofibrils in the human scalp hair cortex.
Harland, Duane P; Walls, Richard J; Vernon, James A; Dyer, Jolon M; Woods, Joy L; Bell, Fraser
2014-03-01
Human scalp hairs are composed of a central cortex enveloped by plate-like cuticle cells. The elongate cortex cells of mature fibres are composed primarily of macrofibrils, bundles of hard-keratin intermediate filaments (IFs) chemically cross-linked within a globular protein matrix. In wool, three cell types (ortho-, meso- and paracortex) contain macrofibrils with distinctly different filament arrangements and matrix fractions, but in human hair macrofibril-cell type relationships are less clear. Here we show that hair macrofibrils all have a similar matrix fraction (∼0.4) and are typically composed of a double-twist architecture in which a central IF is surrounded by concentric rings of tangentially-angled IFs. The defining parameter is the incremental angle increase (IF-increment) between IFs of successive rings. Unlike the wool orthocortex, hair double-twist macrofibrils have considerable inter-macrofibril variation in IF increment (0.05-0.35°/nm), and macrofibril size and IF increment are negatively correlated. Correspondingly, the angular difference between central and outer-most IFs is up to 40° in small macrofibrils, but only 5-10° in large macrofibrils. Single cells were observed containing mixtures of macrofibrils with different diameters. These new observations advance our understanding of the nano-level and cell-level organisation of human hair, with implications for the interpretation of structure with respect to the potential roles of cortex cell types in defining the mechanical properties of hair. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Hongjin; Hsieh, Sheng-Jen; Peng, Bo; Zhou, Xunfei
2016-07-01
A method that requires no prior knowledge of the thermal properties of coatings or substrates would be of great interest for industrial applications. Supervised machine learning regression may provide a possible solution to this problem. This paper compares the performance of two regression models (artificial neural networks (ANN) and support vector machines for regression (SVM)) with respect to coating thickness estimates made from surface temperature increments collected via time-resolved thermography. We describe the role of SVM in coating thickness prediction. Non-dimensional analyses are conducted to illustrate the effects of coating thickness and various other factors on surface temperature increments; it is theoretically possible to correlate coating thickness with the surface temperature increment. Based on these analyses, the laser power is selected such that, during heating, the temperature increment is high enough to resolve the coating thickness variation but low enough to avoid surface melting. Sixty-one paint-coated samples with coating thicknesses varying from 63.5 μm to 571 μm are used to train the models. Hyper-parameters of the models are optimized by 10-fold cross-validation. Another 28 sets of data are then collected to test the performance of the models. The study shows that SVM can provide reliable predictions of unknown data, due to its deterministic characteristics, and that it works well even for a small input data group. The SVM model generates more accurate coating thickness estimates than the ANN model.
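The hyper-parameter selection by 10-fold cross-validation mentioned above can be sketched as follows. To keep the example dependency-free, a ridge regression with a single regularization hyper-parameter stands in for the SVM/ANN models, and the data are synthetic rather than thermography measurements; the cross-validation loop itself is the same idea.

```python
import numpy as np

# Synthetic regression problem: 60 "samples", 3 features, known weights.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=60)

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: (X^T X + lam I)^-1 X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_mse(X, y, lam, k=10):
    # k-fold cross-validation: hold out each fold, fit on the rest,
    # and average the held-out mean squared errors.
    idx = np.arange(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        w = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((X[fold] @ w - y[fold]) ** 2))
    return float(np.mean(errs))

# Pick the hyper-parameter with the lowest cross-validated error.
lams = [1e-3, 1e-2, 1e-1, 1.0, 10.0]
best = min(lams, key=lambda lam: cv_mse(X, y, lam))
```

The same loop works unchanged for any model with a `fit`/`predict` pair; only `ridge_fit` would be swapped for the SVM or ANN trainer.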
Melisa L. Holman; David L. Peterson
2006-01-01
We compared annual basal area increment (BAI) at different spatial scales among all size classes and species at diverse locations in the wet western and dry northeastern Olympic Mountains. Weak growth correlations at small spatial scales (average R = 0.084-0.406) suggest that trees are responding to local growth conditions. However, significant...
Growth and soil moisture in thinned lodgepole pine.
Walter G. Dahms
1971-01-01
A lodgepole pine levels-of-growing-stock study showed that trees growing at lower stand densities had longer crowns and grew more rapidly in diameter but did not grow significantly faster in height. Gross cubic-volume increment decreased with decreasing stand density. The decrease was small per unit of density at the higher densities but much greater at the lower...
Diameter Growth in Even- and Uneven-Aged Northern Hardwoods in New Hampshire Under Partial Cutting
William B. Leak
2004-01-01
One important concern in the conversion of even-aged stands to an uneven-aged condition through individual-tree or small-group cutting is the growth response throughout the diameter-class distribution, especially of the understory trees. Increment-core sampling of an older, uneven-aged northern hardwood stand in New Hampshire under management for about 50 years...
ERIC Educational Resources Information Center
Peng, Shanzhong; Ferreira, Fernando A. F.; Zheng, He
2017-01-01
In this study, we develop a firm-dominated incremental cooperation model. Following a critical review of the current literature and various cooperation models, we identified a number of strengths and shortcomings that form the basis for our framework. The objective of our theoretical model is to help overcome the existing gap within…
Cromwell, Ian; van der Hoek, Kimberly; Malfair Taylor, Suzanne C; Melosky, Barbara; Peacock, Stuart
2012-06-01
Erlotinib has been approved as a third-line treatment for advanced non-small-cell lung cancer (NSCLC) in British Columbia (BC). A cost-effectiveness analysis was conducted to compare costs and effectiveness in patients who received third-line erlotinib to those in a historical patient cohort that would have been eligible had erlotinib been available. In a population of patients who have been treated with drugs for advanced NSCLC, overall survival (OS), progression-to-death survival (PTD) and probability of survival one year after end of second-line (1YS) were determined using a Kaplan-Meier survival analysis. Costs were collected retrospectively from the perspective of the BC health care system. Incremental mean OS was 90 days (0.25 LYG), and incremental mean cost was $11,102 (CDN 2009), resulting in a mean ICER of $36,838/LYG. Univariate sensitivity analysis yielded ICERs ranging from $21,300 to $51,700/LYG. Our analysis suggests that erlotinib may be an effective and cost-effective third-line treatment for advanced NSCLC compared to best supportive care. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Lift hysteresis at stall as an unsteady boundary-layer phenomenon
NASA Technical Reports Server (NTRS)
Moore, Franklin K
1956-01-01
Analysis of rotating stall of compressor blade rows requires specification of a dynamic lift curve for the airfoil section at or near stall, presumably including the effect of lift hysteresis. Consideration of the Magnus lift of a rotating cylinder suggests performing an unsteady boundary-layer calculation to find the movement of the separation points of an airfoil fixed in a stream of variable incidence. Consideration of the shedding of vorticity into the wake should yield an estimate of the lift increment proportional to the time rate of change of angle of attack. This increment is the amplitude of the hysteresis loop. An approximate analysis is carried out according to the foregoing ideas for a 6:1 elliptic airfoil at the angle of attack for maximum lift. The assumptions of small perturbations from maximum lift are made, permitting neglect of distributed vorticity in the wake. The calculated hysteresis loop is counterclockwise. Finally, a discussion of the forms of hysteresis loops is presented and, for small reduced frequency of oscillation, it is concluded that the concept of a viscous "time lag" is appropriate only for harmonic variations of angle of attack with time at mean conditions other than maximum lift.
Statistical characteristics of surrogate data based on geophysical measurements
NASA Astrophysics Data System (ADS)
Venema, V.; Bachner, S.; Rust, H. W.; Simmer, C.
2006-09-01
In this study, the statistical properties of a range of measurements are compared with those of their surrogate time series. Seven different records are studied, amongst others, historical time series of mean daily temperature, daily rain sums and runoff from two rivers, and cloud measurements. Seven different algorithms are used to generate the surrogate time series. The best-known method is the iterative amplitude adjusted Fourier transform (IAAFT) algorithm, which is able to reproduce the measured distribution as well as the power spectrum. Using this setup, the measurements and their surrogates are compared with respect to their power spectrum, increment distribution, structure functions, annual percentiles and return values. It is found that the surrogates that reproduce the power spectrum and the distribution of the measurements are able to closely match the increment distributions and the structure functions of the measurements, but this often does not hold for surrogates that only mimic the power spectrum of the measurement. However, even the best performing surrogates do not have asymmetric increment distributions, i.e., they cannot reproduce nonlinear dynamical processes that are asymmetric in time. Furthermore, we have found deviations of the structure functions on small scales.
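The IAAFT algorithm named above alternates between two projections: imposing the measured Fourier amplitudes (power spectrum) and imposing the measured amplitude distribution by rank-ordering. A minimal sketch, not the authors' exact implementation, is:

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """Iterative amplitude adjusted Fourier transform surrogate of series x."""
    rng = np.random.default_rng(seed)
    amplitudes = np.abs(np.fft.rfft(x))     # target power spectrum (via |FFT|)
    sorted_x = np.sort(x)                   # target amplitude distribution
    s = rng.permutation(x)                  # random shuffle as starting point
    for _ in range(n_iter):
        # 1) enforce the spectrum: keep current phases, swap in target amplitudes
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(amplitudes * np.exp(1j * phases), n=len(x))
        # 2) enforce the distribution: rank-order the target values onto s
        ranks = np.argsort(np.argsort(s))
        s = sorted_x[ranks]
    return s

x = np.sin(np.linspace(0, 20 * np.pi, 512)) \
    + 0.3 * np.random.default_rng(2).normal(size=512)
surr = iaaft(x)
```

Because the loop ends with the rank-ordering step, the surrogate's distribution matches the measurement exactly, while its spectrum matches only approximately; ending on the spectrum step would reverse which property is exact.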
The design of transfer trajectory for Ivar asteroid exploration mission
NASA Astrophysics Data System (ADS)
Qiao, Dong; Cui, Hutao; Cui, Pingyuan
2009-12-01
The growing demand for exploring small bodies such as comets and asteroids motivated the Chinese deep space exploration mission to the near-Earth asteroid Ivar. A design and optimization method for the transfer trajectory to asteroid Ivar is discussed in this paper. The transfer trajectory for rendezvous with asteroid Ivar is designed by means of Earth gravity assist with deep space maneuver (Delta-VEGA) technology. A Delta-VEGA transfer trajectory is realized by several trajectory segments, which connect the deep space maneuver and the swingby point. Each trajectory segment is found by solving Lambert's problem. By adjusting the deep space maneuver and the arrival time, the matching condition for the swingby is satisfied. To further reduce the total mission velocity increments, a procedure is developed that minimizes the total velocity increments for this transfer-trajectory scheme. The trajectory optimization problem is solved with a quasi-Newton algorithm using analytic first derivatives, which are derived from the transversality conditions associated with the optimization formulation and primer vector theory. The simulation results show that this transfer-trajectory scheme reduces C3 and the total velocity increments by 48.80% and 13.20%, respectively.
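The quantity being minimized above is the sum of velocity increments over the transfer. As a toy stand-in for that bookkeeping (not the Delta-VEGA scheme or a Lambert solver), the sketch below totals the two impulses of the simplest transfer, a Hohmann transfer between circular coplanar heliocentric orbits; the 1.863 AU target radius is roughly Ivar's semi-major axis and is used here only as an illustrative number.

```python
import math

MU_SUN = 1.32712440018e11   # Sun's gravitational parameter, km^3/s^2
AU = 1.495978707e8          # astronomical unit, km

def hohmann_dv(r1_km, r2_km, mu=MU_SUN):
    """Total velocity increment for a two-impulse Hohmann transfer."""
    v1 = math.sqrt(mu / r1_km)                  # circular speed at departure radius
    v2 = math.sqrt(mu / r2_km)                  # circular speed at arrival radius
    a = 0.5 * (r1_km + r2_km)                   # transfer-ellipse semi-major axis
    vp = math.sqrt(mu * (2 / r1_km - 1 / a))    # transfer speed at perihelion
    va = math.sqrt(mu * (2 / r2_km - 1 / a))    # transfer speed at aphelion
    return abs(vp - v1) + abs(v2 - va)          # sum of the two impulses, km/s

dv = hohmann_dv(AU, 1.863 * AU)                 # Earth orbit to ~1.863 AU
```

A gravity-assist scheme like Delta-VEGA trades some of this direct delta-V for a carefully timed swingby, which is why the optimized mission reports lower totals than a direct transfer would.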
Interaction of the alpha-toxin of Staphylococcus aureus with the liposome membrane.
Ikigai, H; Nakae, T
1987-02-15
When the liposome membrane is exposed to the alpha-toxin of Staphylococcus aureus, fluorescence of the tryptophan residue(s) of the toxin molecule increases concomitantly with the degree of toxin-hexamer formation (Ikigai, H., and Nakae, T. (1985) Biochem. Biophys. Res. Commun. 130, 175-181). In the present study, the toxin-membrane interaction was distinguished from the hexamer formation by the fluorescence energy transfer from the tryptophan residue(s) of the toxin molecule to the dansylated phosphatidylethanolamine in phosphatidylcholine liposome. Measurement of these two parameters yielded the following results. The effect of the toxin concentration and phospholipid concentration on these two parameters showed first order kinetics. The effect of liposome size on the energy transfer and the fluorescence increment of the tryptophan residue(s) was only detectable in small liposomes. Under moderately acidic or basic conditions, the fluorescence energy transfer always preceded the fluorescence increment of the tryptophan residue(s). The fluorescence increment at 336 nm at temperatures below 20 degrees C showed a latent period, whereas the fluorescence energy transfer did not. These results were thought to indicate that when alpha-toxin damages the target membrane, the molecule interacts with the membrane first, and then undergoes oligomerization within the membrane.
Atmospheric response to Saharan dust deduced from ECMWF reanalysis increments
NASA Astrophysics Data System (ADS)
Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.
2003-04-01
This study focuses on the atmospheric temperature response to dust deduced from a new source of data - the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the dust radiative effect is believed to be the predominant model defect, which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely-sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas with high positive correlation (> 0.5), low correlation, and high negative correlation (<-0.5). The innermost positive correlation area (PCA) is a large area near the center of the Sahara desert. For some local maxima inside this area the correlation even exceeds 0.8. The outermost negative correlation area (NCA) is not uniform. It consists of some areas over the eastern and western parts of North Africa with a relatively small amount of dust. Inside those areas both positive and negative high correlations exist at pressure levels ranging from 850 to 700 hPa, with the peak values near 775 hPa. Dust-forced heating (cooling) inside the PCA (NCA) is accompanied by changes in the static stability of the atmosphere above the dust layer. The reanalysis data of the European Centre for Medium-Range Weather Forecasts (ECMWF) suggest that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity, and downward (upward) airflow. These facts indicate an interaction between dust-forced heating/cooling and atmospheric circulation.
The April correlation results are supported by analysis of the vertical distribution of dust concentration, derived from the 24-hour dust prediction system at Tel Aviv University (website: http://earth.nasa.proj.ac.il/dust/current/). For other months the analysis is more complicated because of the substantial increase in humidity accompanying the northward progress of the ITCZ and its significant impact on the increments.
Timmer-Bonte, Johanna N H; Adang, Eddy M M; Smit, Hans J M; Biesma, Bonne; Wilschut, Frank A; Bootsma, Gerben P; de Boo, Theo M; Tjan-Heijnen, Vivianne C G
2006-07-01
Recently, a Dutch, randomized, phase III trial demonstrated that, in small-cell lung cancer patients at risk of chemotherapy-induced febrile neutropenia (FN), the addition of granulocyte colony-stimulating factor (GCSF) to prophylactic antibiotics significantly reduced the incidence of FN in cycle 1 (24% v 10%; P = .01). We hypothesized that selecting patients at risk of FN might increase the cost-effectiveness of GCSF prophylaxis. Economic analysis was conducted alongside the clinical trial and was focused on the health care perspective. Primary outcome was the difference in mean total costs per patient in cycle 1 between both prophylactic strategies. Cost-effectiveness was expressed as costs per percent-FN-prevented. For the first cycle, the mean incremental costs of adding GCSF amounted to 681 euro (95% CI, -36 to 1,397 euro) per patient. For the entire treatment period, the mean incremental costs were substantial (5,123 euro; 95% CI, 3,908 to 6,337 euro), despite a significant reduction in the incidence of FN and related savings in medical care consumption. The incremental cost-effectiveness ratio was 50 euro per percent decrease of the probability of FN (95% CI, -2 to 433 euro) in cycle 1, and the acceptability for this willingness to pay was approximately 50%. Despite the selection of patients at risk of FN, the addition of GCSF to primary antibiotic prophylaxis did not result in cost savings. If policy makers are willing to pay 240 euro for each percent gain in effect (ie, 3,360 euro for a 14% reduction in FN), the addition of GCSF can be considered cost effective.
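The incremental cost-effectiveness ratio reported above is simply the incremental cost divided by the incremental effect. Plugging in the cycle-1 figures from the abstract (681 euro mean incremental cost; a 14 percentage-point drop in febrile neutropenia, from 24% to 10%) reproduces the reported ~50 euro per percent-FN-prevented:

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: extra cost per unit of extra effect."""
    return delta_cost / delta_effect

# Cycle-1 figures from the abstract: +681 euro for a 24% -> 10% FN reduction.
cost_per_percent = icer(681, 24 - 10)   # about 48.6 euro per percentage point
```

The same function, with life-years or QALYs as `delta_effect`, gives the per-LYG and per-QALY ratios used in the neighboring cost-effectiveness abstracts.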
Kawazoe, Nobuo; Liu, Guoxiang; Chiang, Chifa; Zhang, Yan; Aoyama, Atsuko
2015-01-01
A new public health insurance scheme has been gradually introduced in rural provinces of China since 2003. This would likely cause an increase in the use of health services. It is known that the association between health insurance coverage and health service utilization varies among age groups. This study aims to examine the association between extended health insurance coverage and increased outpatient service utilization by small children in rural China, and to identify other factors associated with outpatient service utilization. A household survey was conducted in 2 counties in north China in August 2010, targeting 107 selected households with a child aged 12–59 months. The questionnaire included modules on demographic information such as the ages of children and parents, health insurance enrollment status, the number of episodes of illness as perceived by parents, the month of incidence of each episode, and outpatient service utilization at each episode. Based on the utilization at each episode of illness, a random-effects logistic regression model was employed to analyze the association. It was found that eligibility for reimbursement of outpatient medical expenses was not significantly associated with the decision to seek care or the choice of health facility. This might be due in part to the low level of reimbursement, which could discourage use among the insured, and to the close relationship with village clinic workers, which could encourage use among the uninsured. Three other factors were significantly associated with increased outpatient service utilization: the child's age, the mother's education, and the number of children in a household. PMID:26412893
NASA Astrophysics Data System (ADS)
Liu, Quan; Yu, FengZhen; Li, ZhiHong; Xiong, Juan; Chen, JianJun; Yi, Ming
2018-07-01
Based on a model describing two coupled synthetic clock cells, the synchronization dynamics under stochastic noise are explored. As extrinsic noise from the signal is the predominant form of noise for all gene promoters, we investigate the effects of extrinsic noise originating from the signal molecule by evaluating order parameters. It is found that strong noise is beneficial for the synchronization of a loosely coupled system, while it destroys the synchronization of a tightly coupled system. The underlying mechanisms of these two opposite effects are clarified numerically and theoretically. Our research illustrates that (i) when the coupling strength is small, the noise mainly adjusts the period difference of the two cells and the system becomes regular. Theoretical study reveals that in this situation the mean effect of the noise acts as an influx while the signal flow is an efflux. (ii) As the coupling strength increases, the cells attain the same frequency; the noise then mainly changes the phase difference between the two cells and destroys the synchronization of the system. (iii) We also demonstrate that, at certain moderate noise intensities, the noise can drive the synchronization order to its worst. This nonlinear behavior can only be observed in a very narrow region of coupling strength.
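The setup above can be caricatured with two coupled phase oscillators driven by a shared (extrinsic) noise term and scored by a Kuramoto-style order parameter, r = |⟨exp(iθ)⟩|, which approaches 1 when the cells are phase-locked. This is only a phase-reduction sketch, not the paper's synthetic clock model; the frequencies, coupling and noise strength are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
omega = np.array([1.0, 1.1])       # slightly detuned natural frequencies
K, D, dt, steps = 0.5, 0.1, 1e-3, 100_000
theta = rng.uniform(0, 2 * np.pi, 2)

r_acc = 0.0
for _ in range(steps):
    xi = rng.normal() * np.sqrt(dt)            # COMMON extrinsic noise increment
    coupling = K * np.sin(theta[::-1] - theta) # symmetric sine coupling
    # Euler-Maruyama step; the shared noise shifts both phases equally,
    # so it does not disturb the locked phase difference.
    theta = theta + (omega + coupling) * dt + np.sqrt(2 * D) * xi
    r_acc += abs(np.mean(np.exp(1j * theta)))  # instantaneous order parameter
r = r_acc / steps
```

With this detuning (0.1) well inside the locking range (2K = 1.0), the pair phase-locks and the time-averaged order parameter stays close to 1; shrinking K below the detuning is the regime where the paper's noise effects become interesting.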
Bayesian inference based on stationary Fokker-Planck sampling.
Berrones, Arturo
2010-06-01
A novel formalism for Bayesian learning in the context of complex inference models is proposed. The method is based on the use of the stationary Fokker-Planck (SFP) approach to sample from the posterior density. Stationary Fokker-Planck sampling generalizes the Gibbs sampler algorithm to arbitrary and unknown conditional densities. By the SFP procedure, approximate analytical expressions for the conditionals and marginals of the posterior can be constructed. At each stage of SFP, the approximate conditionals are used to define a Gibbs sampling process, which is convergent to the full joint posterior. From the analytical marginals, efficient learning methods in the context of artificial neural networks are outlined. Offline and incremental Bayesian inference and maximum likelihood estimation from the posterior are performed in classification and regression examples. A comparison of SFP with other Monte Carlo strategies for the general problem of sampling from arbitrary densities is also presented. It is shown that SFP is able to jump across large low-probability regions without careful tuning of any step-size parameter. In fact, the SFP method requires only a small set of meaningful parameters that can be selected following clear, problem-independent guidelines. The computational cost of SFP, measured in terms of loss-function evaluations, grows linearly with the given model's dimension.
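The baseline that SFP generalizes is the plain Gibbs sampler, which alternately draws each variable from its full conditional. For a bivariate Gaussian with correlation rho the conditionals are known in closed form, giving the minimal sketch below (SFP's contribution, per the abstract, is constructing such conditionals approximately when they are unknown):

```python
import numpy as np

# Gibbs sampler for a zero-mean bivariate Gaussian with correlation rho:
# x | y ~ N(rho * y, 1 - rho^2) and symmetrically for y | x.
rng = np.random.default_rng(4)
rho = 0.8
x = y = 0.0
samples = []
for i in range(20_000):
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))   # draw x from its conditional
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))   # draw y from its conditional
    if i >= 1_000:                                   # discard burn-in
        samples.append((x, y))

xs, ys = np.array(samples).T
emp_rho = np.corrcoef(xs, ys)[0, 1]    # should recover rho from the chain
```

The chain's empirical correlation converges to the target rho, illustrating convergence to the full joint posterior that the abstract asserts for the SFP-defined Gibbs process.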
Nazir, J; Hart, W M
2014-06-01
To carry out a cost-utility analysis comparing initial treatment of patients with overactive bladder (OAB) with solifenacin 5 mg/day versus either trospium 20 mg twice a day or trospium 60 mg/day from the perspective of the German National Health Service. A decision analytic model with a 3-month cycle was developed to follow a cohort of OAB patients treated with either solifenacin or trospium during a 1-year period. Costs and utilities were accumulated as patients transitioned through the four cycles in the model. Some of the solifenacin patients were titrated from 5 mg to 10 mg/day at 3 months. Utility values were obtained from the published literature and pad use was based on a US resource utilization study. Adherence rates for individual treatments were derived from a United Kingdom general practitioner database review. The change in the mean number of urgency urinary incontinence episodes per day after 12 weeks was the main outcome measure. Baseline effectiveness values for solifenacin and trospium were calculated using the Poisson distribution. Patients who failed second-line therapy were referred to a specialist visit. Results were expressed in terms of incremental cost-utility ratios. Total annual costs for solifenacin, trospium 20 mg and trospium 60 mg were €970.01, €860.05 and €875.05 respectively. Drug use represented 43%, 28% and 29% of total costs and pad use varied between 45% and 57%. Differences between cumulative utilities were small but favored solifenacin (0.6857 vs. 0.6802 to 0.6800). The baseline incremental cost-effectiveness ratio ranged from €16,657 to €19,893 per QALY. The difference in cumulative utility favoring solifenacin was small (0.0055-0.0057 QALYs). A small absolute change in the cumulative utilities can have a marked impact on the overall incremental cost-effectiveness ratios (ICERs) and care should be taken when interpreting the results. Solifenacin would appear to be cost-effective with an ICER of no more than €20,000/QALY.
However, small differences in utility between the alternatives means that the results are sensitive to adjustments in the values of the assigned utilities, effectiveness and discontinuation rates.
Groza, Tudor; Tudorache, Tania; Hunter, Jane
2015-01-01
Collaboration platforms provide a dynamic environment where the content is subject to ongoing evolution through expert contributions. The knowledge embedded in such platforms is not static as it evolves through incremental refinements – or micro-contributions. Such refinements provide vast resources of tacit knowledge and experience. In our previous work, we proposed and evaluated a Semantic and Time-dependent Expertise Profiling (STEP) approach for capturing expertise from micro-contributions. In this paper we extend our investigation to structured micro-contributions that emerge from an ontology engineering environment, such as the one built for developing the International Classification of Diseases (ICD) revision 11. We take advantage of the semantically related nature of these structured micro-contributions to showcase two major aspects: (i) a novel semantic similarity metric, in addition to an approach for creating bottom-up baseline expertise profiles using expertise centroids; and (ii) the application of STEP in this new environment combined with the use of the same semantic similarity measure to both compare STEP against baseline profiles, as well as to investigate the coverage of these baseline profiles by STEP. PMID:28077914
Momentum Advection on a Staggered Mesh
NASA Astrophysics Data System (ADS)
Benson, David J.
1992-05-01
Eulerian and ALE (arbitrary Lagrangian-Eulerian) hydrodynamics programs usually split a timestep into two parts. The first part is a Lagrangian step, which calculates the incremental motion of the material. The second part is referred to as the Eulerian step, the advection step, or the remap step, and it accounts for the transport of material between cells. In most finite difference and finite element formulations, all the solution variables except the velocities are cell-centered while the velocities are edge- or vertex-centered. As a result, the advection algorithm for the momentum is, by necessity, different than the algorithm used for the other variables. This paper reviews three momentum advection methods and proposes a new one. One method, pioneered in YAQUI, creates a new staggered mesh, while the other two, used in SALE and SHALE, are cell-centered. The new method is cell-centered and its relationship to the other methods is discussed. Both pure advection and strong shock calculations are presented to substantiate the mathematical analysis. From the standpoint of numerical accuracy, both the staggered mesh and the cell-centered algorithms can give good results, while the computational costs are highly dependent on the overall architecture of a code.
Kaleth, Anthony S; Slaven, James E; Ang, Dennis C
2014-12-01
To examine the concurrent and predictive associations between the number of steps taken per day and clinical outcomes in patients with fibromyalgia (FM). A total of 199 adults with FM (mean age 46.1 years, 95% women) who were enrolled in a randomized clinical trial wore a hip-mounted accelerometer for 1 week and completed self-report measures of physical function (Fibromyalgia Impact Questionnaire-Physical Impairment [FIQ-PI] and Short Form 36 [SF-36] health survey physical component score [PCS]), pain intensity and interference (Brief Pain Inventory [BPI]), and depressive symptoms (Patient Health Questionnaire-8 [PHQ-8]) as part of their baseline and followup assessments. Associations of steps per day with self-report clinical measures were evaluated from baseline to week 12 using multivariate regression models adjusted for demographic and baseline covariates. Study participants were primarily sedentary, averaging 4,019 ± 1,530 steps per day. Our findings demonstrate a linear relationship between the change in steps per day and improvement in health outcomes for FM. Incremental increases on the order of 1,000 steps per day were significantly associated with (and predictive of) improvements in FIQ-PI, SF-36 PCS, BPI pain interference, and PHQ-8 (all P < 0.05). Although higher step counts were associated with lower FIQ and BPI pain intensity scores, these associations were not statistically significant. Step count is an easily obtained and understood objective measure of daily physical activity. An exercise prescription that includes recommendations to gradually accumulate at least 5,000 additional steps per day may result in clinically significant improvements in outcomes relevant to patients with FM. Future studies are needed to elucidate the dose-response relationship between steps per day and patient outcomes in FM. Copyright © 2014 by the American College of Rheumatology.
Stahl, Stephane; Hentschel, Pascal; Ketelsen, Dominik; Grosse, Ulrich; Held, Manuel; Wahler, Theodora; Syha, Roland; Schaller, Hans-Eberhard; Nikolaou, Konstantin; Grözinger, Gerd
2017-05-01
This prospective clinical study examined standard wrist magnetic resonance imaging (MRI) examinations and the incremental value of computed tomography (CT) in the diagnosis of Kienböck's disease (KD) with regard to reliability and precision in the different diagnostic steps during the diagnostic work-up. Sixty-four consecutive patients referred between January 2009 and January 2014 with positive initial suspicion of KD according to external standard wrist MRI were prospectively included (step one). Institutional review board approval was obtained. Clinical examination by two hand surgeons was followed by wrist radiographs (step two), ultrathin-section CT, and 3T contrast-enhanced MRI (step three). Final diagnosis was established in a consensus conference involving all examiners and all examination results available from step three. In 12/64 patients, initial suspicion was discarded at step two and in 34/64 patients, the initial suspicion of KD was finally discarded at step three. The final external MRI positive predictive value was 47%. The most common differential diagnoses at step three were intraosseous cysts (n=15), lunate pseudarthrosis (n=13), and ulnar impaction syndrome (n=5). A correlation of radiograph-based diagnoses (step two) with the final diagnosis (step three) showed that initial suspicion of stage I KD had the lowest sensitivity for correct diagnosis (2/11). Technical factors associated with a false positive external MRI KD diagnosis were not found. Standard wrist MRI should be complemented with thin-section CT, and interdisciplinary interpretation of images and clinical data, to increase diagnostic accuracy in patients with suspected KD. Copyright © 2017. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Fei, Jiangfeng
2013-03-01
In 2006, JDRF launched the Artificial Pancreas Project (APP) to accelerate the development of a commercially viable artificial pancreas system to closely mimic the biological function of the pancreas in individuals with insulin-dependent diabetes, particularly type 1 diabetes. By automating detection of blood sugar levels and delivery of insulin in response to those levels, an artificial pancreas has the potential to transform the lives of people with type 1 diabetes. The 6-step APP development pathway serves as JDRF's APP strategic funding plan and defines the priorities of product research and development. Each step in the plan represents incremental advances in automation, beginning with devices that shut off insulin delivery to prevent episodes of low blood sugar and progressing ultimately to a fully automated ``closed loop'' system that maintains blood glucose at a target level without the need to bolus for meals or adjust for exercise.
Approaching semantic interoperability in Health Level Seven
Alschuler, Liora
2010-01-01
‘Semantic Interoperability’ is a driving objective behind many of Health Level Seven's standards. The objective in this paper is to take a step back, and consider what semantic interoperability means, assess whether or not it has been achieved, and, if not, determine what concrete next steps can be taken to get closer. A framework for measuring semantic interoperability is proposed, using a technique called the ‘Single Logical Information Model’ framework, which relies on an operational definition of semantic interoperability and an understanding that interoperability improves incrementally. Whether semantic interoperability tomorrow will enable one computer to talk to another, much as one person can talk to another person, is a matter for speculation. It is assumed, however, that what gets measured gets improved, and in that spirit this framework is offered as a means to improvement. PMID:21106995
Policy to Foster Civility and Support a Healthy Academic Work Environment.
Clark, Cynthia M; Ritter, Katy
2018-06-01
Incivility in academic workplaces can have detrimental effects on individuals, teams, departments, and the campus community at large. Alternately, healthy academic workplaces generate heightened levels of employee satisfaction, engagement, and morale. This article describes the development and implementation of a comprehensive, legally defensible policy related to workplace civility and the establishment of a healthy academic work environment. A detailed policy exemplar is included to provide a structure for fostering a healthy academic work environment, a fair, consistent, confidential procedure for defining and addressing workplace incivility, a mechanism for reporting and subsequent investigation of uncivil acts if indicated, and ways to foster civility and respectful workplace behavior. The authors detail a step-by-step procedure and an incremental approach to address workplace incivility and reward policy adherence. [J Nurs Educ. 2018;57(6):325-331.]. Copyright 2018, SLACK Incorporated.
Jo, Min Sung; Sadasivam, Karthikeyan Giri; Tawfik, Wael Z; Yang, Seung Bea; Lee, Jung Ju; Ha, Jun Seok; Moon, Young Boo; Ryu, Sang Wan; Lee, June Key
2013-01-01
n-type GaN epitaxial layers were regrown on patterned n-type GaN substrates (PNS) with different sizes of silicon dioxide (SiO2) nano dots to improve the crystal quality and optical properties. PNS with SiO2 nano dots promotes epitaxial lateral overgrowth (ELOG) for defect reduction and also acts as a light scattering point. Transmission electron microscopy (TEM) analysis suggested that PNS with SiO2 nano dots have superior crystalline properties. Hall measurements indicated that the increased electron mobility was a clear indication of a reduction in threading dislocations, which was confirmed by TEM analysis. Photoluminescence (PL) intensity was enhanced by 2.0 times and 3.1 times for 1-step and 2-step PNS, respectively.
High bandgap III-V alloys for high efficiency optoelectronics
Alberi, Kirstin; Mascarenhas, Angelo; Wanlass, Mark
2017-01-10
High bandgap alloys for high efficiency optoelectronics are disclosed. An exemplary optoelectronic device may include a substrate, at least one Al1-xInxP layer, and a step-grade buffer between the substrate and the at least one Al1-xInxP layer. The buffer may begin with a layer that is substantially lattice matched to GaAs, and may then incrementally increase the lattice constant in each sequential layer until a predetermined lattice constant of Al1-xInxP is reached.
Scanner. [photography from a spin stabilized synchronous satellite
NASA Technical Reports Server (NTRS)
Hummer, R. F.; Upton, D. T. (Inventor)
1981-01-01
An aerial vehicle rotating in gyroscopic fashion about one of its axes has an optical system which scans an area below the vehicle in determined relation to vehicle rotation. A sensing device is provided to sense the physical condition of the area of scan and optical means are associated to direct the physical intelligence received from the scan area to the sensing means. Means are provided to incrementally move the optical means through a series of steps to effect sequential line scan of the area being viewed keyed to the rotational rate of the vehicle.
2007-05-01
concepts. Figure 3: Simplified Diagram of DOD's Business Mission Area Federated Architecture. Source: GAO analysis of DOD data. (The figure covers Enterprise Shared Services and System Capabilities, DLA Architectures, Transition Plans, Systems Solutions, and the DOD BEA.) ...how component business architectures' alignment with incremental versions of the BEA will be achieved; how shared services will be identified, exposed
Reenacting the birth of an intron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hellsten, Uffe; Aspden, Julie L.; Rio, Donald C.
2011-07-01
An intron is an extended genomic feature whose function requires multiple constrained positions - donor and acceptor splice sites, a branch point, a polypyrimidine tract and suitable splicing enhancers - that may be distributed over hundreds or thousands of nucleotides. New introns are therefore unlikely to emerge by incremental accumulation of functional sub-elements. Here we demonstrate that a functional intron can be created de novo in a single step by a segmental genomic duplication. This experiment recapitulates in vivo the birth of an intron that arose in the ancestral jawed vertebrate lineage nearly half a billion years ago.
A user-friendly tool for incremental haemodialysis prescription.
Casino, Francesco Gaetano; Basile, Carlo
2018-01-05
There is a recently heightened interest in incremental haemodialysis (IHD), the main advantage of which could likely be a better preservation of the residual kidney function of the patients. The implementation of IHD, however, is hindered by many factors, among them, the mathematical complexity of its prescription. The aim of our study was to design a user-friendly tool for IHD prescription, consisting of only a few rows of a common spreadsheet. The keystone of our spreadsheet was the following fundamental concept: the dialysis dose to be prescribed in IHD depends only on the normalized urea clearance provided by the native kidneys (KRUn) of the patient for each frequency of treatment, according to the variable target model recently proposed by Casino and Basile (The variable target model: a paradigm shift in the incremental haemodialysis prescription. Nephrol Dial Transplant 2017; 32: 182-190). The first step was to put in sequence a series of equations in order to calculate, firstly, KRUn and, then, the key parameters to be prescribed for an adequate IHD; the second step was to compare KRUn values obtained with our spreadsheet with KRUn values obtainable with the gold standard Solute-solver (Daugirdas JT et al., Solute-solver: a web-based tool for modeling urea kinetics for a broad range of hemodialysis schedules in multiple patients. Am J Kidney Dis 2009; 54: 798-809) in a sample of 40 incident haemodialysis patients. Our spreadsheet provided excellent results. The differences with Solute-solver were clinically negligible. This was confirmed by the Bland-Altman plot built to analyse the agreement between KRUn values obtained with the two methods: the difference was 0.07 ± 0.05 mL/min/35 L. Our spreadsheet is a user-friendly tool able to provide clinically acceptable results in IHD prescription. 
Two immediate consequences could derive from this: (i) a wider dissemination of IHD might occur; and (ii) our spreadsheet could represent a useful tool for a much-needed full-fledged clinical trial comparing IHD with standard thrice-weekly HD. © The Author(s) 2018. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
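The agreement analysis described above (difference 0.07 ± 0.05 mL/min/35 L over 40 patients) is a standard Bland-Altman computation, which can be sketched as follows. The paired KRUn values below are hypothetical placeholders, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical paired KRUn estimates (mL/min/35 L) for 40 patients:
# the spreadsheet vs. the Solute-solver reference (illustrative values only).
solute_solver = rng.uniform(1.0, 5.0, size=40)
spreadsheet = solute_solver + rng.normal(0.07, 0.05, size=40)

# Bland-Altman statistics: bias (mean difference) and 95% limits of agreement.
diff = spreadsheet - solute_solver
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
print(f"bias = {bias:.3f} mL/min/35 L, 95% limits of agreement = ({loa[0]:.3f}, {loa[1]:.3f})")
```

A bias of this size with narrow limits of agreement is what supports the authors' claim that the differences from Solute-solver are clinically negligible.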
NASA Astrophysics Data System (ADS)
Ginzburg, V. N.; Kochetkov, A. A.; Potemkin, A. K.; Khazanov, E. A.
2018-04-01
It has been experimentally confirmed that self-cleaning of a laser beam from spatial noise during propagation in free space makes it possible to efficiently suppress the self-focusing instability without applying spatial filters. Measurements of the instability growth increment by two independent methods have demonstrated quantitative agreement with theory and high efficiency of small-scale self-focusing suppression. This opens new possibilities for using optical elements operating in transmission (frequency doublers, phase plates, beam splitters, polarisers, etc.) in beams with intensities of the order of a few TW cm⁻².
An electronic system for measuring thermophysical properties of wind tunnel models
NASA Technical Reports Server (NTRS)
Corwin, R. R.; Kramer, J. S.
1975-01-01
An electronic system is described which measures the surface temperature of a small portion of the surface of a model or sample at high speed using an infrared radiometer. These data are processed, along with heating-rate data from the reference heat gauge, in a small computer, which prints out the desired thermophysical properties, time, surface temperature, and reference heat rate. This system allows fast and accurate property measurements over thirty temperature increments. The technique, the details of the apparatus, the procedure for making these measurements, and the results of some preliminary tests are presented.
One Step Back, Two Steps Forward: An Analytical Framework for Airpower in Small Wars
2006-06-01
Counterinsurgency Business.” Small Wars and Insurgencies 5, no. 3 (Winter 1994). Watman, K. and Wilkening, D. U.S. Regional Deterrence Strategies. Santa... optimal for waging wars at the sub-state level. Small wars are conflicts where the political and diplomatic context, and not the military... use of airpower for waging war at this level. SUBJECT TERMS: Airpower, Small War, Leites and Wolf, insurgency
NASA Technical Reports Server (NTRS)
Marr, W. A., Jr.
1972-01-01
The behavior of finite element models employing different constitutive relations to describe the stress-strain behavior of soils is investigated. Three models, which assume small strain theory is applicable, include a nondilatant, a dilatant and a strain hardening constitutive relation. Two models are formulated using large strain theory and include a hyperbolic and a Tresca elastic perfectly plastic constitutive relation. These finite element models are used to analyze retaining walls and footings. Methods of improving the finite element solutions are investigated. For nonlinear problems better solutions can be obtained by using smaller load increment sizes and more iterations per load increment than by increasing the number of elements. Suitable methods of treating tension stresses and stresses which exceed the yield criteria are discussed.
Reversion phenomena of Cu-Cr alloys
NASA Technical Reports Server (NTRS)
Nishikawa, S.; Nagata, K.; Kobayashi, S.
1985-01-01
Cu-Cr alloys given various aging and reversion treatments were investigated in terms of electrical resistivity and hardness; transmission electron microscopy was one technique employed. Some results obtained are as follows: the increment of electrical resistivity after reversion at a constant temperature decreases as the aging temperature rises. For a constant aging condition, the increment of electrical resistivity after reversion increases, and the time required for maximum reversion becomes shorter, as the reversion temperature rises. The reversion phenomena can be repeated, but their magnitude decreases rapidly with repetition. At first, the amount of reversion increases with aging time and reaches a maximum, then tends to decrease again. Hardness changes caused by reversion are very small, but the hardness tends to decrease slightly. No changes in transmission electron micrographs caused by the reversion treatment could be detected.
Mouri, Hideaki; Hori, Akihiro; Kawashima, Yoshihide
2004-12-01
The most elementary structures of turbulence, i.e., vortex tubes, are studied using velocity data obtained in a laboratory experiment for boundary layers with Reynolds numbers Re(λ) = 295-1258. We conduct conditional averaging for enhancements of a small-scale velocity increment and obtain the typical velocity profile for vortex tubes. Their radii are of the order of the Kolmogorov length. Their circulation velocities are of the order of the root-mean-square velocity fluctuation. We also obtain the distribution of the interval between successive enhancements of the velocity increment as a measure of the spatial distribution of vortex tubes. The tubes tend to cluster together below about the integral length and more significantly below about the Taylor microscale. These properties are independent of the Reynolds number and are hence expected to be universal.
Clausen, J L; Georgian, T; Gardner, K H; Douglas, T A
2018-01-01
This study compares conventional grab sampling to incremental sampling methodology (ISM) to characterize metal contamination at a military small-arms-range. Grab sample results had large variances, positively skewed non-normal distributions, extreme outliers, and poor agreement between duplicate samples even when samples were co-located within tens of centimeters of each other. The extreme outliers strongly influenced the grab sample means for the primary contaminants lead (Pb) and antinomy (Sb). In contrast, median and mean metal concentrations were similar for the ISM samples. ISM significantly reduced measurement uncertainty of estimates of the mean, increasing data quality (e.g., for environmental risk assessments) with fewer samples (e.g., decreasing total project costs). Based on Monte Carlo resampling simulations, grab sampling resulted in highly variable means and upper confidence limits of the mean relative to ISM.
Identification of high shears and compressive discontinuities in the inner heliosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greco, A.; Perri, S.
2014-04-01
Two techniques, the Partial Variance of Increments (PVI) and the Local Intermittency Measure (LIM), have been applied and compared using MESSENGER magnetic field data in the solar wind at a heliocentric distance of about 0.3 AU. The spatial properties of the turbulent field at different scales, spanning the whole inertial range of magnetic turbulence down toward the proton scales, have been studied. The LIM and PVI methodologies allow us to identify portions of an entire time series where magnetic energy is mostly accumulated, and regions of intermittent bursts in the magnetic field vector increments, respectively. A statistical analysis has revealed that at small time scales and for high levels of the threshold, the bursts present in the PVI and LIM series correspond to regions of high shear stress and high magnetic field compressibility.
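The PVI statistic used above is commonly defined as the magnitude of the vector field increment normalised by its root-mean-square value; large PVI values flag discontinuity-like structures. A sketch under that standard definition (the synthetic field and the embedded jump are illustrative, not MESSENGER data):

```python
import numpy as np

def pvi(b, lag):
    """Partial Variance of Increments for a vector field time series.

    b: array of shape (N, 3) with the field components; lag: increment
    scale in samples. Returns |Δb(t, τ)| / sqrt(<|Δb(t, τ)|²>), where
    Δb(t, τ) = b(t + τ) - b(t).
    """
    db = b[lag:] - b[:-lag]                # vector increments at scale `lag`
    mag = np.linalg.norm(db, axis=1)       # |Δb|
    return mag / np.sqrt(np.mean(mag**2))  # normalise by the rms increment

# Synthetic demonstration: a random-walk field with one sharp, jump-like rotation.
rng = np.random.default_rng(1)
b = np.cumsum(rng.normal(size=(2000, 3)), axis=0)
b[1000:] += np.array([25.0, -25.0, 0.0])   # discontinuity at sample 1000
series = pvi(b, lag=10)
print("largest PVI value occurs near sample", int(np.argmax(series)))
```

Thresholding such a series (e.g., PVI above a few) selects the intermittent bursts that the study associates with high shear and compressibility.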
‘Small Changes’ to Diet and Physical Activity Behaviors for Weight Management
Hills, Andrew P.; Byrne, Nuala M.; Lindstrom, Rachel; Hill, James O.
2013-01-01
Obesity is associated with numerous short- and long-term health consequences. Low levels of physical activity and poor dietary habits are consistent with an increased risk of obesity in an obesogenic environment. Relatively little research has investigated associations between eating and activity behaviors by using a systems biology approach and by considering the dynamics of the energy balance concept. A significant body of research indicates that a small positive energy balance over time is sufficient to cause weight gain in many individuals. In contrast, small changes in nutrition and physical activity behaviors can prevent weight gain. In the context of weight management, it may be more feasible for most people to make small compared to large short-term changes in diet and activity. This paper presents a case for the use of small and incremental changes in diet and physical activity for improved weight management in the context of a toxic obesogenic environment. PMID:23711772
Small Diameter Bomb Increment II (SDB II)
2013-12-01
in 2013: Electromagnetic Environments and Effects and Hazards of Electromagnetic Radiation to Ordnance. Reliability Growth Testing started in June... SDB II December 2013 SAR. EMC - Electromagnetic Compatibility; EMI - Electromagnetic Interference; GESP - GIG Enterprise Service Profiles; GIG - Global Information Grid
Mechanical Limits to Size in Wave-Swept Organisms.
1983-11-10
complanata, the probability of destruction and the size-specific increase in the risk of destruction are both substantial. It is conjectured that the... barnacle, Semibalanus cariosus) the size-specific increment in the risk of destruction is small and the size limits imposed on these organisms are... constructed here provides an experimental approach to examining many potential effects of environmental stress caused by flowing water. For example, these
Transformational adaptation when incremental adaptations to climate change are insufficient
Kates, Robert W.; Travis, William R.; Wilbanks, Thomas J.
2012-01-01
All human–environment systems adapt to climate and its natural variation. Adaptation to human-induced change in climate has largely been envisioned as increments of these adaptations intended to avoid disruptions of systems at their current locations. In some places, for some systems, however, vulnerabilities and risks may be so sizeable that they require transformational rather than incremental adaptations. Three classes of transformational adaptations are those that are adopted at a much larger scale, that are truly new to a particular region or resource system, and that transform places and shift locations. We illustrate these with examples drawn from Africa, Europe, and North America. Two conditions set the stage for transformational adaptation to climate change: large vulnerability in certain regions, populations, or resource systems; and severe climate change that overwhelms even robust human use systems. However, anticipatory transformational adaptation may be difficult to implement because of uncertainties about climate change risks and adaptation benefits, the high costs of transformational actions, and institutional and behavioral actions that tend to maintain existing resource systems and policies. Implementing transformational adaptation requires effort to initiate it and then to sustain the effort over time. In initiating transformational adaptation focusing events and multiple stresses are important, combined with local leadership. In sustaining transformational adaptation, it seems likely that supportive social contexts and the availability of acceptable options and resources for actions are key enabling factors. Early steps would include incorporating transformation adaptation into risk management and initiating research to expand the menu of innovative transformational adaptations. PMID:22509036
A classification scheme for turbulent flows based on their joint velocity-intermittency structure
NASA Astrophysics Data System (ADS)
Keylock, C. J.; Nishimura, K.; Peinke, J.
2011-12-01
Kolmogorov's classic theory for turbulence assumed an independence between velocity increments and the value for the velocity itself. However, this assumption is questionable, particularly in complex geophysical flows. Here we propose a framework for studying velocity-intermittency coupling that is similar in essence to the popular quadrant analysis method for studying near-wall flows. However, we study the dominant (longitudinal) velocity component along with a measure of the roughness of the signal, given mathematically by its series of Hölder exponents. Thus, we permit a possible dependence between velocity and intermittency. We compare boundary layer data obtained in a wind tunnel to turbulent jets and wake flows. These flow classes all have distinct velocity-intermittency characteristics, which cause them to be readily distinguished using our technique. Our method is much simpler and quicker to apply than approaches that condition the velocity increment statistics at some scale, r, on the increment statistics at a neighbouring, larger spatial scale, r+Δ, and the velocity itself. Classification of environmental flows is then possible based on their similarities to the idealised flow classes and we demonstrate this using laboratory data for flow in a parallel-channel confluence where the region of flow recirculation in the lee of the step is discriminated as a flow class distinct from boundary layer, jet and wake flows. Hence, using our method, it is possible to assign a flow classification to complex geophysical, turbulent flows depending upon which idealised flow class they most resemble.
Rodríguez, Iván; Zambrano, Lysien; Manterola, Carlos
2016-04-01
Physiological parameters used to measure exercise intensity are oxygen uptake and heart rate. However, perceived exertion (PE) is a scale that has also been frequently applied. The objective of this study is to establish the criterion-related validity of PE scales in children during an incremental exercise test. Seven electronic databases were searched. Studies aimed at assessing the criterion-related validity of PE scales in healthy children during an incremental exercise test were included. Correlation coefficients were transformed into z-values and assessed in a meta-analysis by means of a fixed effects model if I² was below 50%, or a random effects model if it was above 50%. Twenty-five articles that studied 1418 children (boys: 49.2%) met the inclusion criteria. The children's average age was 10.5 years. Exercise modalities included bike, running and stepping exercises. The weighted correlation coefficient was 0.835 (95% confidence interval: 0.762-0.887) and 0.874 (95% confidence interval: 0.794-0.924) for heart rate and oxygen uptake as reference criteria, respectively. The production paradigm and scales that had not been adapted to children showed the lowest measurement performance (p < 0.05). Measuring PE could be valid in healthy children during an incremental exercise test. Child-specific rating scales showed better performance than those that had not been adapted to this population. Further studies with better methodological quality should be conducted in order to confirm these results. Sociedad Argentina de Pediatría.
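The z-transform pooling described above is Fisher's classic procedure for combining correlation coefficients, and can be sketched as follows. The per-study correlations and sample sizes below are hypothetical, not those of the 25 included articles:

```python
import math

def pooled_correlation(rs, ns):
    """Fixed-effect pooling of correlation coefficients via Fisher's z-transform.

    Each r is transformed to z = atanh(r), weighted by n - 3 (the inverse
    of z's sampling variance), averaged, and back-transformed with tanh.
    """
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)

# Hypothetical per-study correlations between PE and heart rate (illustrative):
rs = [0.78, 0.86, 0.81, 0.89, 0.83]
ns = [45, 60, 38, 72, 55]
print(f"pooled r = {pooled_correlation(rs, ns):.3f}")
```

The transform is used because r itself is bounded and skewed near ±1, whereas z is approximately normal with variance 1/(n - 3), making the weighted average well behaved.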
Habchi, Johnny; Chia, Sean; Limbocker, Ryan; Mannini, Benedetta; Ahn, Minkoo; Perni, Michele; Hansson, Oskar; Arosio, Paolo; Kumita, Janet R; Challa, Pavan Kumar; Cohen, Samuel I A; Linse, Sara; Dobson, Christopher M; Knowles, Tuomas P J; Vendruscolo, Michele
2017-01-10
The aggregation of the 42-residue form of the amyloid-β peptide (Aβ42) is a pivotal event in Alzheimer's disease (AD). The use of chemical kinetics has recently enabled highly accurate quantifications of the effects of small molecules on specific microscopic steps in Aβ42 aggregation. Here, we exploit this approach to develop a rational drug discovery strategy against Aβ42 aggregation that uses as a read-out the changes in the nucleation and elongation rate constants caused by candidate small molecules. We thus identify a pool of compounds that target specific microscopic steps in Aβ42 aggregation. We then test further these small molecules in human cerebrospinal fluid and in a Caenorhabditis elegans model of AD. Our results show that this strategy represents a powerful approach to identify systematically small molecule lead compounds, thus offering an appealing opportunity to reduce the attrition problem in drug discovery.
Antibody-Mediated Small Molecule Detection Using Programmable DNA-Switches.
Rossetti, Marianna; Ippodrino, Rudy; Marini, Bruna; Palleschi, Giuseppe; Porchetta, Alessandro
2018-06-13
The development of rapid, cost-effective, and single-step methods for the detection of small molecules is crucial for improving the quality and efficiency of many applications ranging from life science to environmental analysis. Unfortunately, current methodologies still require multiple complex, time-consuming washing and incubation steps, which limit their applicability. In this work we present a competitive DNA-based platform that makes use of both programmable DNA-switches and antibodies to detect small target molecules. The strategy exploits both the advantages of proximity-based methods and structure-switching DNA-probes. The platform is modular and versatile and it can potentially be applied for the detection of any small target molecule that can be conjugated to a nucleic acid sequence. Here the rational design of programmable DNA-switches is discussed, and the sensitive, rapid, and single-step detection of different environmentally relevant small target molecules is demonstrated.
Khansaritoreh, Elmira; Dulamsuren, Choimaa; Klinge, Michael; Ariunbaatar, Tumurbaatar; Bat-Enerel, Banzragch; Batsaikhan, Ganbaatar; Ganbaatar, Kherlenchimeg; Saindovdon, Davaadorj; Yeruult, Yolk; Tsogtbaatar, Jamsran; Tuya, Daramragchaa; Leuschner, Christoph; Hauck, Markus
2017-09-01
Forest fragmentation has been found to affect biodiversity and ecosystem functioning in multiple ways. We asked whether forest size and isolation in fragmented woodlands influences the climate warming sensitivity of tree growth in the southern boreal forest of the Mongolian Larix sibirica forest steppe, a naturally fragmented woodland embedded in grassland, which is highly affected by warming, drought, and increasing anthropogenic forest destruction in recent times. We examined the influence of stand size and stand isolation on the growth performance of larch in forests of four different size classes located in a woodland-dominated forest-steppe area and small forest patches in a grassland-dominated area. We found increasing climate sensitivity and decreasing first-order autocorrelation of annual stemwood increment with decreasing stand size. Stemwood increment increased with previous year's June and August precipitation in the three smallest forest size classes, but not in the largest forests. In the grassland-dominated area, the tree growth dependence on summer rainfall was highest. Missing ring frequency has strongly increased since the 1970s in small, but not in large, forests. In the grassland-dominated area, the increase was much greater than in the forest-dominated landscape. Forest regeneration decreased with decreasing stand size and was scarce or absent in the smallest forests. Our results suggest that the larch trees in small and isolated forest patches are far more susceptible to climate warming than those in large continuous forests, pointing to a grim future for the forests in this strongly warming region of the boreal forest, which is also under high land-use pressure. © 2017 John Wiley & Sons Ltd.
Cost-effectiveness of different strategies to manage patients with sciatica.
Fitzsimmons, Deborah; Phillips, Ceri J; Bennett, Hayley; Jones, Mari; Williams, Nefyn; Lewis, Ruth; Sutton, Alex; Matar, Hosam E; Din, Nafees; Burton, Kim; Nafees, Sadia; Hendry, Maggie; Rickard, Ian; Wilkinson, Claire
2014-07-01
The aim of this paper is to estimate the relative cost-effectiveness of treatment regimens for managing patients with sciatica. A deterministic model structure was constructed based on information from the findings of a systematic review of clinical effectiveness and cost-effectiveness, published sources of unit costs, and expert opinion. The assumption was that patients presenting with sciatica would be managed through one of 3 pathways (primary care, stepped approach, immediate referral to surgery). Results were expressed as incremental cost per patient with symptoms successfully resolved. The analysis also included incremental cost per unit of utility gained over a 12-month period. One-way sensitivity analyses were used to address uncertainty. The model demonstrated that none of the strategies resulted in 100% success. For initial treatments, the most successful regimen in the first pathway was nonopioids, with a probability of success of 0.613. In the second pathway, the most successful strategy was nonopioids, followed by biological agents, followed by epidural/nerve block and disk surgery, with a probability of success of 0.996. Pathway 3 (immediate surgery) was not cost-effective. Sensitivity analyses showed that using the highest cost estimates yields a similar overall picture. While the estimates of cost per quality-adjusted life year are higher, the economic model demonstrated that stepped approaches based on initial treatment with nonopioids are likely to represent the most cost-effective regimens for the treatment of sciatica. However, development of alternative economic modelling approaches is required. Copyright © 2014 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
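The headline metric in such models, incremental cost per unit of outcome, reduces to a simple ratio of cost and effect differences between strategies. The sketch below uses invented numbers purely for illustration; they are not the paper's estimates.

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effect (e.g. per QALY gained, or per patient with symptoms
    successfully resolved)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical illustration only: a stepped pathway costing 900 per
# patient with success probability 0.996, versus a primary-care pathway
# costing 400 with success probability 0.613.
cost_per_extra_success = icer(900, 0.996, 400, 0.613)
```

A one-way sensitivity analysis then simply re-evaluates this ratio while varying a single input (e.g. the highest plausible unit cost) and holding the others fixed.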
A hybrid incremental projection method for thermal-hydraulics applications
NASA Astrophysics Data System (ADS)
Christon, Mark A.; Bakosi, Jozsef; Nadiga, Balasubramanya T.; Berndt, Markus; Francois, Marianne M.; Stagg, Alan K.; Xia, Yidong; Luo, Hong
2016-07-01
A new second-order accurate, hybrid, incremental projection method for time-dependent incompressible viscous flow is introduced in this paper. The hybrid finite-element/finite-volume discretization circumvents the well-known Ladyzhenskaya-Babuška-Brezzi conditions for stability, and does not require special treatment to filter pressure modes by either Rhie-Chow interpolation or by using a Petrov-Galerkin finite element formulation. The use of a co-velocity with a high-resolution advection method and a linearly consistent edge-based treatment of viscous/diffusive terms yields a robust algorithm for a broad spectrum of incompressible flows. The high-resolution advection method is shown to deliver second-order spatial convergence on mixed element topology meshes, and the implicit advective treatment significantly increases the stable time-step size. The algorithm is robust and extensible, permitting the incorporation of features such as porous media flow, RANS and LES turbulence models, and semi-/fully-implicit time stepping. A series of verification and validation problems are used to illustrate the convergence properties of the algorithm. The temporal stability properties are demonstrated on a range of problems with 2 ≤ CFL ≤ 100. The new flow solver is built using the Hydra multiphysics toolkit. The Hydra toolkit is written in C++ and provides a rich suite of extensible and fully-parallel components that permit rapid application development, supports multiple discretization techniques, provides I/O interfaces, dynamic run-time load balancing and data migration, and interfaces to scalable popular linear solvers, e.g., in open-source packages such as HYPRE, PETSc, and Trilinos.
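For readers unfamiliar with projection schemes, the incremental (pressure-correction) idea can be illustrated with a minimal spectral sketch on a 2-D periodic grid: predict the velocity with the old pressure gradient, solve a Poisson problem for a pressure *increment*, then project. This is a generic textbook scheme, not the paper's hybrid FE/FV discretization; the grid size, viscosity, and time step below are arbitrary.

```python
import numpy as np

n, nu, dt = 64, 0.01, 1e-3
x = np.linspace(0, 2*np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = np.fft.fftfreq(n, d=1.0/n)            # integer wavenumbers on [0, 2*pi)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2

def ddx(f): return np.real(np.fft.ifft2(1j*KX*np.fft.fft2(f)))
def ddy(f): return np.real(np.fft.ifft2(1j*KY*np.fft.fft2(f)))
def lap(f): return np.real(np.fft.ifft2(-K2*np.fft.fft2(f)))

def solve_poisson(rhs):
    """Solve lap(phi) = rhs on the periodic grid (zero-mean solution)."""
    rhs_h = np.fft.fft2(rhs)
    phi_h = np.zeros_like(rhs_h)
    mask = K2 > 0
    phi_h[mask] = -rhs_h[mask] / K2[mask]
    return np.real(np.fft.ifft2(phi_h))

# Taylor-Green vortex as the initial velocity; zero initial pressure guess.
u, v = np.cos(X)*np.sin(Y), -np.sin(X)*np.cos(Y)
p = np.zeros((n, n))

# 1) Predictor: advance momentum using the *old* pressure gradient.
us = u + dt*(-u*ddx(u) - v*ddy(u) + nu*lap(u) - ddx(p))
vs = v + dt*(-u*ddx(v) - v*ddy(v) + nu*lap(v) - ddy(p))
# 2) Pressure increment from the divergence of the predicted field --
#    solving for the increment rather than the full pressure is what
#    makes the scheme "incremental".
dp = solve_poisson((ddx(us) + ddy(vs)) / dt)
# 3) Projection and incremental pressure update.
u_new, v_new = us - dt*ddx(dp), vs - dt*ddy(dp)
p = p + dp
max_div = np.max(np.abs(ddx(u_new) + ddy(v_new)))   # near machine zero
```

Second-order accuracy in time, implicit advection, and the co-velocity/edge-based diffusive treatment of the paper are all refinements on top of this basic predict-increment-project structure.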
Power-Stepped HF Cross Modulation Experiments at HAARP
NASA Astrophysics Data System (ADS)
Greene, S.; Moore, R. C.; Langston, J. S.
2013-12-01
High frequency (HF) cross modulation experiments are a well established means for probing the HF-modified characteristics of the D-region ionosphere. In this paper, we apply experimental observations of HF cross-modulation to the related problem of ELF/VLF wave generation. HF cross-modulation measurements are used to evaluate the efficiency of ionospheric conductivity modulation during power-stepped modulated HF heating experiments. The results are compared to previously published dependencies of ELF/VLF wave amplitude on HF peak power. The experiments were performed during the March 2013 campaign at the High Frequency Active Auroral Research Program (HAARP) Observatory. HAARP was operated in a dual-beam transmission format: the first beam heated the ionosphere using sinusoidal amplitude modulation while the second beam broadcast a series of low-power probe pulses. The peak power of the modulating beam was incremented in 1-dB steps. We compare the minimum and maximum cross-modulation effect and the amplitude of the resulting cross-modulation waveform to the expected power-law dependence of ELF/VLF wave amplitude on HF power.
Membrane Fusion Induced by Small Molecules and Ions
Mondal Roy, Sutapa; Sarkar, Munna
2011-01-01
Membrane fusion is a key event in many biological processes. These processes are controlled by various fusogenic agents, of which proteins and peptides form the principal group. The fusion process is characterized by three major steps: inter-membrane contact, lipid mixing forming the intermediate step, and finally pore opening and mixing of the inner contents of the cells/vesicles. These steps are governed by energy barriers, which must be overcome to complete fusion. Structural reorganization of big molecules like proteins/peptides supplies the driving force required to overcome the energy barriers of the different intermediate steps. Small molecules/ions do not share this advantage. Hence fusion induced by small molecules/ions is expected to differ from that induced by proteins/peptides. Although several reviews exist on membrane fusion, no recent review is devoted solely to small molecule/ion-induced membrane fusion. Here we intend to present how a variety of small molecules/ions act as independent fusogens. The detailed mechanism of some is well understood, but for many it is still an unanswered question. A clearer understanding of how a particular small molecule can control fusion will open up a vista to use these molecules instead of proteins/peptides in both in vivo and in vitro fusion processes. PMID:21660306
Planar isotropy of passive scalar turbulent mixing with a mean perpendicular gradient.
Danaila, L; Dusek, J; Le Gal, P; Anselmet, F; Brun, C; Pumir, A
1999-08-01
A recently proposed evolution equation [Vaienti et al., Physica D 85, 405 (1994)] for the probability density functions (PDF's) of turbulent passive scalar increments obtained under the assumptions of fully three-dimensional homogeneity and isotropy is submitted to validation using direct numerical simulation (DNS) results of the mixing of a passive scalar with a nonzero mean gradient by a homogeneous and isotropic turbulent velocity field. It is shown that this approach leads to a quantitatively correct balance between the different terms of the equation, in a plane perpendicular to the mean gradient, at small scales and at large Péclet number. A weaker assumption of homogeneity and isotropy restricted to the plane normal to the mean gradient is then considered to derive an equation describing the evolution of the PDF's as a function of the spatial scale and the scalar increments. A very good agreement between the theory and the DNS data is obtained at all scales. As a particular case of the theory, we derive a generalized form for the well-known Yaglom equation (the isotropic relation between the second-order moments for temperature increments and the third-order velocity-temperature mixed moments). This approach allows us to determine quantitatively how the integral scale properties influence the properties of mixing throughout the whole range of scales. In the simple configuration considered here, the PDF's of the scalar increments perpendicular to the mean gradient can be theoretically described once the sources of inhomogeneity and anisotropy at large scales are correctly taken into account.
NASA Astrophysics Data System (ADS)
Chen, Zhi; Hu, Kun; Stanley, H. Eugene; Novak, Vera; Ivanov, Plamen Ch.
2006-03-01
We investigate the relationship between the blood flow velocities (BFV) in the middle cerebral arteries and beat-to-beat blood pressure (BP) recorded from a finger in healthy and post-stroke subjects during the quasisteady state after perturbation for four different physiologic conditions: supine rest, head-up tilt, hyperventilation, and CO2 rebreathing in upright position. To evaluate whether instantaneous BP changes in the steady state are coupled with instantaneous changes in the BFV, we compare dynamical patterns in the instantaneous phases of these signals, obtained from the Hilbert transform, as a function of time. We find that in post-stroke subjects the instantaneous phase increments of BP and BFV exhibit well-pronounced patterns that remain stable in time for all four physiologic conditions, while in healthy subjects these patterns are different, less pronounced, and more variable. We propose an approach based on the cross-correlation of the instantaneous phase increments to quantify the coupling between BP and BFV signals. We find that the maximum correlation strength is different for the two groups and for the different conditions. For healthy subjects the amplitude of the cross-correlation between the instantaneous phase increments of BP and BFV is small and attenuates within 3-5 heartbeats. In contrast, for post-stroke subjects, this amplitude is significantly larger and cross-correlations persist up to 20 heartbeats. Further, we show that the instantaneous phase increments of BP and BFV are cross-correlated even within a single heartbeat cycle. We compare the results of our approach with three complementary methods: direct BP-BFV cross-correlation, transfer function analysis, and phase synchronization analysis. 
Our findings provide insight into the mechanism of cerebral vascular control in healthy subjects, suggesting that this control mechanism may involve rapid adjustments (within a heartbeat) of the cerebral vessels, so that BFV remains steady in response to changes in peripheral BP.
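The core measure above, cross-correlation of instantaneous phase increments obtained from the Hilbert transform, can be sketched as follows. This is an illustration of the technique on synthetic signals, not the authors' exact implementation; function names and the test signals are invented.

```python
import numpy as np
from scipy.signal import hilbert

def phase_increments(x):
    """Unwrapped instantaneous phase of the analytic signal (via the
    Hilbert transform), differenced to give per-sample increments."""
    return np.diff(np.unwrap(np.angle(hilbert(x))))

def phase_increment_xcorr(x, y, max_lag):
    """Normalized cross-correlation of the phase increments of two
    signals for lags -max_lag..max_lag."""
    a, b = phase_increments(x), phase_increments(y)
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    return [np.mean(a[max(0, -L):n - max(0, L)] *
                    b[max(0, L):n - max(0, -L)])
            for L in range(-max_lag, max_lag + 1)]

# Two phase-modulated test signals sharing the same slow modulation,
# one of them noisy: their phase increments should correlate strongly
# near zero lag, by analogy with the coupled BP/BFV case.
t = np.linspace(0, 20, 4000)
phase = 2*np.pi*1.0*t + 1.5*np.sin(2*np.pi*0.5*t)
rng = np.random.default_rng(0)
sig_a = np.sin(phase)
sig_b = np.sin(phase + 0.4) + 0.01*rng.normal(size=t.size)
corr = phase_increment_xcorr(sig_a, sig_b, max_lag=40)  # corr[40] is lag 0
```

The maximum of `corr` and the lag at which the correlation decays then play the roles of the coupling strength and persistence (in heartbeats) discussed in the abstract.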
Environmental monitoring at Mound: 1986 report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carfagno, D.G.; Farmer, B.M.
1987-05-11
The local environment around Mound was monitored for tritium and plutonium-238. The results are reported for 1986. Environmental media analyzed included air, water, vegetation, foodstuffs, and sediment. The average concentrations of plutonium-238 and tritium were within the DOE interim air and water Derived Concentration Guides (DCG) for these radionuclides. The average incremental concentrations of plutonium-238 and tritium oxide in air measured at all offsite locations during 1986 were 0.03% and 0.01%, respectively, of the DOE DCGs for uncontrolled areas. The average incremental concentration of plutonium-238 measured at all locations in the Great Miami River during 1986 was 0.0005% of the DOE DCG. The average incremental concentration of tritium measured at all locations in the Great Miami River during 1986 was 0.005% of the DOE DCG. The average incremental concentrations of plutonium-238 found during 1986 in surface and area drinking water were less than 0.00006% of the DOE DCG. The average incremental concentration of tritium in surface water was less than 0.005% of the DOE DCG. All tritium-in-drinking-water data are compared to the US EPA Drinking Water Standard. The average concentrations in local private and municipal drinking water systems were less than 25% and 1.5%, respectively. Although no DOE DCG is available for foodstuffs, the average concentrations are a small fraction of the water DCG (0.04%). The concentrations of sediment samples obtained at offsite surface water sampling locations were extremely low and therefore represent no adverse impact to the environment. The dose equivalent estimates for the average air, water, and foodstuff concentrations indicate that the levels are within 1% of the DOE standard of 100 mrem. None of these exceptions, however, had an adverse impact on the water quality of the Great Miami River or caused the river to exceed Ohio Stream Standards. 20 refs., 5 figs., 31 tabs.
Griffiths, Ulla K; Santos, Andreia C; Nundy, Neeti; Jacoby, Erica; Matthias, Dipika
2011-01-29
Disposable-syringe jet injectors (DSJIs) have the potential to deliver vaccines safely and affordably to millions of children around the world. We estimated the incremental costs of transitioning from needles and syringes to delivering childhood vaccines with DSJIs in Brazil, India, and South Africa. Two scenarios were assessed: (1) DSJI delivery of all vaccines at current dose and depth; (2) a change to intradermal (ID) delivery with DSJIs for hepatitis B and yellow fever vaccines, while the other vaccines are delivered by DSJIs at current dose and depth. The main advantage of ID delivery is that only a small fraction of the standard dose may be needed to obtain an immune response similar to that of subcutaneous or intramuscular injection. Cost categories included were vaccines, injection equipment, waste management, and vaccine transport. Some delivery cost items, such as training and personnel, were excluded, as were treatment cost savings resulting from a reduction in diseases transmitted by unsafe injections. In the standard dose and depth scenario, the incremental costs of introducing DSJIs per fully vaccinated child amount to US$ 0.57 in Brazil, US$ 0.65 in India and US$ 1.24 in South Africa. In the ID scenario, there are cost savings of US$ 0.11 per child in Brazil, and added costs of US$ 0.45 and US$ 0.76 per child in India and South Africa, respectively. The most important incremental cost item is jet injector disposable syringes. The incremental costs should be evaluated against other vaccine delivery technologies that can deliver the same benefits to patients, health care workers, and the community. DSJIs deserve consideration by global and national decision-makers as a means to expand access to ID delivery and to enhance safety at marginal additional cost. Copyright © 2010 Elsevier Ltd. All rights reserved.
76 FR 19174 - State Trade and Export Promotion (STEP) Pilot Grant Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-06
... SMALL BUSINESS ADMINISTRATION State Trade and Export Promotion (STEP) Pilot Grant Program AGENCY... No. OIT-STEP-2011-01, Modification 1. SUMMARY: Program announcement No. OIT-STEP-2011-01 has been... to the date of application submission for a STEP grant.] Section IV A. 1, Governor's Letter of...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Charles
University Park, Maryland (“UP”) is a small town of 2,540 residents, 919 homes, 2 churches, 1 school, 1 town hall, and 1 breakthrough community energy efficiency initiative: the Small Town Energy Program (“STEP”). STEP was developed with a mission to “create a model community energy transformation program that serves as a roadmap for other small towns across the U.S.” STEP first launched in January 2011 in UP and expanded in July 2012 to the neighboring communities of Hyattsville, Riverdale Park, and College Heights Estates, MD. STEP, which concluded in July 2013, was generously supported by a grant from the U.S. Department of Energy (DOE). The STEP model was designed for replication in other resource-constrained small towns similar to University Park - a sector largely neglected to date in federal and state energy efficiency programs. STEP provided a full suite of activities for replication, including: energy audits and retrofits for residential buildings, financial incentives, a community-based social marketing backbone and local community delivery partners. STEP also included the highly innovative use of an “Energy Coach” who worked one-on-one with clients throughout the program. Please see www.smalltownenergy.org for more information. In less than three years, STEP achieved the following results in University Park: • 30% of community households participated voluntarily in STEP; • 25% of homes received a Home Performance with ENERGY STAR assessment; • 16% of households made energy efficiency improvements to their home; • 64% of households proceeded with an upgrade after their assessment; • 9 Full Time Equivalent jobs were created or retained, and 39 contractors worked on STEP over the course of the project.
Estimated energy savings (program totals): electricity, 204,407 kWh; natural gas, 24,800 therms; oil, 2,581 gallons; total estimated source energy saved, 5,474 MMBTU; total estimated annual energy cost savings, $61,343. STEP clients who had a home energy upgrade invested on average $4,500, resulting in a 13% reduction in annual energy use and utility bill savings of $325. Rebates and incentives covered 40%-50% of retrofit cost, resulting in an average simple payback of about 7 years. STEP has created a handbook that assembles all the key elements that went into the design and delivery of STEP. The target audiences for the handbook include interested citizens, elected officials and municipal staff who want to establish and run their own efficiency program within a small community or neighborhood, using elements, materials and lessons from STEP.
Short bowel mucosal morphology, proliferation and inflammation at first and repeat STEP procedures.
Mutanen, Annika; Barrett, Meredith; Feng, Yongjia; Lohi, Jouko; Rabah, Raja; Teitelbaum, Daniel H; Pakarinen, Mikko P
2018-04-17
Although serial transverse enteroplasty (STEP) improves function of dilated short bowel, a significant proportion of patients require repeat surgery. To address underlying reasons for unsuccessful STEP, we compared small intestinal mucosal characteristics between initial and repeat STEP procedures in children with short bowel syndrome (SBS). Fifteen SBS children, who underwent 13 first and 7 repeat STEP procedures with full thickness small bowel samples at median age 1.5 years (IQR 0.7-3.7), were included. The specimens were analyzed histologically for mucosal morphology, inflammation and muscular thickness. Mucosal proliferation and apoptosis were analyzed with MIB1 and Tunel immunohistochemistry. Median small bowel length increased 42% by initial STEP and 13% by repeat STEP (p=0.05), while enteral caloric intake increased from 6% to 36% (p=0.07) during the 14 (12-42) months between the procedures. Abnormal mucosal inflammation was frequently observed both at initial (69%) and additional STEP (86%, p=0.52) surgery. Villus height, crypt depth, enterocyte proliferation and apoptosis as well as muscular thickness were comparable at first and repeat STEP (p>0.05 for all). Patients who required repeat STEP tended to be younger (p=0.057), with fewer apoptotic crypt cells (p=0.031) at first STEP. Absence of the ileocecal valve was associated with increased intraepithelial leukocyte count and reduced crypt cell proliferation index (p<0.05 for both). No adaptive mucosal hyperplasia or muscular alterations occurred between first and repeat STEP. Persistent inflammation and lacking mucosal growth may contribute to continuing bowel dysfunction in SBS children who require a repeat STEP procedure, especially after removal of the ileocecal valve. Level IV, retrospective study. Copyright © 2018 Elsevier Inc. All rights reserved.
Fuchs, H S
1983-01-01
The inauguration by NASA of the position of Payload Specialist for SHUTTLE-SPACELAB flights has broken the tradition of restrictive medical physical standards in several ways: by reducing physical requirements and extensive training; by permitting the selection of older individuals and women; and by selecting individuals who may fly only one or several missions and do not spend an entire career in space activities. Experience with Payload Specialists gained during the forthcoming SPACELAB missions, observing man in spaceflight step by step on an incremental basis, will provide valuable data for modifying the medical standards for Payload Specialists, Space Station Technicians, and Space Support Personnel who perform routine work rather than peculiar tasks. Such revisions necessarily include a modification of traditional blood pressure standards. In this paper I review the history and evolution of these standards in aeronautics and astronautics.
NASA Astrophysics Data System (ADS)
Kaldunski, Pawel; Kukielka, Leon; Patyk, Radoslaw; Kulakowska, Agnieszka; Bohdal, Lukasz; Chodor, Jaroslaw; Kukielka, Krzysztof
2018-05-01
In this paper, the numerical analysis and computer simulation of the deep drawing process are presented. The incremental model of the process, in an updated Lagrangian formulation accounting for geometrical and physical nonlinearity, has been evaluated by variational and finite element methods. Frederic Barlat's model, taking into consideration the anisotropy of materials in three principal and six tangent directions, has been used. The application developed in the Ansys/LS-Dyna program allows complex step-by-step analysis and prediction of the shape, dimensions, and stress and strain states of the drawpiece. The paper presents the influence of selected anisotropy parameters in Barlat's model on the drawpiece, including its height, sheet thickness and maximum drawing force. The important factors determining the proper formation of the drawpiece and the ways of determining them are described.
Fatigue data for polyether ether ketone (PEEK) under fully-reversed cyclic loading
Shrestha, Rakish; Simsiriwong, Jutima; Shamsaei, Nima
2016-01-01
In this article, the data obtained from the uniaxial fully-reversed fatigue experiments conducted on polyether ether ketone (PEEK), a semi-crystalline thermoplastic, are presented. The tests were performed in either strain-controlled or load-controlled mode under various levels of loading. The data are categorized into four subsets according to the type of tests, including (1) strain-controlled fatigue tests with adjusted frequency to obtain the nominal temperature rise of the specimen surface, (2) strain-controlled fatigue tests with various frequencies, (3) load-controlled fatigue tests without step loadings, and (4) load-controlled fatigue tests with step loadings. Accompanied data for each test include the fatigue life, the maximum (peak) and minimum (valley) stress–strain responses for each cycle, and the hysteresis stress–strain responses for each collected cycle in a logarithmic increment. A brief description of the experimental method is also given. PMID:26937465
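The "logarithmic increment" collection of hysteresis loops mentioned above amounts to sampling cycle indices that are roughly evenly spaced on a log scale, so that early cycles are densely recorded and later ones sparsely. The sketch below illustrates that sampling idea; the function name and the points-per-decade choice are invented, not the authors' acquisition settings.

```python
import numpy as np

def log_sampled_cycles(n_cycles, per_decade=9):
    """Return increasing integer cycle indices from 1 to n_cycles,
    spaced approximately evenly on a logarithmic scale
    (1, 2, ..., 10, 20, ..., 100, ...)."""
    idx = np.logspace(0, np.log10(n_cycles),
                      num=int(np.log10(n_cycles) * per_decade) + 1)
    return np.unique(np.round(idx).astype(int))

# e.g. which cycles of a 100,000-cycle fatigue test to record in full
cycles = log_sampled_cycles(100000)
```

This keeps the stored data small while still resolving both the initial transient stress-strain response and the long-life behavior.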
NASA Astrophysics Data System (ADS)
Fuchs, Heinz S.
NASA's inauguration of the position of Payload Specialist for SHUTTLE-SPACELAB flights has broken with the tradition of restrictive medical physical standards in several ways: by reducing physical requirements and extensive training; by permitting the selection of older individuals and women; and by selecting individuals who may fly only one or several missions rather than spending an entire career in space activities. Experience with Payload Specialists to be gained during the forthcoming SPACELAB missions, observing man in spaceflight step by step on an incremental basis, will provide valuable data for modifying the medical standards for Payload Specialists, Space Station Technicians, and Space Support Personnel who perform routine work rather than peculiar tasks. Such revisions necessarily include a modification of traditional blood pressure standards. In this paper I review the history and evolution of these standards in aeronautics and astronautics.
A New Type of Motor: Pneumatic Step Motor
Stoianovici, Dan; Patriciu, Alexandru; Petrisor, Doru; Mazilu, Dumitru; Kavoussi, Louis
2011-01-01
This paper presents a new type of pneumatic motor, a pneumatic step motor (PneuStep). Directional rotary motion of discrete displacement is achieved by sequentially pressurizing the three ports of the motor. Pulsed pressure waves are generated by a remote pneumatic distributor. The motor assembly includes a motor, gearhead, and incremental position encoder in a compact, central bore construction. A special electronic driver is used to control the new motor with electric stepper indexers and standard motion control cards. The motor accepts open-loop step operation as well as closed-loop control with position feedback from the enclosed sensor. A special control feature is implemented to adapt classic control algorithms to the new motor, and is experimentally validated. The speed performance of the motor degrades with the length of the pneumatic hoses between the distributor and motor. Experimental results are presented to reveal this behavior and set the expectation level. Nevertheless, the stepper achieves easily controllable precise motion unlike other pneumatic motors. The motor was designed to be compatible with magnetic resonance medical imaging equipment, for actuating an image-guided intervention robot, for medical applications. For this reason, the motors were entirely made of nonmagnetic and dielectric materials such as plastics, ceramics, and rubbers. Encoding was performed with fiber optics, so that the motors are electricity free, exclusively using pressure and light. PneuStep is readily applicable to other pneumatic or hydraulic precision-motion applications. PMID:21528106
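The sequential pressurization of the three motor ports can be illustrated with a small open-loop commutation sketch; the port order and the 3.3-degree step size below are illustrative assumptions, not PneuStep's actual specification:

```python
import itertools

# Hypothetical three-port commutation: pressurizing ports in the order
# 0 -> 1 -> 2 -> 0 ... advances the rotor one discrete step per
# transition; reversing the order reverses rotation.
SEQUENCE = (0, 1, 2)

def port_states(n_steps, direction=+1):
    """Return which of the three ports is pressurized at each step."""
    seq = SEQUENCE if direction > 0 else SEQUENCE[::-1]
    cyc = itertools.cycle(seq)
    return [next(cyc) for _ in range(n_steps)]

def angle_after(n_steps, step_deg=3.3, direction=+1):
    """Open-loop rotor angle after n steps (step_deg is illustrative)."""
    return direction * n_steps * step_deg
```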
Zuniga, Jorge M; Housh, Terry J; Camic, Clayton L; Bergstrom, Haley C; Schmidt, Richard J; Johnson, Glen O
2014-09-01
The purpose of this study was to examine the effect of ramp and step incremental cycle ergometer tests on the assessment of the anaerobic threshold (AT) using 3 different computerized regression-based algorithms. Thirteen healthy adults (mean [SD] age = 23.4 [3.3] years and body mass = 71.7 [11.1] kg) visited the laboratory on separate occasions. Two-way repeated measures analyses of variance with appropriate follow-up procedures were used to analyze the data. The step protocol resulted in greater mean values across algorithms than the ramp protocol for V̇O2 (step = 1.7 [0.6] L·min⁻¹ and ramp = 1.5 [0.4] L·min⁻¹) and heart rate (HR) (step = 133 [21] b·min⁻¹ and ramp = 124 [15] b·min⁻¹) at the AT. There were no significant mean differences, however, in power outputs at the AT between the step (115.2 [44.3] W) and the ramp (112.2 [31.2] W) protocols. Furthermore, there were no significant mean differences for V̇O2, HR, or power output across protocols among the 3 computerized regression-based algorithms used to estimate the AT. The current findings suggest that protocol selection, but not the choice of regression-based algorithm, can affect the assessment of V̇O2 and HR at the AT.
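A computerized regression-based threshold detector of the general kind compared above can be sketched as a two-segment least-squares fit that searches for the breakpoint minimizing total squared error; the data below are synthetic and the method is a generic sketch, not one of the study's three specific algorithms:

```python
import numpy as np

def breakpoint_fit(x, y):
    """Generic regression-based threshold detector: for each candidate
    breakpoint, fit a separate line to each side and keep the split
    with the lowest total squared error."""
    best_x, best_sse = None, float("inf")
    for i in range(3, len(x) - 3):          # at least 3 points per side
        sse = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            coef = np.polyfit(xs, ys, 1)
            sse += float(np.sum((np.polyval(coef, xs) - ys) ** 2))
        if sse < best_sse:
            best_x, best_sse = x[i], sse
    return best_x

# Synthetic VO2-vs-power data with a slope change at 120 W
power = np.arange(40.0, 201.0, 10.0)
vo2 = np.where(power < 120.0,
               1.0 + 0.01 * power,
               1.0 + 0.01 * 120.0 + 0.03 * (power - 120.0))
threshold = breakpoint_fit(power, vo2)
```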
Messina, Marco; Njuguna, James; Palas, Chrysovalantis
2018-01-01
This work focuses on the proof-mass mechanical structural design improvement of a tri-axial piezoresistive accelerometer specifically designed for head injury monitoring where medium-G impacts are common, for example in sports such as car racing or American football. The device requires the highest sensitivity achievable with a single proof-mass approach, and a very low error (<1%), as accuracy for these types of applications is paramount. The optimization method differs from previous work in that it is based on the progressive increment of the sensor proof-mass mass moment of inertia (MMI) in all three axes. Three different designs are presented in this study, where at each step of design evolution the MMI of the sensor proof-mass gradually increases in all axes. The work numerically demonstrates that an increment of MMI produces an increment of device sensitivity with a simultaneous reduction of cross-axis sensitivity in the particular axis under study. This is due to the linkage between the externally applied stress and the distribution of mass of the proof-mass, and therefore its mass moment of inertia. Progressively concentrating the mass on the axes where the piezoresistors are located (i.e., the x- and y-axes) by increasing the MMI in the x- and y-axes will increase the longitudinal stresses applied in those areas for a given external acceleration, thereby increasing the piezoresistors' fractional resistance change and ultimately improving the sensor sensitivity. The final device shows a sensitivity increase of about 80% in the z-axis and a reduction of cross-axis sensitivity of 18% with respect to state-of-the-art sensors available in the literature from a previous work of the authors. Sensor design, modelling, and optimization are presented, concluding the work with results, discussion, and conclusion. PMID:29351221
Bolasco, Piergiorgio; Caria, Stefania; Egidi, Maria Francesca; Cupisti, Adamasco
2015-01-01
The start of dialysis treatment is a critical step in the care management of chronic renal failure patients. When hemodialysis is performed three times a week, rapid loss of kidney function and of urine volume output generally occurs, and this represents an unfavorable prognostic factor. Instead, reducing the frequency of hemodialysis sessions, as well as peritoneal dialysis, can contribute to a lesser decrease of residual renal function. Unfortunately, the existing protocols for an incremental hemodialysis approach are not particularly common and are generally limited to a twice-a-week hemodialysis schedule. In addition to clinical and economic reasons, an incremental approach to ESRD also contributes to better social and psychological adaptation by the patients to the dramatic change in living conditions linked to maintenance dialysis treatment. In selected patients who are well suited to low-protein nutritional therapy, a once-weekly dialysis schedule combined with a low-protein, low-phosphorus, normal- to high-energy diet in the remaining six days of the week can be implemented. In our experience, this kind of program produced important clinical results and reductions in costs and hospitalization. When compared with a three-times-a-week dialysis schedule, greater protection of residual renal function and of urine volume output, a lower increase in beta-2-microglobulin, better control of phosphorus, and less consumption of phosphate binders and erythropoietin were observed. Careful clinical and nutritional monitoring is essential for the safety and optimization of infrequent hemodialysis. Long-term follow-up analysis shows favorable effects on overall survival. Furthermore, twice-a-week hemodialysis is not the only option for an incremental approach to commencing dialysis. In patients well suited to low-protein nutritional therapy, its combination with a program of once-weekly dialysis represents a real and effective alternative.
Brazier, Peter; Schauer, Uwe; Hamelmann, Eckard; Holmes, Steve; Pritchard, Clive; Warner, John O
2016-01-01
Chronic asthma is a significant burden for individual sufferers, adversely impacting their quality of working and social life, as well as being a major cost to the National Health Service (NHS). Temperature-controlled laminar airflow (TLA) therapy provides asthma patients at BTS/SIGN step 4/5 an add-on treatment option that is non-invasive and has been shown in clinical studies to improve quality of life for patients with poorly controlled allergic asthma. The objective of this study was to quantify the cost-effectiveness of TLA (Airsonett AB) technology as an add-on to standard asthma management drug therapy in the UK. The main performance measure of interest is the incremental cost per quality-adjusted life year (QALY) for patients using TLA in addition to usual care versus usual care alone. The incremental cost of TLA use is based on an observational clinical study monitoring the incidence of exacerbations with treatment valued using NHS cost data. The clinical effectiveness, used to derive the incremental QALY data, is based on a randomised double-blind placebo-controlled clinical trial comprising participants with an equivalent asthma condition. For a clinical cohort of asthma patients as a whole, the incremental cost-effectiveness ratio (ICER) is £8998 per QALY gained, that is, within the £20 000/QALY cost-effectiveness benchmark used by the National Institute for Health and Care Excellence (NICE). Sensitivity analysis indicates that ICER values range from £18 883/QALY for the least severe patients through to TLA being dominant, that is, cost saving as well as improving quality of life, for individuals with the most severe and poorly controlled asthma. Based on our results, Airsonett TLA is a cost-effective addition to treatment options for stage 4/5 patients. For high-risk individuals with more severe and less well controlled asthma, the use of TLA therapy to reduce incidence of hospitalisation would be a cost saving to the NHS.
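The ICER reported above is the ratio of incremental cost to incremental QALYs, with "dominant" meaning the add-on saves money while improving quality of life. A minimal sketch with hypothetical figures (not the study's cost data):

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY.
    Returns None when the new option dominates (cheaper and more
    effective), in which case no ratio is reported."""
    d_cost = cost_new - cost_old
    d_qaly = qaly_new - qaly_old
    if d_cost <= 0 and d_qaly > 0:
        return None                 # dominant: cost saving and better
    return d_cost / d_qaly

# Hypothetical figures only, not the study's cost data:
ratio = icer(cost_new=12000.0, cost_old=10200.0, qaly_new=5.2, qaly_old=5.0)
# about 9000 per QALY gained, under the 20 000/QALY benchmark
```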
Robot-based additive manufacturing for flexible die-modelling in incremental sheet forming
NASA Astrophysics Data System (ADS)
Rieger, Michael; Störkle, Denis Daniel; Thyssen, Lars; Kuhlenkötter, Bernd
2017-10-01
The paper describes the application concept of additive manufactured dies to support the robot-based incremental sheet metal forming process ('Roboforming') for the production of sheet metal components in small batch sizes. Compared to the dieless kinematic-based generation of a shape by means of two cooperating industrial robots, the supporting robot models a die on the back of the metal sheet by using the robot-based fused layer manufacturing process (FLM). This tool chain is software-defined and preserves the high geometrical form flexibility of Roboforming while flexibly generating support structures adapted to the final part's geometry. Test series serve to confirm the feasibility of the concept by investigating the process challenges of the adhesion to the sheet surface and the general stability as well as the influence on the geometric accuracy compared to the well-known forming strategies.
Recent advances in the modelling of crack growth under fatigue loading conditions
NASA Technical Reports Server (NTRS)
Dekoning, A. U.; Tenhoeve, H. J.; Henriksen, T. K.
1994-01-01
Fatigue crack growth associated with cyclic (secondary) plastic flow near a crack front is modelled using an incremental formulation. A new description of threshold behaviour under small load cycles is included. Quasi-static crack extension under high load excursions is described using an incremental formulation of the R-curve (crack growth resistance) concept. The integration of the equations is discussed. For constant amplitude load cycles the results are compared with existing crack growth laws. It is shown that the model also properly describes interaction effects between fatigue crack growth and quasi-static crack extension. To evaluate its more general applicability, the model is included in the NASGRO computer code for damage tolerance analysis. For this purpose the NASGRO program was provided with the CORPUS and STRIP-YIELD models for computation of the crack opening load levels. The implementation is discussed and recent verification results are presented.
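For constant-amplitude loading, an incremental crack growth formulation reduces to cycle-by-cycle integration of a growth law. The sketch below uses the generic textbook Paris law with an illustrative geometry factor, not the NASGRO, CORPUS, or STRIP-YIELD models:

```python
import math

def grow_crack(a0, a_crit, delta_sigma, c, m, geometry=1.12):
    """Cycle-by-cycle integration of the Paris law da/dN = c * dK**m,
    with dK = geometry * delta_sigma * sqrt(pi * a). Returns the cycle
    count at which the crack reaches the critical length."""
    a, n = a0, 0
    while a < a_crit and n < 10_000_000:    # safety cap on cycles
        dk = geometry * delta_sigma * math.sqrt(math.pi * a)
        a += c * dk ** m
        n += 1
    return n, a

# Illustrative parameters (units consistent in MPa and metres):
cycles, a_final = grow_crack(a0=1e-3, a_crit=1e-2,
                             delta_sigma=100.0, c=1e-9, m=3.0)
```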
NASA Astrophysics Data System (ADS)
Schwegler, Eric; Challacombe, Matt; Head-Gordon, Martin
1997-06-01
A new linear scaling method for computation of the Cartesian Gaussian-based Hartree-Fock exchange matrix is described, which employs a method numerically equivalent to standard direct SCF, and which does not enforce locality of the density matrix. With a previously described method for computing the Coulomb matrix [J. Chem. Phys. 106, 5526 (1997)], linear scaling incremental Fock builds are demonstrated for the first time. Microhartree accuracy and linear scaling are achieved for restricted Hartree-Fock calculations on sequences of water clusters and polyglycine α-helices with the 3-21G and 6-31G basis sets. Eightfold speedups are found relative to our previous method. For systems with a small ionization potential, such as graphitic sheets, the method naturally reverts to the expected quadratic behavior. Also, benchmark 3-21G calculations attaining microhartree accuracy are reported for the P53 tetramerization monomer involving 698 atoms and 3836 basis functions.
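The incremental Fock build exploits the linearity of the two-electron part G in the density matrix: each SCF iteration contracts only the difference density, which grows sparser as the density converges, so integral screening discards more work. A toy linear-algebra sketch, with a random matrix standing in for the real two-electron contraction:

```python
import numpy as np

def incremental_fock_builds(build_g, d_list, f0):
    """F_k = F_{k-1} + G(D_k - D_{k-1}): because G is linear in the
    density matrix D, contracting only the difference density yields
    the same Fock matrix as a full build from D_k, while screening
    benefits from the difference becoming sparser near convergence."""
    focks = []
    f, d_prev = f0.copy(), np.zeros_like(f0)
    for d in d_list:
        f = f + build_g(d - d_prev)     # contract only the change in D
        d_prev = d
        focks.append(f.copy())
    return focks

# Toy linear G (a random matrix stands in for the two-electron part)
rng = np.random.default_rng(0)
g_mat = rng.standard_normal((4, 4))
build_g = lambda d: g_mat @ d
f0 = np.zeros((4, 4))
d_list = [rng.standard_normal((4, 4)) for _ in range(3)]
focks = incremental_fock_builds(build_g, d_list, f0)
```

By linearity the telescoped sum collapses, so the final incremental result equals a full build from the last density matrix.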
Modeling particle dispersion and deposition in indoor environments
NASA Astrophysics Data System (ADS)
Gao, N. P.; Niu, J. L.
Particle dispersion and deposition in man-made enclosed environments are closely related to the well-being of occupants. The present study developed a three-dimensional drift-flux model for particle movement in turbulent indoor airflows and incorporated it into an Eulerian approach. To account for the process of particle deposition at solid boundaries, a semi-empirical deposition model was adopted in which the size-dependent deposition characteristics were well resolved. After validation against experimental data in a scaled isothermal chamber and in a full-scale non-isothermal environmental chamber, the drift-flux model was used to investigate deposition rates and human exposure to particles from two different sources with three typical ventilation systems: mixing ventilation (MV), displacement ventilation (DV), and under-floor air distribution (UFAD). For particles originating from the supply air, a V-shaped curve of deposition velocity as a function of particle size was observed, with minimum deposition at 0.1-0.5 μm. For supermicron particles, the ventilation type and air exchange rate had a negligible effect on the deposition rate. Submicron particles moved like tracer gases, while the gravitational settling effect should be taken into account for particles larger than 2.5 μm. The temporal increment of human exposure to a step-up particle release in the supply air was determined, among many factors, by the distance between the occupant and the air outlet; the larger the particle size, the lower the human exposure. For particles released from an internal heat source, concentration stratification of small particles (diameter <10 μm) in the vertical direction appeared with DV and UFAD, and the advantageous principle established for gaseous pollutants, that a relatively less-polluted occupied zone exists in DV and UFAD, was found to apply to small particles as well.
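The size dependence of gravitational settling noted above (negligible for submicron particles, significant above about 2.5 μm) follows from the standard Stokes settling velocity, which scales with the square of particle diameter. This textbook relation is a simplification, not the paper's drift-flux closure:

```python
def settling_velocity(d_p, rho_p=1000.0, mu=1.81e-5, rho_air=1.2):
    """Stokes terminal settling velocity (m/s) of a sphere of diameter
    d_p (m) and density rho_p (kg/m^3) in air; valid only at low
    particle Reynolds number (roughly d_p below ~50 um)."""
    g = 9.81
    return (rho_p - rho_air) * g * d_p ** 2 / (18.0 * mu)

# Settling speed grows with the square of diameter: negligible for
# submicron particles, noticeable above a few microns.
velocities = {d_um: settling_velocity(d_um * 1e-6)
              for d_um in (0.1, 1.0, 2.5, 10.0)}
```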
Breed, Greg A; Severns, Paul M
2015-01-01
Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly precludes their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than either survey-grade units or more traditional ruler/grid approaches. PMID:26312190
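The step-length and signal-to-noise figures above can be reproduced from raw GPS fixes with a standard great-circle (haversine) distance; the 25 cm median error used below is an assumption drawn from the 20-30 cm range reported in the abstract:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2.0) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2.0) ** 2)
    return 2.0 * r * math.asin(math.sqrt(a))

def step_lengths(track):
    """Step lengths (m) between consecutive (lat, lon) fixes."""
    return [haversine_m(*a, *b) for a, b in zip(track, track[1:])]

def snr(step_m, median_error_m=0.25):
    """Signal-to-noise ratio of a step against the assumed median error."""
    return step_m / median_error_m
```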
Sexual Assault Prevention and Response Website Analysis
2014-09-01
Nonlinear Optics and Organic Materials
1989-10-01
Defense AT&L (Volume 37, Number 2, March-April 2008)
2008-04-01
Second-degree atrioventricular block.
Zipes, D P
1979-09-01
1) While it is possible only one type of second-degree AV block exists electrophysiologically, the available data do not justify such a conclusion and it would seem more appropriate to remain a "splitter," and advocate separation and definition of multiple mechanisms, than to be a "lumper," and embrace a unitary concept. 2) The clinical classification of type I and type II AV block, based on present scalar electrocardiographic criteria, for the most part accurately differentiates clinically important categories of patients. Such a classification is descriptive, but serves a useful function and should be preserved, taking into account the caveats mentioned above. The site of block generally determines the clinical course for the patient. For most examples of AV block, the type I and type II classification in present use is based on the site of block. Because block in the His-Purkinje system is preceded by small or nonmeasurable increments, it is called type II AV block; but the very fact that it is preceded by small increments is because it occurs in the His-Purkinje system. Similar logic can be applied to type I AV block in the AV node. Exceptions do occur. If the site of AV block cannot be distinguished with certainty from the scalar ECG, an electrophysiologic study will generally reveal the answer.
Hong, Shaodong; Fang, Wenfeng; Hu, Zhihuang; Zhou, Ting; Yan, Yue; Qin, Tao; Tang, Yanna; Ma, Yuxiang; Zhao, Yuanyuan; Xue, Cong; Huang, Yan; Zhao, Hongyun; Zhang, Li
2014-01-01
The predictive power of age at diagnosis and smoking history for ALK rearrangements and EGFR mutations in non-small-cell lung cancer (NSCLC) remains incompletely understood. In this cross-sectional study, 1160 NSCLC patients were prospectively enrolled and genotyped for EML4-ALK rearrangements and EGFR mutations. Multivariate logistic regression analysis was performed to explore the association between clinicopathological features and these two genetic aberrations. Receiver operating characteristic (ROC) curve methodology was applied to evaluate predictive value. We showed that younger age at diagnosis was the only independent variable associated with EML4-ALK rearrangements (odds ratio (OR) per 5-year increment, 0.68; p < 0.001), while lower tobacco exposure (OR per 5 pack-year increment, 0.88; p < 0.001), adenocarcinoma (OR, 6.61; p < 0.001), and moderate to high differentiation (OR, 2.05; p < 0.001) were independently associated with EGFR mutations. Age at diagnosis was a very strong predictor of ALK rearrangements but poorly predicted EGFR mutations, while smoking pack-years may predict the presence of EGFR mutations and ALK rearrangements but with rather limited power. These findings should assist clinicians in assessing the likelihood of EML4-ALK rearrangements and EGFR mutations and understanding their biological implications in NSCLC. PMID:25434695
Hurtado-Parrado, Camilo; González-León, Camilo; Arias-Higuera, Mónica A; Cardona, Angelo; Medina, Lucia G; García-Muñoz, Laura; Sánchez, Christian; Cifuentes, Julián; Forigua, Juan Carlos; Ortiz, Andrea; Acevedo-Triana, Cesar A; Rico, Javier L
2017-01-01
Despite step-down inhibitory avoidance procedures that have been widely implemented in rats and mice to study learning and emotion phenomena, performance of other species in these tasks has received less attention. The case of the Mongolian gerbil is of relevance considering the discrepancies in the parameters of the step-down protocols implemented, especially the wide range of foot-shock intensities (i.e., 0.4-4.0 mA), and the lack of information on long-term performance, extinction effects, and behavioral patterning during these tasks. Experiment 1 aimed to (a) characterize gerbils' acquisition, extinction, and steady-state performance during a multisession (i.e., extended) step-down protocol adapted for implementation in a commercially-available behavioral package (Video Fear Conditioning System, MED Associates, Fairfax, VT, USA), and (b) compare gerbils' performance in this task with two shock intensities, 0.5 vs. 1.0 mA, considered in the low-to-mid range. Results indicated that the 1.0 mA protocol produced more reliable and clear evidence of avoidance learning, extinction, and reacquisition in terms of increments in freezing and on-platform time as well as suppression of platform descent. Experiment 2 aimed to (a) assess whether an alternate protocol consisting of a random delivery of foot shocks could replicate the effects of Experiment 1 and (b) characterize gerbils' exploratory behavior during the step-down task (jumping, digging, rearing, and probing). Random shocks did not reproduce the effects observed with the first protocol. The data also indicated that a change from random to response-dependent shocks affects (a) the length of each visit to the platform, but not the frequency of platform descents or freezing time, and (b) the patterns of exploratory behavior, namely, suppression of digging and rearing, as well as increments in probing and jumping.
Overall, the study demonstrated the feasibility of the extended step-down protocol for studying steady performance, extinction, and reacquisition of avoidance behavior in gerbils, which could be easily implemented in a commercially available system. The observation that 1.0 mA shocks produced a clear and consistent avoidance behavior suggests that implementation of higher intensities is unnecessary for reproducing aversive-conditioning effects in this species. The observed patterning of freezing, platform descents, and exploratory responses produced by the change from random to periodic shocks may relate to the active defensive system of the gerbil. Of special interest is the probing behavior, which could be interpreted as risk assessment and has not been reported in other rodent species exposed to step-down and similar tasks.
Supplemental fructose attenuates postprandial glycemia in Zucker fatty fa/fa rats.
Wolf, Bryan W; Humphrey, Phillip M; Hadley, Craig W; Maharry, Kati S; Garleb, Keith A; Firkins, Jeffrey L
2002-06-01
Experiments were conducted to evaluate the effects of supplemental fructose on postprandial glycemia. After overnight food deprivation, Zucker fatty fa/fa rats were given a meal glucose tolerance test. Plasma glucose response was determined for 180 min postprandially. At a dose of 0.16 g/kg body weight, fructose reduced (P < 0.05) the incremental area under the curve (AUC) by 34% when supplemented to a glucose challenge and by 32% when supplemented to a maltodextrin (a rapidly digested starch) challenge. Similarly, sucrose reduced (P = 0.0575) the incremental AUC for plasma glucose when rats were challenged with maltodextrin. Second-meal glycemic response was not affected by fructose supplementation to the first meal, and fructose supplementation to the second meal reduced (P < 0.05) postprandial glycemia when fructose had been supplemented to the first meal. In a dose-response study (0.1, 0.2, and 0.5 g/kg body weight), supplemental fructose reduced (P < 0.01) the peak rise in plasma glucose (linear and quadratic effects). In the final experiment, a low dose of fructose (0.075 g/kg body weight) reduced (P < 0.05) the incremental AUC by 18%. These data support the hypothesis that small amounts of oral fructose or sucrose may be useful in lowering the postprandial blood glucose response.
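The incremental AUC used throughout these experiments is conventionally computed by the trapezoidal rule on the glucose excursion above baseline, ignoring any dips below it. A sketch with illustrative values (not the study's data):

```python
def incremental_auc(times_min, values, baseline=None):
    """Trapezoidal incremental area under the curve: area of the
    excursion above baseline, with portions below baseline ignored."""
    if baseline is None:
        baseline = values[0]
    area = 0.0
    for i in range(1, len(times_min)):
        h0 = max(values[i - 1] - baseline, 0.0)
        h1 = max(values[i] - baseline, 0.0)
        area += 0.5 * (h0 + h1) * (times_min[i] - times_min[i - 1])
    return area

# Illustrative postprandial glucose curve (mmol/L over 180 min)
times = [0, 30, 60, 120, 180]
glucose = [5.0, 8.0, 7.0, 5.5, 5.0]
iauc = incremental_auc(times, glucose)
```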
Land, K C; Guralnik, J M; Blazer, D G
1994-05-01
A fundamental limitation of current multistate life table methodology, evident in recent estimates of active life expectancy for the elderly, is the inability to estimate tables from data on small longitudinal panels in the presence of multiple covariates (such as sex, race, and socioeconomic status). This paper presents an approach to such an estimation based on an isomorphism between the structure of the stochastic model underlying a conventional specification of the increment-decrement life table and that of Markov panel regression models for simple state spaces. We argue that Markov panel regression procedures can be used to provide smoothed or graduated group-specific estimates of transition probabilities that are more stable across short age intervals than those computed directly from sample data. We then join these estimates with increment-decrement life table methods to compute group-specific total, active, and dependent life expectancy estimates. To illustrate the methods, we describe an empirical application to the estimation of such life expectancies specific to sex, race, and education (years of school completed) for a longitudinal panel of elderly persons. We find that education extends both total life expectancy and active life expectancy. Education thus may serve as a powerful social protective mechanism delaying the onset of health problems at older ages.
Cooke, Alexandra B; Daskalopoulou, Stella S; Dasgupta, Kaberi
2018-04-01
Accelerometer placement at the wrist is convenient and increasingly adopted despite less accurate physical activity (PA) measurement than with waist placement. Capitalizing on a study that started with wrist placement and shifted to waist placement, we compared associations between PA measures derived from different accelerometer locations with a responsive arterial health indicator, carotid-femoral pulse wave velocity (cfPWV). Cross-sectional study. We previously demonstrated an inverse association between waist-worn pedometer-assessed step counts (Yamax SW-200, 7 days) and cfPWV (-0.20 m/s, 95% CI -0.28, -0.12 per 1000 step/day increment) in 366 adults. Participants concurrently wore accelerometers (ActiGraph GT3X+), most at the waist but the first 46 at the wrist. We matched this subgroup with participants from the 'waist accelerometer' group (sex, age, and pedometer-assessed steps/day) and assessed associations with cfPWV (applanation tonometry, Sphygmocor) separately in each subgroup through linear regression models. Compared to the waist group, wrist group participants had higher step counts (mean difference 3980 steps/day; 95% CI 2517, 5443), energy expenditure (967 kcal/day, 95% CI 755, 1179), and moderate-to-vigorous PA (138 min; 95% CI 114, 162). Accelerometer-assessed step counts (waist) suggested an association with cfPWV (-0.28 m/s, 95% CI -0.58, 0.01); but no relationship was apparent with wrist-assessed steps (0.02 m/s, 95% CI -0.24, 0.27). Waist but not wrist ActiGraph PA measures signal associations between PA and cfPWV. We urge researchers to consider the importance of wear location choice on relationships with health indicators. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Wyman, D.; Steinman, R. M.
1973-01-01
Recently Timberlake, Wyman, Skavenski, and Steinman (1972) concluded in a study of the oculomotor error signal in the fovea that 'the oculomotor dead zone is surely smaller than 10 min and may even be less than 5 min (smaller than the 0.25 to 0.5 deg dead zone reported by Rashbass (1961) with similar stimulus conditions).' The Timberlake et al. speculation is confirmed by demonstrating that the fixating eye consistently and accurately corrects target displacements as small as 3.4 min. The contact lens optical lever technique was used to study the manner in which the oculomotor system responds to small step displacements of the fixation target. Subjects did, without prior practice, use saccades to correct step displacements of the fixation target just as they correct small position errors during maintained fixation.
Stochastic solution to quantum dynamics
NASA Technical Reports Server (NTRS)
John, Sarah; Wilson, John W.
1994-01-01
The quantum Liouville equation in the Wigner representation is solved numerically by using Monte Carlo methods. For incremental time steps, the propagation is implemented as a classical evolution in phase space modified by a quantum correction. The correction, which is a momentum jump function, is simulated in the quasi-classical approximation via a stochastic process. The technique, which is developed and validated in two- and three-dimensional momentum space, extends an earlier one-dimensional work. Also, by developing a new algorithm, the application to bound state motion in an anharmonic quartic potential shows better agreement with exact solutions in two-dimensional phase space.
Lippmann, M.
1964-04-01
A cascade particle impactor capable of collecting particles and distributing them according to size is described. In addition, the device is capable of collecting on a pair of slides a series of different samples so that less time is required for the changing of slides. Other features of the device are its compactness and its ruggedness, making it useful under field conditions. Essentially the unit consists of a main body with a series of transverse jets discharging on a pair of parallel, spaced glass plates. The plates are capable of being moved incrementally in steps to obtain the multiple samples. (AEC)
Mena-Serrano, Alexandra; Costa, Thays Regina Ferreira da; Patzlaff, Rafael Tiago; Loguercio, Alessandro Dourado; Reis, Alessandra
2014-10-01
To compare manual and sonic adhesive application modes in terms of the permeability and microtensile bond strength of a self-etching adhesive applied in the one-step or two-step protocol. Self-etching All Bond SE (Bisco) was applied as a one- or a two-step adhesive under manual or sonic vibration modes on flat occlusal dentin surfaces of 64 human molars. Half of the teeth were used to measure the hydraulic conductance of dentin at 200 cm H₂O hydrostatic pressure for 5 min immediately after the adhesive application. In the other half, composite buildups (Opallis) were constructed incrementally to create resin-dentin sticks with a cross-sectional area of 0.8 mm² to be tested in tension (0.5 mm/min) immediately after restoration placement. Data were analyzed using a two-way ANOVA and Tukey's test (α = 0.05). The fluid conductance of dentin was significantly reduced by the sonic vibration mode for both adhesives, but no effect on the bond strength values was observed for either adhesive. The sonic application mode at an oscillating frequency of 170 Hz can reduce the fluid conductance of the one- and two-step All Bond SE adhesive when applied on dentin.
Rearfoot striking runners are more economical than midfoot strikers.
Ogueta-Alday, Ana; Rodríguez-Marroyo, José Antonio; García-López, Juan
2014-03-01
This study aimed to analyze the influence of foot strike pattern on running economy and biomechanical characteristics in subelite runners with a similar performance level. Twenty subelite long-distance runners participated and were divided into two groups according to their foot strike pattern: rearfoot (RF, n = 10) and midfoot (MF, n = 10) strikers. Anthropometric characteristics were measured (height, body mass, body mass index, skinfolds, circumferences, and lengths); physiological (VO2max, anaerobic threshold, and running economy) and biomechanical characteristics (contact and flight times, step rate, and step length) were registered during both incremental and submaximal tests on a treadmill. There were no significant intergroup differences in anthropometrics, VO2max, or anaerobic threshold measures. RF strikers were 5.4%, 9.3%, and 5.0% more economical than MF at submaximal speeds (11, 13, and 15 km·h(-1), respectively), although the difference was not significant at 15 km·h(-1) (P = 0.07). Step rate and step length were not different between groups, but RF showed longer contact time (P < 0.01) and shorter flight time (P < 0.01) than MF at all running speeds. The present study showed that habitually rearfoot striking runners are more economical than midfoot strikers. Foot strike pattern affected both contact and flight times, which may explain the differences in running economy.
NASA Technical Reports Server (NTRS)
Keynton, Robert J.
1961-01-01
Tests were conducted at Mach numbers of 3.96 and 4.65 in the Langley Unitary Plan wind tunnel to determine the static longitudinal stability characteristics of a fin-stabilized rocket-vehicle configuration which had a rearward-facing step located upstream of the fins. Two fin sizes and planforms, a delta and a clipped delta, were tested. The angle of attack was varied from 6 deg to -6 deg and the Reynolds number based on model length was about 10 x 10(6). The configuration with the larger fins (clipped delta) had a center of pressure slightly rearward of and an initial normal-force-curve slope slightly higher than that of the configuration with the smaller fins (delta), as would be expected. Calculations of the stability parameters gave a slightly lower initial slope of the normal-force curve than measured data, probably because of boundary-layer separation ahead of the step. The calculated center of pressure agreed well with the measured data. Measured and calculated increments in the initial slope of the normal-force curve and in the center of pressure, due to changing fins, were in excellent agreement, indicating that separated flow downstream of the step did not influence flow over the fins. This result was consistent with data from schlieren photographs.
Zoladz, J A; Szkutnik, Z; Majerczak, J; Duda, K
1998-09-01
The purpose of this study was to develop a method to determine the power output at which oxygen uptake (VO2) during an incremental exercise test begins to rise non-linearly. A group of 26 healthy non-smoking men [mean age 22.1 (SD 1.4) years, body mass 73.6 (SD 7.4) kg, height 179.4 (SD 7.5) cm, maximal oxygen uptake (VO2max) 3.726 (SD 0.363) l x min(-1)], experienced in laboratory tests, were the subjects in this study. They performed an incremental exercise test on a cycle ergometer at a pedalling rate of 70 rev x min(-1). The test started at a power output of 30 W, followed by increases amounting to 30 W every 3 min. At 5 min prior to the first exercise intensity, and at the end of each stage of the exercise protocol, blood samples (1 ml each) were taken from an antecubital vein. The samples were analysed for plasma lactate concentration [La-]pl, partial pressure of O2 and CO2 and hydrogen ion concentration [H+]b. The lactate threshold (LT) in this study was defined as the highest power output above which [La-]pl showed a sustained increase of more than 0.5 mmol x l(-1) x step(-1). The VO2 was measured breath-by-breath. In the analysis of the change point (CP) of VO2 during the incremental exercise test, a two-phase model was assumed for the 3rd-min-data of each step of the test: Xi = at(i) + b + epsilon(i) for i = 1,2, ..., T, and E(Xi) > at(i) + b for i = T + 1, ..., n, where X1, ..., Xn are independent and epsilon(i) ~ N(0, sigma2). In the first phase, a linear relationship between VO2 and power output was assumed, whereas in the second phase an additional increase in VO2 above the values expected from the linear model was allowed. The power output at which the first phase ended was called the change point in oxygen uptake (CP-VO2). The identification of the model consisted of two steps: testing for the existence of CP and estimating its location. Both procedures were based on suitably normalised recursive residuals.
We showed that in 25 out of 26 subjects it was possible to determine the CP-VO2 as described in our model. The power output at CP-VO2 amounted to 136.8 (SD 31.3) W. It was only 11 W higher, a non-significant difference, than the power output corresponding to the LT. The VO2 at CP-VO2 amounted to 1.828 (SD 0.356) l x min(-1) [48.9 (SD 7.9)% of VO2max]. The [La-]pl at CP-VO2, amounting to 2.57 (SD 0.69) mmol x l(-1), was significantly elevated (P < 0.01) above the resting level [1.85 (SD 0.46) mmol x l(-1)]; however, the [H+]b at CP-VO2, amounting to 45.1 (SD 3.0) nmol x l(-1), was not significantly different from the value at rest, which amounted to 44.14 (SD 2.79) nmol x l(-1). An increase of power output of 30 W above CP-VO2 was accompanied by a significant increase in [H+]b above the resting level (P = 0.03).
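The two-phase model above (VO2 linear in power output up to step T, with systematic positive departures afterwards) can be illustrated with a toy change-point detector. This is a simplified stand-in, not the authors' recursive-residual procedure, and all numbers below are hypothetical:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def change_point(power, vo2, min_pts=4, tol=0.05):
    """Index of the last point of the linear phase, or None if no break is found."""
    for k in range(min_pts, len(power)):
        a, b = fit_line(power[:k], vo2[:k])
        residuals = [v - (a * p + b) for p, v in zip(power[k:], vo2[k:])]
        if all(r > tol for r in residuals):  # sustained rise above the line
            return k - 1
    return None

power = [30, 60, 90, 120, 150, 180, 210]   # W, 30-W steps
vo2 = [0.6, 0.9, 1.2, 1.5, 1.9, 2.4, 3.0]  # L/min, hypothetical
print(change_point(power, vo2))  # 3 -> break after the 120-W step
```

A purely linear VO2-power series returns None, mirroring the paper's first step of testing whether a change point exists at all before estimating its location.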
Facilities Planning for Small Colleges.
ERIC Educational Resources Information Center
O'Neill, Joseph P.; And Others
This second publication in a three-part series called "Alternative Futures" is essentially a workbook that, followed step by step, allows a college to see how its use of space has changed over time. Especially designed for small colleges, the kit makes use of the information that is routinely collected, such as annual financial statements and…
Financing Your Small Business: A Workbook for Financing Small Business.
ERIC Educational Resources Information Center
Compton, Clark W.
Designed to assist established businesspeople with the development of a loan proposal, this workbook offers information on sources of financing and step-by-step guidance on applying for a loan. After chapter I discusses borrowers' and lenders' attitudes towards money, chapter II offers suggestions for determining financial needs. Chapter III lists…
Huang, Wenyong; Ye, Ronghua; Huang, Shengsong; Wang, Decai; Wang, Lanhua; Liu, Bin; Friedman, David S; He, Mingguang; Liu, Yizhi; Congdon, Nathan G
2013-01-01
The perceived difficulty of steps of manual small incision cataract surgery among trainees in rural China was assessed. Cohort study. Fifty-two trainees at the end of a manual small incision cataract surgery training programme. Participants rated the difficulty of 14 surgical steps using a 5-point scale, 1 (very easy) to 5 (very difficult). Demographic and professional information was recorded for trainees. Mean ratings for surgical steps. Questionnaires were completed by 49 trainees (94.2%, median age 38 years, 8 [16.3%] women). Twenty-six (53.1%) had performed ≤50 independent cataract surgeries prior to training. Trainees rated cortical aspiration (mean score ± standard deviation = 3.10 ± 1.14) the most difficult step, followed by wound construction (2.76 ± 1.08), nuclear prolapse into the anterior chamber (2.74 ± 1.23) and lens delivery (2.51 ± 1.08). Draping the surgical field (1.06 ± 0.242), anaesthetic block administration (1.14 ± 0.354) and thermal coagulation (1.18 ± 0.441) were rated easiest. In regression models, the score for cortical aspiration was significantly inversely associated with performing >50 independent manual small incision cataract surgeries during training (P = 0.01), but not with age, gender, years of experience in an eye department or total number of cataract surgeries performed prior to training. Cortical aspiration, wound construction and nuclear prolapse pose the greatest challenge for trainees learning manual small incision cataract surgery, and should receive emphasis during training. Number of cases performed is the strongest predictor of perceived difficulty of key steps. © 2013 The Authors. Clinical and Experimental Ophthalmology © 2013 Royal Australian and New Zealand College of Ophthalmologists.
Habchi, Johnny; Chia, Sean; Limbocker, Ryan; Mannini, Benedetta; Ahn, Minkoo; Perni, Michele; Hansson, Oskar; Arosio, Paolo; Kumita, Janet R.; Challa, Pavan Kumar; Cohen, Samuel I. A.; Dobson, Christopher M.; Knowles, Tuomas P. J.; Vendruscolo, Michele
2017-01-01
The aggregation of the 42-residue form of the amyloid-β peptide (Aβ42) is a pivotal event in Alzheimer’s disease (AD). The use of chemical kinetics has recently enabled highly accurate quantifications of the effects of small molecules on specific microscopic steps in Aβ42 aggregation. Here, we exploit this approach to develop a rational drug discovery strategy against Aβ42 aggregation that uses as a read-out the changes in the nucleation and elongation rate constants caused by candidate small molecules. We thus identify a pool of compounds that target specific microscopic steps in Aβ42 aggregation. We then test further these small molecules in human cerebrospinal fluid and in a Caenorhabditis elegans model of AD. Our results show that this strategy represents a powerful approach to identify systematically small molecule lead compounds, thus offering an appealing opportunity to reduce the attrition problem in drug discovery. PMID:28011763
Fast Query-Optimized Kernel-Machine Classification
NASA Technical Reports Server (NTRS)
Mazzoni, Dominic; DeCoste, Dennis
2004-01-01
A recently developed algorithm performs kernel-machine classification via incremental approximate nearest support vectors. The algorithm implements support-vector machines (SVMs) at speeds 10 to 100 times those attainable by use of conventional SVM algorithms. The algorithm offers potential benefits for classification of images, recognition of speech, recognition of handwriting, and diverse other applications in which there are requirements to discern patterns in large sets of data. SVMs constitute a subset of kernel machines (KMs), which have become popular as models for machine learning and, more specifically, for automated classification of input data on the basis of labeled training data. While similar in many ways to k-nearest-neighbors (k-NN) models and artificial neural networks (ANNs), SVMs tend to be more accurate. Using representations that scale only linearly in the numbers of training examples, while exploring nonlinear (kernelized) feature spaces that are exponentially larger than the original input dimensionality, KMs elegantly and practically overcome the classic curse of dimensionality. However, the price that one must pay for the power of KMs is that query-time complexity scales linearly with the number of training examples, making KMs often orders of magnitude more computationally expensive than are ANNs, decision trees, and other popular machine learning alternatives. The present algorithm treats an SVM classifier as a special form of a k-NN. The algorithm is based partly on an empirical observation that one can often achieve the same classification as that of an exact KM by using only a small fraction of the nearest support vectors (SVs) of a query. The exact KM output is a weighted sum over the kernel values between the query and the SVs. In this algorithm, the KM output is approximated with a k-NN classifier, the output of which is a weighted sum only over the kernel values involving k selected SVs.
Before query time, there are gathered statistics about how misleading the output of the k-NN model can be, relative to the outputs of the exact KM for a representative set of examples, for each possible k from 1 to the total number of SVs. From these statistics, there are derived upper and lower thresholds for each step k. These thresholds identify output levels for which the particular variant of the k-NN model already leans so strongly positively or negatively that a reversal in sign is unlikely, given the weaker SV neighbors still remaining. At query time, the partial output of each query is incrementally updated, stopping as soon as it exceeds the predetermined statistical thresholds of the current step. For an easy query, stopping can occur as early as step k = 1. For more difficult queries, stopping might not occur until nearly all SVs are touched. A key empirical observation is that this approach can tolerate very approximate nearest-neighbor orderings. In experiments, SVs and queries were projected to a subspace comprising the top few principal- component dimensions and neighbor orderings were computed in that subspace. This approach ensured that the overhead of the nearest-neighbor computations was insignificant, relative to that of the exact KM computation.
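The early-stopping scheme described above can be sketched as follows. The RBF kernel, toy support vectors, and constant thresholds are illustrative assumptions; the actual algorithm derives per-step thresholds from training-set statistics and uses an approximate neighbor ordering in a principal-component subspace:

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def classify(query, svs, alphas, bias, upper, lower):
    """Return (predicted sign, number of SVs touched)."""
    # rank SVs nearest-first; the real algorithm uses an approximate
    # ordering computed in a low-dimensional projection instead
    order = sorted(range(len(svs)),
                   key=lambda i: sum((q - s) ** 2 for q, s in zip(query, svs[i])))
    partial = bias
    for step, i in enumerate(order, start=1):
        partial += alphas[i] * rbf(query, svs[i])
        if partial > upper[step - 1]:   # already leaning strongly positive
            return +1, step
        if partial < lower[step - 1]:   # already leaning strongly negative
            return -1, step
    return (+1 if partial >= 0 else -1), len(order)

svs = [(0.0, 0.0), (2.0, 2.0), (5.0, 5.0)]
alphas = [1.0, -1.0, 1.0]
n = len(svs)
label, steps = classify((0.1, 0.0), svs, alphas, 0.0, [0.5] * n, [-0.5] * n)
print(label, steps)  # 1 1 -- an easy query stops at step k = 1
```

A query near the decision boundary would fail both threshold tests at every step and fall through to the exact weighted sum over all SVs, matching the "difficult query" case in the text.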
Corso, Phaedra S.; Ingels, Justin B.; Kogan, Steven M.; Foster, E. Michael; Chen, Yi-Fu; Brody, Gene H.
2013-01-01
Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95% confidence interval) incremental difference was $2149 ($397, $3901). With the probabilistic sensitivity analysis approach, the incremental difference was $2583 ($778, $4346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention. PMID:23299559
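The probabilistic sensitivity analysis described above can be sketched by assigning each cost input a distribution and resampling the total many times. The distributions and dollar figures below are illustrative placeholders, not the study's collected cost data:

```python
import random

random.seed(0)  # reproducible draws

def psa_total_cost(n_draws=10000):
    """Simulate total programmatic cost; return (mean, 2.5th pct, 97.5th pct)."""
    totals = []
    for _ in range(n_draws):
        staff = random.gauss(1500, 200)       # staff time, $ (hypothetical)
        materials = random.gauss(400, 50)     # materials, $ (hypothetical)
        travel = random.expovariate(1 / 250)  # skewed travel cost, $ (hypothetical)
        totals.append(staff + materials + travel)
    totals.sort()
    mean = sum(totals) / len(totals)
    return mean, totals[int(0.025 * n_draws)], totals[int(0.975 * n_draws)]

mean, lo, hi = psa_total_cost()
print(round(mean), round(lo), round(hi))
```

Because the interval is read directly from the simulated totals, skewed inputs (like the exponential travel cost here) widen the upper tail, which is one way this approach can capture more variability than imputation-based estimates.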
Single-crystal 40Ar/39Ar incremental heating reveals bimodal sanidine ages in the Bishop Tuff
NASA Astrophysics Data System (ADS)
Andersen, N. L.; Jicha, B. R.; Singer, B. S.
2015-12-01
The 650 km3 Bishop Tuff (BT) is among the most studied volcanic deposits because it is an extensive marker bed deposited just after the Matuyama-Brunhes boundary. Reconstructions of the vast BT magma reservoir from which high-silica rhyolite erupted have long influenced thinking about how large silicic magma systems are assembled, crystallized, and mixed. Yet, the longevity of the high silica rhyolitic melt and exact timing of the eruption remain controversial due to recent conflicting 40Ar/39Ar sanidine vs. SIMS and ID-TIMS U-Pb zircon dates. We have obtained 21 40Ar/39Ar incremental heating ages from 2 mm BT sanidine crystals from pumice in 3 widely separated outcrops of early-erupted fall and flow units. Plateau ages yield a bimodal distribution: a younger group has a mean of 766 ka and an older group gives a range between 772 and 782 ka. The younger population is concordant with the youngest ID-TIMS and SIMS U-Pb zircon ages recently published, as well as the astronomical age of BT in marine sediment. Of 21 crystals, 17 yield older, non-plateau, steps likely affected by excess Ar that would bias traditional 40Ar/39Ar total crystal fusion ages. The small spread in older sanidine ages, together with 25+ kyr of pre-eruptive zircon growth, suggest that the older sanidines are not partially outgassed xenocrysts. A bimodal 40Ar/39Ar age distribution implies that some fraction of rhyolitic melt cooled below the Ar closure temperature at least 10 kyr prior to eruption. We propose that rapid "thawing" of a crystalline mush layer released older crystals into rhyolitic melt from which sanidine also nucleated and grew immediately prior to the eruption. High precision 40Ar/39Ar dating can thus provide essential information on thermo-physical processes at the millennial time scale that are critical to interpreting U-Pb zircon age distributions that are complicated by large uncertainties associated with zircon-melt U-Th systematics.
Integrating practical, regulatory and ethical strategies for enhancing farm animal welfare.
Mellor, D J; Stafford, K J
2001-11-01
To provide an integrated view of relationships between assessment of animal welfare, societal expectations regarding animal welfare standards, the need for regulation, and two ethical strategies for promoting animal welfare, emphasising farm animals. Ideas in relevant papers and key insights were outlined and illustrated, where appropriate, by New Zealand experience with different facets of the welfare management of farm animals. An animal's welfare is good when its nutritional, environmental, health, behavioural and mental needs are met. Compromise may occur in one or more of these areas and is assessed by scientifically-informed best judgement using parameters validated by directed research and objective analysis in clinical and practical settings. There is a wide range of perceptions of what constitutes good and bad welfare in society, so that animal welfare standards cannot be left to individual preferences to determine. Rather, the promotion of animal welfare is seen as requiring central regulation, but managed in a way that allows for adjustments based on new scientific knowledge of animals' needs and changing societal perceptions of what is acceptable and unacceptable treatment of animals. Concepts of 'minimal welfare', representing the threshold of cruelty, and 'acceptable welfare', representing higher, more acceptable standards than those that merely avoid cruelty, are outlined. They are relevant to economic analyses, which deal with determinants of animal welfare standards based on financial costs and the desire of the public to feel broadly comfortable about the treatment of the animals that are used to serve their needs. Ethical strategies for promoting animal welfare can be divided broadly into the 'gold standard' approach and the 'incremental improvement' approach.
The first defines the ideal that is to be required in a particular situation and will accept nothing less than that ideal, whereas the second aims to improve welfare in a step-wise fashion by setting a series of achievable goals, seeing each small advance as worthwhile progress towards the same ideal. 'Incremental improvement' is preferred. This also has application in veterinary practice where the professional commitment to maintain good welfare standards may at times conflict with financial constraints experienced by clients.
A systematic review and metaanalysis of energy intake and weight gain in pregnancy.
Jebeile, Hiba; Mijatovic, Jovana; Louie, Jimmy Chun Yu; Prvan, Tania; Brand-Miller, Jennie C
2016-04-01
Gestational weight gain within the recommended range produces optimal pregnancy outcomes, yet many women exceed the guidelines. Official recommendations to increase energy intake by ∼ 1000 kJ/day in pregnancy may be excessive. To determine by metaanalysis of relevant studies whether greater increments in energy intake from early to late pregnancy corresponded to greater or excessive gestational weight gain. We systematically searched electronic databases for observational and intervention studies published from 1990 to the present. The databases included Ovid Medline, Cochrane Library, Excerpta Medica DataBASE (EMBASE), Cumulative Index to Nursing and Allied Health Literature (CINAHL), and Science Direct. In addition we hand-searched reference lists of all identified articles. Studies were included if they reported gestational weight gain and energy intake in early and late gestation in women of any age with a singleton pregnancy. Search also encompassed journals emerging from both developed and developing countries. Studies were individually assessed for quality based on the Quality Criteria Checklist obtained from the Evidence Analysis Manual: Steps in the academy evidence analysis process. Publication bias was plotted by the use of a funnel plot with standard mean difference against standard error. Identified studies were meta-analyzed and stratified by body mass index, study design, dietary methodology, and country status (developed/developing) by the use of a random-effects model. Of 2487 articles screened, 18 studies met inclusion criteria. On average, women gained 12.0 (2.8) kg (standardized mean difference = 1.306, P < .0005) yet reported only a small increment in energy intake that did not reach statistical significance (∼475 kJ/day, standard mean difference = 0.266, P = .016). 
Irrespective of baseline body mass index, study design, dietary methodology, or country status, changes in energy intake were not significantly correlated with the amount of gestational weight gain (r = 0.321, P = .11). Despite rapid physiologic weight gain, women report little or no change in energy intake during pregnancy. Current recommendations to increase energy intake by ∼ 1000 kJ/day may, therefore, encourage excessive weight gain and adverse pregnancy outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.
An assessment of adult risks of paresthesia due to mercury from coal combustion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipfert, F.; Moskowitz, P.; Fthenakis, V.
1993-11-01
This paper presents results from a probabilistic assessment of the mercury health risks associated with a hypothetical 1000 MW coal-fired power plant. The assessment draws on the extant knowledge in each of the important steps in the chain from emissions to health effects, based on methylmercury derived from seafood. For this assessment, we define three separate sources of dietary Hg: canned tuna (affected by global Hg), marine shellfish and finfish (affected by global Hg), and freshwater gamefish (affected by both global Hg and local deposition from nearby sources). We consider emissions of both reactive and elemental mercury from the hypothetical plant (assumed to burn coal with the US average Hg content) and estimate wet and dry deposition rates; atmospheric reactions are not considered. Mercury that is not deposited within 50 km is assumed to enter the global background pool. The incremental Hg in local fish is assumed to be proportional to the incremental total Hg deposition. Three alternative dose-response models were derived from published data on specific neurological responses, in this case, adult paresthesia (skin prickling or tingling of the extremities). Preliminary estimates show the upper 95th percentile of the baseline risk attributed to seafood consumption to be around 10(-4) (1 chance in 10,000). Based on a doubling of Hg deposition in the immediate vicinity of the hypothetical plant, the incremental local risk from seafood would be about a factor of 4 higher. These risks should be compared to the estimated background prevalence rate of paresthesia, which is about 7%.
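The probabilistic chain described above (per-source dietary Hg intake, a dose-response model, percentile read-out, and a local-deposition scenario) can be sketched in miniature. Every distribution, unit, and the linear dose-response slope below are hypothetical placeholders, not the study's fitted values:

```python
import random

def simulate_risk(n=20000, deposition_factor=1.0):
    """95th-percentile paresthesia risk under a hypothetical exposure chain."""
    rng = random.Random(42)  # fixed seed: identical draws across scenarios
    risks = []
    for _ in range(n):
        tuna = rng.lognormvariate(0.0, 0.5)     # ug Hg/day, global pool (hypothetical)
        marine = rng.lognormvariate(-0.5, 0.5)  # ug Hg/day, global pool (hypothetical)
        local = deposition_factor * rng.lognormvariate(-1.0, 0.8)  # freshwater gamefish
        dose = tuna + marine + local
        risks.append(1e-5 * dose)  # hypothetical linear dose-response slope
    risks.sort()
    return risks[int(0.95 * n)]

baseline = simulate_risk()
doubled = simulate_risk(deposition_factor=2.0)
print(baseline < doubled)  # True: scaling local deposition raises the 95th percentile
```

Reusing the same seed across scenarios means the two runs differ only in the deposition factor, so the incremental risk is isolated from Monte Carlo noise.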
An Approach to Verification and Validation of a Reliable Multicasting Protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1994-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.
NASA Astrophysics Data System (ADS)
Most, S.; Jia, N.; Bijeljic, B.; Nowak, W.
2016-12-01
Pre-asymptotic characteristics are almost ubiquitous when analyzing solute transport processes in porous media. These pre-asymptotic aspects are caused by spatial coherence in the velocity field and by its heterogeneity. From the Lagrangian perspective of particle displacements, the causes of pre-asymptotic, non-Fickian transport are a skewed velocity distribution, statistical dependence between subsequent increments of particle positions (memory), and cross-dependence among the x, y and z components of particle increments. Valid simulation frameworks should account for these factors. We propose a particle tracking random walk (PTRW) simulation technique that can use empirical pore-space velocity distributions as input, enforces memory between subsequent random walk steps, and considers cross-dependence. Thus, it is able to simulate pre-asymptotic non-Fickian transport phenomena. Our PTRW framework contains an advection/dispersion term plus a diffusion term. The advection/dispersion term produces time series of particle increments from the velocity CDFs. These time series are equipped with memory by enforcing that the CDF values of subsequent velocities change only slightly. The latter is achieved through a random walk on the axis of CDF values between 0 and 1. The virtual diffusion coefficient for that random walk is our only fitting parameter. Cross-dependence can be enforced by constraining the random walk to certain combinations of CDF values among the three velocity components in x, y and z. We show that this modelling framework is capable of simulating non-Fickian transport by comparison with a pore-scale transport simulation, and we analyze the approach to asymptotic behavior.
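The memory mechanism described in this abstract can be sketched in one dimension. This is a minimal illustration, not the authors' code: the inverse of an empirical velocity CDF is assumed given, memory is enforced by a reflected random walk on the CDF axis, and `d_cdf` plays the role of the single fitting parameter (the virtual diffusion coefficient).

```python
import numpy as np

def ptrw_with_memory(inv_cdf, n_steps, dt, d_cdf=0.01, d_diff=0.0, rng=None):
    """Sketch of a 1-D particle-tracking random walk (PTRW) with memory.

    inv_cdf -- inverse of an empirical velocity CDF: maps u in (0, 1)
               to a velocity sample.
    d_cdf   -- virtual diffusion coefficient of the random walk on the
               CDF axis (the single fitting parameter).
    d_diff  -- ordinary diffusion coefficient for the diffusive term.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = 0.0
    u = rng.uniform()                       # current position on the CDF axis
    path = np.empty(n_steps)
    for i in range(n_steps):
        # memory: the CDF value changes only slightly between steps;
        # reflect the walk back into [0, 1]
        u += np.sqrt(2.0 * d_cdf * dt) * rng.standard_normal()
        u = abs(u) % 2.0
        if u > 1.0:
            u = 2.0 - u
        # advection/dispersion term (velocity drawn via the CDF)
        # plus a diffusion term
        v = inv_cdf(u)
        x += v * dt + np.sqrt(2.0 * d_diff * dt) * rng.standard_normal()
        path[i] = x
    return path
```

Small `d_cdf` yields strongly correlated subsequent velocities (long memory); large `d_cdf` decorrelates them and recovers Fickian-like behavior.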
Lakhtakia, Sundeep; Basha, Jahangeer; Talukdar, Rupjyoti; Gupta, Rajesh; Nabi, Zaheer; Ramchandani, Mohan; Kumar, B V N; Pal, Partha; Kalpala, Rakesh; Reddy, P Manohar; Pradeep, R; Singh, Jagadish R; Rao, G V; Reddy, D Nageshwar
2017-06-01
EUS-guided drainage using plastic stents may be inadequate for treatment of walled-off necrosis (WON). Recent studies report variable outcomes even when using covered metal stents. The aim of this study was to evaluate the efficacy of a dedicated covered biflanged metal stent (BFMS) when adopting an endoscopic "step-up approach" for drainage of symptomatic WON. We retrospectively evaluated consecutive patients with symptomatic WON who underwent EUS-guided drainage using BFMSs over a 3-year period. Reassessment was done between 48 and 72 hours for resolution. Endoscopic reinterventions were tailored in nonresponders in a stepwise manner. Step 1 encompassed declogging the blocked lumen of the BFMS. In step 2, a nasocystic tube was placed via BFMSs with intermittent irrigation. Step 3 involved direct endoscopic necrosectomy (DEN). BFMSs were removed between 4 and 8 weeks of follow-up. The main outcome measures were technical success, clinical success, adverse events, and need for DEN. Two hundred five WON patients underwent EUS-guided drainage using BFMSs. Technical success was achieved in 203 patients (99%). Periprocedure adverse events occurred in 8 patients (bleeding in 6, perforation in 2). Clinical success with BFMSs alone was seen in 153 patients (74.6%). Reintervention adopting the step-up approach was required in 49 patients (23.9%). Incremental success was achieved in 10 patients with step 1, 16 patients with step 2, and 19 patients with step 3. Overall clinical success was achieved in 198 patients (96.5%), with DEN required in 9.2%. Four patients failed treatment and required surgery (2) or percutaneous drainage (2). The endoscopic step-up approach using BFMSs was safe, effective, and yielded successful outcomes in most patients, reducing the need for DEN. Copyright © 2017 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
Bozic, Kevin J; Pui, Christine M; Ludeman, Matthew J; Vail, Thomas P; Silverstein, Marc D
2010-09-01
Metal-on-metal hip resurfacing arthroplasty (MoM HRA) may offer potential advantages over total hip arthroplasty (THA) for certain patients with advanced osteoarthritis of the hip. However, the cost effectiveness of MoM HRA compared with THA is unclear. The purpose of this study was to compare the clinical effectiveness and cost-effectiveness of MoM HRA to THA. A Markov decision model was constructed to compare the quality-adjusted life-years (QALYs) and costs associated with HRA versus THA from the healthcare system perspective over a 30-year time horizon. We performed sensitivity analyses to evaluate the impact of patient characteristics, clinical outcome probabilities, quality of life and costs on the discounted incremental costs, incremental clinical effectiveness, and the incremental cost-effectiveness ratio (ICER) of HRA compared to THA. MoM HRA was associated with modest improvements in QALYs at a small incremental cost, and had an ICER less than $50,000 per QALY gained for men younger than 65 and for women younger than 55. MoM HRA and THA failure rates, device costs, and the difference in quality of life after conversion from HRA to THA compared to primary THA had the largest impact on costs and quality of life. MoM HRA could be clinically advantageous and cost-effective in younger men and women. Further research on the comparative effectiveness of MoM HRA versus THA should include assessments of the quality of life and resource use in addition to the clinical outcomes associated with both procedures. Level I, economic and decision analysis. See Guidelines for Authors for a complete description of levels of evidence.
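The cost-effectiveness criterion used in this study reduces to a simple ratio; the numbers in the usage note below are illustrative, not taken from the study.

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained
    by one strategy (e.g. MoM HRA) relative to another (e.g. THA)."""
    return delta_cost / delta_qaly
```

For example, an illustrative incremental cost of $4,000 for 0.1 incremental QALYs gives `icer(4000.0, 0.1)`, about $40,000 per QALY, which falls under the $50,000-per-QALY willingness-to-pay threshold cited above.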
Computer model of two-dimensional solute transport and dispersion in ground water
Konikow, Leonard F.; Bredehoeft, J.D.
1978-01-01
This report presents a model that simulates solute transport in flowing ground water. The model is both general and flexible in that it can be applied to a wide range of problem types. It is applicable to one- or two-dimensional problems involving steady-state or transient flow. The model computes changes in concentration over time caused by the processes of convective transport, hydrodynamic dispersion, and mixing (or dilution) from fluid sources. The model assumes that the solute is non-reactive and that gradients of fluid density, viscosity, and temperature do not affect the velocity distribution. However, the aquifer may be heterogeneous and (or) anisotropic. The model couples the ground-water flow equation with the solute-transport equation. The digital computer program uses an alternating-direction implicit procedure to solve a finite-difference approximation to the ground-water flow equation, and it uses the method of characteristics to solve the solute-transport equation. The latter uses a particle-tracking procedure to represent convective transport and a two-step explicit procedure to solve a finite-difference equation that describes the effects of hydrodynamic dispersion, fluid sources and sinks, and divergence of velocity. This explicit procedure has several stability criteria, but the consequent time-step limitations are automatically determined by the program. The report includes a listing of the computer program, which is written in FORTRAN IV and contains about 2,000 lines. The model is based on a rectangular, block-centered, finite difference grid. It allows the specification of any number of injection or withdrawal wells and of spatially varying diffuse recharge or discharge, saturated thickness, transmissivity, boundary conditions, and initial heads and concentrations.
The program also permits the designation of up to five nodes as observation points, for which a summary table of head and concentration versus time is printed at the end of the calculations. The data input formats for the model require three data cards and from seven to nine data sets to describe the aquifer properties, boundaries, and stresses. The accuracy of the model was evaluated for two idealized problems for which analytical solutions could be obtained. In the case of one-dimensional flow the agreement was nearly exact, but in the case of plane radial flow a small amount of numerical dispersion occurred. An analysis of several test problems indicates that the error in the mass balance will be generally less than 10 percent. The test problems demonstrated that the accuracy and precision of the numerical solution is sensitive to the initial number of particles placed in each cell and to the size of the time increment, as determined by the stability criteria. Mass balance errors are commonly the greatest during the first several time increments, but tend to decrease and stabilize with time.
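The stability-limited explicit time stepping described in this report can be illustrated with a much simpler one-dimensional explicit scheme (upwind convection plus central-difference dispersion). This is an illustrative analogue under assumed constant velocity and dispersion, not the report's FORTRAN method-of-characteristics program.

```python
import numpy as np

def advect_disperse_1d(c, v, D, dx, t_end):
    """Explicit 1-D advection-dispersion sketch with automatically
    limited time steps, mimicking the stability-controlled explicit
    procedure.  Assumes v > 0 and a plume kept away from the
    (untouched) boundary cells."""
    # combined stability criterion for upwind convection plus
    # central-difference dispersion: Cr + 2*Di <= 1 (with a margin)
    dt = 0.9 / (v / dx + 2.0 * D / dx**2)
    t = 0.0
    while t < t_end:
        step = min(dt, t_end - t)
        cn = c.copy()
        # convective transport, upwind in the flow direction
        c[1:] -= v * step / dx * (cn[1:] - cn[:-1])
        # hydrodynamic dispersion, explicit central differences
        c[1:-1] += D * step / dx**2 * (cn[2:] - 2.0 * cn[1:-1] + cn[:-2])
        t += step
    return c
```

As in the report, mass balance is a useful check: while the plume stays clear of the boundaries, both update terms telescope and total mass is conserved to machine precision.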
Method and apparatus for continuous electrophoresis
Watson, Jack S.
1992-01-01
A method and apparatus for conducting continuous separation of substances by electrophoresis are disclosed. The process involves electrophoretic separation combined with couette flow in a thin volume defined by opposing surfaces. By alternating the polarity of the applied potential and producing reciprocating short rotations of at least one of the surfaces relative to the other, small increments of separation accumulate to cause substantial, useful segregation of electrophoretically separable components in a continuous flow system.
Motorized control for mirror mount apparatus
Cutburth, Ronald W.
1989-01-01
A motorized control and automatic braking system for adjusting mirror mount apparatus is disclosed. The motor control includes a planetary gear arrangement to provide improved pitch adjustment capability while permitting a small packaged design. The motor control for mirror mount adjustment is suitable for laser beam propagation applications. The brake is a system of constant contact, floating detents which engage the planetary gear at selected between-teeth increments to stop rotation instantaneously when the drive motor stops.
Theoretical models of the influence of genomic architecture on the dynamics of speciation.
Flaxman, Samuel M; Wacholder, Aaron C; Feder, Jeffrey L; Nosil, Patrik
2014-08-01
A long-standing problem in evolutionary biology has been determining whether and how gradual, incremental changes at the gene level can account for rapid speciation and bursts of adaptive radiation. Using genome-scale computer simulations, we extend previous theory showing how gradual adaptive change can generate nonlinear population transitions, resulting in the rapid formation of new, reproductively isolated species. We show that these transitions occur via a mechanism rooted in a basic property of biological heredity: the organization of genes in genomes. Genomic organization of genes facilitates two processes: (i) the build-up of statistical associations among large numbers of genes and (ii) the action of divergent selection on persistent combinations of alleles. When a population has accumulated a critical amount of standing, divergently selected variation, the combination of these two processes allows many mutations of small effect to act synergistically and precipitously split one population into two discontinuous, reproductively isolated groups. Periods of allopatry, chromosomal linkage among loci, and large-effect alleles can facilitate this process under some conditions, but are not required for it. Our results complement and extend existing theory on alternative stable states during population divergence, distinct phases of speciation and the rapid emergence of multilocus barriers to gene flow. The results are thus a step towards aligning population genomic theory with modern empirical studies. © 2014 John Wiley & Sons Ltd.
Stochastic Ocean Eddy Perturbations in a Coupled General Circulation Model.
NASA Astrophysics Data System (ADS)
Howe, N.; Williams, P. D.; Gregory, J. M.; Smith, R. S.
2014-12-01
High-resolution ocean models, which are eddy-permitting or eddy-resolving, require large computing resources to produce centuries' worth of data. Also, some previous studies have suggested that increasing resolution does not necessarily solve the problem of unresolved scales, because it simply introduces a new set of unresolved scales. Applying stochastic parameterisations to ocean models is one solution that is expected to improve the representation of small-scale (eddy) effects without increasing run-time. Stochastic parameterisation has been shown to have an impact in atmosphere-only models and idealised ocean models, but has not previously been studied in ocean general circulation models. Here we apply simple stochastic perturbations to the ocean temperature and salinity tendencies in the low-resolution coupled climate model FAMOUS. The stochastic perturbations are implemented according to T(t) = T(t-1) + (ΔT(t) + ξ(t)), where T is temperature or salinity, ΔT is the corresponding deterministic increment in one time step, and ξ(t) is Gaussian noise. We use high-resolution HiGEM data coarse-grained to the FAMOUS grid to provide information about the magnitude and spatio-temporal correlation structure of the noise to be added to the lower-resolution model. Here we present results of adding white and red noise, showing the impacts of an additive stochastic perturbation on the mean climate state and variability in an AOGCM.
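The update rule T(t) = T(t-1) + (ΔT(t) + ξ(t)) can be sketched directly. This is a minimal illustration assuming a prescribed noise amplitude `sigma` (in practice estimated from coarse-grained HiGEM data); the AR(1) parameter `phi` is an illustrative way to switch between white noise (phi = 0) and red noise (0 < phi < 1).

```python
import numpy as np

def perturbed_tendency(T0, dT, sigma, phi=0.0, rng=None):
    """Applies T(t) = T(t-1) + (dT(t) + xi(t)) with additive noise xi.
    phi = 0 gives Gaussian white noise; 0 < phi < 1 gives red (AR(1))
    noise with the same stationary variance sigma**2."""
    rng = np.random.default_rng() if rng is None else rng
    T = np.empty(len(dT) + 1)
    T[0] = T0
    xi = 0.0
    for t, inc in enumerate(dT):
        # AR(1) noise update; the sqrt factor keeps the stationary
        # variance at sigma**2 for any phi in [0, 1)
        xi = phi * xi + sigma * np.sqrt(1.0 - phi**2) * rng.standard_normal()
        T[t + 1] = T[t] + inc + xi
    return T
```

With `sigma = 0` the scheme reduces exactly to the deterministic model, which makes it easy to verify that the perturbation is purely additive.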
NASA Astrophysics Data System (ADS)
Koukourakis, Georg; Vafiadou, Maria; Steimers, André; Geraskin, Dmitri; Neary, Patrick; Kohl-Bareis, Matthias
2009-07-01
We used spatially resolved near-infrared spectroscopy (SRS-NIRS) to assess calf and thigh muscle oxygenation during running on a motor-driven treadmill. Two protocols were used: an incremental speed protocol (velocity = 6-12 km/h, Δv = 2 km/h) was performed in 3-minute stages, while a pacing paradigm alternately modulated step frequency (2.3 Hz [SLow]; 3.3 Hz [SHigh]) at constant velocity for 2 minutes each. A SRS-NIRS broadband system (600-1000 nm) was used to measure total haemoglobin concentration and oxygen saturation (SO2). An accelerometer was placed on the hip joints to measure limb acceleration throughout the experiment. The data showed that the calf (SO2 58 to 42%) desaturated to a significantly lower level than the thigh (61 to 54%). During the pacing protocol, SO2 was significantly different between the SLow and SHigh trials. Additionally, physiological data as measured by spirometry differed between the SLow and SHigh pacing trials (VO2: 2563 ± 586 vs. 2503 ± 605 mL/min). Significant differences in VO2 at the same workload (speed) indicate alterations in mechanical efficiency. These data suggest that SRS broadband NIRS can be used to discern small changes in muscle oxygenation, making this device useful for metabolic exercise studies in addition to spirometry and movement monitoring by accelerometers.
Technology: Digital Photography in an Inner-City Fifth Grade, Part 1
ERIC Educational Resources Information Center
Riner, Phil
2005-01-01
Research tells us we can learn complex tasks most easily if they are taught in "small sequential steps." This column is about the small sequential steps that unlocked the powers of digital photography, of portraiture, and of student creativity. The strategies and ideas described in this article came as a result of working with…
ERIC Educational Resources Information Center
Bass, Kristin M.; Drits-Esser, Dina; Stark, Louisa A.
2016-01-01
The credibility of conclusions made about the effectiveness of educational interventions depends greatly on the quality of the assessments used to measure learning gains. This essay, intended for faculty involved in small-scale projects, courses, or educational research, provides a step-by-step guide to the process of developing, scoring, and…
[The grey line of dialysis initiation: as early as possible that is, by the incremental modality].
Casino, Francesco Gaetano
2010-01-01
In the past, the initiation of dialysis treatment was determined by the appearance of signs and symptoms of uremia along with biochemical parameters. More recently, based on the findings of observational studies, it was hypothesized that an earlier start would benefit patients. The endorsement of this concept by international guidelines has led to the current practice of starting dialysis at GFR levels of 10 to 15 mL/min/1.73 m2. However, recent observational studies taking into proper account the lead time bias showed a worse rather than better prognosis in early starters, suggesting that the previous studies might have been flawed. The IDEAL (Initiating Dialysis Early And Late) study has shown that starting dialysis "just in time", i.e., at the occurrence of uremic symptoms, does not harm the patient in that it is associated with the same clinical outcomes as early dialysis initiation. We believe that these results are compatible with our hypothesis that starting peritoneal dialysis or hemodialysis with an incremental modality could be appropriate for an asymptomatic patient with objective signs of mild uremia and a measured GFR around 10 mL/min/1.73 m2. In fact, when the GFR is relatively high, a reduced dialysis dose and/or frequency could suffice to control mild uremia, while possibly preserving the residual renal function owing to the reduced contact time between blood and bio-incompatible dialysis materials. The dialysis dose and/or frequency could be increased step by step, at the occurrence of symptoms, marked biochemical derangements or problems with volume control, without computing weekly Kt/Vurea.
Dong, Hengjin; Buxton, Martin
2006-01-01
The objective of this study is to apply a Markov model to compare the cost-effectiveness of total knee replacement (TKR) using computer-assisted surgery (CAS) with that of TKR using a conventional manual method in the absence of formal clinical trial evidence. A structured search was carried out to identify evidence relating to the clinical outcome, cost, and effectiveness of TKR. Nine Markov states were identified based on the progress of the disease after TKR. Effectiveness was expressed in quality-adjusted life years (QALYs). The simulation was carried out initially for 120 cycles of a month each, starting with 1,000 TKRs. A discount rate of 3.5 percent was used for both cost and effectiveness in the incremental cost-effectiveness analysis. Then, a probabilistic sensitivity analysis was carried out using a Monte Carlo approach with 10,000 iterations. Computer-assisted TKR was a cost-effective technology in the long term, but the QALYs gained were small. After the first 2 years, computer-assisted TKR was dominant, being both cheaper and yielding more QALYs. The incremental cost-effectiveness ratio (ICER) was sensitive to the "effect of CAS," to the CAS extra cost, and to the utility of the state "Normal health after primary TKR," but it was not sensitive to the utilities of other Markov states. Both probabilistic and deterministic analyses produced similar cumulative serious or minor complication rates and complex or simple revision rates. They also produced similar ICERs. Compared with conventional TKR, computer-assisted TKR is a cost-saving technology in the long term and may offer small additional QALYs. The "effect of CAS" is to reduce revision rates and complications through more accurate and precise alignment, and although the conclusions from the model, even when allowing for a full probabilistic analysis of uncertainty, are clear, the "effect of CAS" on the rate of revisions awaits long-term clinical evidence.
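The discounted Markov cohort calculation can be sketched with monthly cycles as in the study; the states, transition probabilities, costs, and utilities used in the example below are illustrative placeholders, not the model's actual nine-state inputs.

```python
import numpy as np

def cohort_run(P, cost, utility, start, n_cycles,
               annual_disc=0.035, cycles_per_year=12):
    """One arm of a discounted Markov cohort model.

    P       -- per-cycle (monthly) transition matrix
    cost    -- per-cycle cost of occupying each state
    utility -- annual QALY weight of each state
    Returns (discounted cost, discounted QALYs) per member of the cohort.
    """
    d = (1.0 + annual_disc) ** (-1.0 / cycles_per_year)  # per-cycle discount
    s = np.asarray(start, dtype=float)                   # state occupancy
    total_cost = total_qaly = 0.0
    for t in range(n_cycles):
        total_cost += d**t * (s @ cost)
        total_qaly += d**t * (s @ utility) / cycles_per_year
        s = s @ P                                        # advance one cycle
    return total_cost, total_qaly
```

Comparing two strategies then amounts to running the cohort under each transition matrix and forming (ΔC)/(ΔQALY), as in the study's incremental analysis.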
Accuracy of three Android-based pedometer applications in laboratory and free-living settings.
Leong, Jia Yan; Wong, Jyh Eiin
2017-01-01
This study examines the accuracy of three popular, free Android-based pedometer applications (apps), namely Runtastic (RT), Pacer Works (PW), and Tayutau (TY), in laboratory and free-living settings. Forty-eight adults (22.5 ± 1.4 years) completed 3-min bouts of treadmill walking at five incremental speeds while carrying a test smartphone installed with the three apps. The experiment was repeated three times, with the smartphone placed in the pants pocket, at waist level, or secured to the left arm by an armband. The actual step count was manually counted with a tally counter. In the free-living setting, each of the 44 participants (21.9 ± 1.6 years) carried a smartphone with the installed apps and a reference pedometer (Yamax Digi-Walker CW700) for 7 consecutive days. Results showed that TY produced the lowest mean absolute percent error (APE 6.7%) and was the only app with acceptable accuracy in counting steps in the laboratory setting. RT consistently underestimated steps, with an APE of 16.8% in the laboratory. PW significantly underestimated steps when the smartphone was secured to the arm, but overestimated under other conditions (APE 19.7%). In the free-living setting, the APE relative to the reference pedometer was 16.6%, 18.0%, and 16.8% for RT, PW, and TY, respectively. None of the three apps counted steps accurately in the free-living setting.
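The accuracy metric used here can be written down directly; a small sketch of mean absolute percent error against a hand-tallied (or reference-pedometer) count:

```python
def mean_ape(app_counts, reference_counts):
    """Mean absolute percent error (APE) of app step counts against a
    reference count, averaged over trials."""
    errors = [abs(a - r) / r * 100.0
              for a, r in zip(app_counts, reference_counts)]
    return sum(errors) / len(errors)
```

Note that APE penalizes over- and undercounting symmetrically, which is why an app that underestimates in one condition and overestimates in another (like PW above) can still accumulate a large APE.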
Cardiovascular responses to aerobic step dance sessions with and without appendicular overload.
La Torre, A; Impellizzeri, F M; Rampinini, E; Casanova, F; Alberti, G; Marcora, S M
2005-09-01
Several studies have shown that exercise intensity during aerobic step dance can be modified by varying stepping rate and bench height and by manipulating body mass using hand-held loads or loads added to the torso. The aim of this study was to determine the cardiovascular responses during aerobic step dance using an overload strategy not yet investigated: appendicular overload. Ten healthy and moderately trained women (mean ± SD: age 27 ± 3.4 years, height 167.8 ± 4.6 cm, body mass 55.7 ± 4.7 kg, body mass index 19.8 ± 1.6, VO2max 44.4 ± 6.1 mL·kg⁻¹·min⁻¹) performed an incremental treadmill test to determine VO2peak, the VO2-heart rate (HR) and rating of perceived exertion (RPE)-HR relationships. Within 1 week of the laboratory test, the subjects performed two identical aerobic step dance routines: one wearing a track suit with loads placed in pockets close to the legs and arms and another without overload. The appendicular overload (10% of body mass) significantly increased the exercise intensity from 84.5% to 89.8% of HRmax, corresponding to 68.9% and 78.3% of VO2peak, respectively (P<0.01). Similarly, RPE increased from 12.1 to 15.7 (P<0.001). The estimated VO2 and the caloric expenditure rose from 30.3 to 34.7 mL·kg⁻¹·min⁻¹ and from 251 to 288 kcal, respectively. This study shows that the use of appendicular overload significantly increases the energy cost of an aerobic step session, similarly to other overload strategies already reported in the literature.
Small High-Speed Self-Acting Shaft Seals for Liquid Rocket Engines
NASA Technical Reports Server (NTRS)
Burcham, R. E.; Boynton, J. L.
1977-01-01
Design analysis, fabrication, and experimental evaluation were performed on three self-acting face-type LOX seal designs and one circumferential-type helium seal design. The LOX seals featured Rayleigh step lift pad and spiral groove geometry for lift augmentation. Machined metal bellows and piston ring secondary seal designs were tested. The helium purge seal featured floating rings with Rayleigh step lift pads. The Rayleigh step pad piston ring and the spiral groove LOX seals were successfully tested for approximately 10 hours in liquid oxygen. The helium seal was successfully tested for 24 hours. The shrouded Rayleigh step hydrodynamic lift pad LOX seal is feasible for advanced, small, high-speed oxygen turbopumps.
Simulation of load traffic and stepped speed control of conveyor
NASA Astrophysics Data System (ADS)
Reutov, A. A.
2017-10-01
The article examines the possibilities of simulating step control of conveyor speed within the Mathcad, Simulink, and Stateflow software packages. To check the efficiency of the control algorithms and to determine the characteristics of the control system more accurately, it is necessary to simulate the speed-control process with real traffic values for a work shift or for a day. For evaluating belt workload and the absence of spillage, it is necessary to use empirical values of load flow over a shorter period of time. Analytical formulas for optimal speed step values were derived using empirical load values. The simulation checks the acceptability of an algorithm and determines the optimal regulation parameters corresponding to the load-flow characteristics. The average speed and the number of speed switchings during simulation serve as criteria of regulation efficiency. A simulation example within Mathcad is implemented. The average conveyor speed decreases substantially with two-step and three-step control; a further increase in the number of regulation steps decreases the average speed only insignificantly but considerably increases the frequency of speed switching. The incremental speed-regulation algorithm uses different numbers of stages for growing and reducing load traffic. This algorithm allows smooth control of conveyor speed changes under monotonic variation of the load flow, whereas load-flow oscillation leads to unjustified increases or decreases in speed. The results can be applied to the design of belt conveyors with adjustable drives.
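An incremental stepped-speed rule of the kind described here, with different thresholds for growing and reducing load, can be sketched as follows. The speed levels, the threshold fractions, and the assumption of belt capacity proportional to speed are illustrative choices, not taken from the article.

```python
def stepped_speed(load, speeds, up_frac=0.9, down_frac=0.7):
    """Sketch of an incremental step-speed controller with hysteresis:
    step up when the load approaches belt capacity at the current speed
    (avoiding spillage), step down only when load falls well below the
    capacity of the next lower speed (avoiding rapid switching).
    Capacity is assumed to equal the speed value (unit capacity per
    unit speed)."""
    i = 0
    out = []
    for q in load:
        if i + 1 < len(speeds) and q > up_frac * speeds[i]:
            i += 1                # growing load: step up early
        elif i > 0 and q < down_frac * speeds[i - 1]:
            i -= 1                # reducing load: step down late
        out.append(speeds[i])
    return out
```

The gap between `up_frac` and `down_frac` is what suppresses the unjustified switching under oscillating load flow that the abstract warns about: a load hovering near one threshold no longer toggles the speed every sample.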
The Patient Protection and Affordable Care Act and the regulation of the health insurance industry.
Jha, Saurabh; Baker, Tom
2012-12-01
The Patient Protection and Affordable Care Act is a comprehensive and multipronged reform of the US health care system. The legislation makes incremental changes to Medicare, Medicaid, and the market for employer-sponsored health insurance. However, it makes substantial changes to the market for individual and small-group health insurance. The purpose of this article is to introduce the key regulatory reforms in the market for individual and small-group health insurance and explain how these reforms tackle adverse selection and risk classification and improve access to health care for the hitherto uninsured or underinsured population. Copyright © 2012 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Finite element solutions for crack-tip behavior in small-scale yielding
NASA Technical Reports Server (NTRS)
Tracey, D. M.
1976-01-01
The subject considered is the stress and deformation fields in a cracked elastic-plastic power law hardening material under plane strain tensile loading. An incremental plasticity finite element formulation is developed for accurate analysis of the complete field problem including the extensively deformed near tip region, the elastic-plastic region, and the remote elastic region. The formulation has general applicability and was used to solve the small scale yielding problem for a set of material hardening exponents. Distributions of stress, strain, and crack opening displacement at the crack tip and through the elastic-plastic zone are presented as a function of the elastic stress intensity factor and material properties.
Small Steps Lead to Quality Assurance and Enhancement in Qatar University
ERIC Educational Resources Information Center
Al Attiyah, Asma; Khalifa, Batoul
2009-01-01
This paper presents a brief overview of Qatar University's history since it was started in 1973. Its primary focus is on the various small, but important, steps taken by the University to address the needs of quality assurance and enhancement. The Qatar University Reform Plan is described in detail. Its aims are to continually improve the quality…
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Applicability of corrosion control treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper...
Coq Tacticals and PVS Strategies: A Small Step Semantics
NASA Technical Reports Server (NTRS)
Kirchner, Florent
2003-01-01
The need for a small step semantics and more generally for a thorough documentation and understanding of Coq's tacticals and PVS's strategies arise with their growing use and the progressive uncovering of their subtleties. The purpose of the following study is to provide a simple and clear formal framework to describe their detailed semantics, and highlight their differences and similarities.
Equal-mobility bed load transport in a small, step-pool channel in the Ouachita Mountains
Daniel A. Marion; Frank Weirich
2003-01-01
Abstract: Equal-mobility transport (EMT) of bed load is more evident than size-selective transport during near-bankfull flow events in a small, step-pool channel in the Ouachita Mountains of central Arkansas. Bed load transport modes were studied by simulating five separate runoff events with peak discharges between 0.25 and 1.34 m³...
NASA Technical Reports Server (NTRS)
2002-01-01
JOHNSON SPACE CENTER, HOUSTON, TEXAS -- EXPEDITION FIVE CREW INSIGNIA (ISS05-S-001) -- The International Space Station (ISS) Expedition Five patch depicts the Station in its completed configuration and represents the vision of mankind's first step as a permanent human presence in space. The United States and Russian flags are joined together in a Roman numeral V to represent both the nationalities of the crew and the fifth crew to live aboard the ISS. Crew members' names are shown in the border of this patch. This increment encompasses a new phase in growth for the Station, with three Shuttle crews delivering critical components and building blocks to the ISS. To signify the participation of each crew member, the Shuttle is docked to the Station beneath a constellation of 17 stars symbolizing all those visiting and living aboard Station during this increment. The NASA insignia design for Shuttle flights is reserved for use by the astronauts and for other official use as the NASA Administrator may authorize. Public availability has been approved only in the forms of illustrations by the various news media. When and if there is any change in this policy, which is not anticipated, the change will be publicly announced.
NASA Astrophysics Data System (ADS)
Juneja, A.; Lathrop, D. P.; Sreenivasan, K. R.; Stolovitzky, G.
1994-06-01
A family of schemes is outlined for constructing stochastic fields that are close to turbulence. The fields generated from the more sophisticated versions of these schemes differ little, in terms of one-point and two-point statistics, from velocity fluctuations in high-Reynolds-number turbulence; we designate such fields as synthetic turbulence. All schemes, implemented here in one dimension, consist of the following three ingredients, but differ in various details. First, a simple multiplicative procedure is utilized for generating an intermittent signal which has the same properties as those of the turbulent energy dissipation rate ɛ. Second, the properties of the intermittent signal averaged over an interval of size r are related to those of longitudinal velocity increments Δu(r), evaluated over the same distance r, through a stochastic variable V introduced in the spirit of Kolmogorov's refined similarity hypothesis. The third and final step, which partially resembles a well-known procedure for constructing fractional Brownian motion, consists of suitably combining velocity increments to construct an artificial velocity signal. Various properties of the synthetic turbulence are obtained both analytically and numerically, and are found to be in good agreement with measurements made in the atmospheric surface layer. A brief review of some previous models is provided.
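The first ingredient above, the multiplicative procedure that produces an intermittent dissipation-like signal, can be sketched as a simple binomial cascade. This is a minimal illustration rather than the authors' exact scheme; the weight p = 0.7, the number of levels, and the seed are arbitrary choices.

```python
import numpy as np

def binomial_cascade(levels, p=0.7, seed=None):
    """Multiplicative binomial cascade: at each level every interval
    splits in two, and the parent value is multiplied by 2p on one
    half and 2(1-p) on the other (random assignment). The mean stays
    at 1 while the signal becomes increasingly intermittent, mimicking
    the turbulent energy dissipation rate."""
    rng = np.random.default_rng(seed)
    eps = np.ones(1)
    for _ in range(levels):
        left = rng.random(eps.size) < 0.5      # which child receives p
        w = np.where(left, p, 1.0 - p)
        eps = np.repeat(eps, 2)
        eps[0::2] *= 2.0 * w
        eps[1::2] *= 2.0 * (1.0 - w)
    return eps

signal = binomial_cascade(levels=12, p=0.7, seed=0)
print(signal.size)                       # 4096 segments
print(round(float(signal.mean()), 6))    # mean preserved at 1.0
```

The cascade conserves the mean exactly because each pair of children averages back to the parent value; intermittency grows with the number of levels.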
Thermoviscoplastic analysis of fibrous periodic composites using triangular subvolumes
NASA Technical Reports Server (NTRS)
Walker, Kevin P.; Freed, Alan D.; Jordan, Eric H.
1993-01-01
The nonlinear viscoplastic behavior of fibrous periodic composites is analyzed by discretizing the unit cell into triangular subvolumes. A set of these subvolumes can be configured by the analyst to construct a representation for the unit cell of a periodic composite. In each step of the loading history, the total strain increment at any point is governed by an integral equation which applies to the entire composite. A Fourier series approximation allows the incremental stresses and strains to be determined within a unit cell of the periodic lattice. The nonlinearity arising from the viscoplastic behavior of the constituent materials comprising the composite is treated as fictitious body force in the governing integral equation. Specific numerical examples showing the stress distributions in the unit cell of a fibrous tungsten/copper metal matrix composite under viscoplastic loading conditions are given. The stress distribution resulting in the unit cell when the composite material is subjected to an overall transverse stress loading history perpendicular to the fibers is found to be highly heterogeneous, and typical homogenization techniques based on treating the stress and strain distributions within the constituent phases as homogeneous result in large errors under inelastic loading conditions.
Thermoviscoplastic analysis of fibrous periodic composites by the use of triangular subvolumes
NASA Technical Reports Server (NTRS)
Walker, Kevin P.; Freed, Alan D.; Jordan, Eric H.
1994-01-01
The non-linear viscoplastic behavior of fibrous periodic composites is analyzed by discretizing the unit cell into triangular subvolumes. A set of these subvolumes can be configured by the analyst to construct a representation for the unit cell of a periodic composite. In each step of the loading history the total strain increment at any point is governed by an integral equation which applies to the entire composite. A Fourier series approximation allows the incremental stresses and strains to be determined within a unit cell of the periodic lattice. The non-linearity arising from the viscoplastic behavior of the constituent materials comprising the composite is treated as a fictitious body force in the governing integral equation. Specific numerical examples showing the stress distributions in the unit cell of a fibrous tungsten/copper metal-matrix composite under viscoplastic loading conditions are given. The stress distribution resulting in the unit cell when the composite material is subjected to an overall transverse stress loading history perpendicular to the fibers is found to be highly heterogeneous, and typical homogenization techniques based on treating the stress and strain distributions within the constituent phases as homogeneous result in large errors under inelastic loading conditions.
Multiple-hopping trajectories near a rotating asteroid
NASA Astrophysics Data System (ADS)
Shen, Hong-Xin; Zhang, Tian-Jiao; Li, Zhao; Li, Heng-Nian
2017-03-01
We present a study of the transfer orbits connecting landing points on irregularly shaped asteroids. The landing points do not touch the surface of the asteroids and are chosen several meters above the surface. The ant colony optimization technique is used to calculate multiple-hopping trajectories near an arbitrary irregular asteroid. The new method has three steps: (1) search for the maximal clique of candidate target landing points; (2) optimization of the legs connecting all landing-point pairs; and (3) optimization of the hopping sequence. In particular, the method is applied to asteroids 433 Eros and 216 Kleopatra. We impose a critical constraint on the target landing points to allow for extensive exploration of the asteroid: the relative distance between all the arrived target positions should be larger than a minimum allowed value. Ant colony optimization is applied to find the set and sequence of targets, and the differential evolution algorithm is used to solve for the hopping orbits. The minimum velocity-increment tours of hopping trajectories connecting all the landing positions are obtained by ant colony optimization. The results from asteroids of different sizes indicate that the cost of the minimum velocity-increment tour depends on the size of the asteroid.
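The sequence-optimization step can be illustrated with a bare-bones ant colony optimizer over a matrix of pairwise hop costs (a stand-in for the leg velocity increments). This is a generic ACO sketch, not the authors' implementation; the colony parameters and the toy cost matrix of collinear "sites" are assumptions for illustration.

```python
import numpy as np

def aco_tour(cost, n_ants=20, n_iter=50, alpha=1.0, beta=2.0, rho=0.5, seed=None):
    """Ant colony optimization for an open tour visiting every site once,
    given a matrix of pairwise transfer costs."""
    rng = np.random.default_rng(seed)
    n = cost.shape[0]
    tau = np.ones((n, n))                  # pheromone on each leg
    eta = 1.0 / (cost + np.eye(n))         # heuristic: prefer cheap legs
    best_tour, best_cost = None, np.inf
    for _ in range(n_iter):
        colony = []
        for _ in range(n_ants):
            tour = [int(rng.integers(n))]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = np.array([tau[i, j] ** alpha * eta[i, j] ** beta for j in cand])
                tour.append(int(rng.choice(cand, p=w / w.sum())))
            c = sum(cost[tour[k], tour[k + 1]] for k in range(n - 1))
            colony.append((tour, c))
            if c < best_cost:
                best_tour, best_cost = tour, c
        tau *= 1.0 - rho                   # evaporation
        for tour, c in colony:             # deposit pheromone on used legs
            for k in range(n - 1):
                tau[tour[k], tour[k + 1]] += 1.0 / c
    return best_tour, best_cost

# Hypothetical sites on a line: the cheapest open tour visits them in order.
positions = [0.0, 3.0, 1.0, 4.0, 2.0]
cost = np.abs(np.subtract.outer(positions, positions))
tour, total = aco_tour(cost, seed=0)
print(tour, total)
```

The pheromone update reinforces legs used by cheap tours, so later ants concentrate on low-cost sequences; for the toy line the optimal open-tour cost is 4.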
Watson, J M; Crosby, H; Dale, V M; Tober, G; Wu, Q; Lang, J; McGovern, R; Newbury-Birch, D; Parrott, S; Bland, J M; Drummond, C; Godfrey, C; Kaner, E; Coulton, S
2013-06-01
There is clear evidence of the detrimental impact of hazardous alcohol consumption on the physical and mental health of the population. Estimates suggest that hazardous alcohol consumption annually accounts for 150,000 hospital admissions and between 15,000 and 22,000 deaths in the UK. In the older population, hazardous alcohol consumption is associated with a wide range of physical, psychological and social problems. There is evidence of an association between increased alcohol consumption and increased risk of coronary heart disease, hypertension and haemorrhagic and ischaemic stroke, increased rates of alcohol-related liver disease and increased risk of a range of cancers. Alcohol is identified as one of the three main risk factors for falls. Excessive alcohol consumption in older age can also contribute to the onset of dementia and other age-related cognitive deficits and is implicated in one-third of all suicides in the older population. To compare the clinical effectiveness and cost-effectiveness of a stepped care intervention against a minimal intervention in the treatment of older hazardous alcohol users in primary care. A multicentre, pragmatic, two-armed randomised controlled trial with an economic evaluation. General practices in primary care in England and Scotland between April 2008 and October 2010. Adults aged ≥ 55 years scoring ≥ 8 on the Alcohol Use Disorders Identification Test (10-item) (AUDIT) were eligible. In total, 529 patients were randomised in the study. The minimal intervention group received a 5-minute brief advice intervention with the practice or research nurse involving feedback of the screening results and discussion regarding the health consequences of continued hazardous alcohol consumption. Those in the stepped care arm initially received a 20-minute session of behavioural change counselling, with referral to step 2 (motivational enhancement therapy) and step 3 (local specialist alcohol services) if indicated. 
Sessions were recorded and rated to ensure treatment fidelity. The primary outcome was average drinks per day (ADD) derived from the extended AUDIT-Consumption (3-item) (AUDIT-C) at 12 months. Secondary outcomes were AUDIT-C score at 6 and 12 months; alcohol-related problems assessed using the Drinking Problems Index (DPI) at 6 and 12 months; health-related quality of life assessed using the Short Form Questionnaire-12 items (SF-12) at 6 and 12 months; ADD at 6 months; quality-adjusted life-years (QALYs) (for cost-utility analysis, derived from the European Quality of Life-5 Dimensions); and health and social care resource use associated with the two groups. Both groups reduced alcohol consumption between baseline and 12 months. The difference between groups in log-transformed ADD at 12 months was very small, at 0.025 [95% confidence interval (CI) -0.060 to 0.119], and not statistically significant. At month 6 the stepped care group had a lower ADD, but again the difference was not statistically significant. At months 6 and 12, the stepped care group had a lower DPI score, but this difference was not statistically significant at the 5% level. The stepped care group had a lower SF-12 mental component score and lower physical component score at month 6 and month 12, but these differences were not statistically significant at the 5% level. The overall average cost per patient, taking into account health and social care resource use, was £488 [standard deviation (SD) £826] in the stepped care group and £482 (SD £826) in the minimal intervention group at month 6. The mean QALY gains were slightly greater in the stepped care group than in the minimal intervention group, with a mean difference of 0.0058 (95% CI -0.0018 to 0.0133), generating an incremental cost-effectiveness ratio (ICER) of £1100 per QALY gained.
At month 12, participants in the stepped care group incurred fewer costs, with a mean difference of -£194 (95% CI -£585 to £198), and had gained 0.0117 more QALYs (95% CI -0.0084 to 0.0318) than the control group. Therefore, from an economic perspective the minimal intervention was dominated by stepped care but, as would be expected given the effectiveness results, the difference was small and not statistically significant. Stepped care does not confer an advantage over minimal intervention in terms of reduction in alcohol consumption at 12 months post intervention when compared with a 5-minute brief (minimal) intervention. This trial is registered as ISRCTN52557360. This project was funded by the NIHR Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 17, No. 25. See the HTA programme website for further project information.
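The ICER reported above is simply the incremental cost divided by the incremental QALYs. With the rounded month-6 means quoted in the abstract the arithmetic lands close to the published figure of about £1100 per QALY, which was computed from unrounded data:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return delta_cost / delta_qaly

# Rounded month-6 means from the trial above (stepped care minus minimal
# intervention): cost difference 488 - 482 = 6 GBP, QALY difference 0.0058.
print(round(icer(488.0 - 482.0, 0.0058)))  # → 1034, near the published ~1100
```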
Do low step count goals inhibit walking behavior: a randomized controlled study.
Anson, Denis; Madras, Diane
2016-07-01
Confirmation and quantification of observed differences in goal-directed walking behavior. Single-blind, split-half randomized trial. Small rural university, Pennsylvania, United States. A total of 94 able-bodied subjects (self-selected volunteer students, faculty, and staff of a small university) were randomly assigned walking goals, and 53 completed the study. Incentivized pedometer-monitored program requiring recording of the step count for 56 days on a custom-made website providing daily feedback. Steps logged per day. During the first half of the study, the 5000- and 10,000-step groups logged significantly different step counts of 7500 and 9000, respectively (P > 0.05). During the second half of the study, the 5000- and 10,000-step groups logged 7000 and 8600 steps, respectively (significance P > 0.05). The group switched from 5000 to 10,000 steps logged 7900 steps during the first half and 9500 steps during the second half (significance P > 0.05). The group switched from 10,000 to 5000 steps logged 9700 steps during the first half and 9000 steps during the second half, which was significant (P > 0.05). Levels of walking behavior are influenced by the goals assigned. Subjects with high goals walk more than those with low goals, even if they do not meet the assigned goal. Reducing goals from a high level to a low level can reduce walking behavior. © The Author(s) 2015.
Repp, B H
1999-04-01
The detectability of a deviation from metronomic timing--of a small local increment in interonset interval (IOI) duration--in a musical excerpt is subject to positional biases, or "timing expectations," that are closely related to the expressive timing (sequence of IOI durations) typically produced by musicians in performance (Repp, 1992b, 1998c, 1998d). Experiment 1 replicated this finding with some changes in procedure and showed that the perception-performance correlation is not the result of formal musical training or availability of a musical score. Experiments 2 and 3 used a synchronization task to examine the hypothesis that participants' perceptual timing expectations are due to systematic modulations in the period of a mental timekeeper that also controls perceptual-motor coordination. Indeed, there was systematic variation in the asynchronies between taps and metronomically timed musical event onsets, and this variation was correlated both with the variations in IOI increment detectability (Experiment 1) and with the typical expressive timing pattern in performance. When the music contained local IOI increments (Experiment 2), they were almost perfectly compensated for on the next tap, regardless of their detectability in Experiment 1, which suggests a perceptual-motor feedback mechanism that is sensitive to subthreshold timing deviations. Overall, the results suggest that aspects of perceived musical structure influence the predictions of mental timekeeping mechanisms, thereby creating a subliminal warping of experienced time.
Numerical simulation of high speed incremental forming of aluminum alloy
NASA Astrophysics Data System (ADS)
Giuseppina, Ambrogio; Teresa, Citrea; Luigino, Filice; Francesco, Gagliardi
2013-12-01
In this study, an innovative process is analyzed with the aim of satisfying industrial requirements such as process flexibility, product differentiation and customization, cost reduction, minimization of execution time, and sustainable production. The attention is focused on the incremental forming process, nowadays used in fields such as rapid prototyping, the medical sector, the architectural industry, aerospace and marine applications, and the production of molds and dies. Incremental forming consists of deforming only a small region of the workpiece through a punch driven by an NC machine. SPIF is the considered variant of the process, in which the punch produces local deformation without dies and molds; consequently, the final product geometry can be changed by the control of an actuator without requiring a set of different tools. The drawback of this process is its slowness. The aim of this study is to assess the feasibility of incremental forming at high speeds. An experimental campaign was performed on a high-speed CNC lathe to test process feasibility and the influence of speed on material formability, mainly for aluminum alloys. The first results show that the material presents the same performance as in conventional-speed incremental forming and, in some cases, better behavior due to the temperature field. An accurate numerical simulation has been performed to investigate the material behavior during the high-speed process, substantially confirming the experimental evidence.
NASA Astrophysics Data System (ADS)
Zhan, Zongqian; Wang, Chendong; Wang, Xin; Liu, Yi
2018-01-01
On the basis of today's popular virtual reality and scientific visualization, three-dimensional (3-D) reconstruction is widely used in disaster relief, virtual shopping, reconstruction of cultural relics, etc. In the traditional incremental structure from motion (incremental SFM) method, the time cost of the matching is one of the main factors restricting the popularization of this method. To make the whole matching process more efficient, we propose a preprocessing method before the matching process: (1) we first construct a random k-d forest with the large-scale scale-invariant feature transform features in the images and combine this with the pHash method to obtain a value of relatedness, (2) we then construct a connected weighted graph based on the relatedness value, and (3) we finally obtain a planned sequence of adding images according to the principle of the minimum spanning tree. On this basis, we attempt to thin the minimum spanning tree to reduce the number of matchings and ensure that the images are well distributed. The experimental results show a great reduction in the number of matchings with enough object points, with only a small influence on the inner stability, which proves that this method can quickly and reliably improve the efficiency of the SFM method with unordered multiview images in complex scenes.
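The final step above, ordering the images along a spanning tree of the relatedness graph, can be sketched with Prim's algorithm on negated weights (i.e., a maximum spanning tree, so each new image is the one most related to those already added). The 4x4 relatedness matrix below is a made-up example; computing real relatedness values would require the k-d forest and pHash machinery described in the abstract.

```python
import heapq

def mst_add_order(relatedness, start=0):
    """Plan the order in which to add images to incremental SfM: grow a
    maximum spanning tree over the relatedness graph (Prim's algorithm
    with negated weights), appending at each step the image most
    related to the set already reconstructed."""
    n = len(relatedness)
    order, in_tree = [start], {start}
    heap = [(-relatedness[start][j], j) for j in range(n) if j != start]
    heapq.heapify(heap)
    while len(order) < n:
        _, j = heapq.heappop(heap)
        if j in in_tree:               # stale entry superseded by a better edge
            continue
        in_tree.add(j)
        order.append(j)
        for k in range(n):
            if k not in in_tree:
                heapq.heappush(heap, (-relatedness[j][k], k))
    return order

# Hypothetical symmetric relatedness scores between four images.
R = [[0.0, 0.2, 0.9, 0.1],
     [0.2, 0.0, 0.3, 0.8],
     [0.9, 0.3, 0.0, 0.4],
     [0.1, 0.8, 0.4, 0.0]]
print(mst_add_order(R))  # → [0, 2, 3, 1]
```

Thinning the tree (dropping the weakest planned matches) then trades a little redundancy for fewer pairwise matching calls, as the abstract describes.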
Is incremental hemodialysis ready to return on the scene? From empiricism to kinetic modelling.
Basile, Carlo; Casino, Francesco Gaetano; Kalantar-Zadeh, Kamyar
2017-08-01
Most people who make the transition to maintenance dialysis therapy are treated with a fixed dose thrice-weekly hemodialysis regimen without considering their residual kidney function (RKF). The RKF provides effective and naturally continuous clearance of both small and middle molecules, plays a major role in metabolic homeostasis, nutritional status, and cardiovascular health, and aids in fluid management. The RKF is associated with better patient survival and greater health-related quality of life, although these effects may be confounded by patient comorbidities. Preservation of the RKF requires a careful approach, including regular monitoring, avoidance of nephrotoxins, gentle control of blood pressure to avoid intradialytic hypotension, and an individualized dialysis prescription including the consideration of incremental hemodialysis. There is currently no standardized method for applying incremental hemodialysis in practice. Infrequent (once- to twice-weekly) hemodialysis regimens are often used arbitrarily, without knowing which patients would benefit the most from them or how to escalate the dialysis dose as RKF declines over time. The recently heightened interest in incremental hemodialysis has been hindered by the current limitations of the urea kinetic models (UKM) which tend to overestimate the dialysis dose required in the presence of substantial RKF. This is due to an erroneous extrapolation of the equivalence between renal urea clearance (Kru) and dialyser urea clearance (Kd), correctly assumed by the UKM, to the clinical domain. In this context, each ml/min of Kd clears the urea from the blood just as 1 ml/min of Kru does. By no means should such kinetic equivalence imply that 1 ml/min of Kd is clinically equivalent to 1 ml/min of urea clearance provided by the native kidneys. 
A recent paper by Casino and Basile suggested a variable target model (VTM) as opposed to the fixed model, because the VTM gives more clinical weight to the RKF and allows less frequent hemodialysis treatments at lower RKF. The potentially important clinical and financial implications of incremental hemodialysis render it highly promising and warrant randomized controlled trials.
Rapidly-Indexing Incremental-Angle Encoder
NASA Technical Reports Server (NTRS)
Christon, Philip R.; Meyer, Wallace W.
1989-01-01
Optoelectronic system measures relative angular position of shaft or other device to be turned, also measures absolute angular position after device turned through small angle. Relative angular position measured with fine resolution by optoelectronically counting finely- and uniformly-spaced light and dark areas on encoder disk as disk turns past position-sensing device. Also includes track containing coarsely- and nonuniformly-spaced light and dark areas, angular widths varying in proportion to absolute angular position. This second track provides gating and indexing signal.
The MAL: A Malware Analysis Lexicon
2013-02-01
we feel that further exploration of the open source literature is a promising avenue for enlarging the corpus. 2.3 Publishing the MAL Early in the...MAL. We feel that the advantages of this format are well worth the small incremental cost. The distribution of the MAL in this format is under...dictionary. We feel that moving to a richer format such as WordNet or WordVis would greatly improve the usability of the lexicon. 3.5 Improved Hosting The
Solution of elastic-plastic stress analysis problems by the p-version of the finite element method
NASA Technical Reports Server (NTRS)
Szabo, Barna A.; Actis, Ricardo L.; Holzer, Stefan M.
1993-01-01
The solution of small strain elastic-plastic stress analysis problems by the p-version of the finite element method is discussed. The formulation is based on the deformation theory of plasticity and the displacement method. Practical realization of controlling discretization errors for elastic-plastic problems is the main focus. Numerical examples which include comparisons between the deformation and incremental theories of plasticity under tight control of discretization errors are presented.
Elasto-plastic flow in cracked bodies using a new finite element model. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Karabin, M. E., Jr.
1977-01-01
Cracked geometries were studied by finite element techniques with the aid of a new special element embedded at the crack tip. This model sought to accurately represent the singular stresses and strains associated with the elasto-plastic flow process. The present model was not restricted to a material type and did not predetermine a singularity; rather, the singularity was treated as an unknown. For each step of the incremental process the nodal degrees of freedom and the unknown singularity were found through minimization of an energy-like functional. The singularity and nodal degrees of freedom were determined by means of an iterative process.
McGuire, Thomas G
2010-01-01
This commentary on R. F. Averill et al. (2010) addresses their idea of risk and quality adjusting fee-for-service payments to primary care physicians in order to improve the efficiency of primary care and take a step toward financing a "medical home" for patients. I show how their idea can create incentives for efficient practice styles. Pairing this with an active beneficiary choice of primary care physician with an enrollment fee would make the idea easier to implement and provide an incentive and the financing for elements of service not covered by procedure-based fees.
An incremental block-line-Gauss-Seidel method for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Napolitano, M.; Walters, R. W.
1985-01-01
A block-line-Gauss-Seidel (LGS) method is developed for solving the incompressible and compressible Navier-Stokes equations in two dimensions. The method requires only one block-tridiagonal solution process per iteration and is consequently faster per step than the linearized block-ADI methods. Results are presented for both incompressible and compressible separated flows: in all cases the proposed block-LGS method is more efficient than the block-ADI methods. Furthermore, for high Reynolds number weakly separated incompressible flow in a channel, which proved to be an impossible task for a block-ADI method, solutions have been obtained very efficiently by the new scheme.
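A line-Gauss-Seidel iteration of the kind described, one tridiagonal solve per grid line using the latest values from neighbouring lines, can be sketched for the simplest possible model problem: the scalar 2-D Laplace equation rather than the Navier-Stokes equations, with the Thomas algorithm standing in for the block-tridiagonal solver.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side d) in O(n); a[0] is unused."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def line_gauss_seidel(u, n_sweeps=500):
    """Line-GS for the 5-point Laplace stencil: each sweep solves one
    tridiagonal system per interior row, coupling to the row above
    (already updated this sweep) and the row below (previous sweep)."""
    ny, nx = u.shape
    m = nx - 2
    for _ in range(n_sweeps):
        for j in range(1, ny - 1):
            a = -np.ones(m); b = 4.0 * np.ones(m); c = -np.ones(m)
            d = u[j - 1, 1:-1] + u[j + 1, 1:-1]
            d[0] += u[j, 0]
            d[-1] += u[j, -1]
            u[j, 1:-1] = thomas(a, b, c, d)
    return u

u = np.zeros((21, 21))
u[0, :] = 1.0                       # one edge held at 1, others at 0
u = line_gauss_seidel(u)
print(round(float(u[10, 10]), 3))   # → 0.25 (exact at the centre by symmetry)
```

Solving a whole line at once propagates boundary information across the grid in one sweep, which is why such schemes outperform pointwise relaxation for high-Reynolds-number, strongly coupled flows.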
Stepwise shockwave velocity determinator
NASA Technical Reports Server (NTRS)
Roth, Timothy E.; Beeson, Harold
1992-01-01
To provide an uncomplicated and inexpensive method for measuring the far-field velocity of a surface shockwave produced by an explosion, a stepwise shockwave velocity determinator (SSVD) was developed. The velocity determinator is constructed of readily available materials and works on the principle of breaking discrete sensors composed of aluminum foil contacts. The discrete sensors have an average breaking threshold of approximately 7 kPa. An incremental output step of 250 mV is created with each foil contact breakage and is logged by analog-to-digital instrumentation. Velocity data obtained from the SSVD is within approximately 11 percent of the calculated surface shockwave velocity of a muzzle blast from a 30.06 rifle.
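Post-processing the SSVD record reduces to two small steps: locate the 250 mV staircase jumps in the logged voltage, then divide each sensor spacing by the corresponding time interval. The sensor layout and voltage trace below are hypothetical, for illustration only.

```python
def detect_breaks(times_s, volts, step_v=0.25):
    """Recover sensor-break times from the staircase ADC record: each
    foil-contact breakage adds roughly one step_v (250 mV) increment."""
    breaks, level = [], volts[0]
    for t, v in zip(times_s, volts):
        if v - level >= 0.5 * step_v:   # half-step threshold rejects noise
            breaks.append(t)
            level = v
    return breaks

def segment_velocities(positions_m, break_times_s):
    """Average shockwave velocity over each interval between sensors."""
    return [(positions_m[i + 1] - positions_m[i]) /
            (break_times_s[i + 1] - break_times_s[i])
            for i in range(len(positions_m) - 1)]

# Hypothetical record: three foil contacts spaced 0.5 m apart.
times = [0.0, 1e-3, 2e-3, 3e-3, 4e-3, 5e-3]
volts = [0.00, 0.25, 0.25, 0.50, 0.50, 0.75]
breaks = detect_breaks(times, volts)
print(segment_velocities([0.0, 0.5, 1.0], breaks))
```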
Smart Steps to Sustainability 2.0
Smart Steps to Sustainability provides small business owners and managers with practical advice and tools to implement sustainable and environmentally preferable business practices that go beyond compliance.
Percolator: Scalable Pattern Discovery in Dynamic Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choudhury, Sutanay; Purohit, Sumit; Lin, Peng
We demonstrate Percolator, a distributed system for graph pattern discovery in dynamic graphs. In contrast to conventional mining systems, Percolator advocates efficient pattern mining schemes that (1) support pattern detection with keywords; (2) integrate incremental and parallel pattern mining; and (3) support analytical queries such as trend analysis. The core idea of Percolator is to dynamically decide and verify a small fraction of patterns and their instances that must be inspected in response to buffered updates in dynamic graphs, with a total mining cost independent of graph size. We demonstrate (a) the feasibility of incremental pattern mining by walking through each component of Percolator, (b) the efficiency and scalability of Percolator over the sheer size of real-world dynamic graphs, and (c) how the user-friendly GUI of Percolator interacts with users to support keyword-based queries that detect, browse, and inspect trending patterns. We also demonstrate two use cases of Percolator, in social media trend analysis and academic collaboration analysis, respectively.
Xiao, Hui-Jie; Wei, Zi-Gang; Wang, Qing; Zhu, Xiao-Bo
2012-12-01
Based on the theory of harmonious development of ecological economy, a total of 13 evaluation indices were selected from the ecological, economic, and social sub-systems of the Yanqi River watershed in Huairou District of Beijing. The selected evaluation indices were normalized by using trapezoid functions, and the weights of the evaluation indices were determined by the analytic hierarchy process. The eco-economic benefits of the watershed were then evaluated with the weighted composite index method. From 2004 to 2011, the ecological, economic, and social benefits of the Yanqi River watershed all increased somewhat; the ecological benefit increased the most, from 0.210 in 2004 to 0.255 in 2011, an increment of 21.5%. The overall eco-economic benefit of the watershed increased from 0.734 in 2004 to 0.840 in 2011, an increment of 14.2%. At present, the watershed has reached the stage of an advanced ecosystem, being in beneficial circulation and harmonious development of ecology, economy, and society.
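The evaluation pipeline described, trapezoid normalization followed by an AHP-weighted composite index, can be sketched as follows. The indicator names, breakpoints, and weights are invented for illustration; the actual study used 13 indices and AHP-derived weights.

```python
def trapezoid_norm(x, lo0, lo1, hi1, hi0):
    """Trapezoidal normalization: 0 outside [lo0, hi0], 1 on the flat
    top [lo1, hi1], and linear ramps in between."""
    if x <= lo0 or x >= hi0:
        return 0.0
    if lo1 <= x <= hi1:
        return 1.0
    if x < lo1:
        return (x - lo0) / (lo1 - lo0)
    return (hi0 - x) / (hi0 - hi1)

def composite_index(scores, weights):
    """Weighted composite index: sum of AHP weights times the
    normalized indicator scores (weights assumed to sum to 1)."""
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical indicators: forest cover (%) and per-capita income (10^3 yuan).
scores = [trapezoid_norm(62.0, 20, 60, 90, 100),  # on the flat top -> 1.0
          trapezoid_norm(8.0, 0, 16, 30, 40)]     # halfway up the ramp -> 0.5
print(composite_index(scores, [0.6, 0.4]))
```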
Characteristics of a Sensitive Well Showing Pre-Earthquake Water-Level Changes
NASA Astrophysics Data System (ADS)
King, Chi-Yu
2018-04-01
Water-level data recorded at a sensitive well next to a fault in central Japan between 1989 and 1998 showed many coseismic water-level drops and a large (60 cm) and long (6-month) pre-earthquake drop before a rare local earthquake of magnitude 5.8 on 17 March 1997, as well as five smaller pre-earthquake drops during a 7-year period prior to this earthquake. The pre-earthquake changes were previously attributed to leakage through the fault-gouge zone caused by small but broad-scaled crustal-stress increments. These increments now appear to be induced by some large slow-slip events. The coseismic changes are attributed to seismic shaking-induced fissures in the adjacent aquitards, in addition to leakage through the fault. The well's high sensitivity is attributed to its tapping a highly permeable aquifer, which is connected to the fractured side of the fault, and to its near-critical condition for leakage, especially during the 7 years before the magnitude 5.8 earthquake.
NASA Astrophysics Data System (ADS)
Danesh-Yazdi, Mohammad; Botter, Gianluca; Foufoula-Georgiou, Efi
2017-05-01
Lack of hydro-bio-chemical data at subcatchment scales necessitates adopting an aggregated system approach for estimating water and solute transport properties, such as residence and travel time distributions, at the catchment scale. In this work, we show that within-catchment spatial heterogeneity, as expressed in spatially variable discharge-storage relationships, can be appropriately encapsulated within a lumped time-varying stochastic Lagrangian formulation of transport. This time (variability) for space (heterogeneity) substitution yields mean travel times (MTTs) that are not significantly biased to the aggregation of spatial heterogeneity. Despite the significant variability of MTT at small spatial scales, there exists a characteristic scale above which the MTT is not impacted by the aggregation of spatial heterogeneity. Extensive simulations of randomly generated river networks reveal that the ratio between the characteristic scale and the mean incremental area is on average independent of river network topology and the spatial arrangement of incremental areas.
The Value of Medical and Pharmaceutical Interventions for Reducing Obesity
Michaud, Pierre-Carl; Goldman, Dana; Lakdawalla, Darius; Zheng, Yuhui; Gailey, Adam H.
2012-01-01
This paper attempts to quantify the social, private, and public-finance values of reducing obesity through pharmaceutical and medical interventions. We find that the total social value of bariatric surgery is large for treated patients, with incremental social cost-effectiveness ratios typically under $10,000 per life-year saved. On the other hand, pharmaceutical interventions against obesity yield much less social value with incremental social cost-effectiveness ratios around $50,000. Our approach accounts for: competing risks to life expectancy; health care costs; and a variety of non-medical economic consequences (pensions, disability insurance, taxes, and earnings), which account for 20% of the total social cost of these treatments. On balance, bariatric surgery generates substantial private value for those treated, in the form of health and other economic consequences. The net public fiscal effects are modest, primarily because the size of the population eligible for treatment is small while the net social effect is large once improvements in life expectancy are taken into account. PMID:22705389
NASA Astrophysics Data System (ADS)
Sgambitterra, Emanuele; Piccininni, Antonio; Guglielmi, Pasquale; Ambrogio, Giuseppina; Fragomeni, Gionata; Villa, Tomaso; Palumbo, Gianfranco
2018-05-01
Cranial implants are custom prostheses characterized by rather high geometrical complexity and small thickness; at the same time, aesthetic and mechanical requirements have to be met. Titanium alloys are largely adopted for such prostheses, as they can be processed via different manufacturing technologies. In the present work, cranial prostheses have been manufactured by Super Plastic Forming (SPF) and Single Point Incremental Forming (SPIF). In order to assess the mechanical performance of the cranial prostheses, drop tests under different load conditions were conducted on flat samples to investigate the effect of the blank thickness. Numerical simulations were also run for comparison purposes. The mechanical performance of the cranial implants manufactured by SPF and SPIF could be predicted using drop test data and information about the thickness evolution of the formed parts: the SPIFed prosthesis exhibited a lower maximum deflection and a higher maximum force, while the SPFed prostheses showed a lower absorbed energy.
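The absorbed energy reported from the drop tests is, in general, the area under the force-deflection curve; a minimal sketch using the trapezoidal rule on purely hypothetical sample values (not the paper's measurements):

```python
def absorbed_energy(deflection_mm, force_N):
    """Energy absorbed (J) as the area under a force-deflection curve,
    integrated with the trapezoidal rule. Samples must be ordered in deflection."""
    energy_mJ = 0.0
    for i in range(1, len(deflection_mm)):
        # N * mm = mJ for each trapezoid
        energy_mJ += 0.5 * (force_N[i] + force_N[i - 1]) * (deflection_mm[i] - deflection_mm[i - 1])
    return energy_mJ / 1000.0  # mJ -> J

# Hypothetical linear ramp to 300 N over 3 mm of deflection
print(absorbed_energy([0.0, 1.0, 2.0, 3.0], [0.0, 100.0, 200.0, 300.0]))  # 0.45
```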
Micromagnetic simulation study of magnetization reversal in torus-shaped permalloy nanorings
NASA Astrophysics Data System (ADS)
Mishra, Amaresh Chandra; Giri, R.
2017-09-01
Using micromagnetic simulation, the magnetization reversal of soft permalloy rings of torus shape with major radius R varying within 20-100 nm has been investigated. The minor radius r of the torus rings was increased from 5 nm up to a maximum value rmax such that R - rmax = 10 nm. Micromagnetic simulation of the in-plane hysteresis curves of these nanorings revealed that for very thin rings (r ≤ 10 nm) the remanent state is an onion state, whereas for all other rings the remanent state is a vortex state. The area of the hysteresis loop was found to decrease gradually as r increases. The normalized area under the hysteresis loops (AN) increases initially with r, attains a maximum at a certain value r = r0, and decreases thereafter. This value r0 increases as R decreases; as a result, the peak feature is hardly visible in the case of smaller rings (rings having small R).
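The loop-area trend discussed above comes from integrating the closed M-H curve; a minimal sketch using the shoelace formula on an ordered toy square loop (illustrative only, not the simulated permalloy data):

```python
def loop_area(H, M):
    """Area enclosed by a closed hysteresis loop, given (H, M) samples ordered
    around the cycle, computed with the shoelace formula."""
    n = len(H)
    s = 0.0
    for i in range(n):
        j = (i + 1) % n  # wrap around to close the loop
        s += H[i] * M[j] - H[j] * M[i]
    return abs(s) / 2.0

# Toy square loop: coercive field 1, saturation magnetization 1 -> area 2*2 = 4
H = [-1.0, 1.0, 1.0, -1.0]
M = [-1.0, -1.0, 1.0, 1.0]
print(loop_area(H, M))  # 4.0
```

Normalizing such an area by, e.g., the saturation values would give a quantity analogous to the AN studied in the paper.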
Ovarian tissue cryopreservation by stepped vitrification and monitored by X-ray computed tomography.
Corral, Ariadna; Clavero, Macarena; Gallardo, Miguel; Balcerzyk, Marcin; Amorim, Christiani A; Parrado-Gallego, Ángel; Dolmans, Marie-Madeleine; Paulini, Fernanda; Morris, John; Risco, Ramón
2018-04-01
Ovarian tissue cryopreservation is, in most cases, the only fertility preservation option available for female patients soon to undergo gonadotoxic treatment. To date, cryopreservation of ovarian tissue has been carried out by both the traditional slow freezing method and vitrification, but even with the best techniques there is still a considerable loss of follicle viability. In this report, we investigated a stepped cryopreservation procedure which combines features of slow cooling and vitrification (hereafter called stepped vitrification). Bovine ovarian tissue was used as a tissue model. Stepwise increments of the Me2SO concentration, coupled with stepwise drops in temperature, were applied in a device specifically designed for this purpose; X-ray computed tomography was used to investigate loading times at each step by monitoring the attenuation of the radiation, which is proportional to Me2SO permeation. Viability analysis was performed on warmed tissues by immunohistochemistry. Although further viability tests should be conducted after transplantation, preliminary results are very promising. Four protocols were explored. Two of them (P1 and P2) showed poor permeation of the vitrification solution. The other two (P3 and P4), with higher permeation, were studied in greater detail. Of these two protocols, P4, with a longer permeation time at -40 °C, showed the same histological integrity after warming as fresh controls. Copyright © 2018 Elsevier Inc. All rights reserved.
The effects of processing techniques on magnesium-based composite
NASA Astrophysics Data System (ADS)
Rodzi, Siti Nur Hazwani Mohamad; Zuhailawati, Hussain
2016-12-01
The aim of this study is to investigate the effect of processing techniques on the densification, hardness and compressive strength of a Mg alloy and a Mg-based composite for biomaterial application. The control sample (pure Mg) and a Mg-based composite (Mg-Zn/HAp) were fabricated through a mechanical alloying process using a high-energy planetary mill, whilst another Mg-Zn/HAp composite was fabricated through double-step processing (the matrix Mg-Zn alloy was fabricated by planetary mill, and HAp was subsequently dispersed by roll mill). The as-milled powder was then consolidated by cold pressing into 10 mm diameter pellets under 400 MPa compaction pressure before being sintered at 300 °C for 1 hour under flowing argon. The densification of the sintered pellets was then determined by the Archimedes principle. Mechanical properties of the sintered pellets were characterized by microhardness and compression tests. The results show that the density of the pellets was significantly increased by the addition of HAp, and the optimum density was observed when the sample was fabricated through double-step processing (1.8046 g/cm3). Slight increments in hardness and ultimate compressive strength were observed for the Mg-Zn/HAp composite fabricated through double-step processing (58.09 HV, 132.19 MPa), as compared to the Mg-Zn/HAp produced through single-step processing (47.18 HV, 122.49 MPa).
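Density by the Archimedes principle, as used for the sintered pellets, follows from weighing the sample in air and immersed in a fluid; a minimal sketch with hypothetical masses (the fluid density value assumes water at about 20 °C):

```python
def archimedes_density(mass_air_g, mass_immersed_g, fluid_density_g_cm3=0.9982):
    """Density (g/cm3) by Archimedes' principle: the mass difference between
    weighing in air and immersed equals the mass of displaced fluid, which
    gives the sample volume."""
    volume_cm3 = (mass_air_g - mass_immersed_g) / fluid_density_g_cm3
    return mass_air_g / volume_cm3

# Hypothetical pellet: 1.40 g in air, 0.63 g apparent mass immersed in water
print(round(archimedes_density(1.40, 0.63), 2))  # ~1.81 g/cm3
```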
Six-sigma application in tire-manufacturing company: a case study
NASA Astrophysics Data System (ADS)
Gupta, Vikash; Jain, Rahul; Meena, M. L.; Dangayach, G. S.
2017-09-01
Globalization, the advancement of technologies, and increments in customer demand have changed the way companies do business. To cope with these pressures, the six-sigma define-measure-analyze-improve-control (DMAIC) method is among the most popular and useful approaches. This method helps to trim down waste and generate potential ways of improving processes in manufacturing as well as service industries. In the current research, the DMAIC method was used to decrease the process variation of the bead splice, which was causing wastage of material. This six-sigma DMAIC research was initiated by problem identification through the voice of the customer in the define step. The subsequent step consisted of gathering the specification data of the existing tire bead. This was followed by the analysis and improvement steps, where six-sigma quality tools such as the cause-effect diagram, statistical process control, and substantial analysis of the existing system were implemented for root-cause identification and reduction of process variation. Process control charts were used for systematic observation and control of the process. Utilizing the DMAIC methodology, the standard deviation was decreased from 2.17 to 1.69. The process capability index (Cp) value was enhanced from 1.65 to 2.95 and the process performance capability index (Cpk) value was enhanced from 0.94 to 2.66. A DMAIC methodology was established that can play a key role in reducing defects in the tire-manufacturing process in India.
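The Cp and Cpk values quoted above follow the standard definitions from the specification limits and the process mean and standard deviation; a minimal sketch with hypothetical specification limits (not the bead-splice data):

```python
def process_capability(mean, sigma, lsl, usl):
    """Process capability indices from lower/upper spec limits (lsl, usl).
    Cp compares the spec width to the 6-sigma process spread and ignores
    centering; Cpk penalizes a mean that is off-center."""
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min((usl - mean) / (3.0 * sigma), (mean - lsl) / (3.0 * sigma))
    return cp, cpk

# Hypothetical process: spec limits 4-16, mean 10.5 (slightly off-center), sigma 1.0
cp, cpk = process_capability(mean=10.5, sigma=1.0, lsl=4.0, usl=16.0)
print(cp, cpk)  # Cp = 2.0, Cpk ≈ 1.83
```

Reducing sigma (as the DMAIC project did, from 2.17 to 1.69) raises both indices; re-centering the mean raises Cpk toward Cp.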
Self-calibration of robot-sensor system
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu
1990-01-01
The process of finding the coordinate transformation between a robot and an external sensor system has been addressed. This calibration is equivalent to solving a nonlinear optimization problem for the parameters that characterize the transformation. A two-step procedure is herein proposed for solving the problem. The first step involves finding a nominal solution that is a good approximation of the final solution. A variational problem is then generated to replace the original problem in the next step. With the assumption that the variational parameters are small compared to unity, the problem can be more readily solved with relatively small computational effort.
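The two-step idea above (a nominal solution, then small variational corrections from a linearized problem) can be sketched on a single parameter, using Newton's method as the linearized solve; the function here is purely illustrative, not the calibration model:

```python
def solve_variational(f, df, x_nominal, iters=6):
    """Second step of a two-step scheme: starting from a nominal solution,
    repeatedly linearize f about the current estimate and solve for a small
    correction dx (one-parameter Newton iteration)."""
    x = x_nominal
    for _ in range(iters):
        dx = -f(x) / df(x)  # correction from the linearized (variational) problem
        x += dx
    return x

# Illustrative residual f(x) = x^2 - 2, nominal guess 1.5 (close to the root)
root = solve_variational(lambda x: x * x - 2.0, lambda x: 2.0 * x, x_nominal=1.5)
print(round(root, 6))  # 1.414214
```

The good nominal guess is what keeps the corrections "small compared to unity" and the iteration cheap.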
Time series analysis of Mexico City subsidence constrained by radar interferometry
NASA Astrophysics Data System (ADS)
López-Quiroz, Penélope; Doin, Marie-Pierre; Tupin, Florence; Briole, Pierre; Nicolas, Jean-Marie
2009-09-01
In Mexico City, subsidence rates reach up to 40 cm/yr, mainly due to soil compaction driven by the overexploitation of the Mexico Basin aquifer. In this paper, we map the spatial and temporal patterns of the Mexico City subsidence by differential radar interferometry, using 38 ENVISAT images acquired between the end of 2002 and the beginning of 2007. We present the severe interferogram unwrapping problems, partly due to coherence loss but mostly due to the high fringe rates. These difficulties are overcome by designing a new methodology that helps the unwrapping step. Our approach is based on the fact that the deformation shape is stable for similar time intervals during the studied period. As a result, a stack of the five best interferograms can be used to compute an average deformation rate for a fixed time interval. Before unwrapping, the number of fringes in wrapped interferograms is then decreased using a scaled version of the stack together with an estimate of the atmospheric phase contribution related to the vertical stratification of the troposphere. The residual phase, containing fewer fringes, is more easily unwrapped than the original interferogram. The unwrapping procedure is applied in three iterative steps. The 71 small-baseline unwrapped interferograms are inverted to obtain increments of radar propagation delay between the 38 acquisition dates. Based on the redundancy of the interferometric database, we quantify the unwrapping errors and show that they are strongly decreased by iterations in the unwrapping process. A map of the RMS interferometric misclosure allows the unwrapping reliability to be defined for each pixel. Finally, we present a new algorithm for time series analysis that differs from classical SVD decomposition and is better suited to the present database. Accurate deformation time series are then derived over the metropolitan area of the city with a spatial resolution of 30 × 30 m.
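The inversion of a redundant small-baseline interferogram network for delay increments between acquisition dates can be illustrated with a plain least-squares solve; this is only a generic SBAS-style sketch on a toy network, not the paper's adapted algorithm (which the authors note differs from classical SVD decomposition):

```python
import numpy as np

def invert_increments(ifg_pairs, ifg_phase, n_dates):
    """Least-squares inversion of an interferogram network for the phase-delay
    increments between consecutive acquisition dates. Each interferogram
    spanning dates (i, j) is the sum of the increments i..j-1, so the design
    matrix has ones over that span and zeros elsewhere."""
    A = np.zeros((len(ifg_pairs), n_dates - 1))
    for k, (i, j) in enumerate(ifg_pairs):
        A[k, i:j] = 1.0
    increments, *_ = np.linalg.lstsq(A, np.asarray(ifg_phase), rcond=None)
    return increments

# Toy redundant network over 3 dates; true increments are 2.0 and 3.0 rad
pairs = [(0, 1), (1, 2), (0, 2)]
phases = [2.0, 3.0, 5.0]
print(invert_increments(pairs, phases, n_dates=3))  # ≈ [2., 3.]
```

The redundancy (here, the (0, 2) interferogram) is what lets misclosure residuals flag unwrapping errors, as done in the paper's RMS misclosure map.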
An adaptive grid algorithm for one-dimensional nonlinear equations
NASA Technical Reports Server (NTRS)
Gutierrez, William E.; Hills, Richard G.
1990-01-01
Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements in solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems is studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low- and high-frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusion and convection-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and less computation time than required by the tridiagonal method. The performance of the adaptive grid method tends to degrade as the solution process proceeds in time, but it still remains faster than the tridiagonal scheme.
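Picard iteration, as used above, lags the nonlinear coefficient at the previous iterate so that each sweep solves a linear problem; a scalar sketch of the idea on a toy algebraic model (not Burgers' or Richards' equation):

```python
def picard_solve(a, b, u0=0.0, tol=1e-12, max_iter=200):
    """Picard iteration for the nonlinear equation a(u) * u = b: freeze the
    nonlinear coefficient a(u) at the previous iterate, so each pass reduces
    to a linear solve u_new = b / a(u_old); repeat until the update is tiny."""
    u = u0
    for _ in range(max_iter):
        u_new = b / a(u)  # linear solve with frozen coefficient
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u

# Toy model problem: (1 + u) * u = 2 has the root u = 1
print(round(picard_solve(lambda u: 1.0 + u, 2.0), 6))  # 1.0
```

In the PDE setting the same device turns each Picard sweep into a linear system, e.g. one solvable by the tridiagonal or adaptive-grid machinery described above.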