A table of intensity increments.
DOT National Transportation Integrated Search
1966-01-01
Small intensity increments can be produced by adding larger intensity increments. A table is presented covering the range of small intensity increments from 0.008682 through 6.020 dB in 60 large intensity increments of 1 dB.
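The quoted endpoints are consistent with one standard construction: adding to a reference signal a second, in-phase signal n dB below it raises the level by 20*log10(1 + 10^(-n/20)) dB. A minimal sketch of that reconstruction (the record does not state the paper's actual derivation, so treat the formula as an assumption):

```python
import math

def small_increment_db(n_db: float) -> float:
    """Level increase (dB) when a second in-phase signal n_db below a
    reference is added to it (amplitude addition)."""
    return 20 * math.log10(1 + 10 ** (-n_db / 20))

# The table's quoted endpoints:
print(f"{small_increment_db(60):.6f} dB")  # 0.008682 dB
print(f"{small_increment_db(0):.3f} dB")   # 6.021 dB (the ~6.020 dB endpoint)
```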
The Improvement Cycle: Analyzing Our Experience
NASA Technical Reports Server (NTRS)
Pajerski, Rose; Waligora, Sharon
1996-01-01
NASA's Software Engineering Laboratory (SEL), one of the earliest pioneers in software process improvement and measurement, has had a significant impact on the software business at NASA Goddard. At the heart of the SEL's improvement program are a belief that software products can be improved by optimizing the software engineering process used to develop them, and a long-term improvement strategy under which small incremental improvements accumulate into significant gains. As a result of its efforts, the SEL has incrementally reduced development costs by 60%, decreased error rates by 85%, and reduced cycle time by 25%. In this paper, we analyze the SEL's experiences on three major improvement initiatives to better understand the cyclic nature of the improvement process and why some improvements take much longer than others.
Improve Math Teaching with Incremental Improvements
ERIC Educational Resources Information Center
Star, Jon R.
2016-01-01
Past educational reforms have failed because they did not meet teachers where they were: they expected major changes in practice that may have been unrealistic for many teachers even under ideal professional learning conditions. Instead of promoting broad-scale changes, reforms may be more likely to succeed if they are composed of small yet powerful…
History Matters: Incremental Ontology Reasoning Using Modules
NASA Astrophysics Data System (ADS)
Cuenca Grau, Bernardo; Halaschek-Wiener, Christian; Kazakov, Yevgeny
The development of ontologies involves continuous but relatively small modifications. Existing ontology reasoners, however, do not take advantage of the similarities between different versions of an ontology. In this paper, we propose a technique for incremental reasoning—that is, reasoning that reuses information obtained from previous versions of an ontology—based on the notion of a module. Our technique does not depend on a particular reasoning calculus and thus can be used in combination with any reasoner. We have applied our results to incremental classification of OWL DL ontologies and found significant improvement over regular classification time on a set of real-world ontologies.
Systems Engineering and Integration (SE and I)
NASA Technical Reports Server (NTRS)
Chevers, ED; Haley, Sam
1990-01-01
The issue of technology advancement and future space transportation vehicles is addressed. The challenge is to develop systems that can be evolved and improved in small incremental steps, where each increment reduces present cost, improves reliability, or does neither but sets the stage for a second incremental upgrade that does. Future requirements include: interface standards for commercial off-the-shelf products to aid in the development of integrated facilities; an enhanced automated code generation system tightly coupled to specification and design documentation; modeling tools that support data flow analysis; and shared project databases consisting of technical characteristics, cost information, measurement parameters, and reusable software programs. Topics addressed include: advanced avionics development strategy; risk analysis and management; tool quality management; low-cost avionics; cost estimation and benefits; computer-aided software engineering; computer systems and software safety; system testability; advanced avionics laboratories; and rapid prototyping. This presentation is represented by viewgraphs only.
Mortelliti, Caroline L; Mortelliti, Anthony J
2016-08-01
To elucidate the relatively large incremental percent change (IPC) in cross-sectional area (CSA) between currently available small endotracheal tubes (ETTs), and to make recommendations for smaller incremental changes in CSA in these smaller ETTs, in order to minimize iatrogenic airway injury. The CSAs of a commercially available line of ETTs were calculated, and the IPC of the CSA between consecutive ETT sizes was calculated and graphed. The average IPC in CSA among large ETTs was applied to calculate an identical IPC in CSA for a theoretical series of smaller ETTs, and the dimensions of a new proposed series of small ETTs were defined. The IPC of CSA in the larger (5.0-8.0 mm inner diameter (ID)) ETTs was 17.07%, while the IPC of CSA in the smaller ETTs (2.0-4.0 mm ID) was remarkably larger (38.08%). Applying the relatively smaller IPC of CSA from larger ETTs to a theoretical sequence of small ETTs, starting with the 2.5 mm ID ETT, suggests that intermediate sizes of small ETTs (ID 2.745 mm, 3.254 mm, and 3.859 mm) should exist. We recommend that manufacturers produce additional small ETT size options at the intuitive intermediate sizes of 2.75 mm, 3.25 mm, and 3.75 mm ID in order to improve airway management for infants and small children. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
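The abstract's geometry reduces to two formulas: CSA = π(ID/2)² and the percent change in CSA between consecutive sizes. A minimal sketch (the 0.5 mm size progression is the standard one assumed from the abstract; the paper's exact 17.07% and 38.08% figures may rest on details not given here):

```python
import math

def csa(id_mm: float) -> float:
    """Cross-sectional area (mm^2) of a tube with inner diameter id_mm."""
    return math.pi * (id_mm / 2) ** 2

def ipc(id_from: float, id_to: float) -> float:
    """Incremental percent change in CSA between consecutive sizes."""
    return 100 * (csa(id_to) - csa(id_from)) / csa(id_from)

large = [5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0]  # larger ETT IDs, mm
mean_ipc = sum(ipc(a, b) for a, b in zip(large, large[1:])) / (len(large) - 1)
print(f"mean IPC for 5.0-8.0 mm ID: {mean_ipc:.1f}%")  # ~17%
```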
Image Fluctuations in LED Electromechanical 3D-Display
NASA Astrophysics Data System (ADS)
Klyuev, Alexey V.; Yakimov, Arkady V.
Fluctuations in parameters of light-emitting diode (LED) electromechanical 3D-display are investigated. It is shown, that there are two types of fluctuations in the rotating 3D-display. The first one is caused by a small increment in the rotation angle, which has a tendency to the increase. That occurs in the form of the “drift” without periodic changes of the angle. The second one is the change in small linear increments of the angle, which occurs as undamped harmonic oscillations with constant amplitude. This shows the stability of the investigated steady state because there is no tendency to increase the amplitude of the considered parameter regime. In conclusion we give some recommendations how to improve synchronization of the system.
Bradbury, Penelope A; Tu, Dongsheng; Seymour, Lesley; Isogai, Pierre K; Zhu, Liting; Ng, Raymond; Mittmann, Nicole; Tsao, Ming-Sound; Evans, William K; Shepherd, Frances A; Leighl, Natasha B
2010-03-03
The NCIC Clinical Trials Group conducted the BR.21 trial, a randomized placebo-controlled trial of erlotinib (an epidermal growth factor receptor tyrosine kinase inhibitor) in patients with previously treated advanced non-small cell lung cancer. This trial accrued patients between August 14, 2001, and January 31, 2003, and found that overall survival and quality of life were better in the erlotinib arm than in the placebo arm. However, funding restrictions limit access to erlotinib in many countries. We undertook an economic analysis of erlotinib treatment in this trial and explored different molecular and clinical predictors of outcome to determine the cost-effectiveness of treating various populations with erlotinib. Resource utilization was determined from individual patient data in the BR.21 trial database. The trial recruited 731 patients (488 in the erlotinib arm and 243 in the placebo arm). Costs arising from erlotinib treatment, diagnostic tests, outpatient visits, acute hospitalization, adverse events, lung cancer-related concomitant medications, transfusions, and radiation therapy were captured. The incremental cost-effectiveness ratio was calculated as the ratio of incremental cost (in 2007 Canadian dollars) to incremental effectiveness (life-years gained). In exploratory analyses, we evaluated the benefits of treatment in selected subgroups to determine the impact on the incremental cost-effectiveness ratio. The incremental cost-effectiveness ratio for erlotinib treatment in the BR.21 trial population was $94,638 per life-year gained (95% confidence interval = $52,359 to $429,148). The major drivers of cost-effectiveness included the magnitude of survival benefit and erlotinib cost. Subgroup analyses revealed that erlotinib may be more cost-effective in never-smokers or patients with high EGFR gene copy number.
With an incremental cost-effectiveness ratio of $94,638 per life-year gained, erlotinib treatment for patients with previously treated advanced non-small cell lung cancer is marginally cost-effective. The use of molecular predictors of benefit for targeted agents may help identify more or less cost-effective subgroups for treatment.
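The ratio reported above is defined simply as incremental cost divided by incremental effectiveness. A minimal sketch with hypothetical per-patient numbers (not taken from the BR.21 database):

```python
def icer(cost_tx: float, cost_ctrl: float, eff_tx: float, eff_ctrl: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effectiveness (here, dollars per life-year gained)."""
    return (cost_tx - cost_ctrl) / (eff_tx - eff_ctrl)

# Hypothetical numbers for illustration only:
print(icer(cost_tx=25_000, cost_ctrl=6_000, eff_tx=0.75, eff_ctrl=0.55))
# ≈ 95,000 dollars per life-year gained
```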
Cost-Effectiveness Analysis of Regorafenib for Metastatic Colorectal Cancer
Goldstein, Daniel A.; Ahmad, Bilal B.; Chen, Qiushi; Ayer, Turgay; Howard, David H.; Lipscomb, Joseph; El-Rayes, Bassel F.; Flowers, Christopher R.
2015-01-01
Purpose: Regorafenib is a standard-care option for treatment-refractory metastatic colorectal cancer that increases median overall survival by 6 weeks compared with placebo. Given this small incremental clinical benefit, we evaluated the cost-effectiveness of regorafenib in the third-line setting for patients with metastatic colorectal cancer from the US payer perspective. Methods: We developed a Markov model to compare the cost and effectiveness of regorafenib with those of placebo in the third-line treatment of metastatic colorectal cancer. Health outcomes were measured in life-years and quality-adjusted life-years (QALYs). Drug costs were based on Medicare reimbursement rates in 2014. Model robustness was addressed in univariable and probabilistic sensitivity analyses. Results: Regorafenib provided an additional 0.04 QALYs (0.13 life-years) at a cost of $40,000, resulting in an incremental cost-effectiveness ratio of $900,000 per QALY. The incremental cost-effectiveness ratio for regorafenib was > $550,000 per QALY in all of our univariable and probabilistic sensitivity analyses. Conclusion: Regorafenib provides minimal incremental benefit at high incremental cost per QALY in the third-line management of metastatic colorectal cancer. The cost-effectiveness of regorafenib could be improved by the use of value-based pricing. PMID:26304904
The MAL: A Malware Analysis Lexicon
2013-02-01
…we feel that further exploration of the open-source literature is a promising avenue for enlarging the corpus. 2.3 Publishing the MAL. Early in the… MAL. We feel that the advantages of this format are well worth the small incremental cost. The distribution of the MAL in this format is under… dictionary. We feel that moving to a richer format such as WordNet or WordVis would greatly improve the usability of the lexicon. 3.5 Improved Hosting. The…
Mandavia, Amar D; Bonanno, George A
2018-04-29
To determine whether there were incremental mental health impacts, specifically on depression trajectories, as a result of the 2008 economic crisis (the Great Recession) and subsequent Hurricane Sandy. Using latent growth mixture modeling and the ORANJ BOWL dataset, we examined prospective trajectories of depression among older adults (mean age, 60.67; SD, 6.86) who were exposed to the 2 events. We also collected community economic and criminal justice data to examine their impact upon depression trajectories. Participants (N=1172) were assessed at 3 time points for affect, successful aging, and symptoms of depression. We additionally assessed posttraumatic stress disorder (PTSD) symptomology after Hurricane Sandy. We identified 3 prospective trajectories of depression. The majority (83.6%) had no significant change in depression from before to after these events (resilience), while 7.2% of the sample increased in depression incrementally after each event (incremental depression). A third group (9.2%) went from high to low depression symptomology following the 2 events (depressive-improving). Only those in the incremental depression group had significant PTSD symptoms following Hurricane Sandy. We identified a small group of individuals for whom the experience of multiple stressful events had an incremental negative effect on mental health outcomes. These results highlight the importance of understanding the perseveration of depression symptomology from one event to another. (Disaster Med Public Health Preparedness. 2018;page 1 of 10).
Springback effects during single point incremental forming: Optimization of the tool path
NASA Astrophysics Data System (ADS)
Giraud-Moreau, Laurence; Belchior, Jérémy; Lafon, Pascal; Lotoing, Lionel; Cherouat, Abel; Courtielle, Eric; Guines, Dominique; Maurine, Patrick
2018-05-01
Incremental sheet forming is an emerging process for manufacturing sheet metal parts. It is more flexible than conventional processes and well suited to small-batch production or prototyping. During the process, the sheet metal blank is clamped by a blank-holder while a small smooth-ended hemispherical tool moves along a user-specified path to deform the sheet incrementally. Classical three-axis CNC milling machines, dedicated structures, or serial robots can be used to perform the forming operation. Whatever machine is used, large deviations between the theoretical shape and the real shape can be observed after the part is unclamped. These deviations are due both to the lack of stiffness of the machine and to residual stresses in the part at the end of the forming stage. In this paper, an optimization strategy for the tool path is proposed in order to minimize the elastic springback induced by residual stresses after unclamping. A finite element model of the SPIF process that predicts the shape of the formed part with good accuracy is defined. This model, based on appropriate assumptions, leads to calculation times that remain compatible with an optimization procedure. The proposed optimization method is based on an iterative correction of the tool path. The efficiency of the method is shown by an improvement of the final shape.
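The abstract describes the optimization only at a high level; one plausible shape for an iterative tool-path correction loop is sketched below, with simulate() standing in for the finite element springback prediction (all names and the convergence scheme are assumptions, not the paper's algorithm):

```python
def correct_tool_path(target, simulate, gain=1.0, tol=0.05, max_iter=10):
    """Iteratively offset the tool path against the predicted springback
    until the simulated unclamped shape matches the target geometry."""
    path = list(target)                      # start from the nominal geometry
    for _ in range(max_iter):
        shape = simulate(path)               # FE prediction after unclamping
        errors = [t - s for t, s in zip(target, shape)]
        if max(abs(e) for e in errors) < tol:
            break
        # push the path in the direction opposite to the deviation
        path = [p + gain * e for p, e in zip(path, errors)]
    return path

# Toy check: a "process" whose parts spring back by 10% everywhere
springback = lambda path: [0.9 * p for p in path]
corrected = correct_tool_path([10.0, 20.0], springback)
```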
Incremental Improvement of Career Education in Utah. Final Report.
ERIC Educational Resources Information Center
Utah State Board of Education, Salt Lake City.
This is a project report on Utah's plans to effect "incremental improvements" in career education implementation in seven school districts. Project objectives are formulated as follows: effect incremental improvements in attendance area zones, strengthen career education leadership capabilities, develop staff competence to diffuse the…
Incremental soil sampling, root water uptake, or be great through others
USDA-ARS?s Scientific Manuscript database
Ray Allmaras pursued several research topics in relation to residue and tillage research. He looked for new tools to help explain soil responses to tillage, including disk permeameters and image analysis. The incremental sampler developed by Pikul and Allmaras allowed small-depth increment, volumetr...
Thermal modeling of cogging process using finite element method
NASA Astrophysics Data System (ADS)
Khaled, Mahmoud; Ramadan, Mohamad; Fourment, Lionel
2016-10-01
Among forging processes, incremental processes are those in which the workpiece undergoes several thermal and deformation steps with a small increment of deformation at each step. They offer high flexibility in terms of workpiece size, since they allow shaping a wide range of parts from small to large. Since thermal treatment is essential to obtain the required shape and quality, this paper presents the thermal modeling of incremental processes. The finite element discretization, spatial and temporal, is presented. Simulation is performed using the commercial software Forge 3. Results show the thermal behavior at the beginning and at the end of the process.
21 CFR 874.1070 - Short increment sensitivity index (SISI) adapter.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Short increment sensitivity index (SISI) adapter. 874.1070 Section 874.1070 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... short periodic sound pulses in specific small decibel increments that are intended to be superimposed on...
21 CFR 874.1070 - Short increment sensitivity index (SISI) adapter.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Short increment sensitivity index (SISI) adapter. 874.1070 Section 874.1070 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... short periodic sound pulses in specific small decibel increments that are intended to be superimposed on...
Combining Accuracy and Efficiency: An Incremental Focal-Point Method Based on Pair Natural Orbitals.
Fiedler, Benjamin; Schmitz, Gunnar; Hättig, Christof; Friedrich, Joachim
2017-12-12
In this work, we present a new pair natural orbitals (PNO)-based incremental scheme to calculate CCSD(T) and CCSD(T0) reaction, interaction, and binding energies. We perform an extensive analysis, which shows small incremental errors similar to previous non-PNO calculations. Furthermore, slight PNO errors are obtained by using T_PNO = T_TNO with appropriate values of 10^-7 to 10^-8 for reactions and 10^-8 for interaction or binding energies. The combination with the efficient MP2 focal-point approach yields chemical accuracy relative to the complete basis-set (CBS) limit. In this method, small basis sets (cc-pVDZ, def2-TZVP) for the CCSD(T) part are sufficient in the case of reactions or interactions, while somewhat larger ones (e.g., (aug)-cc-pVTZ) are necessary for molecular clusters. For these larger basis sets, we show the very high efficiency of our scheme. We obtain not only tremendous decreases in wall times (i.e., factors >10^2) due to the parallelization of the increment calculations, and in total times due to the application of PNOs (i.e., compared to the normal incremental scheme), but also smaller total times with respect to the standard PNO method. In this way, our new method combines excellent accuracy with very high efficiency and gives access to larger systems by separating the full computation into several small increments.
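The incremental scheme named above expands a total energy into one-body domain increments plus low-order corrections. A toy illustration of that bookkeeping (E() is a stand-in model, not an actual CCSD(T) call, and the domain names are invented):

```python
from itertools import combinations

def E(domains):
    """Toy 'energy': each domain contributes -1.0, each pair a small coupling."""
    return -1.0 * len(domains) + sum(-0.01 for _ in combinations(domains, 2))

domains = ["A", "B", "C", "D"]
# one-body increments
eps1 = {d: E([d]) for d in domains}
# two-body corrections: pair energy minus the one-body contributions
eps2 = {p: E(list(p)) - eps1[p[0]] - eps1[p[1]] for p in combinations(domains, 2)}

total_2nd_order = sum(eps1.values()) + sum(eps2.values())
print(total_2nd_order, E(domains))  # agree to machine precision: the toy model is pairwise-additive
```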
Motorized control for mirror mount apparatus
Cutburth, Ronald W.
1989-01-01
A motorized control and automatic braking system for adjusting mirror mount apparatus is disclosed. The motor control includes a planetary gear arrangement to provide improved pitch adjustment capability while permitting a small packaged design. The motor control for mirror mount adjustment is suitable for laser beam propagation applications. The brake is a system of constant contact, floating detents which engage the planetary gear at selected between-teeth increments to stop rotation instantaneously when the drive motor stops.
NASA Astrophysics Data System (ADS)
Kuhn, Matthew R.; Daouadji, Ali
2018-05-01
The paper addresses a common assumption of elastoplastic modeling: that the recoverable, elastic strain increment is unaffected by alterations of the elastic moduli that accompany loading. This assumption is found to be false for a granular material, and discrete element method (DEM) simulations demonstrate that granular materials are coupled materials at both micro- and macro-scales. Elasto-plastic coupling at the macro-scale is placed in the context of the thermomechanical framework of Tomasz Hueckel and Hans Ziegler, in which the elastic moduli are altered by irreversible processes during loading. This complex behavior is explored for multi-directional loading probes that follow an initial monotonic loading. An advanced DEM model is used in the study, with non-convex non-spherical particles and two different contact models: a conventional linear-frictional model and an exact implementation of the Hertz-like Cattaneo-Mindlin model. Orthotropic true-triaxial probes were used in the study (i.e., no direct shear strain), with tiny strain increments of 2×10^-6. At the micro-scale, contact movements were monitored during small increments of loading and load-reversal, and the results show that these movements are not reversed by a reversal of strain direction; some contacts that were sliding during a loading increment continue to slide during reversal. The probes show that the coupled part of a strain increment, the difference between the recoverable (elastic) increment and its reversible part, must be considered when partitioning strain increments into elastic and plastic parts. Small increments of irreversible (and plastic) strain, contact slipping, and frictional dissipation occur for all directions of loading, and an elastic domain, if it exists at all, is smaller than the strain increment used in the simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spear, Ashley D.; Hochhalter, Jacob D.; Cerrone, Albert R.; ...
2016-04-27
In an effort to reproduce computationally the observed evolution of microstructurally small fatigue cracks (MSFCs), a method is presented for generating conformal, finite-element (FE) volume meshes from 3D measurements of MSFC propagation. The resulting volume meshes contain traction-free surfaces that conform to incrementally measured 3D crack shapes. Grain morphologies measured using near-field high-energy X-ray diffraction microscopy are also represented within the FE volume meshes. Proof-of-concept simulations are performed to demonstrate the utility of the mesh-generation method. These simulations employ a crystal-plasticity constitutive model and are performed using the conformal FE meshes corresponding to successive crack-growth increments. Although the simulations for each crack increment are currently independent of one another, they need not be, and transfer of material-state information among successive crack-increment meshes is discussed. The mesh-generation method was developed using post-mortem measurements, yet it is general enough that it can be applied to in-situ measurements of 3D MSFC propagation.
Incremental classification learning for anomaly detection in medical images
NASA Astrophysics Data System (ADS)
Giritharan, Balathasan; Yuan, Xiaohui; Liu, Jianguo
2009-02-01
Computer-aided diagnosis usually screens thousands of instances to find only a few positive cases that indicate the probable presence of disease. The amount of patient data increases continually. In the diagnosis of new instances, disagreement occurs between a CAD system and physicians, which suggests inaccurate classifiers. Intuitively, misclassified instances and the previously acquired data should be used to retrain the classifier. This, however, is very time consuming and, in some cases where the dataset is too large, becomes infeasible. In addition, among the patient data, only a small percentage shows positive signs, a situation known as imbalanced data. We present an incremental Support Vector Machine (SVM) as a solution for the class imbalance problem in the classification of anomalies in medical images. The support vectors provide a concise representation of the distribution of the training data. Here we use bootstrapping to identify potential candidate support vectors for future iterations. Experiments were conducted using images from endoscopy videos, and the sensitivity and specificity were close to those of an SVM trained using all samples available at a given incremental step, with significantly improved efficiency in training the classifier.
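A sketch of the retraining idea described above, using support vectors as the compact carry-over set between data batches (synthetic 2-D data and scikit-learn's SVC stand in for the endoscopy images and the paper's exact procedure; the bootstrapping step is omitted):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_batch(n):
    """Synthetic imbalanced batch: the positive class is rare."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 1.5).astype(int)
    return X, y

# Initial model on the first batch.
X0, y0 = make_batch(500)
clf = SVC(kernel="linear", class_weight="balanced").fit(X0, y0)

# Keep only the support vectors as a concise summary of the data seen so far.
keep_X, keep_y = X0[clf.support_], y0[clf.support_]

# When a new batch arrives, retrain on support vectors + new data
# instead of all accumulated samples.
X1, y1 = make_batch(500)
clf = SVC(kernel="linear", class_weight="balanced").fit(
    np.vstack([keep_X, X1]), np.concatenate([keep_y, y1]))
```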
Two-dimensional scanner apparatus. [flaw detector in small flat plates
NASA Technical Reports Server (NTRS)
Kurtz, G. W.; Bankston, B. F. (Inventor)
1984-01-01
An X-Y scanner utilizes an eddy current or ultrasonic test probe to detect surface defects in small flat plates and the like. The apparatus includes a scanner which travels on a pair of slide tubes in the X-direction. The scanner, carried on a carriage which slides in the Y-direction, is driven by a helix shaft with a closed-loop helix groove in which a follower pin carried by the scanner rides. The carriage is moved incrementally in the Y-direction upon the completion of travel of the scanner back and forth in the X-direction by means of an indexing actuator and an indexing gear. The actuator is in the form of a ratchet which engages the ratchet gear upon return of the scanner to the indexing position. The indexing gear is rotated a predetermined increment along a rack gear to move the carriage incrementally in the Y-direction. Thus, simplified, highly responsive mechanical motion may be had in a small lightweight portable unit for accurate scanning of small areas.
NASA Astrophysics Data System (ADS)
Alam, Md Jahangir; Goodall, Jonathan L.
2012-04-01
The goal of this research was to quantify the relative impact of hydrologic and nitrogen source changes on incremental nitrogen yield in the contiguous United States. Using nitrogen source estimates from various federal databases, remotely sensed land use data from the National Land Cover Data program, and observed instream loadings from the United States Geological Survey National Stream Quality Accounting Network program, we calibrated and applied the spatially referenced regression model SPARROW to estimate incremental nitrogen yield for the contiguous United States. We ran different model scenarios to separate the effects of changes in source contributions from hydrologic changes for the years 1992 and 2001, assuming that only state conditions changed and that model coefficients describing the stream water-quality response to changes in state conditions remained constant between 1992 and 2001. Model results show a decrease of 8.2% in the median incremental nitrogen yield over the period of analysis, with the vast majority of this decrease due to changes in hydrologic conditions rather than decreases in nitrogen sources. For example, when we changed the 1992 version of the model to have nitrogen source data from 2001, the model results showed only a small increase in median incremental nitrogen yield (0.12%). However, when we changed the 1992 version of the model to have hydrologic conditions from 2001, model results showed a decrease of approximately 8.7% in median incremental nitrogen yield. We did, however, find notable differences in incremental yield estimates for different sources of nitrogen after controlling for hydrologic changes, particularly for population-related sources. For example, the median incremental yield for population-related sources increased by 8.4% after controlling for hydrologic changes. This is in contrast to a 2.8% decrease in population-related sources when hydrologic changes are included in the analysis.
Likewise, we found that the median incremental yield from urban watersheds increased by 6.8% after controlling for hydrologic changes; in contrast, the median incremental nitrogen yield from cropland watersheds decreased by 2.1% over the same time period. These results suggest that, after accounting for hydrologic changes, population-related sources became a more significant contributor of nitrogen yield to streams in the contiguous United States over the period of analysis. However, this study was not able to account for the influence of human management practices, such as improvements in wastewater treatment plants or Best Management Practices, that likely improved water quality, due to a lack of data for quantifying the impact of these practices in the study area.
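The scenario logic described above (update one set of inputs while holding the other at 1992 levels) can be sketched with a toy stand-in for the SPARROW model; the numbers and the yield_model() function are illustrative, not the SPARROW equations:

```python
import statistics

def yield_model(source, hydro):
    # toy response: yield scales with source load times hydrologic delivery
    return [s * h for s, h in zip(source, hydro)]

src_1992 = [1.0, 2.0, 4.0]; hyd_1992 = [1.0, 1.0, 1.0]
src_2001 = [1.1, 2.1, 4.0]; hyd_2001 = [0.9, 0.9, 0.9]

base = statistics.median(yield_model(src_1992, hyd_1992))
only_sources = statistics.median(yield_model(src_2001, hyd_1992))  # swap sources only
only_hydro = statistics.median(yield_model(src_1992, hyd_2001))    # swap hydrology only

print((only_sources - base) / base * 100)  # effect of source changes alone, %
print((only_hydro - base) / base * 100)    # effect of hydrologic changes alone, %
```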
40 CFR 60.1605 - What if I do not meet an increment of progress?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Times for Small Municipal Waste Combustion Units Constructed on or Before August 30, 1999 Model Rule... increment of progress, you must submit a notification to the Administrator postmarked within 10 business...
Rod-cone interaction in light adaptation
Latch, M.; Lennie, P.
1977-01-01
1. The increment-threshold for a small test spot in the peripheral visual field was measured against backgrounds that were red or blue. 2. When the background was a large uniform field, threshold over most of the scotopic range depended exactly upon the background's effect upon rods. This confirms Flamant & Stiles (1948). But when the background was small, threshold was elevated more by a long wave-length than a short wave-length background equated for its effect on rods. 3. The influence of cones was explored in a further experiment. The scotopic increment-threshold was established for a short wave-length test spot on a large, short wave-length background. Then a steady red circular patch, conspicuous to cones, but below the increment-threshold for rod vision, was added to the background. When it was small, but not when it was large, this patch substantially raised the threshold for the test. 4. When a similar experiment was made using, instead of a red patch, a short wave-length one that was conspicuous in rod vision, threshold varied similarly with patch size. These results support the notion that the influence of small backgrounds arises in some size-selective mechanism that is indifferent to the receptor system in which visual signals originate. Two corollaries of this hypothesis were tested in further experiments. 5. A small patch was chosen so as to lift scotopic threshold substantially above its level on a uniform field. This threshold elevation persisted for minutes after extinction of the patch, but only when the patch was small. A large patch made bright enough to elevate threshold by as much as the small one gave rise to no corresponding after-effect. 6. Increment-thresholds for a small red test spot, detected through cones, followed the same course whether a large uniform background was long- or short wave-length. 
When the background was small, threshold upon the short wave-length one began to rise for much lower levels of background illumination, suggesting the influence of rods. This was confirmed by repeating the experiment after a strong bleach when the cones, but not rods, had fully recovered their sensitivity. Increment-thresholds upon small backgrounds of long or short wave-lengths then followed the same course. PMID:894602
NASA Technical Reports Server (NTRS)
Quraishi, Naveed; Allen, Jim; Bushnell, Glenn; Fialho, Ian
2003-01-01
The purpose of ARIS-ICE is to improve, optimize, and then operationally test and document the performance of the ARIS system on the International Space Station. The ICE program required testing across 3 full increments (2 through 4). This paper is the operational report summarizing our accomplishments through the third and fourth increments of testing. The main objectives and results of the increment 2 testing are discussed in the Increment 2 Operational Report, which can be obtained from the ISS Payloads Office or from (http://iss-www.isc.nasa.gov/sslissapt/payofc/OZ3/ARIS.html). In summary, these were to ensure the smooth and successful activation of the system and to correct operational issues related to long-term testing. The follow-on increment 3 and 4 testing encompassed the majority of the on-orbit performance assessments and improvements made to the ARIS system. The intent here is to report the preliminary results of the increment 3 and 4 ARIS-ICE testing as well as the ARIS system improvements made for our users and customers.
NASA Technical Reports Server (NTRS)
Marr, W. A., Jr.
1972-01-01
The behavior of finite element models employing different constitutive relations to describe the stress-strain behavior of soils is investigated. Three models, which assume that small-strain theory is applicable, include a nondilatant, a dilatant, and a strain-hardening constitutive relation. Two models are formulated using large-strain theory and include a hyperbolic and a Tresca elastic, perfectly plastic constitutive relation. These finite element models are used to analyze retaining walls and footings. Methods of improving the finite element solutions are investigated. For nonlinear problems, better solutions can be obtained by using smaller load-increment sizes and more iterations per load increment than by increasing the number of elements. Suitable methods of treating tensile stresses and stresses which exceed the yield criteria are discussed.
Clausen, J L; Georgian, T; Gardner, K H; Douglas, T A
2018-01-01
This study compares conventional grab sampling to incremental sampling methodology (ISM) for characterizing metal contamination at a military small-arms range. Grab sample results had large variances, positively skewed non-normal distributions, extreme outliers, and poor agreement between duplicate samples, even when samples were co-located within tens of centimeters of each other. The extreme outliers strongly influenced the grab sample means for the primary contaminants lead (Pb) and antimony (Sb). In contrast, median and mean metal concentrations were similar for the ISM samples. ISM significantly reduced the measurement uncertainty of estimates of the mean, increasing data quality (e.g., for environmental risk assessments) with fewer samples (e.g., decreasing total project costs). Based on Monte Carlo resampling simulations, grab sampling resulted in highly variable means and upper confidence limits of the mean relative to ISM.
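The variance-reduction mechanism behind ISM can be illustrated with a toy Monte Carlo sketch. The lognormal "field" below is a made-up stand-in for positively skewed soil-metal data; none of the numbers come from the study.

```python
import random
import statistics

# Toy illustration of why compositing stabilizes estimates of the mean.
# All distributional parameters here are hypothetical.
random.seed(42)

def grab_sample():
    # one discrete sample from a skewed concentration distribution (mg/kg)
    return random.lognormvariate(5.0, 1.0)

def ism_sample(n_increments=30):
    # an ISM sample physically composites many increments, so its measured
    # value approximates the mean of those increments
    return statistics.mean(grab_sample() for _ in range(n_increments))

grabs = [grab_sample() for _ in range(1000)]
isms = [ism_sample() for _ in range(1000)]

# individual grab results scatter far more than ISM composite results
print(statistics.stdev(grabs) > 2 * statistics.stdev(isms))  # True
```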
A FMEA clinical laboratory case study: how to make problems and improvements measurable.
Capunzo, Mario; Cavallo, Pierpaolo; Boccia, Giovanni; Brunetti, Luigi; Pizzuti, Sante
2004-01-01
The authors experimented with the application of the Failure Mode and Effect Analysis (FMEA) technique in a clinical laboratory. The FMEA technique allows one to: a) evaluate and measure the hazards of a process malfunction, b) decide where to execute improvement actions, and c) measure the outcome of those actions. A small sample of analytes was studied: the causes of possible malfunctions of the analytical process were determined, and the risk probability index (RPI) was calculated, with a value between 1 and 1,000. Improvement actions were implemented only for cases with RPI > 400, and these allowed a reduction of RPI values of between 25% and 70% with a cost increment of < 1%. The FMEA technique can be applied to the processes of a clinical laboratory, even one of small dimensions, and offers a high potential for improvement. Nevertheless, such activity needs thorough planning because it is complex, even if the laboratory already operates an ISO 9000 Quality Management System.
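A minimal sketch of the RPI bookkeeping described above. The three-factor product and 1-10 scales follow common FMEA practice (elsewhere the index is called the risk priority number); the failure modes and ratings below are hypothetical, not taken from this laboratory.

```python
# Sketch of FMEA risk scoring; failure modes and ratings are invented.
def risk_probability_index(severity, occurrence, detectability):
    """Each factor is rated 1-10, so the RPI ranges from 1 to 1,000."""
    for factor in (severity, occurrence, detectability):
        if not 1 <= factor <= 10:
            raise ValueError("FMEA factors must be rated from 1 to 10")
    return severity * occurrence * detectability

failure_modes = {  # hypothetical laboratory failure modes and ratings
    "sample mislabelling": (9, 7, 8),
    "reagent degradation": (6, 3, 4),
}

# following the rule in the study, act only where RPI > 400
needs_action = {name: risk_probability_index(*ratings)
                for name, ratings in failure_modes.items()
                if risk_probability_index(*ratings) > 400}
print(needs_action)  # {'sample mislabelling': 504}
```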
‘Small Changes’ to Diet and Physical Activity Behaviors for Weight Management
Hills, Andrew P.; Byrne, Nuala M.; Lindstrom, Rachel; Hill, James O.
2013-01-01
Obesity is associated with numerous short- and long-term health consequences. Low levels of physical activity and poor dietary habits are consistent with an increased risk of obesity in an obesogenic environment. Relatively little research has investigated associations between eating and activity behaviors by using a systems biology approach and by considering the dynamics of the energy balance concept. A significant body of research indicates that a small positive energy balance over time is sufficient to cause weight gain in many individuals. In contrast, small changes in nutrition and physical activity behaviors can prevent weight gain. In the context of weight management, it may be more feasible for most people to make small compared to large short-term changes in diet and activity. This paper presents a case for the use of small and incremental changes in diet and physical activity for improved weight management in the context of a toxic obesogenic environment. PMID:23711772
Looking back and to the future: Are we improving 'cure' in non-small cell lung cancer?
Walder, David; O'Brien, Mary
2017-04-01
In surgical series, cancer-free survival at 5 years is often referred to as a cure. In recent years, attempts to improve cure rates in non-small cell lung cancer (NSCLC) have focussed on earlier diagnosis through cost-effective screening programs. Systemic therapies have historically added only a small benefit to overall survival in both the adjuvant and palliative setting. However, in the last two decades, the development of new treatment options has added incremental improvements in NSCLC survival rates. Patients with a targetable sensitising mutation including epidermal growth factor receptor gene mutations and anaplastic lymphoma kinase rearrangements have significantly better prognosis, and many will survive beyond 5 years. Immunotherapy is an effective treatment in selected patients with NSCLC and is set to cause another leap in 5 year survival rates. Although these patients are not free from disease, survival at 5 years may become the more important end-point as NSCLC becomes seen as a chronic oncological disease. Copyright © 2017 Elsevier Ltd. All rights reserved.
Small Diameter Bomb Increment II (SDB II)
2015-12-01
Selected Acquisition Report (SAR), RCS: DD-A&T(Q&A)823-439: Small Diameter Bomb Increment II (SDB II), as of FY 2017 President's Budget. Defense Acquisition Management Information Retrieval (DAMIR), March 23, 2016. Glossary fragments: OSD - Office of the Secretary of Defense; O&S - Operating and Support; PAUC - Program Acquisition Unit Cost.
2010-06-01
…Sampling (MIS)?
• Technique of combining many increments of soil from a number of points within exposure area
• Developed by Enviro Stat (trademarked) …
• Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous … into a set of decision (exposure) units
• One or several discrete or small-scale composite soil samples collected to represent each decision unit
The Patient Protection and Affordable Care Act and the regulation of the health insurance industry.
Jha, Saurabh; Baker, Tom
2012-12-01
The Patient Protection and Affordable Care Act is a comprehensive and multipronged reform of the US health care system. The legislation makes incremental changes to Medicare, Medicaid, and the market for employer-sponsored health insurance. However, it makes substantial changes to the market for individual and small-group health insurance. The purpose of this article is to introduce the key regulatory reforms in the market for individual and small-group health insurance and explain how these reforms tackle adverse selection and risk classification and improve access to health care for the hitherto uninsured or underinsured population. Copyright © 2012 American College of Radiology. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.
1993-01-01
A preliminary assessment of the impact of the ERS 1 scatterometer wind data on the current European Centre for Medium-Range Weather Forecasts analysis and forecast system has been carried out. Although the scatterometer data results in changes to the analyses and forecasts, there is no consistent improvement or degradation. Our results are based on comparing analyses and forecasts from assimilation cycles. The two sets of analyses are very similar except for the low level wind fields over the ocean. Impacts on the analyzed wind fields are greater over the southern ocean, where other data are scarce. For the most part the mass field increments are too small to balance the wind increments. The effect of the nonlinear normal mode initialization on the analysis differences is quite small, but we observe that the differences tend to wash out in the subsequent 6-hour forecast. In the Northern Hemisphere, analysis differences are very small, except directly at the scatterometer locations. Forecast comparisons reveal large differences in the Southern Hemisphere after 72 hours. Notable differences in the Northern Hemisphere do not appear until late in the forecast. Overall, however, the Southern Hemisphere impacts are neutral. The experiments described are preliminary in several respects. We expect these data to ultimately prove useful for global data assimilation.
Wildi, Karin; Zellweger, Christa; Twerenbold, Raphael; Jaeger, Cedric; Reichlin, Tobias; Haaf, Philip; Faoro, Jonathan; Giménez, Maria Rubini; Fischer, Andreas; Nelles, Berit; Druey, Sophie; Krivoshei, Lian; Hillinger, Petra; Puelacher, Christian; Herrmann, Thomas; Campodarve, Isabel; Rentsch, Katharina; Steuer, Stephan; Osswald, Stefan; Mueller, Christian
2015-01-01
The incremental value of copeptin, a novel marker of endogenous stress, for the rapid rule-out of non-ST-elevation myocardial infarction (NSTEMI) is unclear when sensitive or even high-sensitivity cardiac troponin (hs-cTn) assays are used. In an international multicenter study we evaluated 1929 consecutive patients with symptoms suggestive of acute myocardial infarction (AMI). Measurements of copeptin and of three sensitive and three hs-cTn assays were performed at presentation in a blinded fashion. The final diagnosis was adjudicated by two independent cardiologists using all clinical information, including coronary angiography and levels of hs-cTnT. The incremental value in the diagnosis of NSTEMI was quantified using four outcome measures: area under the receiver-operating characteristic curve (AUC), integrated discrimination improvement (IDI), sensitivity, and negative predictive value (NPV). Early presenters (< 4 h since chest pain onset) were a pre-defined subgroup. NSTEMI was the adjudicated final diagnosis in 358 (18.6%) patients. As compared to the use of cTn alone, copeptin significantly increased the AUC for two of the six assays (33%) and the IDI (between 0.010 and 0.041; all p < 0.01), and increased sensitivity and NPV for all six cTn assays (100%); the NPV rose to 96-99% when the 99th percentile of the respective cTnI assay was combined with a copeptin level of 9 pmol/l (all p < 0.01). The incremental value in early presenters was similar to that in the overall cohort. When used for the rapid rule-out of NSTEMI in combination with sensitive or hs-cTnI assays, copeptin provides a numerically small, but statistically and likely also clinically significant incremental value. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Widera, Christian; Pencina, Michael J; Bobadilla, Maria; Reimann, Ines; Guba-Quint, Anja; Marquardt, Ivonne; Bethmann, Kerstin; Korf-Klingebiel, Mortimer; Kempf, Tibor; Lichtinghagen, Ralf; Katus, Hugo A; Giannitsis, Evangelos; Wollert, Kai C
2013-10-01
Guidelines recommend the use of validated risk scores and a high-sensitivity cardiac troponin assay for risk assessment in non-ST-elevation acute coronary syndrome (NSTE-ACS). The incremental prognostic value of biomarkers in this context is unknown. We calculated the Global Registry of Acute Coronary Events (GRACE) score and measured the circulating concentrations of high-sensitivity cardiac troponin T (hs-cTnT) and 8 selected cardiac biomarkers on admission in 1146 patients with NSTE-ACS. We used an hs-cTnT threshold at the 99th percentile of a reference population to define increased cardiac marker in the score. The magnitude of the increase in model performance when individual biomarkers were added to GRACE was assessed by the change (Δ) in the area under the receiver-operating characteristic curve (AUC), integrated discrimination improvement (IDI), and category-free net reclassification improvement [NRI(>0)]. Seventy-eight patients reached the combined end point of 6-month all-cause mortality or nonfatal myocardial infarction. The GRACE score alone had an AUC of 0.749. All biomarkers were associated with the risk of the combined end point and offered statistically significant improvement in model performance when added to GRACE (likelihood ratio test P ≤ 0.015). Growth differentiation factor 15 [ΔAUC 0.039, IDI 0.049, NRI(>0) 0.554] and N-terminal pro-B-type natriuretic peptide [ΔAUC 0.024, IDI 0.027, NRI(>0) 0.438] emerged as the 2 most promising biomarkers. Improvements in model performance upon addition of a second biomarker were small in magnitude. Biomarkers can add prognostic information to the GRACE score even in the current era of high-sensitivity cardiac troponin assays. The incremental information offered by individual biomarkers varies considerably, however.
Small Group Learning: Do Group Members' Implicit Theories of Ability Make a Difference?
ERIC Educational Resources Information Center
Beckmann, Nadin; Wood, Robert E.; Minbashian, Amirali; Tabernero, Carmen
2012-01-01
We examined the impact of members' implicit theories of ability on group learning and the mediating role of several group process variables, such as goal-setting, effort attributions, and efficacy beliefs. Comparisons were between 15 groups with a strong incremental view on ability (high incremental theory groups), and 15 groups with a weak…
Incremental social learning in particle swarms.
de Oca, Marco A Montes; Stutzle, Thomas; Van den Enden, Ken; Dorigo, Marco
2011-04-01
Incremental social learning (ISL) was proposed as a way to improve the scalability of systems composed of multiple learning agents. In this paper, we show that ISL can be very useful to improve the performance of population-based optimization algorithms. Our study focuses on two particle swarm optimization (PSO) algorithms: a) the incremental particle swarm optimizer (IPSO), which is a PSO algorithm with a growing population size in which the initial position of new particles is biased toward the best-so-far solution, and b) the incremental particle swarm optimizer with local search (IPSOLS), in which solutions are further improved through a local search procedure. We first derive analytically the probability density function induced by the proposed initialization rule applied to new particles. Then, we compare the performance of IPSO and IPSOLS on a set of benchmark functions with that of other PSO algorithms (with and without local search) and a random restart local search algorithm. Finally, we measure the benefits of using incremental social learning on PSO algorithms by running IPSO and IPSOLS on problems with different fitness distance correlations.
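The biased initialization used by IPSO can be sketched as follows: a new particle starts at a random position and is pulled part of the way toward the best-so-far solution. The pull factor below is an illustrative assumption, not the exact rule analyzed in the paper.

```python
import random

# Sketch of IPSO-style biased initialization of a newly added particle.
# The pull factor of 0.5 is a hypothetical choice for illustration.
random.seed(1)

def init_new_particle(best_so_far, bounds, pull=0.5):
    position = []
    for (lo, hi), best_d in zip(bounds, best_so_far):
        x = random.uniform(lo, hi)                  # unbiased random start
        x += pull * random.random() * (best_d - x)  # bias toward best-so-far
        position.append(x)
    return position

bounds = [(-10.0, 10.0)] * 3
best_so_far = [1.0, 2.0, -3.0]
particle = init_new_particle(best_so_far, bounds)

# the biased start lies between the random draw and the incumbent best,
# so it always stays within the search bounds
assert all(lo <= x <= hi for (lo, hi), x in zip(bounds, particle))
```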
The Value of Medical and Pharmaceutical Interventions for Reducing Obesity
Michaud, Pierre-Carl; Goldman, Dana; Lakdawalla, Darius; Zheng, Yuhui; Gailey, Adam H.
2012-01-01
This paper attempts to quantify the social, private, and public-finance values of reducing obesity through pharmaceutical and medical interventions. We find that the total social value of bariatric surgery is large for treated patients, with incremental social cost-effectiveness ratios typically under $10,000 per life-year saved. On the other hand, pharmaceutical interventions against obesity yield much less social value with incremental social cost-effectiveness ratios around $50,000. Our approach accounts for: competing risks to life expectancy; health care costs; and a variety of non-medical economic consequences (pensions, disability insurance, taxes, and earnings), which account for 20% of the total social cost of these treatments. On balance, bariatric surgery generates substantial private value for those treated, in the form of health and other economic consequences. The net public fiscal effects are modest, primarily because the size of the population eligible for treatment is small while the net social effect is large once improvements in life expectancy are taken into account. PMID:22705389
The value of medical and pharmaceutical interventions for reducing obesity.
Michaud, Pierre-Carl; Goldman, Dana P; Lakdawalla, Darius N; Zheng, Yuhui; Gailey, Adam H
2012-07-01
This paper attempts to quantify the social, private, and public-finance values of reducing obesity through pharmaceutical and medical interventions. We find that the total social value of bariatric surgery is large for treated patients, with incremental social cost-effectiveness ratios typically under $10,000 per life-year saved. On the other hand, pharmaceutical interventions against obesity yield much less social value with incremental social cost-effectiveness ratios around $50,000. Our approach accounts for: competing risks to life expectancy; health care costs; and a variety of non-medical economic consequences (pensions, disability insurance, taxes, and earnings), which account for 20% of the total social cost of these treatments. On balance, bariatric surgery generates substantial private value for those treated, in the form of health and other economic consequences. The net public fiscal effects are modest, primarily because the size of the population eligible for treatment is small. The net social effect is large once improvements in life expectancy are taken into account. Copyright © 2012 Elsevier B.V. All rights reserved.
A Study on Multi-Scale Background Error Covariances in 3D-Var Data Assimilation
NASA Astrophysics Data System (ADS)
Zhang, Xubin; Tan, Zhe-Min
2017-04-01
The construction of background error covariances is a key component of three-dimensional variational data assimilation. Background errors exist at different scales and interact with one another in numerical weather prediction, but the influence of these errors and their interactions cannot be represented in background error covariance statistics estimated by the leading methods. It is therefore necessary to construct background error covariances that account for multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. Information on errors at scales larger and smaller than the given ones is then introduced, using different nesting techniques, to estimate the corresponding covariances. Comparison of the three background error covariance statistics, each influenced by error information at different scales, reveals that the background error variances are enhanced, particularly at large scales and higher levels, when information on larger-scale errors is introduced through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances are reduced at medium scales at higher levels, while they improve slightly at lower levels in the nested domain, especially at medium and small scales, when information on smaller-scale errors is introduced by nesting a higher-resolution model. In addition, introducing information on larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of the background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) when information on larger- (smaller-) scale errors is included, whereas the geostrophic coupling in the free atmosphere weakens in both situations.
The three covariances obtained above are each used in a data assimilation and model forecast system, and analysis-forecast cycles are conducted over a period of one month. Comparison of the analyses and forecasts from this system shows that the variations in the analysis increments as information on different scale errors is introduced are consistent with the variations in the variances and correlations of the background errors. In particular, the introduction of smaller-scale errors leads to larger-amplitude analysis increments for winds at medium scales at the heights of both the high- and low-level jets. Under these circumstances, analysis increments for both temperature and humidity are also greater at the corresponding scales at middle and upper levels. These analysis increments improve the intensity of the jet-convection system, which comprises jets at different levels and the coupling between them associated with latent heat release, and these changes in the analyses contribute to better forecasts for winds and temperature in the corresponding areas. When smaller-scale errors are included, analysis increments for humidity are significantly enhanced at large scales at lower levels, moistening the southern analyses. This humidification helps correct the dry bias there and ultimately improves the forecast skill for humidity. Moreover, including larger- (smaller-) scale errors benefits the forecast quality of heavy (light) precipitation at large (small) scales, owing to the amplification (diminution) of the intensity and area of precipitation forecasts, but tends to overestimate (underestimate) light (heavy) precipitation.
Improving stroke outcome: the benefits of increasing availability of technology.
Heller, R. F.; Langhorne, P.; James, E.
2000-01-01
INTRODUCTION: A decision analysis was performed to explore the potential benefits of interventions to improve the outcome of patients admitted to hospital with a stroke, in the context of the technology available in different parts of the world. METHODS: The outcome of death or dependency was used with a six-month end-point. RESULTS: Four settings were identified that would depend on the resources available. The proportion of stroke patients who were dead or dependent at six months was 61.5% with no intervention at all. Setting 4, with the only intervention being the delayed introduction of aspirin, produced a 0.5% absolute improvement in outcome (death or dependency), and the addition of an organized stroke unit (Setting 3) produced the largest incremental improvement, of 2.7%. Extra interventions associated with non-urgent computed tomography and thus the ability to avoid anticoagulation or aspirin for those with a haemorrhagic stroke (Setting 2), and immediate computed tomography scanning to allow the use of thrombolytics in non-haemorrhagic stroke (Setting 1), produced only small incremental benefits of 0.4% in each case. DISCUSSION: To reduce the burden of illness due to stroke, efforts at primary prevention are essential and likely to have a greater impact than even the best interventions after the event. In the absence of good primary prevention, whatever is possible must be done to reduce the sequelae of stroke. This analysis provides a rational basis for beginning the development of clinical guidelines applicable to the economic setting of the patient. PMID:11143194
NASA Astrophysics Data System (ADS)
He, Jianbin; Yu, Simin; Cai, Jianping
2016-12-01
Lyapunov exponents are an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic or not. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, the eigenvalue method yields more accurate Lyapunov exponents as the number of iterations increases, and the limits exist. However, owing to the finite precision of computers and other causes, the results can overflow numerically, become unrecognizable, or be inaccurate, as follows: (1) the number of iterations cannot be too large, otherwise the simulation produces an error result of NaN or Inf; (2) even when NaN or Inf does not appear, as the iterations increase all Lyapunov exponents approach the largest Lyapunov exponent, which leads to inaccurate results; (3) from the viewpoint of numerical calculation, if the number of iterations is too small, the results are obviously also inaccurate. Based on this analysis of Lyapunov-exponent calculation in discrete-time systems, this paper investigates two improved algorithms, based on QR orthogonal decomposition and SVD orthogonal decomposition, to solve the above-mentioned problems. Finally, examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
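The QR-based approach can be sketched for a discrete-time map. This is the generic QR re-orthogonalization scheme applied to the classic Hénon map (a = 1.4, b = 0.3), not the paper's exact algorithm: re-factoring the tangent vectors at every step keeps the Jacobian products from overflowing or collapsing onto the dominant direction.

```python
import numpy as np

# Generic QR scheme for Lyapunov exponents of a discrete-time map,
# illustrated on the Henon map: (x, y) -> (1 - a*x^2 + y, b*x).
def henon_lyapunov(n_iter=10000, a=1.4, b=0.3):
    x, y = 0.1, 0.1
    Q = np.eye(2)
    log_sums = np.zeros(2)
    for _ in range(n_iter):
        J = np.array([[-2.0 * a * x, 1.0],   # Jacobian at the current point
                      [b, 0.0]])
        Q, R = np.linalg.qr(J @ Q)           # re-orthogonalize tangent vectors
        log_sums += np.log(np.abs(np.diag(R)))
        x, y = 1.0 - a * x * x + y, b * x
    return log_sums / n_iter

exponents = henon_lyapunov()
print(exponents[0] > 0)  # True: a positive largest exponent signals chaos
```

Because the Jacobian determinant of the Hénon map is constant (|det J| = b), the exponents must sum to log(b), which is a convenient self-check on the implementation.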
Taking the Next Step: Combining Incrementally Valid Indicators to Improve Recidivism Prediction
ERIC Educational Resources Information Center
Walters, Glenn D.
2011-01-01
The possibility of combining indicators to improve recidivism prediction was evaluated in a sample of released federal prisoners randomly divided into a derivation subsample (n = 550) and a cross-validation subsample (n = 551). Five incrementally valid indicators were selected from five domains: demographic (age), historical (prior convictions),…
Kongsaengdao, Subsai; Samintarapanya, Kanoksri; Rusmeechan, Siwarit; Sithinamsuwan, Pasiri; Tanprawate, Surat
2009-08-01
In this study we describe the electrophysiological findings in botulism patients with neuromuscular respiratory failure from major botulism outbreaks in Thailand. High-rate repetitive nerve stimulation testing (RNST) of the abductor digiti minimi (ADM) muscle of 17 botulism patients with neuromuscular respiratory failure showed mostly incremental responses, especially in response to >20-Hz stimulation. In the most severe stage of neuromuscular respiratory failure, RNST failed to elicit a compound muscle action potential (CMAP) of the ADM muscle. In the moderately severe stage, the initial CMAPs were of very low amplitude, and a 3-Hz RNST elicited incremental or decremental responses. A 10-Hz RNST elicited mainly decremental responses. In the early recovery stage, the initial CMAP amplitudes of the ADM muscle improved, with initially low amplitudes and an incremental response to 3- and 10-Hz RNSTs. Improved electrophysiological patterns of the ADM muscle correlated with improved respiratory muscle function. Incremental responses to 20-Hz RNST were most useful for diagnosis. The initial electrodiagnostic sign of recovery following treatment of neuromuscular respiratory failure was an increased CMAP amplitude and an incremental response to 10-20-Hz RNST. Muscle Nerve 40: 271-278, 2009.
Increment contracts: southern experience and potential use in the Appalachians
Gary W. Zinn; Gary W. Miller
1984-01-01
Increment contracts are long-term timber management contracts in which landowners receive regular payments based on the average annual growth of wood their land is capable of producing. Increment contracts have been used on nearly 500,000 acres of private forests in the South. Southern experience suggests that several changes in the contract would improve its utility:...
The voltage control for self-excited induction generator based on STATCOM
NASA Astrophysics Data System (ADS)
Yan, Dandan; Wang, Feifeng; Pan, Juntao; Long, Weijie
2018-05-01
A small independent induction generator can build up voltage from its remanent magnetism and excitation capacitance, but it is prone to voltage sag and harmonic increase when running under load. Therefore, a constant-voltage controller is designed in the natural coordinate system to regulate a static synchronous compensator (STATCOM), which provides two-way dynamic reactive power compensation for the power generation system to achieve voltage stability and harmonic suppression. The control strategy is verified in Matlab/Simulink, and the results show that the STATCOM under this controller can effectively improve the load capacity and reliability of the asynchronous generator.
Defense Acquisitions: Assessments of Selected Weapon Programs
2016-03-01
[Report excerpt: table-of-contents, acronym-list, and schedule fragments covering programs including Indirect Fire Protection Capability Increment 2-Intercept Block 1 (IFPC Inc 2-I Block 1), the Improved Turbine Engine Program (ITEP), the Joint Air-to-Ground Missile (JAGM), the Joint Light Tactical Vehicle (JLTV), the Joint Surveillance Target Attack Radar System Recapitalization (JSTARS Recap), and an Amphibious Ship Replacement.]
Defense Acquisitions: Assessments of Selected Weapon Programs
2017-03-01
[Report excerpt: table-of-contents and schedule fragments covering programs including the Patriot Advanced Capability-3 Missile Segment Enhancement (PAC-3 MSE), Warfighter Information Network-Tactical (WIN-T) Increment 2, the Improved Turbine Engine Program (ITEP), Long Range Precision Fires (LRPF), an Unmanned Air System, and the Joint Surveillance Target Attack Radar System Recapitalization.]
2012-01-01
…307–308) define kaizen as “continuous, incremental improvement of an activity to create more value with less muda.” They define muda as “any activity… approaches: kaizen events, Six Sigma, total quality management (TQM) for continuous improvement, kaikaku, process reengineering for discontinuous… them fix problems and develop capabilities. These efforts may include kaizen (i.e., continuous, incremental improvement) events, process mapping, work…
NASA Astrophysics Data System (ADS)
Zhan, Zongqian; Wang, Chendong; Wang, Xin; Liu, Yi
2018-01-01
On the basis of today's popular virtual reality and scientific visualization, three-dimensional (3-D) reconstruction is widely used in disaster relief, virtual shopping, reconstruction of cultural relics, etc. In the traditional incremental structure from motion (incremental SFM) method, the time cost of the matching is one of the main factors restricting the popularization of this method. To make the whole matching process more efficient, we propose a preprocessing method before the matching process: (1) we first construct a random k-d forest with the large-scale scale-invariant feature transform features in the images and combine this with the pHash method to obtain a value of relatedness, (2) we then construct a connected weighted graph based on the relatedness value, and (3) we finally obtain a planned sequence of adding images according to the principle of the minimum spanning tree. On this basis, we attempt to thin the minimum spanning tree to reduce the number of matchings and ensure that the images are well distributed. The experimental results show a great reduction in the number of matchings with enough object points, with only a small influence on the inner stability, which proves that this method can quickly and reliably improve the efficiency of the SFM method with unordered multiview images in complex scenes.
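Step (3) above, planning the matching sequence from the relatedness graph, can be sketched with plain Prim's algorithm. The relatedness values below are hypothetical stand-ins for the scores the k-d forest + pHash preprocessing would produce.

```python
# Plan the image-matching order along a minimum spanning tree of the
# relatedness graph (Prim's algorithm); relatedness scores are invented.
relatedness = {
    ("img0", "img1"): 0.9, ("img0", "img2"): 0.2,
    ("img1", "img2"): 0.8, ("img1", "img3"): 0.3,
    ("img2", "img3"): 0.7,
}
nodes = {n for pair in relatedness for n in pair}

def weight(a, b):
    # higher relatedness -> lower edge weight
    return 1.0 - relatedness.get((a, b), relatedness.get((b, a), 0.0))

def mst_match_plan(start="img0"):
    in_tree, plan = {start}, []
    while in_tree != nodes:
        a, b = min(((a, b) for a in in_tree for b in nodes - in_tree),
                   key=lambda edge: weight(*edge))
        plan.append((a, b))   # match the newly added image against image a
        in_tree.add(b)
    return plan

print(mst_match_plan())  # [('img0', 'img1'), ('img1', 'img2'), ('img2', 'img3')]
```

Each new image is matched only against its most related predecessor in the tree, which is how the planned sequence cuts the number of pairwise matchings.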
NASA Astrophysics Data System (ADS)
Wernicke, S.; Dang, T.; Gies, S.; Tekkaya, A. E.
2018-05-01
The trend toward a greater variety of products requires economical manufacturing processes suitable for the production of prototypes and small batches. For complex hollow-shaped parts, single point incremental forming (SPIF) is a highly flexible process, but its flexibility comes at the cost of very long process times. To decrease the process time, a new incremental forming approach with multiple forming tools is investigated. The influence of two incremental forming tools on the resulting mechanical and geometrical component properties, compared to SPIF, is presented. Sheets made of EN AW-1050A were formed into frustums of a pyramid using different tool-path strategies, and several variations of the tool-path strategy were analyzed. A time saving of between 40% and 60% was observed, depending on the tool path and the radii of the forming tools, while the mechanical properties remained unchanged. This knowledge can increase the cost efficiency of incremental forming processes.
Bozic, Kevin J; Pui, Christine M; Ludeman, Matthew J; Vail, Thomas P; Silverstein, Marc D
2010-09-01
Metal-on-metal hip resurfacing arthroplasty (MoM HRA) may offer potential advantages over total hip arthroplasty (THA) for certain patients with advanced osteoarthritis of the hip. However, the cost effectiveness of MoM HRA compared with THA is unclear. The purpose of this study was to compare the clinical effectiveness and cost-effectiveness of MoM HRA to THA. A Markov decision model was constructed to compare the quality-adjusted life-years (QALYs) and costs associated with HRA versus THA from the healthcare system perspective over a 30-year time horizon. We performed sensitivity analyses to evaluate the impact of patient characteristics, clinical outcome probabilities, quality of life and costs on the discounted incremental costs, incremental clinical effectiveness, and the incremental cost-effectiveness ratio (ICER) of HRA compared to THA. MoM HRA was associated with modest improvements in QALYs at a small incremental cost, and had an ICER less than $50,000 per QALY gained for men younger than 65 and for women younger than 55. MoM HRA and THA failure rates, device costs, and the difference in quality of life after conversion from HRA to THA compared to primary THA had the largest impact on costs and quality of life. MoM HRA could be clinically advantageous and cost-effective in younger men and women. Further research on the comparative effectiveness of MoM HRA versus THA should include assessments of the quality of life and resource use in addition to the clinical outcomes associated with both procedures. Level I, economic and decision analysis. See Guidelines for Authors for a complete description of levels of evidence.
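The incremental cost-effectiveness ratio (ICER) arithmetic used in such Markov-model comparisons is straightforward; the sketch below uses made-up cost and QALY totals, not figures from this study.

```python
# Illustrative ICER computation; all numbers are invented for the sketch.
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost per unit of incremental effectiveness (e.g. per QALY)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# hypothetical discounted 30-year totals per patient
hra_cost, hra_qaly = 31000.0, 13.20   # metal-on-metal hip resurfacing (MoM HRA)
tha_cost, tha_qaly = 29500.0, 13.15   # total hip arthroplasty (THA)

ratio = icer(hra_cost, hra_qaly, tha_cost, tha_qaly)
print(ratio < 50000)  # True: under a common willingness-to-pay threshold
```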
Yates, Brian T; Taub, Jennifer
2003-12-01
To the extent that assessment improves the effectiveness of treatment, prevention, or other services, it can be said to be effective. If an assessment is as effective as alternatives for improving treatment and less costly, it can be said to be cost-effective. If that improvement in the effectiveness of the service is monetary or monetizable, the assessment can be judged beneficial. And, if the sum of monetary and monetizable benefits of assessment exceeds the sum of the costs of treatment, the assessment can be said to be cost-beneficial. An overview of cost-related issues is followed by practical strategies that researchers and administrators can use to measure incremental costs, incremental effectiveness, and incremental benefits of adding psychological assessments to other psychological interventions.
Müller, Eike H.; Scheichl, Rob; Shardlow, Tony
2015-01-01
This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy. PMID:27547075
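The discrete-increment trick described in the abstract replaces the Gaussian increment ΔW ~ N(0, h) with a two-point random variable ±√h, which matches the first two moments and suffices for weak approximation. A minimal sketch on an Ornstein-Uhlenbeck (Langevin-type) equation; the model and parameters are illustrative assumptions, not the paper's test problems:

```python
import math
import random

def weak_euler_ou(x0, sigma, t_end, n_steps, rng):
    """Weak Euler-Maruyama for the OU/Langevin-type SDE dX = -X dt + sigma dW,
    with the Gaussian increment replaced by a two-point variable +/- sqrt(h).
    The two-point variable matches the first two moments of N(0, h), which is
    enough for weak (distributional) convergence and is cheaper to sample."""
    h = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        dw = math.sqrt(h) if rng.random() < 0.5 else -math.sqrt(h)
        x += -x * h + sigma * dw
    return x

rng = random.Random(42)
# Monte Carlo estimate of E[X(1)] from x0 = 1; the exact value is e^-1 ~ 0.368.
est = sum(weak_euler_ou(1.0, 0.5, 1.0, 64, rng) for _ in range(20_000)) / 20_000
```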
Improving the Selection, Classification, and Utilization of Army Enlisted Personnel. Project A
1987-06-01
performance measures, to determine whether the new predictors have incremental validity over and above the present system. These two components must be...critical aspect of this task is the demonstration of the incremental validity added by new predictors. Task 3. Measurement of School/Training Success...chances of incremental validity and classification efficiency. 3. Retain measures with adequate reliability. Using all accumulated information, the final
Stopping mechanism for capsule endoscope using electrical stimulus.
Woo, Sang Hyo; Kim, Tae Wan; Cho, Jin Ho
2010-01-01
An ingestible capsule, which has the ability to stop at certain locations in the small intestine, was designed and implemented to monitor intestinal diseases. The proposed capsule can contract the small intestine by using electrical stimuli; this contraction causes the capsule to stop when the maximum static frictional force (MSFF) is larger than the force of natural peristalsis. In vitro experiments were carried out to verify the feasibility of the capsule, and the results showed that the capsule was successfully stopped in the small intestine. Various electrodes and electrical stimulus parameters were evaluated on the basis of the MSFF. A moderate increment of the MSFF (12.7 +/- 4.6 gf at 5 V, 10 Hz, and 5 ms) and the maximum increment of the MSFF (56.5 +/- 9.77 gf at 20 V, 10 Hz, and 5 ms) were obtained, a force sufficient to stop the capsule.
Dynamic Constraint Satisfaction with Reasonable Global Constraints
NASA Technical Reports Server (NTRS)
Frank, Jeremy
2003-01-01
Previously studied theoretical frameworks for dynamic constraint satisfaction problems (DCSPs) employ a small set of primitive operators to modify a problem instance. They do not address the desire to model problems using sophisticated global constraints, and do not address efficiency questions related to incremental constraint enforcement. In this paper, we extend a DCSP framework to incorporate global constraints with flexible scope. A simple approach to incremental propagation after scope modification can be inefficient under some circumstances. We characterize the cases when this inefficiency can occur, and discuss two ways to alleviate this problem: adding rejection variables to the scope of flexible constraints, and adding new features to constraints that permit increased control over incremental propagation.
A simple method for quantitating the propensity for calcium oxalate crystallization in urine
NASA Technical Reports Server (NTRS)
Wabner, C. L.; Pak, C. Y.
1991-01-01
To assess the propensity for spontaneous crystallization of calcium oxalate in urine, the permissible increment in oxalate is calculated. The previous method required visual observation of crystallization upon the addition of oxalate, which demanded a large volume of urine and sacrificed accuracy in resolving differences between small incremental changes of added oxalate. Therefore, the method has been miniaturized, and spontaneous crystallization is detected from the depletion of radioactive oxalate. The new "micro" method demonstrated a marked decrease (p < 0.001) in the permissible increment in oxalate in urine of stone formers versus normal subjects. Moreover, crystallization inhibitors added to urine, in vitro (heparin or diphosphonate) or in vivo (potassium citrate administration), substantially increased the permissible increment in oxalate. Thus, the "micro" method has proven reliable and accurate in discriminating stone-forming urine from control urine and in distinguishing changes of inhibitory activity.
NASA Technical Reports Server (NTRS)
Smith, Mark S.; Bui, Trong T.; Garcia, Christian A.; Cumming, Stephen B.
2016-01-01
A pair of compliant trailing edge flaps was flown on a modified GIII airplane. Prior to flight test, multiple analysis tools of various levels of complexity were used to predict the aerodynamic effects of the flaps. Vortex lattice, full potential flow, and full Navier-Stokes aerodynamic analysis software programs were used for prediction, in addition to another program that used empirical data. After the flight-test series, lift and pitching moment coefficient increments due to the flaps were estimated from flight data and compared to the results of the predictive tools. The predicted lift increments matched flight data well for all predictive tools for small flap deflections. All tools over-predicted lift increments for large flap deflections. The potential flow and Navier-Stokes programs predicted pitching moment coefficient increments better than the other tools.
Illi, Sabine K; Held, Ulrike; Frank, Irène; Spengler, Christina M
2012-08-01
Two distinct types of specific respiratory muscle training (RMT), i.e. respiratory muscle strength (resistive/threshold) and endurance (hyperpnoea) training, have been established to improve the endurance performance of healthy individuals. We performed a systematic review and meta-analysis in order to determine the factors that affect the change in endurance performance after RMT in healthy subjects. A computerized search was performed without language restriction in MEDLINE, EMBASE and CINAHL and references of original studies and reviews were searched for further relevant studies. RMT studies with healthy individuals assessing changes in endurance exercise performance by maximal tests (constant load, time trial, intermittent incremental, conventional [non-intermittent] incremental) were screened and abstracted by two independent investigators. A multiple linear regression model was used to identify effects of subjects' fitness, type of RMT (inspiratory or combined inspiratory/expiratory muscle strength training, respiratory muscle endurance training), type of exercise test, test duration and type of sport (rowing, running, swimming, cycling) on changes in performance after RMT. In addition, a meta-analysis was performed to determine the effect of RMT on endurance performance in those studies providing the necessary data. The multiple linear regression analysis including 46 original studies revealed that less fit subjects benefit more from RMT than highly trained athletes (6.0% per 10 mL · kg⁻¹ · min⁻¹ decrease in maximal oxygen uptake, 95% confidence interval [CI] 1.8, 10.2%; p = 0.005) and that improvements do not differ significantly between inspiratory muscle strength and respiratory muscle endurance training (p = 0.208), while combined inspiratory and expiratory muscle strength training seems to be superior in improving performance, although based on only 6 studies (+12.8% compared with inspiratory muscle strength training, 95% CI 3.6, 22.0%; p = 0.006). 
Furthermore, constant load tests (+16%, 95% CI 10.2, 22.9%) and intermittent incremental tests (+18.5%, 95% CI 10.8, 26.3%) detect changes in endurance performance better than conventional incremental tests (both p < 0.001) with no difference between time trials and conventional incremental tests (p = 0.286). With increasing test duration, improvements in performance are greater (+0.4% per minute test duration, 95% CI 0.1, 0.6%; p = 0.011) and the type of sport does not influence the magnitude of improvements (all p > 0.05). The meta-analysis, performed on eight controlled trials revealed a significant improvement in performance after RMT, which was detected by constant load tests, time trials and intermittent incremental tests, but not by conventional incremental tests. RMT improves endurance exercise performance in healthy individuals with greater improvements in less fit individuals and in sports of longer durations. The two most common types of RMT (inspiratory muscle strength and respiratory muscle endurance training) do not differ significantly in their effect, while combined inspiratory/expiratory strength training might be superior. Improvements are similar between different types of sports. Changes in performance can be detected by constant load tests, time trials and intermittent incremental tests only. Thus, all types of RMT can be used to improve exercise performance in healthy subjects but care must be taken regarding the test used to investigate the improvements.
Motivation and performance in physical education: an experimental test.
Moreno, Juan A; González-Cutre, David; Martín-Albo, José; Cervelló, Eduardo
2010-01-01
The purpose of this study was to analyse, experimentally, the relationships between motivation and performance in a lateral movement test in physical education. The study group consisted of 363 students (227 boys and 136 girls), aged between 12 and 16, who were randomly divided into three groups: an experimental group in which an incremental ability belief was induced, another experimental group in which an entity ability belief was induced, and a control group with no intervention. Measurements were made of situational intrinsic motivation, perceived competence in executing the task, and performance. The results revealed that the incremental group reported higher scores on the situational intrinsic motivation scale. The entity group demonstrated better performance in the first test attempt than the incremental group but, in the second attempt, performance was similar across groups. Perhaps the initial differences in performance disappeared because the incremental group counted on improving in the second attempt. These results are discussed in relation to the intensity with which the teacher conveys information relating to the pupil's incremental ability belief in order to increase intrinsic motivation and performance. Key points: The incremental group showed more situational intrinsic motivation. The entity group showed higher performance in the first test attempt, but significant differences disappeared in the second attempt. It seems that this incremental belief and greater intrinsic motivation made the students trust that they would improve their performance in the second attempt at the lateral movement test.
Fate and Transport of Tungsten at Camp Edwards Small Arms Ranges
2007-08-01
area into the lower berm and/or trough. A similar approach was used in the lower berm area with samples collected from soil sloughing from the...bucket auger to collect samples beneath the bullet pockets and the trough. A multi-increment, subsurface soil sample was made by combining the...range. From these soil profiles, a total of 72 multi-increment subsurface soil samples was collected (Table 2). The auger was cleaned between holes
Molecular Volumes and the Stokes-Einstein Equation
ERIC Educational Resources Information Center
Edward, John T.
1970-01-01
Examines the limitations of the Stokes-Einstein equation as it applies to small solute molecules. Discusses molecular volume determinations by atomic increments, molecular models, molar volumes of solids and liquids, and molal volumes. Presents an empirical correction factor for the equation which applies to molecular radii as small as 2 angstrom…
The dynamics of social innovation
Young, H. Peyton
2011-01-01
Social norms and institutions are mechanisms that facilitate coordination between individuals. A social innovation is a novel mechanism that increases the welfare of the individuals who adopt it compared with the status quo. We model the dynamics of social innovation as a coordination game played on a network. Individuals experiment with a novel strategy that would increase their payoffs provided that it is also adopted by their neighbors. The rate at which a social innovation spreads depends on three factors: the topology of the network and in particular the extent to which agents interact in small local clusters, the payoff gain of the innovation relative to the status quo, and the amount of noise in the best response process. The analysis shows that local clustering greatly enhances the speed with which social innovations spread. It also suggests that the welfare gains from innovation are more likely to occur in large jumps than in a series of small incremental improvements. PMID:22198762
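The model described above can be sketched as noisy (logit) best-response dynamics for a coordination game on a network; the ring topology, payoffs, and noise level below are illustrative assumptions, not the paper's calibration:

```python
import math
import random

def simulate_adoption(n=30, a=1.0, b=2.0, beta=5.0, steps=20_000, seed=1):
    """Logit best-response dynamics for a coordination game on a ring network.
    Strategy 1 is the innovation (payoff b per coordinating neighbor) and
    strategy 0 the status quo (payoff a); since b > a, the innovation pays
    off once neighbors also adopt it. Returns the final adoption fraction."""
    rng = random.Random(seed)
    state = [0] * n  # everyone starts at the status quo
    for _ in range(steps):
        i = rng.randrange(n)
        nbrs = [state[(i - 1) % n], state[(i + 1) % n]]
        pay0 = a * nbrs.count(0) / 2   # expected payoff of the status quo
        pay1 = b * nbrs.count(1) / 2   # expected payoff of the innovation
        # Logit choice rule: higher beta means less noise in best response.
        p1 = 1.0 / (1.0 + math.exp(-beta * (pay1 - pay0)))
        state[i] = 1 if rng.random() < p1 else 0
    return sum(state) / n

frac = simulate_adoption()
```

With a clear payoff advantage for the innovation and modest noise, a small local cluster of adopters forms and then spreads through the ring, consistent with the qualitative findings.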
Predictive optimized adaptive PSS in a single machine infinite bus.
Milla, Freddy; Duarte-Mermoud, Manuel A
2016-07-01
Power System Stabilizer (PSS) devices are responsible for providing a damping torque component to generators for reducing fluctuations in the system caused by small perturbations. A Predictive Optimized Adaptive PSS (POA-PSS) to improve the oscillations in a Single Machine Infinite Bus (SMIB) power system is discussed in this paper. POA-PSS provides the optimal design parameters for the classic PSS using an optimization predictive algorithm, which adapts to changes in the inputs of the system. This approach is part of small signal stability analysis, which uses equations in an incremental form around an operating point. Simulation studies on the SMIB power system illustrate that the proposed POA-PSS approach has better performance than the classical PSS. In addition, the effort in the control action of the POA-PSS is much less than that of other approaches considered for comparison. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
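The "equations in an incremental form around an operating point" come from linearizing the nonlinear system dynamics. A generic finite-difference linearization sketch, applied here to a classical swing-equation machine model with illustrative parameters (not the paper's SMIB data):

```python
import math

def linearize(f, x0, u0, eps=1e-6):
    """Finite-difference linearization: near (x0, u0), dx/dt ~ A*dx + B*du."""
    n, m = len(x0), len(u0)
    f0 = f(x0, u0)
    A = [[0.0] * n for _ in range(n)]
    B = [[0.0] * m for _ in range(n)]
    for j in range(n):
        xp = list(x0); xp[j] += eps
        fj = f(xp, u0)
        for i in range(n):
            A[i][j] = (fj[i] - f0[i]) / eps
    for j in range(m):
        up = list(u0); up[j] += eps
        fj = f(x0, up)
        for i in range(n):
            B[i][j] = (fj[i] - f0[i]) / eps
    return A, B

# Classical swing equation for one machine (hypothetical values):
# d(delta)/dt = omega,  d(omega)/dt = (Pm - Pmax*sin(delta) - D*omega) / M
M, D, Pmax = 1.0, 0.1, 1.5
def swing(x, u):
    delta, omega = x
    (pm,) = u
    return [omega, (pm - Pmax * math.sin(delta) - D * omega) / M]

delta0 = math.asin(1.0 / Pmax)          # equilibrium angle for Pm = 1.0
A, B = linearize(swing, [delta0, 0.0], [1.0])
```

The resulting A and B matrices define the small-signal model on which classical PSS tuning operates.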
Invited commentary: the incremental value of customization in defining abnormal fetal growth status.
Zhang, Jun; Sun, Kun
2013-10-15
Reference tools based on birth weight percentiles at a given gestational week have long been used to define fetuses or infants that are small or large for their gestational ages. However, important deficiencies of the birth weight reference are being increasingly recognized. Overwhelming evidence indicates that an ultrasonography-based fetal weight reference should be used to classify fetal and newborn sizes during pregnancy and at birth, respectively. Questions have been raised as to whether further adjustments for race/ethnicity, parity, sex, and maternal height and weight are helpful to improve the accuracy of the classification. In this issue of the Journal, Carberry et al. (Am J Epidemiol. 2013;178(8):1301-1308) show that adjustment for race/ethnicity is useful, but that additional fine tuning for other factors (i.e., full customization) in the classification may not further improve the ability to predict infant morbidity, mortality, and other fetal growth indicators. Thus, the theoretical advantage of full customization may have limited incremental value for pediatric outcomes, particularly in term births. Literature on the prediction of short-term maternal outcomes and very long-term outcomes (adult diseases) is too scarce to draw any conclusions. Given that each additional variable being incorporated in the classification scheme increases complexity and costs in practice, the clinical utility of full customization in obstetric practice requires further testing.
Joore, Manuela; Brunenberg, Danielle; Nelemans, Patricia; Wouters, Emiel; Kuijpers, Petra; Honig, Adriaan; Willems, Danielle; de Leeuw, Peter; Severens, Johan; Boonen, Annelies
2010-01-01
This article investigates whether differences in utility scores based on the EQ-5D and the SF-6D have an impact on the incremental cost-utility ratios in five distinct patient groups. We used five empirical data sets of trial-based cost-utility studies that included patients with different disease conditions and severity (musculoskeletal disease, cardiovascular pulmonary disease, and psychological disorders) to calculate differences in quality-adjusted life-years (QALYs) based on EQ-5D and SF-6D utility scores. We compared incremental QALYs, incremental cost-utility ratios, and the probability that the incremental cost-utility ratio was acceptable within and across the data sets. We observed small differences in incremental QALYs, but large differences in the incremental cost-utility ratios and in the probability that these ratios were acceptable at a given threshold, in the majority of the presented cost-utility analyses. More specifically, in the patient groups with relatively mild health conditions, the probability of acceptance of the incremental cost-utility ratio was considerably larger when using the EQ-5D to estimate utility, while in the patient groups with worse health conditions it was considerably larger when using the SF-6D. Much of the appeal of using QALYs as the measure of effectiveness in economic evaluations lies in their comparability across conditions and interventions. The incomparability of the results of cost-utility analyses using different instruments to estimate a single index value for health severely undermines this aspect and reduces the credibility of incremental cost-utility ratios for decision-making.
Moriwaki, K; Mouri, M; Hagino, H
2017-06-01
Model-based economic evaluation was performed to assess the cost-effectiveness of zoledronic acid. Although zoledronic acid was dominated by alendronate, the incremental quality-adjusted life year (QALY) difference was quite small. Considering the advantage of once-yearly injection of zoledronic acid for persistence, zoledronic acid might be a cost-effective treatment option compared to once-weekly oral alendronate. The purpose of this study was to estimate the cost-effectiveness of once-yearly injection of zoledronic acid for the treatment of osteoporosis in Japan. A patient-level state-transition model was developed to predict the outcomes of patients with osteoporosis who have experienced a previous vertebral fracture. The efficacy of zoledronic acid was derived from a published network meta-analysis. Lifetime cost and QALYs were estimated for patients who had received zoledronic acid, alendronate, or basic treatment alone. The incremental cost-effectiveness ratio (ICER) of zoledronic acid was estimated. For patients 70 years of age, zoledronic acid was dominated by alendronate, with an incremental QALY of -0.004 to -0.000 and an incremental cost of 430 USD to 493 USD. Deterministic sensitivity analysis indicated that the relative risk of hip fracture and the drug cost strongly affected the cost-effectiveness of zoledronic acid compared to alendronate. Scenario analysis considering treatment persistence showed that the ICER of zoledronic acid compared to alendronate was estimated to be 47,435 USD, 27,018 USD, and 10,749 USD per QALY gained for patients with a T-score of -2.0, -2.5, or -3.0, respectively. Although zoledronic acid is dominated by alendronate, the incremental QALY difference is quite small. Considering the advantage of annual zoledronic acid treatment in compliance and persistence, zoledronic acid may be a cost-effective treatment option compared to alendronate.
Cost-effectiveness of Lung Cancer Screening in Canada.
Goffin, John R; Flanagan, William M; Miller, Anthony B; Fitzgerald, Natalie R; Memon, Saima; Wolfson, Michael C; Evans, William K
2015-09-01
The US National Lung Screening Trial supports screening for lung cancer among smokers using low-dose computed tomographic (LDCT) scans. The cost-effectiveness of screening in a publicly funded health care system remains a concern. To assess the cost-effectiveness of LDCT scan screening for lung cancer within the Canadian health care system, the Cancer Risk Management Model (CRMM) simulated individual lives within the Canadian population from 2014 to 2034, incorporating cancer risk, disease management, outcome, and cost data. Smokers and former smokers eligible for lung cancer screening (30 pack-year smoking history, ages 55-74 years, for the reference scenario) were modeled, and performance parameters were calibrated to the National Lung Screening Trial (NLST). The reference screening scenario assumes annual scans to age 75 years, 60% participation by 10 years, 70% adherence to screening, and unchanged smoking rates. The CRMM outputs are aggregated, and costs (2008 Canadian dollars) and life-years are discounted 3% annually. The main outcome measure was the incremental cost-effectiveness ratio. Compared with no screening, the reference scenario saved 51,000 quality-adjusted life-years (QALY) and had an incremental cost-effectiveness ratio of CaD $52,000/QALY. If smoking history is modeled for 20 or 40 pack-years, incremental cost-effectiveness ratios of CaD $62,000 and CaD $43,000/QALY, respectively, were generated. Changes in participation rates altered life-years saved but not the incremental cost-effectiveness ratio, while the incremental cost-effectiveness ratio is sensitive to changes in adherence. An adjunct smoking cessation program improving the quit rate by 22.5% improves the incremental cost-effectiveness ratio to CaD $24,000/QALY. Lung cancer screening with LDCT appears cost-effective in the publicly funded Canadian health care system. An adjunct smoking cessation program has the potential to improve outcomes.
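The 3% annual discounting of costs and life-years used above works by dividing each future year's value by (1 + rate) raised to the year. A minimal sketch; the stream below is a hypothetical example, not CRMM output:

```python
def discounted_total(values, rate=0.03):
    """Discount a stream of annual values (year 0 first) at the given rate."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(values))

# A QALY gained 10 years from now is worth less today at a 3% discount rate:
print(round(discounted_total([0] * 10 + [1.0]), 3))  # 0.744
```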
Incremental k-core decomposition: Algorithms and evaluation
Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-Silva, Gabriela; ...
2016-02-01
A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times, relative to the non-incremental algorithms.
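For context, the batch (non-incremental) k-core decomposition that such algorithms avoid re-running can be sketched as a peeling procedure; this is a generic illustration, not the paper's algorithm:

```python
def core_numbers(adj):
    """Batch k-core decomposition by repeated peeling: for k = 0, 1, 2, ...,
    remove every remaining vertex of degree <= k; a vertex's core number is
    the k at which it is peeled. Incremental algorithms avoid redoing this
    from scratch by confining updates to the small subgraph whose core
    numbers can actually change after an edge insertion or deletion."""
    deg = {v: len(ns) for v, ns in adj.items()}
    core = {}
    remaining = set(adj)
    k = 0
    while remaining:
        stack = [v for v in remaining if deg[v] <= k]
        if not stack:
            k += 1          # nothing left to peel at this k; raise it
            continue
        while stack:
            v = stack.pop()
            if v not in remaining:
                continue    # already peeled via another path
            remaining.discard(v)
            core[v] = k
            for u in adj[v]:
                if u in remaining:
                    deg[u] -= 1
                    if deg[u] <= k:
                        stack.append(u)
    return core

# A triangle with a pendant vertex: the triangle forms the 2-core,
# the pendant vertex only reaches core number 1.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
cores = core_numbers(adj)
```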
Program Budgeting within the Department of the Navy
1989-06-01
incremental increases in budgets. When a popular call goes out for a 600 ship navy, a strategic defense initiative, or a B-2 bomber, this is the time...Previously defense budgets had been formulated by focusing on the existing base and adding incremental improvements to it. The whole question of how much a...the aggregate budgets appeared to change in smooth increments, they found great variations in the budgets of the agencies which make up the DOA. They
Dor, Avi; Luo, Qian; Gerstein, Maya Tuchman; Malveaux, Floyd; Mitchell, Herman; Markus, Anne Rossier
We present an incremental cost-effectiveness analysis comparing an evidence-based childhood asthma intervention (Community Healthcare for Asthma Management and Prevention of Symptoms [CHAMPS]) to usual management of childhood asthma in community health centers. Data used in the analysis include household surveys, Medicaid insurance claims, and community health center expenditure reports. We combined our incremental cost-effectiveness analysis with a difference-in-differences multivariate regression framework. We found that CHAMPS reduced symptom days by 29.75 days per child-year and was cost-effective (incremental cost-effectiveness ratio: $28.76 per symptom-free day). Most of the benefits were due to reductions in direct medical costs. Indirect benefits from increased household productivity were relatively small.
The Application of Quantity Discounts in Army Procurements (Field Test).
1980-04-01
Work Directive (PWD). d. The amended PWD is forwarded to the Procurement and Production (PP) control where quantity increments and delivery schedules are...counts on 97 Army Stock Fund small purchases (less than $10,000) and received cost effective discounts on 46 or 47.4% of...discount but the computed annualized cost for the QD increment was larger than the computed annualized cost for the EOQ, this was not a cost effective
2013-06-01
lenses of unconsolidated sand and rounded river gravel overlain by as much as 5 m of silt. Gravel consists mostly of quartz and metamorphic rock with...Figure 1. Example of multi-increment sampling using a systematic-random sampling design for collecting two separate...The small arms firing Range 16 Record berms at Fort Wainwright. Figure 9. Location of berms sampled using ISM and grab
Kim, Jane J.; Campos, Nicole G.; Sy, Stephen; Burger, Emily A.; Cuzick, Jack; Castle, Philip E.; Hunt, William C.; Waxman, Alan; Wheeler, Cosette M.
2016-01-01
Background: Studies suggest that cervical cancer screening practice in the United States is inefficient. The cost and health implications of non-compliance in the screening process compared to recommended guidelines are uncertain. Objective: To estimate the benefits, costs, and cost-effectiveness of current cervical cancer screening practice and assess the value of screening improvements. Design: Model-based cost-effectiveness analysis. Data Sources: New Mexico HPV Pap Registry; medical literature. Target Population: Cohort of women eligible for routine screening. Time Horizon: Lifetime. Perspective: Societal. Interventions: Current cervical cancer screening practice; improved compliance to guidelines-based screening interval, triage testing, diagnostic referrals, and precancer treatment referrals. Outcome Measures: Reductions in lifetime cervical cancer risk, quality-adjusted life-years (QALYs), lifetime costs, incremental cost-effectiveness ratios (ICERs), and incremental net monetary benefits (INMBs). Results of Base-Case Analysis: Current screening practice was associated with lower health benefit and was not cost-effective relative to guidelines-based strategies. Improvements in the screening process were associated with higher QALYs and small changes in costs. Perfect compliance to a 3-yearly screening interval and to colposcopy/biopsy referrals were associated with the highest INMBs ($759 and $741, respectively, at a willingness-to-pay threshold of $100,000 per QALY gained); together, the INMB increased to $1,645. Results of Sensitivity Analysis: Current screening practice was inefficient in 100% of simulations. The rank ordering of screening improvements according to INMBs was stable over a range of screening inputs and willingness-to-pay thresholds. Limitations: The impact of HPV vaccination was not considered.
Conclusions: The added health benefit of improving compliance to guidelines, especially the 3-yearly interval for cytology screening and diagnostic follow-up, may justify additional investments in interventions to improve U.S. cervical cancer screening practice. Funding Source: U.S. National Cancer Institute. PMID:26414147
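The incremental net monetary benefit (INMB) reported above values the QALY gain at a willingness-to-pay threshold and subtracts the incremental cost. A minimal sketch with hypothetical inputs, not the study's model outputs:

```python
def inmb(d_qaly, d_cost, wtp=100_000):
    """Incremental net monetary benefit: value the QALY gain at the
    willingness-to-pay threshold (here $100,000/QALY, as in the study),
    then subtract the incremental cost. A positive INMB means the
    improvement is worth its added cost at that threshold."""
    return wtp * d_qaly - d_cost

# Hypothetical: an improvement gaining 0.01 QALY at $250 extra cost.
print(inmb(0.01, 250))  # 750.0
```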
DOT National Transportation Integrated Search
2013-03-01
Incremental increases in paved shoulder widths have been studied and are shown in the Highway Safety Manual (HSM). While each incremental increase in shoulder width is beneficial, there is evidence that suggests the relationship between safety improv...
The Crucial Role of Error Correlation for Uncertainty Modeling of CFD-Based Aerodynamics Increments
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Walker, Eric L.
2011-01-01
The Ares I ascent aerodynamics database for Design Cycle 3 (DAC-3) was built from wind-tunnel test results and CFD solutions. The wind tunnel results were used to build the baseline response surfaces for wind-tunnel Reynolds numbers at power-off conditions. The CFD solutions were used to build increments to account for Reynolds number effects. We calculate the validation errors for the primary CFD code results at wind tunnel Reynolds number power-off conditions and would like to be able to use those errors to predict the validation errors for the CFD increments. However, the validation errors are large compared to the increments. We suggest a way forward that is consistent with common practice in wind tunnel testing which is to assume that systematic errors in the measurement process and/or the environment will subtract out when increments are calculated, thus making increments more reliable with smaller uncertainty than absolute values of the aerodynamic coefficients. A similar practice has arisen for the use of CFD to generate aerodynamic database increments. The basis of this practice is the assumption of strong correlation of the systematic errors inherent in each of the results used to generate an increment. The assumption of strong correlation is the inferential link between the observed validation uncertainties at wind-tunnel Reynolds numbers and the uncertainties to be predicted for flight. In this paper, we suggest a way to estimate the correlation coefficient and demonstrate the approach using code-to-code differences that were obtained for quality control purposes during the Ares I CFD campaign. Finally, since we can expect the increments to be relatively small compared to the baseline response surface and to be typically of the order of the baseline uncertainty, we find that it is necessary to be able to show that the correlation coefficients are close to unity to avoid overinflating the overall database uncertainty with the addition of the increments.
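The role of error correlation can be made concrete with the standard formula for the variance of a difference, Var(A - B) = sigma_A^2 + sigma_B^2 - 2*rho*sigma_A*sigma_B. A small sketch with illustrative values, not the Ares I campaign's numbers:

```python
import math

def increment_sigma(sigma_a, sigma_b, rho):
    """Standard deviation of an increment (A - B) formed from two results
    whose systematic errors have standard deviations sigma_a, sigma_b and
    correlation rho. As rho approaches 1, the shared systematic error
    subtracts out and the increment's uncertainty shrinks -- the assumption
    underpinning the use of CFD increments in aerodynamic databases."""
    var = sigma_a**2 + sigma_b**2 - 2.0 * rho * sigma_a * sigma_b
    return math.sqrt(max(var, 0.0))

# Equal 1.0-sigma errors: uncorrelated errors inflate the increment,
# strongly correlated errors nearly cancel.
print(round(increment_sigma(1.0, 1.0, 0.0), 3))   # 1.414
print(round(increment_sigma(1.0, 1.0, 0.95), 3))  # 0.316
```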
Existing School Buildings: Incremental Seismic Retrofit Opportunities.
ERIC Educational Resources Information Center
Federal Emergency Management Agency, Washington, DC.
The intent of this document is to provide technical guidance to school district facility managers for linking specific incremental seismic retrofit opportunities to specific maintenance and capital improvement projects. The linkages are based on logical affinities, such as technical fit, location of the work within the building, cost saving…
Support System for Solar Receivers
NASA Technical Reports Server (NTRS)
Kiceniuk, T.
1985-01-01
Hinged split-ring mounts ensure safe support of heavy receivers. In addition to safer operation and damage-free mounting, the system provides more accurate focusing, and small incremental adjustments of the ring are more easily made.
Effects of monetary reward and punishment on information checking behaviour: An eye-tracking study.
Li, Simon Y W; Cox, Anna L; Or, Calvin; Blandford, Ann
2018-07-01
The aim of the present study was to investigate the effect of error consequence, as reward or punishment, on individuals' checking behaviour following data entry. This study comprised two eye-tracking experiments that replicate and extend the investigation of Li et al. (2016) into the effect of monetary reward and punishment on data-entry performance. The first experiment adopted the same experimental setup as Li et al. (2016) but additionally used an eye tracker. The experiment validated Li et al.'s (2016) finding that, compared to no error consequence, both reward and punishment led to improved data-entry performance in terms of reduced errors, and that no performance difference was found between reward and punishment. The second experiment extended the earlier study by associating an error consequence with each individual trial, providing immediate performance feedback to participants. It was found that gradual increment (i.e. reward feedback) also led to significantly more accurate performance than no error consequence. Whether gradual increment is more effective than gradual decrement remains unclear because of the small sample size tested. Nevertheless, this study reasserts the effectiveness of reward on data-entry performance. Copyright © 2018 Elsevier Ltd. All rights reserved.
Efficient traffic grooming with dynamic ONU grouping for multiple-OLT-based access network
NASA Astrophysics Data System (ADS)
Zhang, Shizong; Gu, Rentao; Ji, Yuefeng; Wang, Hongxiang
2015-12-01
Fast bandwidth growth drives large-scale, high-density access scenarios, where clustered deployment of multiple Passive Optical Network (PON) systems can be adopted as an appropriate solution to fulfill the huge bandwidth demands, especially for a future 5G mobile network. However, the lack of interaction between different optical line terminals (OLTs) results in waste of part of the bandwidth resources. To increase bandwidth efficiency, as well as reduce bandwidth pressure at the edge of the network, we propose a centralized flexible PON architecture based on Time- and Wavelength-Division Multiplexing PON (TWDM PON). It provides flexible affiliation between optical network units (ONUs) and different OLTs to support access-network traffic localization. Specifically, a dynamic ONU grouping algorithm (DGA) is provided to minimize the OLT outbound traffic. Simulation results show that DGA obtains an average 25.23% traffic gain increment across different OLT numbers when the ONU number is small, and that the traffic gain increases dramatically as the ONU number grows. As the DGA can be deployed easily as an application running above the centralized control plane, the proposed architecture can help improve network efficiency for future traffic-intensive access scenarios.
NASA Technical Reports Server (NTRS)
Allison, Dennis O.; Waggoner, E. G.
1990-01-01
Computational predictions of the effects of wing contour modifications on maximum lift and transonic performance were made and verified against low-speed and transonic wind tunnel data. This effort was part of a program to improve the maneuvering capability of the EA-6B electronics countermeasures aircraft, which evolved from the A-6 attack aircraft. The predictions were based on results from three computer codes, all of which include viscous effects: MCARF, a 2-D subsonic panel code; TAWFIVE, a transonic full-potential code; and WBPPW, a transonic small-disturbance potential-flow code. The modifications were previously designed with the aid of these and other codes. The wing modifications consist of contour changes to the leading-edge slats and trailing-edge flaps and were designed for increased maximum lift with minimum effect on transonic performance. Predictions of the effects of the modifications are presented, with emphasis on verification through comparisons with wind tunnel data from the National Transonic Facility. Attention is focused on increments in low-speed maximum lift and increments in transonic lift, pitching moment, and drag resulting from the contour modifications.
On the statistics of increments in strong Alfvenic turbulence
NASA Astrophysics Data System (ADS)
Palacios, J. C.; Perez, J. C.
2017-12-01
In-situ measurements have shown that the solar wind is dominated by non-compressive, Alfvén-like fluctuations of plasma velocity and magnetic field over a broad range of scales. In this work, we present recent progress in understanding intermittency in Alfvénic turbulence by investigating the statistics of Elsasser increments from simulations of steadily driven Reduced MHD with numerical resolutions up to 2048^3. The nature of these statistics bears a close relation to the fundamental properties of the small-scale structures in which the turbulence is ultimately dissipated, and therefore has profound implications for the possible contribution of turbulence to the heating of the solar wind. We extensively investigate the properties and three-dimensional structure of probability density functions (PDFs) of increments and compare them with recent phenomenological models of intermittency in MHD turbulence.
Tuffaha, Haitham W; Mitchell, Andrew; Ward, Robyn L; Connelly, Luke; Butler, James R G; Norris, Sarah; Scuffham, Paul A
2018-01-04
Purpose: To evaluate the cost-effectiveness of BRCA testing in women with breast cancer, and cascade testing in family members of BRCA mutation carriers.
Methods: A cost-effectiveness analysis was conducted using a cohort Markov model from a health-payer perspective. The model estimated the long-term benefits and costs of testing women with breast cancer who had at least a 10% pretest BRCA mutation probability, and the cascade testing of first- and second-degree relatives of women who test positive.
Results: Compared with no testing, BRCA testing of affected women resulted in an incremental cost per quality-adjusted life-year (QALY) gained of AU$18,900 (incremental cost AU$1,880; incremental QALY gain 0.10) with reductions of 0.04 breast and 0.01 ovarian cancer events. Testing affected women and cascade testing of family members resulted in an incremental cost per QALY gained of AU$9,500 compared with testing affected women only (incremental cost AU$665; incremental QALY gain 0.07) with additional reductions of 0.06 breast and 0.01 ovarian cancer events.
Conclusion: BRCA testing in women with breast cancer is cost-effective and is associated with reduced risk of cancer and improved survival. Extending testing to cover family members of affected women who test positive improves cost-effectiveness beyond restricting testing to affected women only.
GENETICS in MEDICINE advance online publication, 4 January 2018; doi:10.1038/gim.2017.231.
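The incremental cost-effectiveness ratios quoted above follow directly from the incremental values: incremental cost divided by incremental QALYs gained. A minimal check using the figures in the abstract (illustrative only; small differences from the published AU$18,900 reflect rounding of the reported inputs):

```python
def icer(incremental_cost, incremental_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return incremental_cost / incremental_qaly

# Figures quoted in the abstract (AU$):
print(round(icer(1880, 0.10)))  # 18800, reported as ~AU$18,900 (rounded inputs)
print(round(icer(665, 0.07)))   # 9500
```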
Multibeam collimator uses prism stack
NASA Technical Reports Server (NTRS)
Minott, P. O.
1981-01-01
Optical instrument creates many divergent light beams for surveying and machine element alignment applications. Angles and refractive indices of stack of prisms are selected to divert incoming laser beam by small increments, different for each prism. Angles of emerging beams thus differ by small, precisely-controlled amounts. Instrument is nearly immune to vibration, changes in gravitational force, temperature variations, and mechanical distortion.
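The per-prism deviation described above can be approximated by the thin-prism relation delta = (n - 1) * A, where A is the apex angle and n the refractive index, so small differences in apex angle or index between prisms in the stack yield small, precisely controlled angular increments between the emerging beams. A sketch with hypothetical angles (not values from the instrument):

```python
def thin_prism_deviation_deg(apex_angle_deg, n):
    """Small-angle (thin prism) beam deviation: delta ~ (n - 1) * A."""
    return (n - 1.0) * apex_angle_deg

# A stack with slightly different apex angles gives beams separated by
# small, precisely controlled increments (angles in degrees, n = 1.5 assumed).
for a in (1.00, 1.02, 1.04):
    print(round(thin_prism_deviation_deg(a, 1.5), 3))  # 0.5, 0.51, 0.52
```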
Fritz, Julie M; Kim, Minchul; Magel, John S; Asche, Carl V
2017-03-01
Economic evaluation of a randomized clinical trial. Compare costs and cost-effectiveness of usual primary care management for patients with acute low back pain (LBP) with or without the addition of early physical therapy. Low back pain is among the most common and costly conditions encountered in primary care. Early physical therapy after a new primary care consultation for acute LBP results in small clinical improvement but cost-effectiveness of a strategy of early physical therapy is unknown. Economic evaluation was conducted alongside a randomized clinical trial of patients with acute, nonspecific LBP consulting a primary care provider. All patients received usual primary care management and education, and were randomly assigned to receive four sessions of physical therapy or usual care of delaying referral consideration to permit spontaneous recovery. Data were collected in a randomized trial involving 220 participants aged 18 to 60 with LBP <16 days duration without red flags or signs of nerve root compression. The EuroQoL EQ-5D health states were collected at baseline and after 1-year and used to compute the quality adjusted life year (QALY) gained. Direct (health care utilization) and indirect (work absence or reduced productivity) costs related to LBP were collected monthly and valued using standard costs. The incremental cost-effectiveness ratio was computed as incremental total costs divided by incremental QALYs. Early physical therapy resulted in higher total 1-year costs (mean difference in adjusted total costs = $580, 95% CI: $175, $984, P = 0.005) and better quality of life (mean difference in QALYs = 0.02, 95% CI: 0.005, 0.35, P = 0.008) after 1-year. The incremental cost-effectiveness ratio was $32,058 (95% CI: $10,629, $151,161) per QALY. Our results support early physical therapy as cost-effective relative to usual primary care after 1 year for patients with acute, nonspecific LBP. Level of Evidence: 2.
Augmentation of maneuver performance by spanwise blowing
NASA Technical Reports Server (NTRS)
Erickson, G. E.; Campbell, J. F.
1977-01-01
A generalized wind tunnel model was tested to investigate new component concepts utilizing spanwise blowing to provide improved maneuver characteristics for advanced fighter aircraft. Primary emphasis was placed on high angle of attack performance, stability, and control at subsonic speeds. Spanwise blowing on a 44 deg swept trapezoidal wing resulted in leading edge vortex enhancement with subsequent large vortex-induced lift increments and drag polar improvements at the higher angles of attack. Small deflections of a leading edge flap delayed these lift and drag benefits to higher angles of attack. In addition, blowing was more effective at higher Mach numbers. Spanwise blowing in conjunction with a deflected trailing edge flap resulted in lift and drag benefits that exceeded the summation of the effects of each high lift device acting alone. Asymmetric blowing was an effective lateral control device at the higher angles of attack. Spanwise blowing on the wing reduced horizontal tail loading and improved the lateral-directional stability characteristics of a wing-horizontal tail-vertical tail configuration.
Consequences of Frequent Hemodialysis: Comparison to Conventional Hemodialysis and Transplantation
Stokes, John B.
2011-01-01
The average life expectancy of a person on hemodialysis is less than 3 years and has not changed in 20 years. The Hemodialysis (HEMO) trial, a randomized trial to determine whether increasing urea removal to the maximum practical degree on a 3-times-a-week schedule would improve outcomes, showed no difference in mortality between the treatment and control groups. Investigators speculated that the increment in functional waste removal in the HEMO study was too small to produce improvements in mortality. To test this hypothesis, the NIDDK funded the Frequent Hemodialysis Network, a consortium of centers testing whether patients randomized to intensive dialysis would demonstrate improved (reduced) left ventricular (LV) mass and quality of life. The trial has two arms: the daily (in-center) arm and the home (nocturnal) arm. Each arm randomizes patients to conventional dialysis or to 6 days (or nights) of dialysis per week. The results of the Frequent Hemodialysis Network trial will be reported in the fall of 2010. PMID:21686215
Genetic attack on neural cryptography.
Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka; Kanter, Ido
2006-03-01
Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.
NASA Technical Reports Server (NTRS)
Alexander, Leslie, Jr.
2006-01-01
Advanced Chemical Propulsion (ACP) provides near-term incremental improvements in propulsion system performance and/or cost. It is an evolutionary approach to technology development that produces useful products along the way to meet increasingly more demanding mission requirements while focusing on improving payload mass fraction to yield greater science capability. Current activities are focused on two areas: chemical propulsion component, subsystem, and manufacturing technologies that offer measurable system level benefits; and the evaluation of high-energy storable propellants with enhanced performance for in-space application. To prioritize candidate propulsion technology alternatives, a variety of propulsion/mission analyses and trades have been conducted for SMD missions to yield sufficient data for investment planning. They include: the Advanced Chemical Propulsion Assessment; an Advanced Chemical Propulsion System Model; a LOx-LH2 small pumps conceptual design; a space storables propellant study; a spacecraft cryogenic propulsion study; an advanced pressurization and mixture ratio control study; and a pump-fed vs. pressure-fed study.
NASA Astrophysics Data System (ADS)
Rosaiah, P.; Hussain, O. M.; Zhu, Jinghui; Qiu, Yejun
2017-08-01
Lithium iron phosphate (LixFePO4) is synthesized by a solid-state reaction method. The structural, electrical, and electrochemical properties are studied in detail. It is found that incrementing the lithium concentration (up to x = 1.05) does not affect the structure of LiFePO4 but improves its electrical conductivity as well as its electrochemical performance. Surface morphological studies revealed the formation of small, rod-like nanoparticles. Electric and dielectric properties are also investigated over a frequency range of 1 Hz-1 MHz at different temperatures. The conductivity increases with increasing temperature, following the Arrhenius relation with an activation energy of about 0.31 eV. Electrochemical tests found that the Li1.05FePO4 cathode possessed improved discharge capacity with better cycling performance.
Genetic attack on neural cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka
2006-03-15
Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.
40 CFR 60.1590 - When must I complete each increment of progress?
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion Units Constructed on or Before August 30, 1999 Model Rule...
Mehand, Massinissa Si; Srinivasan, Bala; De Crescenzo, Gregory
2015-01-01
Surface plasmon resonance-based biosensors have been successfully applied to the study of the interactions between macromolecules and small molecular weight compounds. In an effort to increase the throughput of these SPR-based experiments, we have already proposed to inject multiple compounds simultaneously over the same surface. When specifically applied to small molecular weight compounds, such a strategy would however require prior knowledge of the refractive index increment of each compound in order to correctly interpret the recorded signal. An additional experiment is typically required to obtain this information. In this manuscript, we show that through the introduction of an additional global parameter corresponding to the ratio of the saturating signals associated with each molecule, the kinetic parameters could be identified with similar confidence intervals without any other experimentation. PMID:26515024
Role of Self-Efficacy in Rehabilitation Outcome among Chronic Low Back Pain Patients.
ERIC Educational Resources Information Center
Altmaier, Elizabeth M.; And Others
1993-01-01
Examined role of self-efficacy beliefs in rehabilitation of 45 low back pain patients participating in 3-week rehabilitation program. Increments in self-efficacy beliefs during program were not associated with improved patient functioning at discharge. However, in support of theorized role of self-efficacy in behavior change, increments in…
An incremental knowledge assimilation system (IKAS) for mine detection
NASA Astrophysics Data System (ADS)
Porway, Jake; Raju, Chaitanya; Varadarajan, Karthik Mahesh; Nguyen, Hieu; Yadegar, Joseph
2010-04-01
In this paper we present an adaptive incremental learning system for underwater mine detection and classification that utilizes statistical models of seabed texture and an adaptive nearest-neighbor classifier to identify varied underwater targets in many different environments. The first stage of processing uses our Background Adaptive ANomaly detector (BAAN), which identifies statistically likely target regions using Gabor filter responses over the image. Using this information, BAAN classifies the background type and updates its detection using background-specific parameters. To perform classification, a Fully Adaptive Nearest Neighbor (FAAN) determines the best label for each detection. FAAN uses an extremely fast version of Nearest Neighbor to find the most likely label for the target. The classifier perpetually assimilates new and relevant information into its existing knowledge database in an incremental fashion, allowing improved classification accuracy and capturing concept drift in the target classes. Experiments show that the system achieves >90% classification accuracy on underwater mine detection tasks performed on synthesized datasets provided by the Office of Naval Research. We have also demonstrated that the system can incrementally improve its detection accuracy by constantly learning from new samples.
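The assimilation step described above, in which newly labeled detections are folded into the classifier's knowledge base so that later queries benefit from them, can be sketched with a toy 1-nearest-neighbor classifier (class names and feature values are illustrative, not from IKAS):

```python
class IncrementalNN:
    """Minimal sketch of an incremental nearest-neighbor classifier that
    assimilates newly labeled samples into its knowledge base."""
    def __init__(self):
        self.samples = []  # list of (feature_vector, label)

    def assimilate(self, x, label):
        # New knowledge is simply appended; no retraining from scratch.
        self.samples.append((x, label))

    def classify(self, x):
        dist = lambda a: sum((ai - xi) ** 2 for ai, xi in zip(a, x))
        return min(self.samples, key=lambda s: dist(s[0]))[1]

clf = IncrementalNN()
clf.assimilate((0.0, 0.0), "seabed")
clf.assimilate((1.0, 1.0), "mine")
print(clf.classify((0.9, 0.8)))            # mine
clf.assimilate((0.8, 0.9), "clutter")      # new sample shifts the boundary
print(clf.classify((0.9, 0.8)))            # clutter
```

Appending samples incrementally is what allows the decision boundary to track concept drift in the target classes.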
Kovic, Bruno; Guyatt, Gordon; Brundage, Michael; Thabane, Lehana; Bhatnagar, Neera; Xie, Feng
2016-01-01
Introduction: There is an increasing number of new oncology drugs being studied, approved and put into clinical practice based on improvement in progression-free survival, when no overall survival benefits exist. In oncology, the association between progression-free survival and health-related quality of life is currently unknown, despite its importance for patients with cancer, and the unverified assumption that longer progression-free survival indicates improved health-related quality of life. Thus far, only 1 study has investigated this association, providing insufficient evidence and inconclusive results. The objective of this study protocol is to provide increased transparency in supporting a systematic summary of the evidence bearing on this association in oncology.
Methods and analysis: Using the OVID platform in MEDLINE, Embase and Cochrane databases, we will conduct a systematic review of randomised controlled human trials addressing oncology issues published starting in 2000. A team of reviewers will, in pairs, independently screen and abstract data using standardised, pilot-tested forms. We will employ numerical integration to calculate mean incremental area under the curve between treatment groups in studies for health-related quality of life, along with total related error estimates, and a 95% CI around incremental area. To describe the progression-free survival to health-related quality of life association, we will construct a scatterplot for incremental health-related quality of life versus incremental progression-free survival. To estimate the association, we will use a weighted simple regression approach, comparing mean incremental health-related quality of life with either median incremental progression-free survival time or the progression-free survival HR, in the absence of overall survival benefit.
Discussion: Identifying direction and magnitude of association between progression-free survival and health-related quality of life is critically important in interpreting results of oncology trials. Systematic evidence produced from our study will contribute to improvement of patient care and practice of evidence-based medicine in oncology. PMID:27591026
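The incremental area-under-the-curve comparison described in the protocol can be sketched with the trapezoidal rule applied to two hypothetical HRQoL trajectories measured at the same visits (all numbers illustrative):

```python
def auc(times, scores):
    """Trapezoidal area under a health-related quality-of-life curve."""
    return sum((t1 - t0) * (s0 + s1) / 2.0
               for (t0, s0), (t1, s1) in zip(zip(times, scores),
                                             zip(times[1:], scores[1:])))

# Hypothetical HRQoL trajectories at months 0, 3, 6, 12:
t = [0, 3, 6, 12]
treatment = [0.60, 0.70, 0.75, 0.75]
control   = [0.60, 0.62, 0.63, 0.60]
incremental_auc = auc(t, treatment) - auc(t, control)
print(round(incremental_auc, 3))  # 1.23 (QALY-like units: utility x months)
```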
Djalalov, Sandjar; Beca, Jaclyn; Hoch, Jeffrey S; Krahn, Murray; Tsao, Ming-Sound; Cutz, Jean-Claude; Leighl, Natasha B
2014-04-01
ALK-targeted therapy with crizotinib offers significant improvement in clinical outcomes for the treatment of EML4-ALK fusion-positive non-small-cell lung cancer (NSCLC). We estimated the cost effectiveness of EML4-ALK fusion testing in combination with targeted first-line crizotinib treatment in Ontario. A cost-effectiveness analysis was conducted using a Markov model from the Canadian Public health (Ontario) perspective and a lifetime horizon in patients with stage IV NSCLC with nonsquamous histology. Transition probabilities and mortality rates were calculated from the Ontario Cancer Registry and Cancer Care Ontario New Drug Funding Program (CCO NDFP). Costs were obtained from the Ontario Case Costing Initiative, CCO NDFP, University Health Network, and literature. Molecular testing with first-line targeted crizotinib treatment in the population with advanced nonsquamous NSCLC resulted in a gain of 0.011 quality-adjusted life-years (QALYs) compared with standard care. The incremental cost was Canadian $2,725 per patient, and the incremental cost-effectiveness ratio (ICER) was $255,970 per QALY gained. Among patients with known EML4-ALK-positive advanced NSCLC, first-line crizotinib therapy provided 0.379 additional QALYs, cost an additional $95,043 compared with standard care, and produced an ICER of $250,632 per QALY gained. The major driver of cost effectiveness was drug price. EML4-ALK fusion testing in stage IV nonsquamous NSCLC with crizotinib treatment for ALK-positive patients is not cost effective in the setting of high drug costs and a low biomarker frequency in the population.
NASA Astrophysics Data System (ADS)
Störkle, Denis Daniel; Seim, Patrick; Thyssen, Lars; Kuhlenkötter, Bernd
2016-10-01
This article describes new developments in an incremental, robot-based sheet metal forming process (`Roboforming') for the production of sheet metal components for small lot sizes and prototypes. The dieless kinematic-based generation of the shape is implemented by means of two industrial robots, which are interconnected to a cooperating robot system. Compared to other incremental sheet metal forming (ISF) machines, this system offers high geometrical form flexibility without the need of any part-dependent tools. The industrial application of ISF is still limited by certain constraints, e.g. the low geometrical accuracy. Responding to these constraints, the authors present the influence of the part orientation and the forming sequence on the geometric accuracy. Their influence is illustrated with the help of various experimental results shown and interpreted within this article.
Thermomechanical simulations and experimental validation for high speed incremental forming
NASA Astrophysics Data System (ADS)
Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia
2016-10-01
Incremental sheet forming (ISF) consists in deforming only a small region of the workpiece with a punch driven by a NC machine. The drawback of this process is its slowness. In this study, a high-speed variant has been investigated from both numerical and experimental points of view. The aim has been to design an FEM model able to reproduce the material behavior during the high-speed process by defining a thermomechanical model. An experimental campaign has been performed on a high-speed CNC lathe to test process feasibility. The first results show that the material presents the same performance as in conventional-speed ISF and, in some cases, better behavior due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high-speed process, substantially confirming the experimental evidence.
Transport of Internetwork Magnetic Flux Elements in the Solar Photosphere
NASA Astrophysics Data System (ADS)
Agrawal, Piyush; Rast, Mark P.; Gošić, Milan; Bellot Rubio, Luis R.; Rempel, Matthias
2018-02-01
The motions of small-scale magnetic flux elements in the solar photosphere can provide some measure of the Lagrangian properties of the convective flow. Measurements of these motions have been critical in estimating the turbulent diffusion coefficient in flux-transport dynamo models and in determining the Alfvén wave excitation spectrum for coronal heating models. We examine the motions of internetwork flux elements in Hinode/Narrowband Filter Imager magnetograms and study the scaling of their mean squared displacement and the shape of their displacement probability distribution as a function of time. We find that the mean squared displacement scales super-diffusively with a slope of about 1.48. Super-diffusive scaling has been observed in other studies for temporal increments as small as 5 s, increments over which ballistic scaling would be expected. Using high-cadence MURaM simulations, we show that the observed super-diffusive scaling at short increments is a consequence of random changes in barycenter positions due to flux evolution. We also find that for long temporal increments, beyond granular lifetimes, the observed displacement distribution deviates from that expected for a diffusive process, evolving from Rayleigh to Gaussian. This change in distribution can be modeled analytically by accounting for supergranular advection along with granular motions. These results complicate the interpretation of magnetic element motions as strictly advective or diffusive on short and long timescales and suggest that measurements of magnetic element motions must be used with caution in turbulent diffusion or wave excitation models. We propose that passive tracer motions in measured photospheric flows may yield more robust transport statistics.
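The mean-squared-displacement scaling test described above can be sketched as a log-log slope fit over a set of tracer tracks: slope 1 indicates diffusive motion, slope greater than 1 super-diffusive, and purely ballistic tracks should recover a slope of 2 (the tracks below are synthetic, not Hinode data):

```python
import math

def msd_slope(tracks, lags):
    """Fit log(MSD) vs log(lag) by least squares; the slope classifies the
    transport: 1 = diffusive, >1 = super-diffusive, 2 = ballistic."""
    logs = []
    for lag in lags:
        disp2 = [(x[i + lag][0] - x[i][0]) ** 2 + (x[i + lag][1] - x[i][1]) ** 2
                 for x in tracks for i in range(len(x) - lag)]
        logs.append((math.log(lag), math.log(sum(disp2) / len(disp2))))
    n = len(logs)
    mx = sum(l for l, _ in logs) / n
    my = sum(m for _, m in logs) / n
    return sum((l - mx) * (m - my) for l, m in logs) / \
           sum((l - mx) ** 2 for l, _ in logs)

# Constant-velocity (ballistic) tracks: displacement grows linearly with lag,
# so MSD grows as lag^2 and the fitted slope is 2.
tracks = [[(0.1 * v * i, 0.07 * v * i) for i in range(50)] for v in (1.0, 1.5, 2.0)]
print(round(msd_slope(tracks, [1, 2, 4, 8, 16]), 2))  # 2.0
```

The observed internetwork slope of about 1.48 would sit between the diffusive and ballistic limits of this test.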
An incremental approach to genetic-algorithms-based classification.
Guan, Sheng-Uei; Zhu, Fangming
2005-04-01
Incremental learning has been widely addressed in the machine learning literature to cope with learning tasks where the learning environment is ever changing or training samples become available over time. However, most research work explores incremental learning with statistical algorithms or neural networks, rather than evolutionary algorithms. The work in this paper employs genetic algorithms (GAs) as basic learning algorithms for incremental learning within one or more classifier agents in a multiagent environment. Four new approaches with different initialization schemes are proposed. They keep the old solutions and use an "integration" operation to integrate them with new elements to accommodate new attributes, while biased mutation and crossover operations are adopted to further evolve a reinforced solution. The simulation results on benchmark classification data sets show that the proposed approaches can deal with the arrival of new input attributes and integrate them with the original input space. It is also shown that the proposed approaches can be successfully used for incremental learning and improve classification rates as compared to the retraining GA. Possible applications for continuous incremental training and feature selection are also discussed.
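The "integration" operation described above, which keeps the evolved solutions and extends them to cover newly arrived attributes, can be sketched as appending fresh genes to each chromosome (the representation and random initialization here are illustrative, not the paper's exact scheme):

```python
import random

def integrate(population, n_new_genes, rng):
    """Extend each chromosome with genes for newly arrived attributes,
    keeping the genes evolved over the original input space intact."""
    return [chrom + [rng.random() for _ in range(n_new_genes)]
            for chrom in population]

rng = random.Random(0)
population = [[0.2, 0.8], [0.5, 0.1]]       # evolved over 2 attributes
population = integrate(population, 1, rng)  # a 3rd attribute arrives
print([len(c) for c in population])  # [3, 3]
print(population[0][:2])             # [0.2, 0.8]  old genes preserved
```

Biased mutation and crossover would then further evolve the extended chromosomes instead of retraining the GA from scratch.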
Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams
NASA Astrophysics Data System (ADS)
Zhong, Xu; Kealy, Allison; Duckham, Matt
2016-05-01
Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n^3) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes generate repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that the two strategies can reduce the time taken to perform the interpolation by up to 90%, approaching an average-case time complexity of O(n^2) when most, but not all, source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
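For reference, a one-shot ordinary Kriging estimate solves an (n+1)x(n+1) linear system built from a variogram, which is the O(n^3) cost the incremental and recursive strategies try to avoid repeating between iterations. A minimal 1-D sketch (the exponential variogram and sensor values are illustrative, not from the paper):

```python
import numpy as np

def ordinary_kriging(xs, zs, x0, variogram=lambda h: 1.0 - np.exp(-h)):
    """One-shot ordinary Kriging estimate at x0 from locations xs, values zs.
    Builds the variogram system [[Gamma, 1], [1^T, 0]] [w; mu] = [gamma0; 1]
    and returns the weighted sum of observations."""
    xs = np.asarray(xs, float)
    n = len(xs)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = variogram(np.abs(xs[:, None] - xs[None, :]))
    A[n, :n] = A[:n, n] = 1.0   # unbiasedness constraint: weights sum to 1
    A[n, n] = 0.0
    b = np.append(variogram(np.abs(xs - x0)), 1.0)
    w = np.linalg.solve(A, b)[:n]   # the O(n^3) step
    return float(w @ np.asarray(zs, float))

# Estimate at 1.5 from sensors at 0, 1, 2, 3 (hypothetical readings):
print(round(ordinary_kriging([0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 4.0, 3.0], 1.5), 2))
```

Ordinary Kriging is an exact interpolator, so querying at an observed location returns the observed value; when sensor locations repeat across stream iterations, much of the matrix A is unchanged, which is what the incremental and recursive strategies exploit.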
Establishment of a rotor model basis
NASA Technical Reports Server (NTRS)
Mcfarland, R. E.
1982-01-01
Radial-dimension computations in the RSRA's blade-element model are modified, both for the acquisition of extensive baseline data and for real-time simulation use. The baseline data, intended for evaluating model changes, use very small increments and are of high quality. The modifications to the real-time simulation model improve accuracy, especially when a minimal number of blade segments is required for real-time synchronization. An accurate technique for handling tip loss in discrete-blade models is developed. The mathematical consistency and convergence properties of summation algorithms for blade forces and moments are examined, and generalized integration coefficients are applied to equal-annuli midpoint spacing. Rotor conditions identified as 'constrained' and 'balanced' are used, and the propagation of error is analyzed.
Innovative technology for optical and infrared astronomy
NASA Astrophysics Data System (ADS)
Cunningham, Colin R.; Evans, Christopher J.; Molster, Frank; Kendrew, Sarah; Kenworthy, Matthew A.; Snik, Frans
2012-09-01
Advances in astronomy are often enabled by adoption of new technology. In some instances this is where the technology has been invented specifically for astronomy, but more usually it is adopted from another scientific or industrial area of application. The adoption of new technology typically occurs via one of two processes. The more usual is incremental progress by a series of small improvements, but occasionally this process is disruptive, where a new technology completely replaces an older one. One of the activities of the OPTICON Key Technology Network over the past few years has been a technology forecasting exercise. Here we report on a recent event which focused on the more radical, potentially disruptive technologies for ground-based, optical and infrared astronomy.
Analysis of Trajectory Parameters for Probe and Round-Trip Missions to Venus
NASA Technical Reports Server (NTRS)
Dugan, James F., Jr.; Simsic, Carl R.
1960-01-01
For one-way transfers between Earth and Venus, charts are obtained that show velocity, time, and angle parameters as functions of the eccentricity and semilatus rectum of the Sun-focused vehicle conic. From these curves, others are obtained that are useful in planning one-way and round-trip missions to Venus. The analysis is characterized by circular coplanar planetary orbits, successive two-body approximations, impulsive velocity changes, and circular parking orbits at 1.1 planet radii. For round trips the mission time considered ranges from 65 to 788 days, while wait time spent in the parking orbit at Venus ranges from 0 to 467 days. Individual velocity increments, one-way travel times, and departure dates are presented for round trips requiring the minimum total velocity increment. For both single-pass and orbiting Venusian probes, the time span available for launch becomes appreciable with only a small increase in velocity-increment capability above the minimum requirement. Velocity-increment increases are much more effective in reducing travel time for single-pass probes than they are for orbiting probes. Round trips composed of a direct route along an ellipse tangent to Earth's orbit and an aphelion route result in the minimum total velocity increment for wait times less than 100 days and mission times ranging from 145 to 612 days. Minimum-total-velocity-increment trips may be taken along perihelion-perihelion routes for wait times ranging from 300 to 467 days. These wait times occur during missions lasting from 640 to 759 days.
NASA Astrophysics Data System (ADS)
Afandi, M. I.; Adinanta, H.; Setiono, A.; Qomaruddin; Widiyatmoko, B.
2018-03-01
There are many ways to measure landslide displacement, using sensors such as multi-turn potentiometers, fiber-optic strain sensors, GPS, geodetic measurement, and ground-penetrating radar. The proposed approach uses an optical encoder, which produces a pulse signal with highly stable measurement resolution despite instability in the voltage source. The landslide extensometer based on an optical encoder offers high resolution over a wide measurement range and for long periods of time. An incremental optical encoder provides information about the pulses and direction of a rotating shaft by producing one quadrature square-wave cycle per increment of shaft movement. Measurement results were obtained using an optical encoder with 2,000 pulses per revolution. The resolution of the extensometer is 36 μm, with a speed limit of about 3.6 cm/s. A system test in a landslide hazard area has been carried out, showing good reliability for monitoring small landslide displacements.
NASA Technical Reports Server (NTRS)
Monta, W. J.
1980-01-01
The effects of conventional and square stores on the longitudinal aerodynamic characteristics of a fighter aircraft configuration at Mach numbers of 1.6, 1.8, and 2.0 were investigated. Five conventional store configurations and six arrangements of a square store configuration were studied. All store configurations produced small, positive increments in pitching moment throughout the angle-of-attack range, but the configuration with area-ruled wing tanks also showed a slight decrease in stability at the higher angles of attack. The addition of the stores caused some small changes in lift coefficient, so the drag increment varied with lift coefficient; as a result, there were corresponding changes in the increments of the maximum lift-drag ratios. The store drag coefficient, based on the cross-sectional area of the stores, ranged from a maximum of 1.1 for the configuration with three Maverick missiles to a minimum of about 0.040 for the two MK-84 bombs and for the arrangements with four square stores touching or two square stores in tandem. Square stores located side by side yielded a drag coefficient of about 0.50 in the aft position, compared with 0.74 in the forward position.
Thomas Shelton
2013-01-01
A small-plot field trial was conducted to examine the area of influence of fipronil at incremental distances away from treated plots on the Harrison Experimental Forest near Saucier, MS. Small treated (water and fipronil) plots were surrounded by untreated wooden boards in an eight-point radial pattern, and examined for evidence of termite feeding every 60 d for 1 yr...
Incremental short daily home hemodialysis: a case series.
Toth-Manikowski, Stephanie M; Mullangi, Surekha; Hwang, Seungyoung; Shafi, Tariq
2017-07-05
Patients starting dialysis often have substantial residual kidney function. Incremental hemodialysis provides a hemodialysis prescription that supplements patients' residual kidney function while maintaining total (residual + dialysis) urea clearance (standard Kt/Vurea) targets. We describe our experience with incremental hemodialysis in patients using the NxStage System One for home hemodialysis. From 2011 to 2015, we initiated 5 incident hemodialysis patients on an incremental home hemodialysis regimen. The biochemical parameters of all patients remained stable on the incremental hemodialysis regimen, and they consistently achieved standard Kt/Vurea targets. In the two patients with follow-up longer than 6 months, residual kidney function was preserved for ≥2 years. Importantly, the patients were able to transition to home hemodialysis without automatically requiring 5 sessions per week at the outset, gradually increasing the number of treatments and/or dialysate volume as residual kidney function declined. An incremental home hemodialysis regimen can be safely prescribed and may improve the acceptability of home hemodialysis. Reducing hemodialysis frequency by even one treatment per week can reduce the number of fistula or graft cannulations or catheter connections by >100 per year, an important consideration for patient well-being, access longevity, and access-related infections. The incremental hemodialysis approach, supported by national guidelines, can be considered for all home hemodialysis patients with residual kidney function.
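The ">100 per year" figure follows from simple arithmetic, assuming the typical two needle cannulations per fistula or graft session:

```python
# One fewer hemodialysis session per week, ~52 weeks per year,
# and (assumed) 2 needle cannulations per fistula/graft session.
sessions_avoided_per_year = 1 * 52
cannulations_avoided = 2 * sessions_avoided_per_year
print(cannulations_avoided)  # 104, i.e. >100 per year
```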
Contact stresses in meshing spur gear teeth: Use of an incremental finite element procedure
NASA Technical Reports Server (NTRS)
Hsieh, Chih-Ming; Huston, Ronald L.; Oswald, Fred B.
1992-01-01
Contact stresses in meshing spur gear teeth are examined. The analysis is based upon an incremental finite element procedure that simultaneously determines the stresses in the contact region between the meshing teeth. The teeth themselves are modeled by two-dimensional plane strain elements. Friction effects are included, with the friction forces assumed to obey Coulomb's law. The analysis assumes that the displacements are small and that the tooth materials are linearly elastic. The analysis procedure is validated by comparing its results with the classical Hertz solution for two contacting semicylinders. Agreement is excellent.
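The Hertz benchmark the authors validate against has a closed form for two parallel cylinders in line contact. A sketch of standard Hertz theory, with illustrative steel-on-steel numbers that are assumptions, not values from the paper:

```python
import math

def hertz_line_contact(w, R1, R2, E1, nu1, E2, nu2):
    """Contact half-width b and peak pressure p_max for two parallel
    cylinders pressed together with load w per unit length (Hertz theory)."""
    R = 1.0 / (1.0 / R1 + 1.0 / R2)                        # effective radius
    Estar = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)  # effective modulus
    b = math.sqrt(4.0 * w * R / (math.pi * Estar))         # contact half-width
    p_max = 2.0 * w / (math.pi * b)                        # peak contact pressure
    return b, p_max

# Illustrative numbers: two 20 mm steel cylinders, 100 kN/m line load
b, p_max = hertz_line_contact(w=1e5, R1=0.02, R2=0.02,
                              E1=200e9, nu1=0.3, E2=200e9, nu2=0.3)
```

Equivalently p_max = sqrt(w·E*/(π·R)), a handy identity for sanity-checking finite element contact results against the analytical solution.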
Hu, Jing; Zhang, Xiaolong; Liu, Xiaoming; Tang, Jinshan
2015-06-01
Discovering hot regions in protein-protein interaction is important for drug and protein design, while experimental identification of hot regions is a time-consuming and labor-intensive effort; thus, the development of predictive models can be very helpful. In hot region prediction research, some models are based on structure information, and others are based on a protein interaction network. However, the prediction accuracy of these methods can still be improved. In this paper, a new method is proposed for hot region prediction, which combines density-based incremental clustering with feature-based classification. The method uses density-based incremental clustering to obtain rough hot regions, and uses feature-based classification to remove the non-hot spot residues from the rough hot regions. Experimental results show that the proposed method significantly improves the prediction performance of hot regions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Carlson, Josh J; Suh, Kangho; Orfanos, Panos; Wong, William
2018-04-01
The recently completed ALEX trial demonstrated that alectinib improved progression-free survival and delayed time to central nervous system progression compared with crizotinib in patients with anaplastic lymphoma kinase-positive non-small-cell lung cancer. However, the long-term clinical and economic impact of using alectinib vs. crizotinib has not been evaluated. The objective of this study was to determine the potential cost utility of alectinib vs. crizotinib from a US payer perspective. A cost-utility model was developed using partitioned survival methods and three health states: progression-free, post-progression, and death. ALEX trial data informed the progression-free and overall survival estimates. Costs included drug treatments and supportive care (central nervous system and non-central nervous system). Utility values were obtained from trial data and the literature. Sensitivity analyses included one-way and probabilistic sensitivity analyses. Treatment with alectinib vs. crizotinib resulted in a gain of 0.91 life-years and 0.87 quality-adjusted life-years, at incremental costs of US$34,151, resulting in an incremental cost-effectiveness ratio of US$39,312/quality-adjusted life-year. Drug costs and utilities in the progression-free health state were the main drivers of the model in the one-way sensitivity analysis. In the probabilistic sensitivity analysis, alectinib had a 64% probability of being cost-effective at a willingness-to-pay threshold of US$100,000/quality-adjusted life-year. Alectinib increased time in the progression-free state and quality-adjusted life-years vs. crizotinib. The marginal cost increase reflected longer treatment durations in the progression-free state. Central nervous system-related costs were considerably lower with alectinib. Our results suggest that, compared with crizotinib, alectinib may be a cost-effective therapy for treatment-naïve patients with anaplastic lymphoma kinase-positive non-small-cell lung cancer.
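The headline ratio is simply incremental cost divided by incremental QALYs. With the rounded figures quoted in the abstract, the division lands close to the reported value; the published US$39,312/QALY presumably uses unrounded model outputs:

```python
def icer(delta_cost_usd, delta_qaly):
    # Incremental cost-effectiveness ratio: Δcost / Δeffect
    return delta_cost_usd / delta_qaly

ratio = icer(34151, 0.87)  # ≈ 39,254 $/QALY with the rounded inputs shown
```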
Plutons: Simmer between 350° and 500°C for 10 million years, then serve cold (Invited)
NASA Astrophysics Data System (ADS)
Coleman, D. S.; Davis, J.
2009-12-01
The growing recognition that continental plutons are assembled incrementally over millions of years requires reexamination of the thermal histories of intrusive rocks. With the exception of the suggestion that pluton magma chambers can be revitalized by mafic input at their deepest structural levels, most aspects of modern pluton petrology are built on the underlying assumption that silicic plutons intrude as discrete thermal packages that undergo subsequent monotonic decay back to a steady-state geothermal gradient. The recognition that homogeneous silicic plutons are constructed over timescales too great to be single events necessitates rethinking pluton intrusion mechanisms, textures, thermochronology, chemical evolution and links to volcanic rocks. Three-dimensional thermal modeling of sheeted (horizontal and vertical) incremental pluton assembly (using HEAT3D by Wohletz, 2007) yields several results that are largely independent of intrusive geometry and may help understand bothersome field and laboratory results from plutonic rocks. 1) All increments cool quickly below hornblende closure temperature. However, late increments are emplaced into walls warmed by earlier increments, and they cycle between hornblende and biotite closure temperatures, a range in which fluid-rich melts are likely to be present. These conditions persist until the increments are far from the region of new magma flux, or the addition of increments stops. These observations are supported by Ar thermochronology and may explain why heterogeneous early marginal intrusive phases often grade into younger homogeneous interior map units. 2) Early increments become the contact metamorphic wall rocks of later increments. This observation suggests that much of the contact metamorphism associated with a given volume of plutonic rock is “lost” via textural modification of early increments during intrusion of later increments. 
Johnson and Glazner (CMP, in press) argue that mappable variations in pluton texture can result from textural modification during thermal cycling associated with incremental assembly. 3) The thermal structure of the model pluton evolves toward roughly spheroidal isotherms even though the pluton is assembled from thin tabular sheets. The zone of melt-bearing rock and the shape of intrapluton contact metamorphic isograds bear little resemblance to the increments from which the pluton was built. Consequently, pluton contacts mapped by variations in texture that reflect the thermal cycling inherent to incremental assembly will inevitably be “blob” or diapir-like, but will yield little insight into magma intrusion geometry. 4) Although models yield large regions of melt-bearing rock, the melt fraction is low and the melt-bearing volume at any time is small compared to the total volume of the pluton. This observation raises doubts about the connections between zoned silicic plutons and large ignimbrite eruptions.
USDA-ARS?s Scientific Manuscript database
The objective of this study was to investigate the effects of incremental amounts of Ascophyllum nodosum meal (ANOD) on milk production, milk composition including fatty acids and I, blood metabolites, and nutrient intake and digestibility in early lactation dairy cows fed high-forage diets. Twelve ...
ERIC Educational Resources Information Center
Codding, Robin S.; Archer, Jillian; Connell, James
2010-01-01
The purpose of this study was to replicate and extend a previous study by Burns ("Education and Treatment of Children" 28: 237-249, 2005) examining the effectiveness of incremental rehearsal on computation performance. A multiple-probe design across multiplication problem sets was employed for one participant to examine digits correct per minute…
Thermal elastoplastic structural analysis of non-metallic thermal protection systems
NASA Technical Reports Server (NTRS)
Chung, T. J.; Yagawa, G.
1972-01-01
An incremental theory and numerical procedure for analyzing a three-dimensional thermoelastoplastic structure subjected to high temperature, surface heat flux, and volume heat supply, as well as mechanical loadings, are presented. Heat conduction equations and equilibrium equations are derived by assuming a specific form of the incremental free energy, entropy, stresses, and heat flux, together with the first and second laws of thermodynamics, the von Mises yield criterion, and the Prandtl-Reuss flow rule. Finite element discretization, using linear isotropic three-dimensional elements for the space domain and a difference operator corresponding to a linear variation of temperature within a small time increment for the time domain, leads to systematic solutions for the temperature distribution and the displacement and stress fields. Various boundary conditions, such as insulated surfaces and convection through uninsulated surfaces, can be easily treated. To demonstrate the effectiveness of the present formulation, a number of example problems are presented.
Willan, Andrew R; Eckermann, Simon
2012-10-01
Previous applications of value-of-information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and the reduced extent to which data are borrowed from previous evidence. Hence, a new trial can become optimal even where current evidence would be considered sufficient under the assumption of no between-study variation. Despite the increase in expected net gain, however, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Furthermore, percentage losses in expected net gain were small even when sample sizes were chosen to reflect widely different amounts of between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.
Gunderson, Elizabeth A; Donnellan, M Brent; Robins, Richard W; Trzesniewski, Kali H
2018-04-24
Individuals who believe that intelligence can be improved with effort (an incremental theory of intelligence) and who approach challenges with the goal of improving their understanding (a learning goal) tend to have higher academic achievement. Furthermore, parent praise is associated with children's incremental theories and learning goals. However, the influences of parental criticism, as well as different forms of praise and criticism (e.g., process vs. person), have received less attention. We examine these associations by analyzing two existing datasets (Study 1: N = 317 first to eighth graders; Study 2: N = 282 fifth and eighth graders). In both studies, older children held more incremental theories of intelligence, but lower learning goals, than younger children. Unexpectedly, the relation between theories of intelligence and learning goals was nonsignificant and did not vary with children's grade level. In both studies, overall perceived parent praise positively related to children's learning goals, whereas perceived parent criticism negatively related to incremental theories of intelligence. In Study 2, perceived parent process praise was the only significant (positive) predictor of children's learning goals, whereas perceived parent person criticism was the only significant (negative) predictor of incremental theories of intelligence. Finally, Study 2 provided some support for our hypothesis that age-related differences in perceived parent praise and criticism can explain age-related differences in children's learning goals. Results suggest that incremental theories of intelligence and learning goals might not be strongly related during childhood and that perceived parent praise and criticism have important, but distinct, relations with each motivational construct. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, H. H.; Shi, Y. P.; Li, X. H.; Ni, K.; Zhou, Q.; Wang, X. H.
2018-03-01
In this paper, a scheme for measuring the position of precision stages with high precision is presented. The encoder is composed of a scale grating and a compact two-probe reading head, which reads a zero-position pulse signal and a continuous incremental displacement signal. The scale grating combines two kinds of codes: multiple reference codes with different spacings superimposed onto incremental grooves with an equal-spacing structure. The code on the reference mask in the reading head matches the reference codes on the scale grating, generating a pulse signal that coarsely locates the reference position as the reading head moves along the scale grating. After the reference position has been located within a section by means of the pulse signal, it can be located precisely using the amplitude of the incremental displacement signal. A set of reference codes and a scale grating were designed, and experimental results show that the primary precision achieved by the design is 1 μm. The period of the incremental signal is 1 μm, and a precision of 1000/N nm can be achieved by subdividing the incremental signal N times.
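The 1000/N nm figure is just the incremental signal period divided by the electronic subdivision factor. With quadrature sin/cos signals, the sub-period phase is commonly recovered by arctangent interpolation; the sketch below shows that standard technique, not the paper's specific circuit:

```python
import math

PERIOD_UM = 1.0  # incremental signal period stated in the abstract

def interpolated_resolution_nm(n_subdivisions):
    # A 1 um period subdivided N times gives 1000/N nm resolution
    return PERIOD_UM * 1000.0 / n_subdivisions

def phase_fraction(sin_v, cos_v):
    # Fractional position within one grating period, recovered from the
    # quadrature sin/cos pair via arctangent interpolation
    return (math.atan2(sin_v, cos_v) % (2.0 * math.pi)) / (2.0 * math.pi)
```

For example, N = 10 subdivisions yields 100 nm resolution, and N = 1000 would reach 1 nm, limited in practice by signal noise and interpolation error rather than by the arithmetic.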
Lakdawalla, Darius N; Chou, Jacquelyn W; Linthicum, Mark T; MacEwan, Joanna P; Zhang, Jie; Goldman, Dana P
2015-05-01
Surrogate end points may be used as proxy for more robust clinical end points. One prominent example is the use of progression-free survival (PFS) as a surrogate for overall survival (OS) in trials for oncologic treatments. Decisions based on surrogate end points may expedite regulatory approval but may not accurately reflect drug efficacy. Payers and clinicians must balance the potential benefits of earlier treatment access based on surrogate end points against the risks of clinical uncertainty. To present a framework for evaluating the expected net benefit or cost of providing early access to new treatments on the basis of evidence of PFS benefits before OS results are available, using non-small-cell lung cancer (NSCLC) as an example. A probabilistic decision model was used to estimate expected incremental social value of the decision to grant access to a new treatment on the basis of PFS evidence. The model analyzed a hypothetical population of patients with NSCLC who could be treated during the period between PFS and OS evidence publication. Estimates for delay in publication of OS evidence following publication of PFS evidence, expected OS benefit given PFS benefit, incremental cost of new treatment, and other parameters were drawn from the literature on treatment of NSCLC. Incremental social value of early access for each additional patient per month (in 2014 US dollars). For "medium-value" model parameters, early reimbursement of drugs with any PFS benefit yields an incremental social cost of more than $170,000 per newly treated patient per month. In contrast, granting early access on the basis of PFS benefit between 1 and 3.5 months produces more than $73,000 in incremental social value. 
Across the full range of model parameter values, granting access for drugs with PFS benefit between 3 and 3.5 months is robustly beneficial, generating incremental social value ranging from $38,000 to more than $1 million per newly treated patient per month, whereas access for all drugs with any PFS benefit is usually not beneficial. The value of providing access to new treatments on the basis of surrogate end points, and PFS in particular, likely varies considerably. Payers and clinicians should carefully consider how to use PFS data in balancing potential benefits against costs in each particular disease.
Incremental online learning in high dimensions.
Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan
2005-12-01
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space, in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based only on local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of possibly redundant inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental, spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
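The univariate regressions at the heart of each local model can be maintained incrementally from running sufficient statistics, without memorizing training data. A minimal unweighted sketch of that idea; LWPR itself additionally applies locality weighting, forgetting factors, and projection directions:

```python
class IncrementalUnivariateRegression:
    """Online least-squares fit y ~ a + b*x via running sufficient statistics."""

    def __init__(self):
        self.n = 0
        self.sx = self.sy = self.sxx = self.sxy = 0.0

    def update(self, x, y):
        # O(1) per sample; nothing is stored beyond five scalars
        self.n += 1
        self.sx += x
        self.sy += y
        self.sxx += x * x
        self.sxy += x * y

    def slope(self):
        det = self.n * self.sxx - self.sx ** 2
        return (self.n * self.sxy - self.sx * self.sy) / det

    def intercept(self):
        return (self.sy - self.slope() * self.sx) / self.n

reg = IncrementalUnivariateRegression()
for x, y in [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]:
    reg.update(x, y)  # samples drawn from y = 2x + 1
```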
Improving tree age estimates derived from increment cores: a case study of red pine
Shawn Fraver; John B. Bradford; Brian J. Palik
2011-01-01
Accurate tree ages are critical to a range of forestry and ecological studies. However, ring counts from increment cores, if not corrected for the years between the root collar and coring height, can produce sizeable age errors. The magnitude of errors is influenced by both the height at which the core is extracted and the growth rate. We destructively sampled saplings...
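The correction the authors describe amounts to adding an estimate of the years the seedling took to grow from the root collar to coring height. A sketch with hypothetical numbers (the constant early-growth-rate assumption is a simplification):

```python
def corrected_age(ring_count, coring_height_cm, early_growth_cm_per_yr):
    """Ring count at coring height plus estimated years from root collar
    to coring height, assuming roughly constant early height growth."""
    years_to_coring_height = coring_height_cm / early_growth_cm_per_yr
    return ring_count + years_to_coring_height

# Hypothetical: 80 rings counted at 30 cm, sapling grew ~10 cm/yr early on
age = corrected_age(80, 30.0, 10.0)  # 83 years, vs. a naive count of 80
```

Both inputs to the correction matter: coring higher, or coring a slow-growing sapling, inflates the error of an uncorrected ring count.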
Kovic, Bruno; Guyatt, Gordon; Brundage, Michael; Thabane, Lehana; Bhatnagar, Neera; Xie, Feng
2016-09-02
There is an increasing number of new oncology drugs being studied, approved and put into clinical practice based on improvement in progression-free survival, when no overall survival benefits exist. In oncology, the association between progression-free survival and health-related quality of life is currently unknown, despite its importance for patients with cancer, and the unverified assumption that longer progression-free survival indicates improved health-related quality of life. Thus far, only 1 study has investigated this association, providing insufficient evidence and inconclusive results. The objective of this study protocol is to provide increased transparency in supporting a systematic summary of the evidence bearing on this association in oncology. Using the OVID platform in MEDLINE, Embase and Cochrane databases, we will conduct a systematic review of randomised controlled human trials addressing oncology issues published starting in 2000. A team of reviewers will, in pairs, independently screen and abstract data using standardised, pilot-tested forms. We will employ numerical integration to calculate mean incremental area under the curve between treatment groups in studies for health-related quality of life, along with total related error estimates, and a 95% CI around incremental area. To describe the progression-free survival to health-related quality of life association, we will construct a scatterplot for incremental health-related quality of life versus incremental progression-free survival. To estimate the association, we will use a weighted simple regression approach, comparing mean incremental health-related quality of life with either median incremental progression-free survival time or the progression-free survival HR, in the absence of overall survival benefit. 
Identifying the direction and magnitude of the association between progression-free survival and health-related quality of life is critically important in interpreting results of oncology trials. Systematic evidence produced from our study will contribute to improvement of patient care and the practice of evidence-based medicine in oncology.
NASA Astrophysics Data System (ADS)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, (ii) implement an online correction (i.e., within the model) scheme to correct GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis Increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6-hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal and diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short-term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online show reduction in model bias in 6-hr forecast. 
This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
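The estimation-and-correction recipe in the abstract can be sketched directly: the bias rate is the time-mean analysis increment divided by the 6-hr window, and the online correction adds that rate as a forcing term in the model tendency equation. A schematic of the Danforth-Kalnay approach with made-up numbers:

```python
import numpy as np

WINDOW_HR = 6.0  # assimilation window over which each increment accumulates

def bias_rate(analysis_increments):
    """Time-mean analysis increment divided by the window length.
    Assumes initial model error grows linearly within the window and
    ignores observation bias, as in the abstract."""
    return np.mean(analysis_increments, axis=0) / WINDOW_HR

def corrected_tendency(model_tendency, rate):
    # Online correction: add the estimated bias rate as a forcing term
    return model_tendency + rate

# Made-up 6-hr temperature increments (K) at one grid point over 4 cycles
increments = np.array([0.5, 0.7, 0.6, 0.6])
rate = bias_rate(increments)  # 0.1 K/hr mean correction
```

The same recipe extends to seasonal, diurnal, and semidiurnal components by averaging the increments over the corresponding phase bins before dividing by the window.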
Forsythe, Anna; Chandiwana, David; Barth, Janina; Thabane, Marroon; Baeck, Johan; Tremblay, Gabriel
2018-01-01
Several recent randomized controlled trials (RCTs) in hormone receptor-positive (HR+), human epidermal growth factor receptor 2-negative (HER2-) metastatic breast cancer (MBC) have demonstrated significant improvements in progression-free survival (PFS); however, few have reported improvement in overall survival (OS). The surrogacy of PFS or time to progression (TTP) for OS has not been formally investigated in HR+, HER2- MBC. A systematic literature review of RCTs in HR+, HER2- MBC was conducted to identify studies that reported both median PFS/TTP and OS. The correlation between PFS/TTP and OS was evaluated using Pearson's product-moment correlation and Spearman's rank correlation. Subgroup analyses were performed to explore possible reasons for heterogeneity. Errors-in-variables weighted least squares regression (LSR) was used to model incremental OS months as a function of incremental PFS/TTP months. An exploratory analysis investigated the impact of three covariates (chemotherapy vs hormonal/targeted therapy, PFS vs TTP, and first-line therapy vs second-line therapy or greater) on OS prediction. The lower 95% prediction band was used to determine the minimum incremental PFS/TTP months required to predict OS benefit (surrogate threshold effect [STE]). Forty studies were identified. There was a statistically significant correlation between median PFS/TTP and OS (Pearson r = 0.741, P = 0.000; Spearman ρ = 0.650, P = 0.000). These results proved consistent for chemotherapy and hormonal/targeted therapy. Univariate LSR analysis yielded an R² of 0.354, with 1 incremental PFS/TTP month corresponding to 1.13 incremental OS months. Controlling for the type of treatment (chemotherapy vs hormonal/targeted therapy), line of therapy (first vs subsequent), and progression measure (PFS vs TTP) led to an improved R² of 0.569, with 1 PFS/TTP month corresponding to 0.78 OS months. The STE for OS benefit was 5-6 months of incremental PFS/TTP.
We demonstrated a significant association between PFS/TTP and OS, which may justify the use of PFS/TTP as a surrogate for OS benefit in HR+, HER2- MBC.
Cram, Peter; Vijan, Sandeep; Wolbrink, Alex; Fendrick, A Mark
2003-01-01
Traditional cost-utility analysis assumes that all benefits from health-related interventions are captured by the quality-adjusted life-years (QALYs) gained by the few individuals whose outcome is improved by the intervention. However, it is possible that many individuals who do not directly benefit from an intervention receive utility, and therefore QALYs, because of the passive benefit (i.e., a sense of security) provided by the existence of the intervention. The objective of this study was to evaluate the impact that varying quantities of passive benefit have on the cost-effectiveness of airline defibrillator programs. A decision analytic model with Markov processes was constructed to evaluate the cost-effectiveness of defibrillator deployment on domestic commercial passenger aircraft over 1 year. Airline passengers were assigned small incremental utility gains (0.001-0.01) during an estimated 3-hour flight to evaluate the impact of passive benefit on overall cost-effectiveness. In the base-case analysis with no allowance for passive benefit, the cost-effectiveness of airline automated external defibrillator deployment was US $34,000 per QALY gained. If 1% of all passengers received a utility gain of 0.01, the ratio fell to US $30,000 per QALY. Cost-effectiveness was enhanced when the quantity of passive benefit was raised or the percentage of individuals receiving passive benefit increased. Automated external defibrillator deployment on passenger aircraft is likely to be cost-effective. If a small percentage of airline passengers receive incremental utility gains from the passive benefit of automated external defibrillator availability, the impact on overall cost-effectiveness may be substantial. Further research should attempt to clarify the magnitude of passive benefit and the percentage of individuals who receive it.
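The arithmetic behind adding passive benefit to an incremental cost-effectiveness ratio can be sketched as follows. This is a toy calculation, not the paper's Markov model; the function name, the $340,000/10-QALY example figures, and the conversion of a 3-hour utility gain into QALYs (dividing by 8,760 hours per year) are all illustrative assumptions.

```python
def icer_with_passive_benefit(program_cost, qalys_survivors,
                              passengers, frac_benefiting, utility_gain,
                              flight_hours=3.0):
    # passive QALYs: a small utility gain enjoyed over a 3-hour flight,
    # per benefiting passenger (8,760 hours in a year)
    passive_qalys = passengers * frac_benefiting * utility_gain * flight_hours / 8760.0
    # incremental cost-effectiveness ratio: cost per total QALY gained
    return program_cost / (qalys_survivors + passive_qalys)

# illustrative numbers only: $340,000 program cost, 10 survivor QALYs,
# one million passengers of whom 1% gain 0.01 utility in flight
print(round(icer_with_passive_benefit(340000, 10, 1_000_000, 0.01, 0.01)))
```

With `passengers=0` the ratio reduces to cost per survivor QALY; any nonzero passive benefit lowers it, which is the qualitative effect the abstract reports.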
Determination of Small Animal Long Bone Properties Using Densitometry
NASA Technical Reports Server (NTRS)
Breit, Gregory A.; Goldberg, BethAnn K.; Whalen, Robert T.; Hargens, Alan R. (Technical Monitor)
1996-01-01
Assessment of bone structural property changes due to loading regimens or pharmacological treatment typically requires destructive mechanical testing and sectioning. Our group has accurately and non-destructively estimated three-dimensional cross-sectional areal properties (principal moments of inertia, Imax and Imin, and principal angle, Theta) of human cadaver long bones from pixel-by-pixel analysis of three non-coplanar densitometry scans. Because the scanner beam width is on the order of typical small animal diaphyseal diameters, applying this technique to high-resolution scans of rat long bones necessitates additional processing to minimize errors induced by beam smearing, such as dependence on sample orientation and overestimation of Imax and Imin. We hypothesized that these errors are correctable by digital image processing of the raw scan data. In all cases, four scans, using only the low-energy data (Hologic QDR-1000W, small-animal mode), are averaged to increase the image signal-to-noise ratio. Raw scans are additionally processed by interpolation, deconvolution by a filter derived from scanner beam characteristics, and masking using a variable threshold based on image dynamic range. To assess accuracy, we scanned an aluminum step phantom at 12 orientations over a range of 180 deg about the longitudinal axis, in 15 deg increments. The phantom dimensions (2.5, 3.1, 3.8 mm x 4.4 mm; Imin/Imax: 0.33-0.74) were comparable to the dimensions of a rat femur which was also scanned. Cross-sectional properties were determined at 0.25 mm increments along the length of the phantom and femur. The table shows average error (+/- SD) from theory of Imax, Imin, and Theta over the 12 orientations, calculated from raw and fully processed phantom images, as well as standard deviations about the mean for the femur scans. Processing of phantom scans increased agreement with theory, indicating improved accuracy.
Smaller standard deviations with processing indicate increased precision and repeatability. Standard deviations for the femur are consistent with those of the phantom. We conclude that in conjunction with digital image enhancement, densitometry scans are suitable for non-destructive determination of areal properties of small animal bones of comparable size to our phantom, allowing prediction of Imax and Imin within 2.5% and Theta within a fraction of a degree. This method represents a considerable extension of current methods of analyzing bone tissue distribution in small animal bones.
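The cross-sectional areal properties named above (Imax, Imin, Theta) follow from standard second-moment-of-area formulas once the scan has been thresholded to a binary mask. The sketch below shows that final step only; it assumes a plain 0/1 pixel mask and omits the averaging, interpolation, deconvolution, and variable-threshold masking the abstract describes.

```python
import math

def principal_moments(mask, pixel_area=1.0):
    # mask: 2-D list of 0/1 pixels from a thresholded scan image
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    n = len(pts)
    cx = sum(p[0] for p in pts) / n          # centroid
    cy = sum(p[1] for p in pts) / n
    # second moments of area about the centroid
    ixx = sum((p[1] - cy) ** 2 for p in pts) * pixel_area
    iyy = sum((p[0] - cx) ** 2 for p in pts) * pixel_area
    ixy = sum((p[0] - cx) * (p[1] - cy) for p in pts) * pixel_area
    # principal values via Mohr's-circle relations
    avg = (ixx + iyy) / 2.0
    diff = math.hypot((ixx - iyy) / 2.0, ixy)
    theta = 0.5 * math.atan2(2.0 * ixy, iyy - ixx)  # principal angle (radians)
    return avg + diff, avg - diff, theta            # Imax, Imin, Theta
```

For a 5-pixel-wide by 3-pixel-tall rectangle this yields Imax = 30 and Imin = 10 in pixel units, matching the analytic moments of a discrete rectangle.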
Concordium 2016: Data and Knowledge Transforming Health.
Devine, Beth
2017-04-20
Concordium 2016 celebrated the potential for data and knowledge to transform health. Through a series of plenaries, presentations, workshops and demonstrations, the conference highlighted projects among four themes: effectiveness and outcomes research, health care analytics and operations, public and population health, and quality improvement. The eight papers that comprise this special issue of eGEMs provide exemplars of solutions to the Big Data problems faced in today's healthcare environment. Several of the papers contain elements of multiple overlapping themes. We integrate these into five overlapping themes: telehealth, user-centered design/usability, clinic workflow, patient-centered care, and population health management through prediction modeling and risk adjustment. The effort to leverage all types of Big Data to improve health and healthcare is a monumental effort that will require the work of numerous stakeholders, and one that will unfold incrementally over time. This collection of eight papers reflects the current state of the art. Concordium 2017 will take a different form, inviting a small set of leaders in the field to focus on the next round of exciting and provocative research currently underway to improve the nation's health.
Improvement of maneuver aerodynamics by spanwise blowing
NASA Technical Reports Server (NTRS)
Erickson, G. E.; Campbell, J. F.
1977-01-01
A generalized wind-tunnel model was tested to investigate spanwise blowing and related component concepts for providing improved maneuver characteristics for advanced fighter aircraft. Primary emphasis was placed on performance, stability, and control at high angles of attack and subsonic speeds. Test data were obtained in the Langley high-speed 7- by 10-foot tunnel at free-stream Mach numbers up to 0.50 for a range of model angles of attack, jet momentum coefficients, and leading- and trailing-edge flap deflection angles. Spanwise blowing on a 44 deg swept trapezoidal wing resulted in leading-edge vortex enhancement, with subsequent large vortex-induced lift increments and drag-polar improvements at the higher angles of attack. Small deflections of a leading-edge flap delayed these lift and drag benefits to higher angles of attack. In addition, blowing was more effective at higher Mach numbers. Spanwise blowing in conjunction with a deflected trailing-edge flap resulted in lift and drag benefits that exceeded the summation of the effects of each high-lift device acting alone. Asymmetric blowing was an effective lateral control device at the higher angles of attack.
Quasi-static responses and variational principles in gradient plasticity
NASA Astrophysics Data System (ADS)
Nguyen, Quoc-Son
2016-12-01
Gradient models have been much discussed in the literature for the study of time-dependent or time-independent processes such as visco-plasticity, plasticity and damage. This paper is devoted to the theory of Standard Gradient Plasticity at small strain. A general and consistent mathematical description available for common time-independent behaviours is presented. Our attention is focussed on the derivation of general results such as the description of the governing equations for the global response and the derivation of related variational principles in terms of the energy and the dissipation potentials. It is shown that the quasi-static response under a loading path is a solution of an evolution variational inequality as in classical plasticity. The rate problem and the rate minimum principle are revisited. A time-discretization by the implicit scheme of the evolution equation leads to the increment problem. An increment of the response associated with a load increment is a solution of a variational inequality and satisfies also a minimum principle if the energy potential is convex. The increment minimum principle deals with stable solutions of the variational inequality. Some numerical methods are discussed in view of the numerical simulation of the quasi-static response.
Carlson, Josh J; Ogale, Sarika; Dejonckheere, Fred; Sullivan, Sean D
2015-03-01
To estimate the cost-effectiveness of tocilizumab (TCZ) monotherapy (Mono) versus adalimumab (ADA) Mono from the US payer perspective in patients with rheumatoid arthritis for whom methotrexate is inappropriate. We compared TCZ Mono (8 mg/kg monthly) with ADA Mono (40 mg every other week), using efficacy results from a head-to-head study, ADalimumab ACTemrA (ADACTA). We calculated the incremental cost per responder (achievement of American College of Rheumatology [ACR] 20% improvement criteria, ACR 50% improvement criteria, ACR 70% improvement criteria, or low disease activity score) for TCZ versus ADA at 6 months. A patient-level simulation was used to estimate the lifetime incremental cost per quality-adjusted life-year (QALY) of initiating treatment with TCZ Mono versus ADA Mono. Both drugs are followed by an etanercept-certolizumab-palliative care sequence. Nonresponders discontinue at 6 months; responders experience a constant probability of discontinuation. Discontinuers move to the next treatment. ACR responses produce changes in the Health Assessment Questionnaire (HAQ) score. We mapped the HAQ score to utility to estimate QALYs. Costs include those related to hospitalization and those related to treatment (drug acquisition, administration, and monitoring). Probabilistic and one-way sensitivity analyses were conducted, along with several scenario analyses. Compared with ADA, TCZ was more effective, with an estimated 6-month incremental cost ranging from $6,570 per additional low disease activity score achiever to $14,265 per additional ACR 70% improvement criteria responder. The lifetime incremental cost-effectiveness ratio was $36,944/QALY. TCZ Mono is projected to be cost-effective compared with ADA Mono in patients with severe rheumatoid arthritis for whom methotrexate is not appropriate, from a US payer perspective. Copyright © 2015. Published by Elsevier Inc.
Financing mechanisms for capital improvements : interchanges : final report.
DOT National Transportation Integrated Search
2010-03-01
This report examines the use of alternative local financing mechanisms for interchange and interchange area infrastructure improvements. The financing mechanisms covered include transportation impact fees, tax increment financing, value capture finan...
An efficient numerical algorithm for transverse impact problems
NASA Technical Reports Server (NTRS)
Sankar, B. V.; Sun, C. T.
1985-01-01
Transverse impact problems in which the elastic and plastic indentation effects are considered involve a nonlinear integral equation for the contact force, which, in practice, is usually solved by an iterative scheme with small increments in time. In this paper, a numerical method is proposed wherein the iterations of the nonlinear problem are separated from the structural response computations. This makes the numerical procedures much simpler and also more efficient. The proposed method is applied to some impact problems for which solutions are available, and the results are found to be in good agreement. The effect of the magnitude of the time increment on the results is also discussed.
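For context, the kind of time-stepped contact problem described here can be sketched with a generic explicit integration of a Hertzian contact law. This is not the authors' decoupled algorithm: the single-degree-of-freedom target, the semi-implicit Euler updates, and all parameter values are illustrative assumptions.

```python
def simulate_impact(m=0.1, M=1.0, K=1e4, k=1e6, v0=1.0, dt=1e-5, t_end=0.01):
    # m: projectile mass; M, K: target mass and spring stiffness;
    # k: Hertzian contact stiffness; v0: initial projectile velocity
    xp, vp = 0.0, v0      # projectile position / velocity
    xt, vt = 0.0, 0.0     # target position / velocity
    fmax = 0.0
    for _ in range(int(t_end / dt)):
        alpha = xp - xt                       # indentation
        f = k * max(alpha, 0.0) ** 1.5        # Hertzian contact force, F = k*a^(3/2)
        fmax = max(fmax, f)
        vp += (-f / m) * dt                   # projectile decelerated by contact
        vt += ((f - K * xt) / M) * dt         # target driven by contact, restrained by spring
        xp += vp * dt
        xt += vt * dt
    return fmax

print(simulate_impact())
```

Halving `dt` and checking that the peak force changes little is the usual way to judge whether the time increment is small enough, which is the sensitivity the abstract's last sentence refers to.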
Agha, Syed A; Kalogeropoulos, Andreas P; Shih, Jeffrey; Georgiopoulou, Vasiliki V; Giamouzis, Grigorios; Anarado, Perry; Mangalat, Deepa; Hussain, Imad; Book, Wendy; Laskar, Sonjoy; Smith, Andrew L; Martin, Randolph; Butler, Javed
2009-09-01
Incremental value of echocardiography over clinical parameters for outcome prediction in advanced heart failure (HF) is not well established. We evaluated 223 patients with advanced HF receiving optimal therapy (91.9% angiotensin-converting enzyme inhibitor/angiotensin receptor blocker, 92.8% beta-blockers, 71.8% biventricular pacemaker and/or defibrillator use). The Seattle Heart Failure Model (SHFM) was used as the reference clinical risk prediction scheme. The incremental value of echocardiographic parameters for event prediction (death or urgent heart transplantation) was measured by the improvement in fit and discrimination achieved by addition of standard echocardiographic parameters to the SHFM. After a median follow-up of 2.4 years, there were 38 (17.0%) events (35 deaths; 3 urgent transplants). The SHFM had a likelihood ratio (LR) χ² of 32.0 and a C statistic of 0.756 for event prediction. Left ventricular end-systolic volume, stroke volume, and severe tricuspid regurgitation were independent echocardiographic predictors of events. The addition of these parameters to the SHFM improved the LR χ² to 72.0 and the C statistic to 0.866 (P < .001 and P = .019, respectively). Reclassifying the SHFM-predicted risk with use of the echocardiography-added model resulted in improved prognostic separation. Addition of standard echocardiographic variables to the SHFM results in significant improvement in risk prediction for patients with advanced HF.
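The C statistic used above to quantify discrimination is the probability that a patient who had an event was assigned a higher predicted risk than one who did not. A minimal sketch of that pairwise calculation follows; the four-patient example is hypothetical, not the study's data.

```python
def c_statistic(risks, events):
    # fraction of (event, non-event) pairs in which the event patient
    # received the higher predicted risk; ties count one half
    cases = [r for r, e in zip(risks, events) if e]
    controls = [r for r, e in zip(risks, events) if not e]
    pairs = concordant = tied = 0
    for rc in cases:
        for rn in controls:
            pairs += 1
            if rc > rn:
                concordant += 1
            elif rc == rn:
                tied += 1
    return (concordant + 0.5 * tied) / pairs

# four hypothetical patients: two events, two non-events
print(c_statistic([0.9, 0.8, 0.2, 0.1], [1, 0, 1, 0]))  # → 0.75
```

A value of 0.5 is chance-level discrimination and 1.0 is perfect ranking, so the reported rise from 0.756 to 0.866 is a substantial gain.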
Yeager, David S.; Lee, Hae Yeon; Jamieson, Jeremy
2016-01-01
This research integrated implicit theories and the biopsychosocial (BPS) model of challenge and threat, hypothesizing that adolescents would be more likely to conclude that they have the resources to meet the demands of an evaluative social situation when they were taught a belief that people have the potential to change their socially relevant traits. Study 1 (N=60) randomly assigned high school adolescents to an incremental theory of personality or control condition, and then administered a standardized social stress task. Relative to controls, incremental theory participants exhibited improved stress appraisals, more adaptive neuroendocrine and cardiovascular responses (lower salivary cortisol, reduced vascular resistance, higher stroke volume, and more rapid return to homeostasis after stress offset), and better performance outcomes. Study 2 (N=205) used a daily-diary intervention to test high school adolescents’ stress reactivity outside the laboratory. Threat appraisals (days 5–9 post-intervention) and neuroendocrine responses (cortisol and DHEA-S; days 8–9 post-intervention only) were untethered from the intensity of daily stressors when adolescents received the incremental theory of personality intervention. The intervention also improved grades over freshman year. These findings offer new avenues for improving theories of adolescent stress and coping. PMID:27324267
Continuous microbial cultures maintained by electronically-controlled device
NASA Technical Reports Server (NTRS)
Eisler, W. J., Jr.; Webb, R. B.
1967-01-01
Photocell-controlled instrument maintains microbial culture. It uses commercially available chemostat glassware, provides adequate aeration through bubbling of the culture, maintains the population size and density, continuously records growth rates over small increments of time, and contains a simple, sterilizable nutrient control mechanism.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-13
... transfer of defense articles, including technical data, and defense services for the manufacture of Small Diameter Bomb Increment I (SDB I) Weapon System in Italy for end-use by the Italian Air Force. The United...
Holographic evaluation of fatigue cracks by a compressive stress (HYSTERESIS) technique
NASA Technical Reports Server (NTRS)
Freska, S. A.; Rummel, W. D.
1974-01-01
Holographic interferometry compares an unknown field of optical waves with a known one. Differences are displayed as interference bands, or fringes. The technique was evaluated on fatigue-cracked 2219-T87 aluminum-alloy panels. Small cracks were detected when the specimen was incrementally unloaded.
Analysis of Lung Tissue Using Ion Beams
NASA Astrophysics Data System (ADS)
Alvarez, J. L.; Barrera, R.; Miranda, J.
2002-08-01
In this work, a comparative study is presented of the metal contents of lung tissue from healthy individuals and from patients with lung cancer, by means of two analytical techniques: Particle Induced X-ray Emission (PIXE) and Rutherford Backscattering Spectrometry (RBS). The samples of cancerous tissue were taken from 26 autopsies of individuals who died at the National Institute of Respiratory Diseases (INER), 22 from cancer and 4 from other, non-cancerous causes. When analyzing the entirety of the samples, the cancerous tissues showed increments in the concentrations of S (4%), K (635%), Co (85%), and Cu (13%), and deficiencies in the concentrations of Cl (59%), Ca (6%), Fe (26%), and Zn (7%). P, Ca, Ti, V, Cr, Mn, Ni, Br, and Sr appeared only in the cancerous tissues. The tissue samples were classified according to cancer type (adenocarcinoma, epidermoid, and small-cell carcinoma), personal habits (smoking and alcohol use), genetic predisposition, and place of residence. There was a remarkable decrease in the concentration of Ca and a marked increment in Cu in the epidermoid tissue samples relative to those of adenocarcinoma or small-cell cancer. Decrements in K and increments in Fe, Co, and Cu were also detected in the samples belonging to people who resided in Mexico City relative to those who resided in the State of Mexico.
NASA Astrophysics Data System (ADS)
Timmermans, R.; Denier van der Gon, H.; Segers, A.; Honore, C.; Perrussel, O.; Builtjes, P.; Schaap, M.
2012-04-01
Since a major part of the Earth's population lives in cities, it is of great importance to correctly characterise the air pollution levels over these urban areas. Many studies have been dedicated to this subject and have determined so-called urban increments: the impact of large cities on air pollution levels. This impact is usually determined with models driven by so-called downscaled emission inventories, in which official country-total emissions are gridded using information on, for example, population density and the locations of industries and roads. The question is how accurate these downscaled inventories are over cities or large urban areas. Within the EU FP7 MEGAPOLI project, a new emission inventory has been produced that includes refined local emission data for two European megacities (Paris and London) and two urban conglomerations (the Po Valley, Italy, and the Rhine-Ruhr region, Germany) based on a bottom-up approach. The inventory has comparable national totals but remarkable differences at the city scale. Such a bottom-up inventory is thought to be more accurate, as it incorporates local knowledge. Within this study we compared modelled nitrogen dioxide (NO2) and particulate matter (PM) concentrations from the LOTOS-EUROS chemistry transport model driven by a conventional downscaled emission inventory (the TNO-MACC inventory) with concentrations from the same model driven by the new MEGAPOLI bottom-up emission inventory, focusing on the Paris region. Model predictions for Paris improve significantly with the MEGAPOLI inventory. Both the emissions and the simulated average PM concentrations over urban sites in Paris are much lower, owing to the different spatial distribution of the anthropogenic emissions.
The difference for the nearby rural stations is small, implying that the urban increment for PM simulated with the bottom-up emission inventory is also much smaller than that obtained with the downscaled inventory. Urban increments for PM calculated with downscaled emissions, as is common practice, might therefore be overestimated. This finding is likely to apply to other European megacities as well.
Effect of incremental filling technique on adhesion of light-cured resin composite to cavity floor.
Chikawa, Hirokazu; Inai, Norimichi; Cho, Eitetsu; Kishikawa, Ryuzo; Otsuki, Masayuki; Foxton, Richard M; Tagami, Junji
2006-09-01
The purpose of this study was to evaluate the effect of various incremental filling techniques on adhesion between composite and cavity floor using light-cured resin composite. Black ABS resin and hybrid resin composite were used as mold materials, instead of dentin, for the preparation of cavities, which were standardized to 5x5x5 mm. Each cavity was then treated with a bonding system (Clearfil SE Bond). Resin composite (Clearfil Photo Core) was placed on the bonding resin using different incremental filling techniques or in bulk and irradiated for a total of 80 seconds using a halogen light unit. Specimens were subjected to the micro-tensile bond test at a crosshead speed of 1 mm/min. Data were analyzed by two-way ANOVA. The results indicated that an incremental filling technique was more effective in improving adhesion to the cavity floor than a bulk filling technique.
Strong correlation in incremental full configuration interaction
NASA Astrophysics Data System (ADS)
Zimmerman, Paul M.
2017-06-01
Incremental Full Configuration Interaction (iFCI) reaches high accuracy electronic energies via a many-body expansion of the correlation energy. In this work, the Perfect Pairing (PP) ansatz replaces the Hartree-Fock reference of the original iFCI method. This substitution captures a large amount of correlation at zero-order, which allows iFCI to recover the remaining correlation energy with low-order increments. The resulting approach, PP-iFCI, is size consistent, size extensive, and systematically improvable with increasing order of incremental expansion. Tests on multiple single bond, multiple double bond, and triple bond dissociations of main group polyatomics using double and triple zeta basis sets demonstrate the power of the method for handling strong correlation. The smooth dissociation profiles that result from PP-iFCI show that FCI-quality ground state computations are now within reach for systems with up to about 10 heavy atoms.
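The incremental (many-body) expansion underlying iFCI can be sketched generically by inclusion-exclusion over subsets: each n-body increment is the subsystem energy minus all lower-order increments it contains. The sketch below is a generic expansion, not Zimmerman's PP-iFCI; the toy energy function is illustrative and has nothing to do with real electronic energies.

```python
from itertools import combinations

def incremental_expansion(bodies, energy, order=2):
    # generic many-body expansion of a correlation energy:
    # eps_i = E({i}); eps_ij = E({i,j}) - eps_i - eps_j; total = sum of increments
    eps = {}
    total = 0.0
    for n in range(1, order + 1):
        for subset in combinations(bodies, n):
            e = energy(frozenset(subset))
            # subtract every lower-order increment contained in this subset
            for m in range(1, n):
                for sub in combinations(subset, m):
                    e -= eps[frozenset(sub)]
            eps[frozenset(subset)] = e
            total += e
    return total

# toy "energy": additive one-body terms plus a constant pairwise coupling,
# so a second-order expansion recovers the full energy exactly
toy = lambda s: -sum(s) + 0.1 * (len(s) * (len(s) - 1) // 2)
print(round(incremental_expansion([1, 2, 3], toy), 6))  # → -5.7
```

The practical appeal, as in the abstract, is that when the reference already captures most of the correlation (here, the additive part), the low-order increments are small and the truncated expansion converges quickly.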
KC-135A in flight - winglet study
NASA Technical Reports Server (NTRS)
1979-01-01
During the 1970s, the focus at Dryden shifted from high-speed and high-altitude flight to incremental improvements in technology and aircraft efficiency. One manifestation of this trend occurred in the winglet flight research carried out on a KC-135 during 1979 and 1980. Richard Whitcomb at the Langley Research Center had originated the idea of adding small vertical fins to an aircraft's wing tips. His wind tunnel tests indicated that winglets produced a forward thrust, which reduced the strength of the vortices generated by an aircraft's wing tips and resulted in a reduction of drag and an increase in aircraft range. Whitcomb, who had previously developed the area rule concept and the supercritical wing, selected the best winglet shape for flight tests on a KC-135 tanker. When the tests were completed, the data showed that the winglets provided a 7 percent improvement in range over the standard KC-135. The obvious economic advantage at a time of high fuel costs caused winglets to be adopted on business jets, airliners, and heavy military transports.
Lautt, W W; Legare, D J; Greenway, C V
1987-11-01
In dogs anesthetized with pentobarbital, central vena caval pressure (CVP), portal venous pressure (PVP), and intrahepatic lobar venous pressure (proximal to the hepatic venous sphincters) were measured. The objective was to determine some characteristics of the intrahepatic vascular resistance sites (proximal and distal to the hepatic venous sphincters), including testing predictions made using a recent mathematical model of distensible hepatic venous resistance. The stimulus used was a brief rise in CVP produced by transient occlusion of the thoracic vena cava in the control state and when vascular resistance was elevated by infusions of norepinephrine or histamine, or by nerve stimulation. The percent transmission of the downstream pressure rise to upstream sites, past areas of vascular resistance, was evaluated. Even small increments in CVP are partially transmitted upstream. The data are incompatible with the vascular waterfall phenomenon, which predicts that venous pressure increments are not transmitted upstream until a critical pressure is overcome and then further increments would be 100% transmitted. The hepatic sphincters show the following characteristics. First, small rises in CVP are transmitted less than large elevations; as the CVP rises, the sphincters passively distend and allow a greater percent transmission upstream, thus a large rise in CVP is more fully transmitted than a small rise in CVP. Second, the amount of pressure transmission upstream is determined by the vascular resistance across which the pressure is transmitted. As nerves, norepinephrine, or histamine cause the hepatic sphincters to contract, the percent transmission becomes less and the distensibility of the sphincters is reduced. Similar characteristics are shown for the "presinusoidal" vascular resistance and the hepatic venous sphincter resistance.(ABSTRACT TRUNCATED AT 250 WORDS)
Rakesh Minocha; Walter C. Shortle
1993-01-01
Two simple and fast methods for the extraction of major inorganic cations (Ca, Mg, Mn, K) from small quantities of stemwood and needles of woody plants were developed. A 3.2- or 6.4-mm cobalt drill bit was used to shave samples from disks and increment cores of stemwood. For ion extraction, wood (ground or shavings) or needles were either homogenized using a Tekmar...
A Platform for Developing Autonomy Technologies for Small Military Robots
2008-12-01
angular increments around the disk so described. A line scanner oriented so the plane of detected points is horizontal (e.g., the axis about which...behaviors can be implemented. Thus it will contain the custom scripts, executables, and data that compose the actual behavior of the robot. Currently, the...operating system was constructed to be relatively small and boot fast. Debian GNU/Linux, however, provides an installation script that downloads a
Tapping Transaction Costs to Forecast Acquisition Cost Breaches
2016-01-01
Diameter Bomb Increment II (SDB II) Space Based Infrared System (SBIRS) High Program Standard Missile (SM) - 2 Block IV Stryker Family of Vehicles...some limitations. First, the activities included in this category will vary somewhat from contractor to contractor. As a result, a small portion of...contract may vary over time as new costs are defined by the contractor as being related to SE/PM. This could explain a small portion of the increase in
Defensiveness versus remediation: self-theories and modes of self-esteem maintenance.
Nussbaum, A David; Dweck, Carol S
2008-05-01
How people maintain and repair their self-esteem has been a topic of widespread interest. In this article, the authors ask, What determines whether people will use direct, remedial actions, or defensive actions? In three studies, they tested the hypothesis that a belief in fixed intelligence (entity theory) would produce defensiveness, whereas a belief in improvable intelligence (incremental theory) would foster remediation. In each study, participants assigned to the entity condition opted for defensive self-esteem repair (downward comparison in Studies 1 and 3; a tutorial on already mastered material in Study 2), but those in the incremental condition opted for self-improvement (upward comparison in Studies 1 and 3; a tutorial on unmastered material in Study 2). Experiment 3 also linked these strategies to self-esteem repair; remedial strategies were the most effective in recovering lost self-esteem for those in the incremental condition, whereas defensive strategies were most effective for those in the entity condition.
Effectiveness of Mirror Therapy for Subacute Stroke in Relation to Chosen Factors.
Radajewska, Alina; Opara, Józef; Biliński, Grzegorz; Kaczorowska, Antonina; Nawrat-Szołtysik, Agnieszka; Kucińska, Aleksandra; Lepsy, Ewelina
The aim of this study was to determine the effectiveness of mirror therapy (MT) combined with comprehensive treatment and to investigate the possible relationships of functional state. Prospective, controlled trial of 60 stroke inpatients. The Functional Index "Repty" (FIR) was an outcome measure to assess changes of independence in daily activities. The Frenchay Arm Test (FAT) and Motor Status Score were outcome measures to assess changes in hand function. The analysis of pre- and posttest data indicated a significant improvement in hand function (ΔFAT in the Mirror group, p = .035, N = 30). The age factor indicated a significant change in relation to the FIR outcome (ΔFIR in the Mirror group, p = .005, N = 30, and ΔFIR in the Mirror group [left hand paresis], p = .037, N = 15). Additional MT influenced improvement in hand function. Age is significant in terms of functional state; older adults are likely to benefit from MT. A positive impact of combining MT with other treatment was indicated.
On the validity of the incremental approach to estimate the impact of cities on air quality
NASA Astrophysics Data System (ADS)
Thunis, Philippe
2018-01-01
The question of how much cities are the sources of their own air pollution is not only theoretical: it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimating the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components, and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city, whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5, these two assumptions are far from being fulfilled for many large and medium-sized cities. For such cities, urban increments largely underestimate city impacts. Although results are in better agreement for NO2, similar issues are met. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues of interpretation when these increments are used to define strategic options for air quality planning. We finally illustrate the interest of comparing modelled and measured increments to improve confidence in the model results.
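The decomposition described in this abstract (true city impact = urban increment plus two correction terms) can be written out directly from four model scenarios. A minimal sketch follows; the variable names are assumptions, and the four inputs would come from paired model runs with and without city emissions.

```python
def decompose_city_impact(c_urb_full, c_rur_full, c_urb_zero, c_rur_zero):
    # *_full: modelled concentrations with all emissions
    # *_zero: the same model runs with city emissions set to zero
    impact = c_urb_full - c_urb_zero       # true effect of city emissions
    increment = c_urb_full - c_rur_full    # measurable urban increment
    spillover = c_rur_full - c_rur_zero    # city influence at the rural site (assumption 1)
    bg_diff = c_rur_zero - c_urb_zero      # background mismatch between sites (assumption 2)
    # identity: impact = increment + spillover + bg_diff
    assert abs(impact - (increment + spillover + bg_diff)) < 1e-9
    return impact, increment, spillover, bg_diff
```

The increment equals the impact only when both correction terms vanish, i.e. when the two assumptions discussed above hold; positive spillover or background mismatch makes the increment an underestimate, as reported for PM2.5.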
Yeager, David S; Lee, Hae Yeon; Jamieson, Jeremy P
2016-08-01
This research integrated implicit theories of personality and the biopsychosocial model of challenge and threat, hypothesizing that adolescents would be more likely to conclude that they can meet the demands of an evaluative social situation when they were taught that people have the potential to change their socially relevant traits. In Study 1 (N = 60), high school students were assigned to an incremental-theory-of-personality or a control condition and then given a social-stress task. Relative to control participants, incremental-theory participants exhibited improved stress appraisals, more adaptive neuroendocrine and cardiovascular responses, and better performance outcomes. In Study 2 (N = 205), we used a daily-diary intervention to test high school students' stress reactivity outside the laboratory. Threat appraisals (Days 5-9 after intervention) and neuroendocrine responses (Days 8 and 9 after intervention only) were unrelated to the intensity of daily stressors when adolescents received the incremental-theory intervention. Students who received the intervention also had better grades over freshman year than those who did not. These findings offer new avenues for improving theories of adolescent stress and coping. © The Author(s) 2016.
Reynolds Number Effects on the Performance of Ailerons and Spoilers (Invited)
NASA Technical Reports Server (NTRS)
Mineck, R. E.
2001-01-01
The influence of Reynolds number on the performance of outboard spoilers and ailerons was investigated on a generic subsonic transport configuration in the National Transonic Facility over a chord Reynolds number range from 3 to 30 million and a Mach number range from 0.70 to 0.94. Spoiler deflection angles of 0, 10, and 20 degrees and aileron deflection angles of -10, 0, and 10 degrees were tested. Aeroelastic effects were minimized by testing at constant normalized dynamic pressure conditions over intermediate Reynolds number ranges. Results indicated that the increment in rolling moment due to spoiler deflection generally becomes more negative as the Reynolds number increases from 3×10^6 to 22×10^6, with only small changes between Reynolds numbers of 22×10^6 and 30×10^6. The change in the increment in rolling moment coefficient with Reynolds number for the aileron-deflected configuration is generally small, with a general trend of increasing magnitude with increasing Reynolds number.
Alesi, Marianna; Rappo, Gaetano; Pepi, Annamaria
2016-01-01
One of the most significant current discussions has led to the hypothesis that domain-specific training programs alone are not enough to improve reading achievement or working memory abilities. Incremental or entity personal conceptions of intelligence may be an important prognostic factor in overcoming domain-specific deficits. Specifically, incremental students tend to be more oriented toward change and autonomy and are able to adopt more efficacious strategies. This study aims to examine whether personal conceptions of intelligence strengthen the efficacy of a multidimensional intervention program designed to improve decoding abilities and working memory. Participants included two children (M age = 10 years) with developmental dyslexia and different conceptions of intelligence. The children were tested on a whole battery of reading and spelling tests commonly used in the assessment of reading disabilities in Italy. Afterwards, they were given a multimedia test to measure motivational factors such as conceptions of intelligence and achievement goals. The children took part in the T.I.R.D. Multimedia Training for the Rehabilitation of Dyslexia (Rappo and Pepi, 2010), reinforced by specific units to improve verbal working memory, for 3 months. This training consisted of specific tasks to rehabilitate both visual and phonological strategies (sound blending, word segmentation, alliteration and rhyme tests, letter recognition, digraph recognition, trigraph recognition, and word recognition as samples of visual tasks) and verbal working memory (rapid word and non-word recognition). Posttest evaluations showed that the child holding the incremental theory of intelligence improved more than the child holding a static representation. On the whole, this study highlights the importance of treatment programs in which both the specificity of deficits and motivational factors are taken into account.
There is a need to plan multifaceted intervention programs based on a transverse approach, considering both cognitive and motivational factors. PMID:26779069
Investigating the incremental validity of cognitive variables in early mathematics screening.
Clarke, Ben; Shanley, Lina; Kosty, Derek; Baker, Scott K; Cary, Mari Strand; Fien, Hank; Smolkowski, Keith
2018-03-26
The purpose of this study was to investigate the incremental validity of a set of domain-general cognitive measures added to a traditional screening battery of early numeracy measures. The sample consisted of 458 kindergarten students, of whom 285 were designated as severely at risk for mathematics difficulty. Hierarchical multiple regression results indicated that Wechsler Abbreviated Scales of Intelligence (WASI) Matrix Reasoning and Vocabulary subtests and Digit Span Forward and Backward measures explained a small but unique portion of the variance in kindergarten students' mathematics performance on the Test of Early Mathematics Ability-Third Edition (TEMA-3) when controlling for Early Numeracy Curriculum Based Measurement (EN-CBM) screening measures (R² change = .01). Furthermore, the incremental validity of the domain-general cognitive measures was relatively stronger for the severely at-risk sample. We discuss results from the study in light of instructional decision-making and note the findings do not justify adding domain-general cognitive assessments to mathematics screening batteries. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
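Incremental validity via hierarchical regression amounts to comparing R² before and after adding the cognitive block. A sketch on simulated data (not the study's data; the predictor names and coefficients are placeholders) using ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 458  # sample size borrowed from the study; all data below are simulated

# Hypothetical predictors: two EN-CBM screening scores (base block) and
# four domain-general cognitive measures (added block).
en_cbm = rng.normal(size=(n, 2))
cognitive = rng.normal(size=(n, 4))
y = en_cbm @ np.array([0.6, 0.4]) + 0.1 * cognitive[:, 0] + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

r2_base = r_squared(en_cbm, y)
r2_full = r_squared(np.column_stack([en_cbm, cognitive]), y)
delta_r2 = r2_full - r2_base  # incremental validity of the cognitive block
print(round(delta_r2, 3))
```

With nested OLS models the R² change is never negative; whether a small ΔR² (such as the study's .01) justifies the extra testing burden is the substantive question.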
NASA Astrophysics Data System (ADS)
Lee, Byungjin; Lee, Young Jae; Sung, Sangkyung
2018-05-01
A novel attitude determination method is investigated that is computationally efficient and implementable on low-cost sensor and embedded platforms. A recent result on attitude reference system design is adapted to further develop a three-dimensional attitude determination algorithm based on relative velocity incremental measurements. For this, velocity increment vectors, computed respectively from INS and GPS with different update rates, are compared to generate the filter measurement for attitude estimation. In the quaternion-based Kalman filter configuration, an Euler-like attitude perturbation angle is uniquely introduced to reduce the filter states and simplify the propagation processes. Furthermore, assuming a small-angle approximation between attitude update periods, it is shown that the reduced-order filter greatly simplifies the propagation processes. For performance verification, both simulation and experimental studies are completed. A low-cost MEMS IMU and GPS receiver are employed for system integration, and comparison with the true trajectory or a high-grade navigation system demonstrates the performance of the proposed algorithm.
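The measurement formation described above can be sketched simply: the filter measurement is the difference between the GPS-derived and INS-derived velocity increments over a common window. All numbers below are invented; the real algorithm also involves frame transformations and the quaternion filter itself, which this sketch omits:

```python
import numpy as np

# Invented endpoint velocity samples over one GPS update window (m/s).

def window_increment(velocities):
    """Velocity increment over a window: last sample minus first."""
    return velocities[-1] - velocities[0]

ins_vel = np.array([[10.0, 0.0, 0.0], [10.5, 0.1, 0.0]])   # INS-derived
gps_vel = np.array([[10.0, 0.0, 0.0], [10.4, 0.05, 0.0]])  # GPS-derived

# Filter measurement: the mismatch between the two increments, driven by
# attitude error (plus sensor noise) in the INS mechanization.
z = window_increment(gps_vel) - window_increment(ins_vel)
print(z)
```

Because increments, not absolute velocities, are differenced, slowly varying biases largely cancel, which is part of what makes the approach attractive on low-cost sensors.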
Two Different Views on the World Around Us: The World of Uniformity versus Diversity.
Kwon, JaeHwan; Nayakankuppam, Dhananjay
2016-01-01
We propose that when individuals believe in fixed traits of personality (entity theorists), they are likely to expect a world of "uniformity." As such, they easily infer a population statistic from a small sample of data with confidence. In contrast, individuals who believe in malleable traits of personality (incremental theorists) are likely to presume a world of "diversity," such that they "hesitate" to infer a population statistic from a similarly sized sample. In four laboratory experiments, we found that compared to incremental theorists, entity theorists estimated a population mean from a sample with a greater level of confidence (Studies 1a and 1b), expected more homogeneity among the entities within a population (Study 2), and perceived an extreme value to be more indicative of an outlier (Study 3). These results suggest that individuals are likely to use their implicit self-theory orientations (entity theory versus incremental theory) to see a population as constituted of either homogeneous or heterogeneous entities.
Code of Federal Regulations, 2010 CFR
2010-07-01
... describing the devices for air pollution control and process changes that you will use to comply with the emission limits and other requirements of this subpart. If you plan to reduce your small municipal waste...
Warfighter Information Network-Tactical Increment 3 (WIN-T Inc 3)
2013-12-01
WIN-T Inc 3 vehicles employed at BCT, Fires, AVN, BfSB, and select force pooled assets protect passengers and crew from small arms fire, mines, IEDs, and other anti-vehicle/personnel threats. (WIN-T Inc 3, December 2013 SAR, Ch-1)
Spatial and velocity statistics of inertial particles in turbulent flows
NASA Astrophysics Data System (ADS)
Bec, J.; Biferale, L.; Cencini, M.; Lanotte, A. S.; Toschi, F.
2011-12-01
Spatial and velocity statistics of heavy point-like particles in incompressible, homogeneous, and isotropic three-dimensional turbulence are studied by means of direct numerical simulations at two values of the Taylor-scale Reynolds number, Reλ ~ 200 and Reλ ~ 400, corresponding to resolutions of 512^3 and 2048^3 grid points, respectively. Particle Stokes numbers range from St ≈ 0.2 to 70. The stationary small-scale particle distribution is shown to display a singular (multifractal) measure, characterized by a set of generalized fractal dimensions with a strong sensitivity to the Stokes number and a possible, small Reynolds number dependency. Velocity increments between two inertial particles depend on the relative weight between smooth events, where the particle velocity is approximately the same as the fluid velocity, and caustic contributions, when two close particles have very different velocities. The latter events lead to a non-differentiable small-scale behaviour for the relative velocity. The relative weight of these two contributions changes with the importance of inertia. We show that moments of the velocity difference display a quasi-bi-fractal behavior and that the scaling properties of velocity increments for not-too-small Stokes numbers are in good agreement with a recent theoretical prediction by K. Gustavsson and B. Mehlig (arXiv:1012.1789v1 [physics.flu-dyn]), connecting the saturation of velocity scaling exponents with the fractal dimension of particle clustering.
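The velocity increments and their moments (structure functions) discussed above can be illustrated on a synthetic 1-D signal; the study itself uses DNS particle data, so this is purely schematic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D "velocity" signal on a regular grid (a random walk),
# standing in for DNS data; random-walk increments grow with separation,
# mimicking the growth of a structure function with scale.
n = 4096
v = np.cumsum(rng.normal(size=n))

def structure_function(v, r, p):
    """p-th order moment of velocity increments dv_r = v(i+r) - v(i)."""
    dv = v[r:] - v[:-r]
    return np.mean(np.abs(dv) ** p)

s2_small = structure_function(v, 1, 2)   # ~ variance of a single step
s2_large = structure_function(v, 64, 2)  # ~ 64x larger for a random walk
print(s2_small < s2_large)
```

In the paper's setting, the scaling exponent of such moments with separation r, and its saturation due to caustics, is the quantity compared against the Gustavsson-Mehlig prediction.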
Modern traffic control devices to improve safety at rural intersections.
DOT National Transportation Integrated Search
2011-12-01
"Engineers with the Texas Department of Transportation (TxDOT) frequently make changes to traffic control devices : (TCDs) to improve intersection safety. To use available funds judiciously, engineers make incremental changes in : order to select the...
Rolling scheduling of electric power system with wind power based on improved NNIA algorithm
NASA Astrophysics Data System (ADS)
Xu, Q. S.; Luo, C. J.; Yang, D. J.; Fan, Y. H.; Sang, Z. X.; Lei, H.
2017-11-01
This paper puts forth a rolling modification strategy for day-ahead scheduling of an electric power system with wind power, which takes the unit operation-cost increment and the grid's curtailed wind power as dual modification objectives. Additionally, an improved Nondominated Neighbor Immune Algorithm (NNIA) is proposed for the solution. The proposed rolling scheduling model further reduces the system operation cost in the intra-day generation process, enhances the system's capacity to accommodate wind power, and modifies key transmission-section power flows in a rolling manner to satisfy the security constraints of the power grid. The improved NNIA defines an antibody preference relation model based on the equal incremental rate, regulation deviation constraints, and the maximum and minimum technical outputs of units. The model noticeably guides the direction of antibody evolution, significantly speeds up convergence to the final solution, and enhances the local search capability.
Making Sense of School Turnarounds
ERIC Educational Resources Information Center
Hess, Frederick M.
2012-01-01
Today, in a sector flooded with $3.5 billion in School Improvement Grant funds and the resulting improvement plans, there's great faith that "turnaround" strategies are a promising way to tackle stubborn problems with persistently low-performing schools. Unlike traditional reform efforts, with their emphasis on incremental improvement, turnarounds…
Conversion of type of quantum well structure
NASA Technical Reports Server (NTRS)
Ning, Cun-Zheng (Inventor)
2007-01-01
A method for converting a Type 2 quantum well semiconductor material to a Type 1 material. A second layer of undoped material is placed between first and third layers of selectively doped material, which are separated from the second layer by undoped layers having small widths. Doping profiles are chosen so that a first electrical potential increment across a first layer-second layer interface is equal to a first selected value and/or a second electrical potential increment across a second layer-third layer interface is equal to a second selected value. The semiconductor structure thus produced is useful as a laser material and as an incident light detector material in various wavelength regions, such as a mid-infrared region.
The incremental costs of recommended therapy versus real world therapy in type 2 diabetes patients
Crivera, C.; Suh, D. C.; Huang, E. S.; Cagliero, E.; Grant, R. W.; Vo, L.; Shin, H. C.; Meigs, J. B.
2008-01-01
Background: The goals of diabetes management have evolved over the past decade to become the attainment of near-normal glucose and cardiovascular risk factor levels. Improved metabolic control is achieved through optimized medication regimens, but costs specifically associated with such optimization have not been examined. Objective: To estimate the incremental medication cost of providing optimal therapy to reach recommended goals versus actual therapy in patients with type 2 diabetes. Methods: We randomly selected the charts of 601 type 2 diabetes patients receiving care from the outpatient clinics of Massachusetts General Hospital between March 1, 1996 and August 31, 1997 and abstracted clinical and medication data. We applied treatment algorithms based on 2004 clinical practice guidelines for hyperglycemia, hyperlipidemia, and hypertension to patients' current medication therapy to determine how current medication regimens could be improved to attain recommended treatment goals. Four clinicians and three pharmacists independently applied the algorithms and reached consensus on recommended therapies. Mean incremental medication costs (the cost differences between current and recommended therapies) per patient, expressed in 2004 dollars, were calculated with 95% bootstrap confidence intervals (CIs). Results: Mean patient age was 65 years, mean duration of diabetes was 7.7 years, 32% had ideal glucose control, 25% had ideal systolic blood pressure, and 24% had ideal low-density lipoprotein cholesterol. Care for these diabetes patients was similar to that observed in recent national studies. If treatment algorithm recommendations were applied, the average annual medication cost per patient would increase from $1525 to $2164. Annual incremental costs per patient increased by $168 (95% CI $133–$206) for antihyperglycemic medications, $75 ($57–$93) for antihypertensive medications, $392 ($354–$434) for antihyperlipidemic medications, and $3 ($3–$4) for aspirin prophylaxis. The yearly incremental cost of recommended laboratory testing ranged from $77 to $189 per patient. Limitations: Although the baseline data come from the clinics of a single academic institution, collected in 1997, the care of these diabetes patients was remarkably similar to care recently observed nationally. In addition, the data are dependent on the medical record and may not accurately reflect patients' actual experiences. Conclusion: The average yearly incremental cost of optimizing drug regimens to achieve recommended treatment goals for type 2 diabetes was approximately $600 per patient. These results provide valuable input for assessing the cost-effectiveness of improving comprehensive diabetes care. PMID:17076990
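The percentile bootstrap CI for a mean cost, as used in the study, is straightforward to sketch; the cost data below are simulated placeholders, not the chart-review data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated per-patient incremental annual medication costs (USD);
# the actual study abstracted costs from 601 patient charts.
costs = rng.gamma(shape=2.0, scale=300.0, size=601)

def bootstrap_ci_mean(data, n_boot=2000, alpha=0.05, rng=rng):
    """Percentile bootstrap confidence interval for the mean: resample
    with replacement, take each resample's mean, report the tail
    percentiles of those means."""
    means = np.array([
        rng.choice(data, size=data.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

lo, hi = bootstrap_ci_mean(costs)
print(round(costs.mean()), round(lo), round(hi))
```

The percentile method is attractive for cost data because it makes no normality assumption, which matters for the right-skewed distributions typical of medication costs.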
Barasa, Edwine W.; Ayieko, Philip; Cleary, Susan; English, Mike
2012-01-01
Background To improve care for children in district hospitals in Kenya, a multifaceted approach employing guidelines, training, supervision, feedback, and facilitation was developed, for brevity called the Emergency Triage and Treatment Plus (ETAT+) strategy. We assessed the cost effectiveness of the ETAT+ strategy, in Kenyan hospitals. Further, we estimate the costs of scaling up the intervention to Kenya nationally and potential cost effectiveness at scale. Methods and Findings Our cost-effectiveness analysis from the provider's perspective used data from a previously reported cluster randomized trial comparing the full ETAT+ strategy (n = 4 hospitals) with a partial intervention (n = 4 hospitals). Effectiveness was measured using 14 process measures that capture improvements in quality of care; their average was used as a summary measure of quality. Economic costs of the development and implementation of the intervention were determined (2009 US$). Incremental cost-effectiveness ratios were defined as the incremental cost per percentage improvement in (average) quality of care. Probabilistic sensitivity analysis was used to assess uncertainty. The cost per child admission was US$50.74 (95% CI 49.26–67.06) in intervention hospitals compared to US$31.1 (95% CI 30.67–47.18) in control hospitals. Each percentage improvement in average quality of care cost an additional US$0.79 (95% CI 0.19–2.31) per admitted child. The estimated annual cost of nationally scaling up the full intervention was US$3.6 million, approximately 0.6% of the annual child health budget in Kenya. A “what-if” analysis assuming conservative reductions in mortality suggests the incremental cost per disability adjusted life year (DALY) averted by scaling up would vary between US$39.8 and US$398.3. Conclusion Improving quality of care at scale nationally with the full ETAT+ strategy may be affordable for low income countries such as Kenya. 
Resultant plausible reductions in hospital mortality suggest the intervention could be cost-effective when compared to incremental cost-effectiveness ratios of other priority child health interventions. Please see later in the article for the Editors' Summary PMID:22719233
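The incremental cost-effectiveness ratio used in the ETAT+ analysis is simply incremental cost over incremental effect. In the sketch below, the per-admission costs are the trial's reported figures, while the effect values (percentage points of average quality-of-care improvement) are invented, chosen so the ratio reproduces the reported US$0.79 per percentage point:

```python
# Minimal incremental cost-effectiveness ratio (ICER) sketch.

def icer(cost_int, cost_ctrl, effect_int, effect_ctrl):
    """Extra cost per unit of extra effect."""
    return (cost_int - cost_ctrl) / (effect_int - effect_ctrl)

# Per-admission costs (US$) are from the trial; the effect difference of
# 24.85 percentage points is a hypothetical value chosen to reproduce
# the reported US$0.79 per percentage-point improvement.
ratio = icer(cost_int=50.74, cost_ctrl=31.1, effect_int=24.85, effect_ctrl=0.0)
print(round(ratio, 2))  # 0.79
```

In practice the uncertainty in both numerator and denominator is propagated (here via probabilistic sensitivity analysis), since a ratio of two uncertain differences can behave badly when the effect difference is near zero.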
Dinh, Michael M; Bein, Kendall J; Hendrie, Delia; Gabbe, Belinda; Byrne, Christopher M; Ivers, Rebecca
2016-09-01
Objective: The aim of the present study was to estimate the cost-effectiveness of trauma service funding enhancements at an inner-city major trauma centre. Methods: The present study was a cost-effectiveness analysis using retrospective trauma registry data of all major trauma patients (injury severity score >15) presenting after road trauma between 2001 and 2012. The primary outcome was cost per life year gained associated with the intervention period (2007-12) compared with the pre-intervention period (2001-06). Incremental costs were represented by all trauma-related funding enhancements undertaken between 2007 and 2010. Risk adjustment for years of life lost was conducted using zero-inflated negative binomial regression modelling. All costs were expressed in 2012 Australian dollar values. Results: In all, 876 patients were identified during the study period. The incremental cost of trauma enhancements between 2007 and 2012 totalled $7.91 million, of which $2.86 million (36%) was attributable to road trauma patients. After adjustment for important covariates, the odds of in-hospital mortality reduced by around half (adjusted odds ratio (OR) 0.48; 95% confidence interval (CI) 0.27, 0.82; P=0.01). The incremental cost-effectiveness ratio was A$7600 per life year gained (95% CI A$5524, A$19,333). Conclusion: Trauma service funding enhancements that enabled a quality improvement program at a single major trauma centre were found to be cost-effective based on current international and Australian standards. What is known about this topic? Trauma quality improvement programs have been implemented across most designated trauma hospitals in an effort to improve hospital care processes and outcomes for injured patients. These involve a combination of education and training and the use of audit and key performance indicators. What does this paper add? A trauma quality improvement program initiated at an Australian Major Trauma Centre was found to be cost-effective over 12 years with respect to years of life saved in road trauma patients. What are the implications for practitioners? The results suggest that adequate resourcing of trauma centres to enable quality improvement programs may be a cost-effective measure to reduce in-hospital mortality following road trauma.
Gómez-Garre, D; Martín-Ventura, J L; Granados, R; Sancho, T; Torres, R; Ruano, M; García-Puig, J; Egido, J
2006-04-01
Although structural and functional changes of resistance arteries have been proposed to participate in arterial hypertension (HTA) outcome, not all therapies may correct these alterations, even if they normalize the blood pressure (BP). The aim of this study was to investigate the mechanisms of the protection afforded by the angiotensin receptor antagonist losartan in resistance arteries from patients with essential HTA. In all, 22 untreated hypertensive patients were randomized to receive losartan or amlodipine for 1 year and the morphological characteristics of resistance vessels from subcutaneous biopsies were evaluated. Protein expression of connective tissue growth factor (CTGF), transforming growth factor beta (TGF-beta), and collagens III and IV was detected by immunohistochemistry. In comparison with normotensive subjects, resistance arteries from hypertensive patients showed a significant media:lumen (M/L) ratio increment and a higher protein expression of CTGF, TGF-beta, and collagens. After 1 year of treatment, both losartan and amlodipine similarly controlled BP. However, M/L only decreased in patients under losartan treatment, whereas in the amlodipine-treated group this ratio continued to increase significantly. The administration of losartan prevented significant increments in CTGF, TGF-beta, and collagens in resistance arteries. By contrast, amlodipine-treated patients showed a higher vascular CTGF, TGF-beta, and collagen IV staining than before treatment. Our results show that the administration of losartan, but not amlodipine, to hypertensive patients improves structural abnormalities and prevents the production of CTGF and TGF-beta in small arteries, despite similar BP lowering. These data may explain the molecular mechanisms of the better vascular protection afforded by drugs interfering with the renin-angiotensin system.
Dendroclimatic estimates of a drought index for northern Virginia
Puckett, Larry J.
1981-01-01
A 230-year record of the Palmer drought-severity index (PDSI) was estimated for northern Virginia from variations in widths of tree rings. Increment cores were extracted from eastern hemlock, Tsuga canadensis (L.) Carr., at three locations in northern Virginia. Measurements of annual growth increments were made and converted to standardized indices of growth. A response function was derived for hemlock to determine the growth-climate relationship. Growth was positively correlated with precipitation and negatively correlated with temperature during the May-July growing season. Combined standardized indices of growth were calibrated with the July PDSI. Growth accounted for 20-30 percent of the PDSI variance. Further regressions using factor scores of combined tree growth indices resulted in a small but significant improvement. The greatest improvement was made by using factor scores of growth indices of individual trees, thereby accounting for 64 percent of the July PDSI variance in the regression. Comparison of the results with a 241-year reconstruction from New York showed good agreement between low-frequency climatic trends. Analysis of the estimated PDSI record for Virginia's Central Mountain climatic division indicated that, relative to the long-term record (1746-1975), dry years have occurred in disproportionately larger numbers during the last half of the 19th century and the mid-20th century. This trend appears reversed for the last half of the 18th century and the first half of the 19th century. Although these results are considered first-generation products, they are encouraging, suggesting that once additional tree-ring chronologies are constructed and techniques are refined, it will be possible to obtain more accurate estimates of prior climatic conditions in the mid-Atlantic region.
Fang, Ning; Sun, Wei
2015-04-21
A method, apparatus, and system for improved VA-TIRFM microscopy. The method comprises automatically controlled calibration of one or more laser sources by precise control of the presentation of each laser relative to a sample, in small incremental changes of incident angle over a range of critical TIR angles. The calibration then allows precise scanning of the sample at any of those calibrated angles for higher and more accurate resolution, and better reconstruction of the scans for super-resolution reconstruction of the sample. Optionally, the system can be controlled for incident angles of the excitation laser at sub-critical angles for pseudo-TIRFM. Optionally, both above-critical-angle and sub-critical-angle measurements can be accomplished with the same system.
Humanoids in Support of Lunar and Planetary Surface Operations
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Keymeulen, Didier
2006-01-01
This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for construction, maintenance/repair, and operation of lunar/planetary habitats, bases, and settlements. It integrates this vision with the recent plans for human and robotic exploration, aligning a set of milestones for operational capability of humanoids with the schedule for the next decades and development spirals in Project Constellation. These milestones relate to a set of incremental challenges whose solution requires new humanoid technologies. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project using the small-scale Fujitsu HOAP-2 humanoid is outlined.
Software designs of image processing tasks with incremental refinement of computation.
Anastasia, Davide; Andreopoulos, Yiannis
2010-08-01
Software realizations of computationally demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities, since worst-case considerations must be in place to guarantee reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution, and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be terminated at any point and still provide the result up to the computed precision. Application examples with region-of-interest-based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
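Because a linear transform distributes over bitplanes, processing input increments MSB-first refines the output monotonically, which is the core of the incremental-computation idea above. A minimal sketch (illustrative data and a plain-loop convolution, not the authors' released software):

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative 8-bit image block and a non-negative smoothing kernel.
img = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=np.int64)

def conv2d_valid(a, k):
    """Plain 'valid'-mode 2-D correlation with explicit loops."""
    oh, ow = a.shape[0] - k.shape[0] + 1, a.shape[1] - k.shape[1] + 1
    out = np.zeros((oh, ow), dtype=np.int64)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(a[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

exact = conv2d_valid(img.astype(np.int64), kernel)

# Incremental refinement: feed bitplanes MSB-first; since convolution is
# linear and the kernel non-negative, each added plane shrinks the
# remaining error, and after all 8 planes the result is exact.
partial = np.zeros_like(exact)
errors = []
for bit in range(7, -1, -1):
    plane = (((img >> bit) & 1).astype(np.int64)) << bit
    partial += conv2d_valid(plane, kernel)
    errors.append(int(np.abs(exact - partial).max()))

print(errors)  # monotonically non-increasing, ending at 0
```

Terminating the loop early yields a usable approximation at the precision computed so far, which is exactly the graceful-degradation property the abstract describes.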
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phadke, Amol; Shah, Nihar; Abhyankar, Nikit
Improving the efficiency of air conditioners (ACs) typically involves improving the efficiency of various components such as compressors, heat exchangers, expansion valves, refrigerants, and fans. We estimate the incremental cost of improving the efficiency of room ACs based on the cost of improving the efficiency of their key components. Further, we estimate the retail price increase required to cover the cost of efficiency improvement, compare it with electricity bill savings, and calculate the payback period for consumers to recover the additional price of a more efficient AC. The finding that significant efficiency improvement is cost effective from a consumer perspective is robust over a wide range of assumptions. If we assume a 50% higher incremental price compared to our baseline estimate, the payback period for the efficiency level of 3.5 ISEER is 1.1 years. Given the findings of this study, establishing more stringent minimum efficiency performance criteria (one-star level) should be evaluated rigorously, considering the significant benefits to consumers, energy security, and the environment.
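The payback calculation described above is a simple ratio, sketched below. All numbers in the example are illustrative assumptions, not figures from the study; the function names are invented for clarity.

```python
# Hedged sketch of the consumer payback-period calculation:
# years for electricity bill savings to recover the extra retail price.

def payback_years(incremental_price, annual_kwh_saved, tariff_per_kwh):
    """Payback period = incremental retail price / annual bill savings."""
    annual_savings = annual_kwh_saved * tariff_per_kwh
    return incremental_price / annual_savings

# e.g. an efficient AC costing 4000 more, saving 500 kWh/yr at a tariff of 8/kWh
print(payback_years(4000, 500, 8.0))  # -> 1.0
```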
NASA Astrophysics Data System (ADS)
Zhao, W.; Zha, G. C.; Kong, F. X.; Wu, M. L.; Feng, X.; Gao, S. Y.
2017-05-01
A Ti-6Al-4V alloy clad plate with a Tribaloy 700 alloy laser-clad layer is subjected to incremental shear deformation, and we evaluate the structural evolution and mechanical properties of the specimens. Results indicate that incremental shear deformation has a significant strengthening effect. The wear resistance and Vickers hardness of the laser-clad layer are enhanced due to increased dislocation density. The incremental shear deformation can increase the bonding strength between the laser-clad layer and the substrate and can break up the columnar crystals in the laser-clad layer near the interface. These phenomena suggest that shear deformation eliminates defects at the interface between the laser-clad layer and the substrate. Substrate hardness is evidently improved; the strengthening effect is caused by the increased dislocation density and by the shear deformation, which can transform the α- and β-phases in the substrate into a high-strength ω-phase.
Durkalec-Michalski, Krzysztof; Zawieja, Emilia Ewa; Zawieja, Bogna Ewa; Podgórski, Tomasz; Jurkowska, Dominika; Jeszka, Jan
2017-12-18
The study aimed to assess the influence of a 3-week low glycemic index (LGI) versus moderate glycemic index (MGI) diet on substrate oxidation during incremental exercise. Seventeen runners completed two 3-week trials of either the LGI or MGI diet in a randomised counterbalanced manner. Before and after each trial an incremental cycling test was performed. Metabolic alterations were observed only within the tested diets, and no significant differences in fat and carbohydrate (CHO) oxidation were found between the MGI and LGI diets. Following the MGI diet, the CHO oxidation rate increased. The AUC of fat oxidation decreased after both diets. The percent contribution of fat to energy yield declined, whereas the contribution of CHO was augmented, following the MGI diet. This study indicates that the 3-week MGI diet increased the rate of carbohydrate oxidation during the incremental cycling test and improved performance in an acute intense exercise test, while both high-carbohydrate diets downregulated the fat oxidation rate.
Speeding up nuclear magnetic resonance spectroscopy by the use of SMAll Recovery Times - SMART NMR
NASA Astrophysics Data System (ADS)
Vitorge, Bruno; Bodenhausen, Geoffrey; Pelupessy, Philippe
2010-11-01
A drastic reduction of the time required for two-dimensional NMR experiments can be achieved by reducing or skipping the recovery delay between successive experiments. Novel SMAll Recovery Times (SMART) methods use orthogonal pulsed field gradients in three spatial directions to select the desired pathways and suppress interference effects. Two-dimensional spectra of dilute amino acids with concentrations as low as 2 mM can be recorded in about 0.1 s per increment in the indirect domain.
Bonjour, Timothy J; Charny, Grigory; Thaxton, Robert E
2016-11-01
Rapid, effective trauma resuscitations (TRs) decrease patient morbidity and mortality. Few studies have evaluated TR care times. Effective time goals and superior human patient simulator (HPS) training can improve patient survivability. The purpose of this study was to compare live TR to HPS resuscitation times to determine mean incremental resuscitation times and to ascertain whether simulation was educationally equivalent. The study was conducted at San Antonio Military Medical Center, a Department of Defense Level I trauma center. This was a prospective observational study measuring incremental step times by trauma teams during trauma and simulation patient resuscitations. The trauma and simulation patient arms each had 60 patients for statistical significance. Participants included Emergency Medicine residents and Physician Assistant residents serving as the trauma team leader. The trauma patient arm revealed a mean evaluation time of 10:33, and the simulation arm 10:23. Comparable time characteristics were seen in the airway, intravenous access, blood sample collection, and blood pressure data subsets. TR mean times were similar to the HPS arm subsets, demonstrating simulation as an effective educational tool. Effective stepwise approaches, incremental time goals, and superior HPS training can improve patient survivability and departmental productivity using TR teams. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.
An Improved Incremental Learning Approach for KPI Prognosis of Dynamic Fuel Cell System.
Yin, Shen; Xie, Xiaochen; Lam, James; Cheung, Kie Chung; Gao, Huijun
2016-12-01
The key performance indicator (KPI) has important practical value with respect to product quality and economic benefits in modern industry. To cope with the KPI prognosis issue under nonlinear conditions, this paper presents an improved incremental learning approach based on available process measurements. The proposed approach takes advantage of the algorithmic overlap between locally weighted projection regression (LWPR) and partial least squares (PLS), implementing PLS-based prognosis in each locally linear model produced by the incremental learning process of LWPR. The global prognosis results, including KPI prediction and process monitoring, are obtained from the corresponding normalized weighted means of all the local models. The statistical indicators for prognosis are enhanced as well by the design of novel KPI-related and KPI-unrelated statistics with suitable control limits for non-Gaussian data. For application-oriented purposes, process measurements from real datasets of a proton exchange membrane fuel cell system are employed to demonstrate the effectiveness of KPI prognosis. The proposed approach is finally extended to long-term voltage prediction for potential reference in further fuel cell applications.
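The "normalized weighted mean of all the local models" combination step can be sketched as follows. This is a hedged, one-dimensional illustration, not the paper's implementation: the Gaussian receptive-field weighting and all names and numbers are assumptions (in LWPR the weights come from learned, multidimensional receptive fields).

```python
# Hedged sketch: combining locally linear models by a normalized
# weighted mean, as in locally weighted regression schemes.

import math

def activation(x, center, width):
    """Gaussian receptive-field weight of a 1-D local model (assumed form)."""
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def global_prediction(x, local_models):
    """Normalized weighted mean over (center, width, predict_fn) tuples."""
    num, den = 0.0, 0.0
    for center, width, predict in local_models:
        w = activation(x, center, width)
        num += w * predict(x)
        den += w
    return num / den if den > 0 else 0.0

# Two toy locally linear models covering different regions of the input.
models = [
    (0.0, 1.0, lambda x: 2.0 * x),         # valid near x = 0
    (5.0, 1.0, lambda x: 10.0 - 0.5 * x),  # valid near x = 5
]
print(global_prediction(0.0, models))
```

Near each center, the corresponding local model dominates the prediction; between centers the output blends smoothly, which is what lets each local model carry its own PLS-based prognosis while the global result stays continuous.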
Content-Aware DataGuide with Incremental Index Update using Frequently Used Paths
NASA Astrophysics Data System (ADS)
Sharma, A. K.; Duhan, Neelam; Khattar, Priyanka
2010-11-01
The size of the WWW is increasing day by day. Due to the absence of structured data on the Web, it is very difficult for information retrieval tools to fully utilize Web information. As a partial solution to this problem, XML pages provide structural information to users. Without efficient indexes, however, query processing can be quite inefficient due to exhaustive traversal of XML data. In this paper an improved content-centric approach to the Content-Aware DataGuide, an indexing technique for XML databases, is proposed that uses frequently used paths from historical query logs to improve query performance. The index can be updated incrementally according to changes in the query workload, so the overhead of reconstruction can be minimized. Frequently used paths are extracted by applying any sequential pattern mining algorithm to subsequent queries in the query workload. After this, the data structures are incrementally updated. This indexing technique proves to be efficient, as partial matching queries can be executed efficiently and users obtain more relevant documents in the results.
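The general idea of promoting frequently used paths into an index, updated incrementally as the query workload changes, can be sketched as below. This is an assumed toy illustration, not the paper's algorithm: the simple counting scheme, the `min_support` threshold, and all path names are inventions standing in for a real sequential pattern mining step.

```python
# Hedged sketch: count query paths from a workload and incrementally
# promote "hot" paths into an index once they cross a support threshold.

from collections import Counter

class FrequentPathIndex:
    def __init__(self, min_support=2):
        self.min_support = min_support
        self.counts = Counter()   # path -> times queried
        self.indexed = set()      # paths promoted into the index

    def observe(self, query_path):
        """Record one query; promote the path if it becomes frequent."""
        self.counts[query_path] += 1
        if self.counts[query_path] >= self.min_support:
            self.indexed.add(query_path)  # cheap incremental update

idx = FrequentPathIndex()
for p in ["/lib/book/title", "/lib/book/title", "/lib/book/isbn"]:
    idx.observe(p)
print(sorted(idx.indexed))  # -> ['/lib/book/title']
```

The point mirrored from the abstract is that the index adapts per observation, so it never needs wholesale reconstruction when the workload shifts.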
Ahn, Song Vogue; Baik, Soon Koo; Cho, Youn zoo; Koh, Sang Baek; Huh, Ji Hye; Chang, Yoosoo; Sung, Ki-Chul; Kim, Jang Young
2016-01-01
Aims: The ratio of aspartate aminotransferase (AST) to alanine aminotransferase (ALT) is of great interest as a possible novel marker of metabolic syndrome. However, longitudinal studies emphasizing the incremental predictive value of the AST-to-ALT ratio in identifying individuals at higher risk of developing metabolic syndrome are very scarce. Therefore, our study aimed to evaluate the AST-to-ALT ratio as an incremental predictor of new onset metabolic syndrome in a population-based cohort study. Material and Methods: The population-based cohort study included 2276 adults (903 men and 1373 women) aged 40–70 years, who participated from 2005–2008 (baseline) without metabolic syndrome and were followed up from 2008–2011. Metabolic syndrome was defined according to the harmonized definition of metabolic syndrome. Serum concentrations of AST and ALT were determined by enzymatic methods. Results: During an average follow-up period of 2.6 years, 395 individuals (17.4%) developed metabolic syndrome. In a multivariable adjusted model, the odds ratio (95% confidence interval) for new onset of metabolic syndrome, comparing the fourth quartile to the first quartile of the AST-to-ALT ratio, was 0.598 (0.422–0.853). The AST-to-ALT ratio also improved the area under the receiver operating characteristic curve (AUC) for predicting new cases of metabolic syndrome (0.715 vs. 0.732, P = 0.004). The net reclassification improvement of prediction models including the AST-to-ALT ratio was 0.23 (95% CI: 0.124–0.337, P<0.001), and the integrated discrimination improvement was 0.0094 (95% CI: 0.0046–0.0143, P<0.001). Conclusions: The AST-to-ALT ratio independently predicted the future development of metabolic syndrome and had incremental predictive value for incident metabolic syndrome. PMID:27560931
NASA Astrophysics Data System (ADS)
Fritz, S.; Scholes, R. J.; Obersteiner, M.; Bouma, J.
2007-12-01
The aim of the Global Earth Observation System of Systems (GEOSS) is to contribute to human wellbeing through improving the information available to decision-makers at all levels relating to human health and safety, protection of the global environment, the reduction of losses from natural disasters, and the achievement of sustainable development. Specifically, GEOSS proposes that better international co-operation in the collection, interpretation and sharing of Earth Observation information is an important and cost-effective mechanism for achieving this aim. While there is a widespread intuition that this proposition is correct, at some point the following question needs to be answered: how much additional investment in Earth Observation (and specifically, in its international integration) is enough? This leads directly to some challenging subsidiary questions: how can the benefits of Earth Observation be assessed? What are the incremental costs of GEOSS? Are there societal benefit areas where the return on investment is higher than in others? The Geo-Bene project has developed a "benefit chain" concept as a framework for addressing these questions. The basic idea is that an incremental improvement in the observing system (including its data collection, interpretation and information-sharing aspects) will result in an improvement in the quality of decisions based on that information. This will in turn lead to better societal outcomes, which have a value. This incremental value must be judged against the incremental cost of the improved observation system. Since in many cases there will be large uncertainties in the estimation of both the costs and the benefits, and it may not be possible to express one or both of them in monetary terms, we show how order-of-magnitude approaches and a qualitative understanding of the shape of the cost-benefit curves can help guide rational investment decisions in Earth Observation systems.
Attrill, D C; Davies, R M; King, T A; Dickinson, M R; Blinkhorn, A S
2004-01-01
To quantify the temperature increments in a simulated dental pulp following irradiation with an Er:YAG laser, and to compare those increments when the laser is applied with and without water spray. Two cavities were prepared on either the buccal or lingual aspect of sound extracted teeth using the laser. One cavity was prepared with water spray, the other without, and the order of preparation was randomised. Identical preparation parameters were used for both cavities. Temperature increments were measured in the pulp chamber using a calibrated thermocouple and a novel pulp simulant. Maximum increments were 4.0 degrees C (water) and 24.7 degrees C (no water). Water was shown to be highly significant in reducing the overall temperature increments in all cases (p<0.001; paired t-test). None of the samples prepared with water spray, up to a maximum cumulative energy of 135 J, exceeded the threshold at which pulpal damage can be considered to occur. Only 25% of those prepared without water spray remained below this threshold. Extrapolation of the figures suggests a probable tolerable limit of continuous laser irradiation with water in excess of 160 J. With the incorporation of the small breaks in the continuity of laser irradiation that occur in the in vivo situation, the cumulative energy dose tolerated by the pulp should far exceed these figures. The Er:YAG laser must be used in conjunction with water during cavity preparation. As such, it should be considered an effective tool for clinical use based on predicted pulpal responses to thermal stimuli.
Zapata-Vázquez, Rita Esther; Álvarez-Cervera, Fernando José; Alonzo-Vázquez, Felipe Manuel; García-Lira, José Ramón; Granados-García, Víctor; Pérez-Herrera, Norma Elena; Medina-Moreno, Manuel
2017-12-01
To conduct an economic evaluation of intracranial pressure (ICP) monitoring on the basis of current evidence from pediatric patients with severe traumatic brain injury, through a statistical model. The statistical model is a decision tree, whose branches take into account the severity of the lesion, the hospitalization costs, and the quality-adjusted life-year for the first 6 months post-trauma. The inputs consist of probability distributions calculated from a sample of 33 surviving children with severe traumatic brain injury, divided into two groups: with ICP monitoring (monitoring group) and without ICP monitoring (control group). The uncertainty of the parameters from the sample was quantified through a probabilistic sensitivity analysis using the Monte-Carlo simulation method. The model overcomes the drawbacks of small sample sizes, unequal groups, and the ethical difficulty in randomly assigning patients to a control group (without monitoring). The incremental cost in the monitoring group was Mex$3,934 (Mexican pesos), with an increase in quality-adjusted life-year of 0.05. The incremental cost-effectiveness ratio was Mex$81,062. The cost-effectiveness acceptability curve had a maximum at 54% of the cost effective iterations. The incremental net health benefit for a willingness to pay equal to 1 time the per capita gross domestic product for Mexico was 0.03, and the incremental net monetary benefit was Mex$5,358. The results of the model suggest that ICP monitoring is cost effective because there was a monetary gain in terms of the incremental net monetary benefit. Copyright © 2017. Published by Elsevier Inc.
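The incremental cost-effectiveness quantities reported in the abstract above follow standard definitions, sketched below. The function names are assumptions; the inputs are the abstract's rounded point estimates, so the computed ICER (Mex$78,680/QALY) differs slightly from the reported Mex$81,062/QALY, which was presumably derived from unrounded values. The willingness-to-pay figure in the usage example is invented for illustration.

```python
# Hedged sketch of standard cost-effectiveness definitions used above.

def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

def net_monetary_benefit(delta_cost, delta_qaly, willingness_to_pay):
    """Incremental net monetary benefit at a given willingness-to-pay."""
    return delta_qaly * willingness_to_pay - delta_cost

delta_cost, delta_qaly = 3934.0, 0.05   # Mex$ and QALYs, from the abstract
print(icer(delta_cost, delta_qaly))                        # cost per QALY
print(net_monetary_benefit(delta_cost, delta_qaly, 100000.0))  # hypothetical WTP
```

A positive incremental net monetary benefit at the chosen willingness-to-pay is the criterion under which the abstract concludes ICP monitoring is cost effective.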
Beran, Michael J; Parrish, Audrey E
2016-08-01
A key issue in understanding the evolutionary and developmental emergence of numerical cognition is to learn what mechanism(s) support perception and representation of quantitative information. Two such systems have been proposed, one for dealing with approximate representation of sets of items across an extended numerical range and another for highly precise representation of only small numbers of items. Evidence for the first system is abundant across species and in many tests with human adults and children, whereas the second system is primarily evident in research with children and in some tests with non-human animals. A recent paper (Choo & Franconeri, Psychonomic Bulletin & Review, 21, 93-99, 2014) with adult humans also reported "superprecise" representation of small sets of items in comparison to large sets of items, which would provide more support for the presence of a second system in human adults. We first presented capuchin monkeys with a test similar to that of Choo and Franconeri in which small or large sets with the same ratios had to be discriminated. We then presented the same monkeys with an expanded range of comparisons in the small number range (all comparisons of 1-9 items) and the large number range (all comparisons of 10-90 items in 10-item increments). Capuchin monkeys showed no increased precision for small over large sets in making these discriminations in either experiment. These data indicate a difference in the performance of monkeys to that of adult humans, and specifically that monkeys do not show improved discrimination performance for small sets relative to large sets when the relative numerical differences are held constant.
BigFoot: a program to reduce risk for indirect drive laser fusion
NASA Astrophysics Data System (ADS)
Thomas, Cliff
2017-10-01
The conventional approach to inertial confinement fusion (ICF) with indirect drive is to design for high convergence (40), DT areal density, and target gain. By construction, this strategy is challenged by low-mode control of the implosion (Legendre P2 and P4), instability, and difficulties interpreting data. Here we consider an alternative - an approach to ICF that emphasizes control. To begin, we optimize for hohlraum predictability and coupling to the capsule. Rather than focus on density, we work on making a high-energy hotspot we can diagnose and "tune" at low convergence (20). Though gain is reduced, this makes it possible to study (and improve) stagnation physics in a regime relevant to ignition (1E16-1E17). Further improvements can then be made with small, incremental increases in areal density, target scale, etc. Details regarding the "BigFoot" platform and pulse are reported, including recent findings. Work that could enable additional improvements in capsule stability and hohlraum control will also be discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery
2013-08-16
problem size n from 10 to 30 with increment 1, and the observation ratio ρ from 0.01 to 0.2 with increment 0.01. For each (ρ, n)-pair, we simulate 5 test …
Váczi, Márk; Tollár, József; Meszler, Balázs; Juhász, Ivett; Karsai, István
2013-01-01
The aim of the present study was to investigate the effects of a short-term in-season plyometric training program on power, agility and knee extensor strength. Male soccer players from a third league team were assigned into an experimental and a control group. The experimental group, beside its regular soccer training sessions, performed a periodized plyometric training program for six weeks. The program included two training sessions per week, and maximal intensity unilateral and bilateral plyometric exercises (total of 40 – 100 foot contacts/session) were executed. Controls participated only in the same soccer training routine, and did not perform plyometrics. Depth vertical jump height, agility (Illinois Agility Test, T Agility Test) and maximal voluntary isometric torque in knee extensors using Multicont II dynamometer were evaluated before and after the experiment. In the experimental group small but significant improvements were found in both agility tests, while depth jump height and isometric torque increments were greater. The control group did not improve in any of the measures. Results of the study indicate that plyometric training consisting of high impact unilateral and bilateral exercises induced remarkable improvements in lower extremity power and maximal knee extensor strength, and smaller improvements in soccer-specific agility. Therefore, it is concluded that short-term plyometric training should be incorporated in the in-season preparation of lower level players to improve specific performance in soccer. PMID:23717351
Paxman, Jenny R; Hall, Anna C; Harden, Charlotte J; O'Keeffe, Jean; Simper, Trevor N
2011-05-01
The aim of this study was to investigate the effects of a group behavior change intervention involving self-selected, contextualized, and mediated goal setting on anthropometric, affective, and dietary markers of health. It was hypothesized that the intervention would elicit changes consistent with accepted health recommendations for obese individuals. A rolling program of 12-week "Small Changes" interventions during 24 months recruited 71 participants; each program accommodated 10 to 13 adults (body mass index [BMI] ≥ 30 kg/m²). Fifty-eight participants completed Small Changes. Repeated measures were made at baseline, 6 and 12 weeks. Anthropometric measures included height and weight (to calculate BMI), body composition, waist circumference, and blood pressure. Affective state was monitored using relevant validated questionnaires. Dietary assessment used 3-day household measures food diaries with Schofield equations to monitor underreporting. Relevant blood measures were recorded throughout. Across the measurement period, Small Changes elicited a significant reduction in body weight (baseline, 102.95 ± 15.47 vs 12 weeks, 100.09 ± 16.01 kg, P < .0005), coupled with associated significant improvements in BMI, body fat percentage, and waist circumference measures. There were additional significant positive changes in measures of affective state including general well-being (baseline, 58.92 ± 21.22 vs 12 weeks, 78.04 ± 14.60, P < .0005) and total mood disturbance (baseline, 31.19 ± 34.03 vs 12 weeks, 2.67 ± 24.96, P < .0005). Dietary changes that occurred were largely consistent with evidence-based recommendations for weight management and included significant reductions in total energy intake and in fat and saturated fat as a proportion of energy.
The Small Changes approach can elicit a range of health-orientated benefits for obese participants, and although further work is needed to ascertain the longevity of such effects, the outcomes from Small Changes are likely to help inform health professionals when framing the future of weight management. Long-term follow-up of Small Changes is warranted. Copyright © 2011 Elsevier Inc. All rights reserved.
Comparative effectiveness and cost-effectiveness of the implantable miniature telescope.
Brown, Gary C; Brown, Melissa M; Lieske, Heidi B; Lieske, Philip A; Brown, Kathryn S; Lane, Stephen S
2011-09-01
To assess the preference-based comparative effectiveness (human value gain) and the cost-utility (cost-effectiveness) of a telescope prosthesis (implantable miniature telescope) for the treatment of end-stage, age-related macular degeneration (AMD). A value-based medicine, second-eye model, cost-utility analysis was performed to quantify the comparative effectiveness and cost-effectiveness of therapy with the telescope prosthesis. Published, evidence-based data from the IMT002 Study Group clinical trial. Ophthalmic utilities were obtained from a validated cohort of >1000 patients with ocular diseases. Comparative effectiveness data were converted from visual acuity to utility (value-based) format. The incremental costs (Medicare) of therapy versus no therapy were integrated with the value gain conferred by the telescope prosthesis to assess its average cost-utility. The incremental value gains and incremental costs of therapy referent to (1) a fellow eye cohort and (2) a fellow eye cohort of those who underwent intra-study cataract surgery were integrated in incremental cost-utility analyses. All value outcomes and costs were discounted at a 3% annual rate, as per the Panel on Cost-Effectiveness in Health and Medicine. Comparative effectiveness was quantified using the (1) quality-adjusted life-year (QALY) gain and (2) percent human value gain (improvement in quality of life). The QALY gain was integrated with incremental costs into the cost-utility ratio ($/QALY, or US dollars expended per QALY gained). The mean, discounted QALY gain associated with use of the telescope prosthesis over 12 years was 0.7577. When the QALY loss of 0.0004 attributable to the adverse events was factored into the model, the final QALY gain was 0.7573. This resulted in a 12.5% quality of life gain for the average patient during the 12 years of the model. The average cost-utility versus no therapy for use of the telescope prosthesis was $14389/QALY. 
The incremental cost-utility referent to control fellow eyes was $14063/QALY, whereas the incremental cost-utility referent to fellow eyes that underwent intra-study cataract surgery was $11805/QALY. Therapy with the telescope prosthesis considerably improves quality of life and at the same time is cost-effective by conventional standards. Copyright © 2011 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
A triangular thin shell finite element: Nonlinear analysis. [structural analysis
NASA Technical Reports Server (NTRS)
Thomas, G. R.; Gallagher, R. H.
1975-01-01
Aspects of the formulation of a triangular thin shell finite element which pertain to geometrically nonlinear (small strain, finite displacement) behavior are described. The procedure for solution of the resulting nonlinear algebraic equations combines a one-step incremental (tangent stiffness) approach with one iteration in the Newton-Raphson mode. A method is presented which permits a rational estimation of step size in this procedure. Limit points are calculated by means of a superposition scheme coupled to the incremental side of the solution procedure while bifurcation points are calculated through a process of interpolation of the determinants of the tangent-stiffness matrix. Numerical results are obtained for a flat plate and two curved shell problems and are compared with alternative solutions.
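The solution strategy described above (a tangent-stiffness incremental step followed by one Newton-Raphson iteration per load increment) can be illustrated on a scalar problem. This is a hedged sketch under an assumed toy model, a nonlinear spring with internal force f(u) = k*u + a*u^3, rather than the paper's shell element; all names and numbers are inventions.

```python
# Hedged sketch: one-step incremental (tangent stiffness) predictor
# plus a single Newton-Raphson correction per load increment.

def internal_force(u, k=1.0, a=0.1):
    """Internal force of a toy nonlinear spring: f(u) = k*u + a*u**3."""
    return k * u + a * u ** 3

def tangent_stiffness(u, k=1.0, a=0.1):
    """Tangent stiffness df/du = k + 3*a*u**2."""
    return k + 3.0 * a * u ** 2

def solve_incremental(total_load, steps):
    u, load = 0.0, 0.0
    d_load = total_load / steps
    for _ in range(steps):
        load += d_load
        u += d_load / tangent_stiffness(u)       # incremental predictor
        residual = load - internal_force(u)      # out-of-balance force
        u += residual / tangent_stiffness(u)     # one Newton-Raphson correction
    return u

u = solve_incremental(total_load=2.0, steps=10)
# Equilibrium check: the internal force should nearly balance the applied load.
print(u, internal_force(u))
```

Because the residual is evaluated against the total applied load at each step, the single Newton-Raphson correction keeps drift from the pure incremental (Euler) predictor from accumulating, which is the rationale for combining the two in the paper.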
Barlow, R. B.; Ramtoola, S.
1980-01-01
1. From measurements of the affinity constants of hydratropyltropine and its methiodide for muscarine-sensitive acetylcholine receptors in the guinea-pig ileum, the increment in log K for the hydroxyl group in atropine is 2.06 and in the methiodide it is 2.16. These effects are slightly bigger than any so far recorded with these receptors. 2. The estimate of the increment in apparent molal volume for the hydroxyl group is 1.1 cm3/mol in atropine and 1.0 cm3/mol in the methobromide. 3. The large effect of the group on affinity may be linked to its small apparent size in water, as suggested in the previous paper. PMID:7470742
NASA Technical Reports Server (NTRS)
Bridges, P. G.; Cross, E. J., Jr.; Boatwright, D. W.
1977-01-01
The overall drag of the aircraft is expressed in terms of the measured increment of power required to overcome a corresponding known increment of drag, which is generated by a towed drogue. The simplest form of the governing equation, D = (delta D) x SHP / (delta SHP), is such that all of the parameters on the right-hand side can be measured in flight. An evaluation of the governing equation has been performed using data generated by flight tests of a Beechcraft T-34B. The simplicity of this technique and its proven applicability to sailplanes and small aircraft are well known. However, the method fails to account for airframe-propulsion system effects.
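The governing relation quoted above, D = ΔD · SHP/ΔSHP, rearranges directly into code. This is a hedged sketch: the function name and all numbers are invented for illustration, not flight-test data.

```python
# Hedged sketch of the drag-from-increments relation: total drag from a
# known drogue drag increment and the measured shaft-horsepower increment.

def total_drag(delta_drag, shp, delta_shp):
    """D = delta_drag * shp / delta_shp (all quantities measured in flight)."""
    return delta_drag * shp / delta_shp

# e.g. a drogue of known 50 N drag raises power required from 200 to 220 SHP
print(total_drag(delta_drag=50.0, shp=200.0, delta_shp=20.0))  # -> 500.0
```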
Reducing voluntary, avoidable turnover through selection.
Barrick, Murray R; Zimmerman, Ryan D
2005-01-01
The authors investigated the efficacy of several variables used to predict voluntary, organizationally avoidable turnover even before the employee is hired. Analyses conducted on applicant data collected in 2 separate organizations (N = 445) confirmed that biodata, clear-purpose attitudes and intentions, and disguised-purpose dispositional retention scales predicted voluntary, avoidable turnover (rs ranged from -.16 to -.22, R = .37, adjusted R = .33). Results also revealed that biodata scales and disguised-purpose retention scales added incremental validity, whereas clear-purpose retention scales did not explain significant incremental variance in turnover beyond what was explained by biodata and disguised-purpose scales. Furthermore, disparate impact (subgroup differences on race, sex, and age) was consistently small (average d = 0.12 when the majority group scored higher than the minority group).
USDA-ARS?s Scientific Manuscript database
There are few experimental data available on how herbicide sorption coefficients change across small increments within soil profiles. Soil profiles were obtained from three landform elements (eroded upper slope, deposition zone, and eroded waterway) in a strongly eroded agricultural field and segmen...
ERIC Educational Resources Information Center
Schmitt, Neal; Billington, Abigail; Keeney, Jessica; Reeder, Matthew; Pleskac, Timothy J.; Sinha, Ruchi; Zorzie, Mark
2011-01-01
Noncognitive attributes as the researchers have measured them do correlate with college GPA, but the incremental validity associated with these measures is relatively small. The noncognitive measures are correlated with other valued dimensions of student performance beyond the achievement reflected in college grades. There were much smaller…
Energy Conservation Programs | Climate Neutral Research Campuses | NREL
Recognize accomplishments. One common theme is that successful programs check in often with their targets. Another is to start at a very small scale, with one building or one department; the success and savings from that effort can then be used to grow incrementally. Harvard University adopted this approach, where investment in one
A Note on the Incremental Validity of Aggregate Predictors.
ERIC Educational Resources Information Center
Day, H. D.; Marshall, David
Three computer simulations were conducted to show that very high aggregate predictive validity coefficients can occur when the across-case variability in absolute score stability occurring in both the predictor and criterion matrices is quite small. In light of the increase in internal consistency reliability achieved by the method of aggregation…
Fluorescence microscopy for measuring fibril angles in pine tracheids
Ralph O. Marts
1955-01-01
Observation and measurement of fibril angles in increment cores or similar small samples from living pine trees was facilitated by the use of fluorescence microscopy. Although some autofluorescence was present, brighter images could be obtained by staining the specimens with a 0.1% aqueous solution of a fluorochrome (Calcozine flavine TG extra concentrated, Calcozine...
The Year-Two Decline: Exploring the Incremental Experiences of a 1:1 Technology Initiative
ERIC Educational Resources Information Center
Swallow, Meredith
2015-01-01
Reports on one-to-one (1:1) technology initiatives emphasize overall favorable results; however, comprehensive multiyear studies understate the progressive experiences of teachers and students. A small body of research suggested the second year of 1:1 technology programs manifested difficulties and struggles which significantly…
Reynolds Number Effects on the Performance of Lateral Control Devices
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.
2000-01-01
The influence of Reynolds number on the performance of outboard spoilers and ailerons was investigated on a generic subsonic transport configuration in the National Transonic Facility over a chord Reynolds number range from 3x10(exp 6) to 30x10(exp 6) and a Mach number range from 0.50 to 0.94. Spoiler deflection angles of 0, 10, 15, and 20 deg and aileron deflection angles of -10, 0, and 10 deg were tested. Aeroelastic effects were minimized by testing at constant normalized dynamic pressure conditions over intermediate Reynolds number ranges. Results indicated that the increment in rolling moment due to spoiler deflection generally becomes more negative as the Reynolds number increases from 3x10(exp 6) to 22x10(exp 6) with only small changes between Reynolds numbers of 22x10(exp 6) and 30x10(exp 6). The change in the increment in rolling moment coefficient with Reynolds number for the aileron deflected configuration is generally small with a general trend of increasing magnitude with increasing Reynolds number.
Montero, Alberto J; Avancha, Kiran; Glück, Stefan; Lopes, Gilberto
2012-04-01
Bevacizumab in combination with chemotherapy increases progression-free survival (PFS), but not overall survival, when compared to chemotherapy alone in the treatment of metastatic breast cancer (MBC). In November 2011, the Food and Drug Administration revoked approval of bevacizumab in combination with paclitaxel for the treatment of MBC. The European Medicines Agency, in contrast, maintained its approval of bevacizumab in MBC. While neither agency considers health economics in its decision-making process, one of the greatest challenges in oncology practice today is to reconcile hard-won small incremental clinical benefits with exponentially rising costs. To inform policy-makers in the US, this study aimed to assess the cost-effectiveness of bevacizumab/paclitaxel in MBC from a payer perspective. We created a decision analytical model using efficacy and adverse events data from the ECOG 2100 trial. Health utilities were derived from available literature. Costs were obtained from the Center for Medicare Services Drug Payment Table and Physician Fee Schedule and are represented in 2010 US dollars. Quality-adjusted life-years (QALYs) and the incremental cost-effectiveness ratio (ICER) were calculated. Sensitivity analyses were performed. Bevacizumab added 0.49 years of PFS and 0.135 QALY with an incremental cost of $100,300, and therefore a cost of $204,000 per year of PFS gained and an ICER of $745,000 per QALY. The main drivers of the model were drug acquisition cost, PFS, and health utility values. Using a threshold of $150,000/QALY, the drug price would have to be reduced by nearly 80%, or alternatively PFS increased by 10 months, to make bevacizumab cost-effective. The results of the model were robust in sensitivity analyses. Bevacizumab plus paclitaxel is not cost-effective in treating MBC. Value-based pricing and the development of biomarkers to improve patient selection are needed to better define the role of the drug in this population.
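Both reported ratios follow directly from the incremental values given in the abstract; the small differences from the published figures reflect rounding of the reported inputs:

```python
# Back-of-envelope check of the two ratios reported in the abstract.
incr_cost = 100_300     # incremental cost, 2010 US$
incr_pfs = 0.49         # years of progression-free survival gained
incr_qaly = 0.135       # quality-adjusted life-years gained

cost_per_pfs_year = incr_cost / incr_pfs   # ~ $205,000 per PFS-year
icer = incr_cost / incr_qaly               # ~ $743,000 per QALY
```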
Improved performance of semiconductor laser tracking frequency gauge
NASA Astrophysics Data System (ADS)
Kaplan, D. M.; Roberts, T. J.; Phillips, J. D.; Reasenberg, R. D.
2018-03-01
We describe new results from the semiconductor-laser tracking frequency gauge, an instrument that can perform sub-picometer distance measurements and has applications in gravity research and in space-based astronomical instruments proposed for the study of light from extrasolar planets. Compared with previous results, we have improved incremental distance accuracy by a factor of two, to 0.9 pm in 80 s averaging time, and absolute distance accuracy by a factor of 20, to 0.17 μm in 1000 s. After an interruption of operation of a tracking frequency gauge used to control a distance, it is now possible, using a nonresonant measurement interferometer, to restore the distance to picometer accuracy by combining absolute and incremental distance measurements.
Kunz, Wolfgang G; Hunink, M G Myriam; Sommer, Wieland H; Beyer, Sebastian E; Meinel, Felix G; Dorn, Franziska; Wirth, Stefan; Reiser, Maximilian F; Ertl-Wagner, Birgit; Thierfelder, Kolja M
2016-11-01
Endovascular therapy in addition to standard care (EVT+SC) has been demonstrated to be more effective than SC in acute ischemic large vessel occlusion stroke. Our aim was to determine the cost-effectiveness of EVT+SC depending on patients' initial National Institutes of Health Stroke Scale (NIHSS) score, time from symptom onset, Alberta Stroke Program Early CT Score (ASPECTS), and occlusion location. A decision model based on Markov simulations estimated lifetime costs and quality-adjusted life years (QALYs) associated with both strategies applied in a US setting. Model input parameters were obtained from the literature, including recently pooled outcome data of 5 randomized controlled trials (ESCAPE [Endovascular Treatment for Small Core and Proximal Occlusion Ischemic Stroke], EXTEND-IA [Extending the Time for Thrombolysis in Emergency Neurological Deficits-Intra-Arterial], MR CLEAN [Multicenter Randomized Clinical Trial of Endovascular Treatment for Acute Ischemic Stroke in the Netherlands], REVASCAT [Randomized Trial of Revascularization With Solitaire FR Device Versus Best Medical Therapy in the Treatment of Acute Stroke Due to Anterior Circulation Large Vessel Occlusion Presenting Within 8 Hours of Symptom Onset], and SWIFT PRIME [Solitaire With the Intention for Thrombectomy as Primary Endovascular Treatment]). Probabilistic sensitivity analysis was performed to estimate uncertainty of the model results. Net monetary benefits, incremental costs, incremental effectiveness, and incremental cost-effectiveness ratios were derived from the probabilistic sensitivity analysis. The willingness-to-pay was set to $50 000/QALY. Overall, EVT+SC was cost-effective compared with SC (incremental cost: $4938, incremental effectiveness: 1.59 QALYs, and incremental cost-effectiveness ratio: $3110/QALY) in 100% of simulations. In all patient subgroups, EVT+SC led to gained QALYs (range: 0.47-2.12), and mean incremental cost-effectiveness ratios were considered cost-effective. 
However, subgroups with ASPECTS ≤5 or with M2 occlusions showed considerably higher incremental cost-effectiveness ratios ($14 273/QALY and $28 812/QALY, respectively) and only reached suboptimal acceptability in the probabilistic sensitivity analysis (75.5% and 59.4%, respectively). All other subgroups had acceptability rates of 90% to 100%. EVT+SC is cost-effective in most subgroups. In patients with ASPECTS ≤5 or with M2 occlusions, cost-effectiveness remains uncertain based on current data. © 2016 American Heart Association, Inc.
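The base-case ICER follows directly from the reported incremental cost and effectiveness; a quick check against the stated willingness-to-pay threshold:

```python
# Reproduce the base-case ICER for EVT+SC vs SC from the abstract.
incr_cost = 4938      # incremental cost, US$
incr_qaly = 1.59      # incremental effectiveness, QALYs
wtp = 50_000          # willingness-to-pay threshold, $/QALY

icer = incr_cost / incr_qaly   # ~ $3,106/QALY (abstract rounds to $3110)
cost_effective = icer < wtp
print(cost_effective)  # -> True
```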
NASA Astrophysics Data System (ADS)
DeMarco, Adam Ward
The turbulent motions within the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, Δu and ΔT, respectively, this work investigates their multiscale behavior to uncover unique traits that have yet to be thoroughly studied. Utilizing diverse datasets, including idealized wind tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small- and larger-scale components of the increment probability density functions. This comprehensive analysis also utilizes a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increments' probability density functions (pdfs) across multiscale atmospheric motions. An approach is proposed for estimating these pdfs utilizing the maximum likelihood estimation (MLE) technique, which has not previously been applied to atmospheric data. Using this technique, we reveal the ability to estimate higher-order moments accurately with a limited sample size, which has been a persistent concern for atmospheric turbulence research. With the use of robust goodness-of-fit (GoF) metrics, we quantitatively assess the accuracy of the distributions across the diverse datasets. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate for estimating the increment pdf fields. Therefore, using the NIG model and its parameters, we display the variations in the increments over a range of scales, revealing some unique scale-dependent qualities under various stability and flow conditions.
This novel approach provides a method of characterizing increment fields using only four pdf parameters. We also investigate the capability of current state-of-the-art mesoscale atmospheric models to predict these features and highlight their potential for future model development. With the knowledge gained in this study, a number of applications can benefit from our methodology, including the wind energy and optical wave propagation fields.
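Fitting an NIG by MLE requires special functions (in practice, e.g., scipy.stats.norminvgauss.fit), so as a dependency-free illustration of the same MLE idea, the sketch below fits a plain Gaussian, whose ML estimates are closed-form, to synthetic increments. The Gaussian stand-in and the synthetic data are simplifications; the study itself fits the heavier-tailed NIG to measured increments:

```python
import math
import random

random.seed(0)
# Synthetic stand-in for velocity increments (the real data are ABL
# measurements; a Gaussian is used here only to keep the sketch
# dependency-free).
increments = [random.gauss(0.0, 1.5) for _ in range(10_000)]

# Closed-form maximum likelihood estimates for a Gaussian model:
# sample mean and (biased) sample standard deviation.
n = len(increments)
mu_hat = sum(increments) / n
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in increments) / n)
```

The same maximization, done numerically over the four NIG parameters, yields the increment-pdf description the abstract refers to.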
Two Different Views on the World Around Us: The World of Uniformity versus Diversity
Nayakankuppam, Dhananjay
2016-01-01
We propose that when individuals believe in fixed traits of personality (entity theorists), they are likely to expect a world of “uniformity.” As such, they easily infer a population statistic from a small sample of data with confidence. In contrast, individuals who believe in malleable traits of personality (incremental theorists) are likely to presume a world of “diversity,” such that they “hesitate” to infer a population statistic from a similarly sized sample. In four laboratory experiments, we found that compared to incremental theorists, entity theorists estimated a population mean from a sample with a greater level of confidence (Studies 1a and 1b), expected more homogeneity among the entities within a population (Study 2), and perceived an extreme value to be more indicative of an outlier (Study 3). These results suggest that individuals are likely to use their implicit self-theory orientations (entity theory versus incremental theory) to see a population as constituted of either homogeneous or heterogeneous entities. PMID:27977788
Enlargement and contracture of C2-ceramide channels.
Siskind, Leah J; Davoody, Amirparviz; Lewin, Naomi; Marshall, Stephanie; Colombini, Marco
2003-09-01
Ceramides are known to play a major regulatory role in apoptosis by inducing cytochrome c release from mitochondria. We have previously reported that ceramide, but not dihydroceramide, forms large and stable channels in phospholipid membranes and outer membranes of isolated mitochondria. C(2)-ceramide channel formation is characterized by conductance increments ranging from <1 to >200 nS. These conductance increments often represent the enlargement and contracture of channels rather than the opening and closure of independent channels. Enlargement is supported by the observation that many small conductance increments can lead to a large decrement. Also the initial conductances favor cations, but this selectivity drops dramatically with increasing total conductance. La(+3) causes rapid ceramide channel disassembly in a manner indicative of large conducting structures. These channels have a propensity to contract by a defined size (often multiples of 4 nS) indicating the formation of cylindrical channels with preferred diameters rather than a continuum of sizes. The results are consistent with ceramides forming barrel-stave channels whose size can change by loss or insertion of multiple ceramide columns.
Enlargement and Contracture of C2-Ceramide Channels
Siskind, Leah J.; Davoody, Amirparviz; Lewin, Naomi; Marshall, Stephanie; Colombini, Marco
2003-01-01
Ceramides are known to play a major regulatory role in apoptosis by inducing cytochrome c release from mitochondria. We have previously reported that ceramide, but not dihydroceramide, forms large and stable channels in phospholipid membranes and outer membranes of isolated mitochondria. C2-ceramide channel formation is characterized by conductance increments ranging from <1 to >200 nS. These conductance increments often represent the enlargement and contracture of channels rather than the opening and closure of independent channels. Enlargement is supported by the observation that many small conductance increments can lead to a large decrement. Also the initial conductances favor cations, but this selectivity drops dramatically with increasing total conductance. La+3 causes rapid ceramide channel disassembly in a manner indicative of large conducting structures. These channels have a propensity to contract by a defined size (often multiples of 4 nS) indicating the formation of cylindrical channels with preferred diameters rather than a continuum of sizes. The results are consistent with ceramides forming barrel-stave channels whose size can change by loss or insertion of multiple ceramide columns. PMID:12944273
Kieffer, James D.
2017-01-01
The most utilized method to measure swimming performance of fishes has been the critical swimming speed (UCrit) test. In this test, the fish is forced to swim against an incrementally increasing flow of water until fatigue. Before the water velocity is increased, the fish swims at the water velocity for a specific, pre-arranged time interval. The magnitude of the velocity increments and the time interval for each swimming period can vary across studies, making comparisons between and within species difficult. This issue has been acknowledged in the literature; however, little empirical evidence exists that tests the importance of velocity and time increments on swimming performance in fish. A practical application for fish performance is the design of fishways that enable fish to bypass anthropogenic structures (e.g. dams) that block migration routes, which is one of the causes of the worldwide decline in sturgeon populations. While fishways will improve sturgeon conservation, they need to be specifically designed to accommodate the swimming capabilities of sturgeons, and it is possible that current swimming methodologies have underestimated the swimming performance of sturgeons. The present study assessed the UCrit of shortnose sturgeon using modified UCrit protocols to determine the importance of velocity increment (5 and 10 cm s−1) and time (5, 15 and 30 min) intervals on swimming performance. UCrit was found to be influenced by both time interval and water velocity. UCrit was generally lower in sturgeon when they were swum using 5 cm s−1 compared with 10 cm s−1 increments. Velocity increment influenced UCrit more than time interval. Overall, researchers must consider the impacts of using particular swimming criteria when designing their experiments. PMID:28835841
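UCrit values of this kind are conventionally computed with Brett's formula, which makes explicit how the velocity increment and the time interval enter the result (the abstract does not restate the formula; this is the standard calculation, and the trial values below are hypothetical):

```python
def ucrit(u_last_full, t_fatigue, interval, du):
    """Brett's critical swimming speed: U_f + (t_f / t_i) * U_i.

    u_last_full -- highest velocity maintained for a full interval (cm/s)
    t_fatigue   -- time swum at the velocity where fatigue occurred (min)
    interval    -- prescribed time interval per velocity step (min)
    du          -- velocity increment per step (cm/s)
    """
    return u_last_full + (t_fatigue / interval) * du

# Hypothetical sturgeon trial: fatigued 6 min into a 15-min, 5 cm/s step
print(ucrit(40, 6, 15, 5))  # -> 42.0 cm/s
```

Because the partial-step credit scales with du and 1/interval, changing either protocol parameter changes UCrit even for identical fish, which is the comparability problem the study quantifies.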
Geophysical characterization of soil moisture spatial patterns in a tillage experiment
NASA Astrophysics Data System (ADS)
Martinez, G.; Vanderlinden, K.; Giráldez, J. V.; Muriel, J. L.
2009-04-01
Knowledge of the spatial soil moisture pattern can improve the characterisation of the hydrological response of either field plots or small watersheds. Near-surface geophysical methods, such as electromagnetic induction (EMI), provide a means to map such patterns using non-invasive and non-destructive measurements of the soil apparent electrical conductivity (ECa). In this study ECa was measured using an EMI sensor and used to characterize spatially the hydrologic response of a cropped field to an intense shower. The study site is part of a long-term tillage experiment in Southern Spain in which Conventional Tillage (CT), Direct Drilling (DD) and Minimum Tillage (MT) have been evaluated since 1982. Soil ECa was measured before and after a rain event of 115 mm, near the soil surface and at a deeper depth (ECas and ECad, respectively), using the EM38-DD EMI sensor. Simultaneously, elevation data were collected at each sampling point to generate a Digital Elevation Model (DEM). Soil moisture during the first survey was close to permanent wilting point and near field capacity during the second survey. For the first survey, both ECas and ECad were higher in the CT and MT than in the DD plots. After the rain event, rill erosion appeared only in CT and MT plots where soil was uncovered, matching the drainage lines obtained from the DEM. Apparent electrical conductivity increased all over the field plot, with higher increments in the DD plots. These plots showed the highest ECas and ECad values, in contrast to the spatial pattern found during the first sampling. Difference maps obtained from the two ECas and ECad samplings showed a clear difference between DD plots and CT and MT plots due to their distinct hydrologic response. Water infiltration was higher in the soil of the DD plots than in the MT and CT plots, as reflected by their ECad increment. Higher ECa increments were observed in the depressions of the terrain, where water and sediments accumulated.
In contrast, the most elevated parts of the field showed lower ECa increments. When the soil is wet, topography dominates the hydrologic response of the field, while under drier conditions hydraulic conductivity controls the soil water dynamics. These results show that when static soil properties, e.g. clay content, are spatially uniform, ECa can detect changes in dynamic properties like soil moisture content, characterizing their spatial pattern.
Modeling CANDU-6 liquid zone controllers for effects of thorium-based fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
St-Aubin, E.; Marleau, G.
2012-07-01
We use the DRAGON code to model the CANDU-6 liquid zone controllers and evaluate the effects of thorium-based fuels on their incremental cross sections and reactivity worth. We optimize both the numerical quadrature and spatial discretization for 2D cell models in order to provide accurate fuel properties for 3D liquid zone controller supercell models. We propose a low-computer-cost parameterized pseudo-exact 3D cluster geometry modeling approach that avoids tracking issues on small external surfaces. This methodology provides consistent incremental cross sections and reactivity worths when the thickness of the buffer region is reduced. When compared with an approximate annular geometry representation of the fuel and coolant region, we observe that the cluster description of fuel bundles in the supercell models does not considerably increase the precision of the results while substantially increasing the CPU time. In addition, this comparison shows that it is imperative to finely describe the liquid zone controller geometry since it has a strong impact on the incremental cross sections. This paper also shows that liquid zone controller reactivity worth is greatly decreased in the presence of thorium-based fuels compared to the reference natural uranium fuel, since the fission and the fast-to-thermal scattering incremental cross sections are higher for the new fuels. (authors)
Mosaly, Prithima R; Mazur, Lukasz M; Marks, Lawrence B
2017-10-01
The methods employed to quantify the baseline pupil size and task-evoked pupillary response (TEPR) may affect the overall study results. To test this hypothesis, the objective of this study was to assess variability in baseline pupil size and TEPR during two basic working memory tasks: a constant load of 3-letter memorisation-recall (10 trials), and incremental load memorisation-recall (two trials at each load level), using two commonly used methods: (1) change from a trial/load-specific baseline, and (2) change from a constant baseline. Results indicated that there was a significant shift in baseline between the trials for constant load, and between the load levels for incremental load. The TEPR was independent of shifts in baseline using method 1 only for constant load, and method 2 only for higher levels of the incremental load condition. These important findings suggest that the assessment of both the baseline and the methods used to quantify TEPR are critical in ergonomics applications, especially in studies with a small number of trials per subject per condition. Practitioner Summary: Quantification of TEPR can be affected by shifts in baseline pupil size that are most likely affected by non-cognitive factors when other external factors are kept constant. Therefore, the quantification methods employed to compute both baseline and TEPR are critical in understanding the information processing of humans in practical ergonomics settings.
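The two baseline-correction methods compared in the study can be sketched with hypothetical pupil traces; all numbers below are illustrative only:

```python
# Two common TEPR quantifications, sketched on hypothetical trials
# (pupil diameters in arbitrary units). The second trial's baseline has
# drifted upward, as the study observed between trials and load levels.
trials = [
    {"baseline": 3.10, "trace": [3.12, 3.30, 3.45]},
    {"baseline": 3.25, "trace": [3.27, 3.40, 3.60]},  # shifted baseline
]
constant_baseline = trials[0]["baseline"]  # e.g. a pre-task resting value

# Method 1: change from the trial/load-specific baseline
tepr_m1 = [max(t["trace"]) - t["baseline"] for t in trials]
# Method 2: change from one constant baseline
tepr_m2 = [max(t["trace"]) - constant_baseline for t in trials]
```

Here method 1 yields equal responses for both trials, while method 2 folds the baseline drift into the second trial's apparent TEPR, which is exactly why the choice of method matters when baselines shift.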
Chiba, N; Veldhuyzen Van Zanten, S J O; Escobedo, S; Grace, E; Lee, J; Sinclair, P; Barkun, A; Armstrong, D; Thomson, A B R
2004-02-01
Adult patients with uninvestigated dyspepsia symptoms who were Helicobacter pylori-positive by the 13C-urea breath test were randomized to 1-week eradication treatment with omeprazole, metronidazole and clarithromycin (OMC) vs. omeprazole and placebo antimicrobials (OPP) in the Canadian Adult Dyspepsia Empiric Treatment-H. pylori-positive (CADET-Hp) study. The aim was to perform an economic evaluation of this 1-year study. Following blind eradication treatment, family practitioners managed patients according to their usual practices. Health resource utilization information was collected prospectively. From the mean costs of the health resources consumed and the treatment outcomes, the incremental cost-effectiveness ratios and incremental net benefits of eradication treatment vs. OPP were determined. Eradication therapy significantly improved dyspepsia symptoms (treatment success: OMC, 50%; OPP, 36%; P = 0.02). The incremental cost-effectiveness ratio of OMC vs. OPP was -CAD$387 per treatment success (90% CI: -CAD$1707 to CAD$607), indicating a lower cost with treatment success. The incremental net benefit analysis showed that H. pylori eradication was cost-effective if the willingness-to-pay value exceeded a nominal figure of CAD$100 from a health service perspective or CAD$607 from the societal perspective. In uninvestigated patients presenting with dyspepsia at the primary care level, eradication of H. pylori in those who are H. pylori positive leads to a cost-effective improvement in dyspepsia symptoms compared with a strategy of not eradicating H. pylori in these patients.
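The incremental net benefit (INB) logic can be reconstructed from the reported success rates and ICER; the implied incremental cost below is back-calculated from those figures and is therefore approximate:

```python
# Incremental net benefit sketch for OMC vs OPP, using the abstract's
# success rates and ICER. The incremental cost is back-calculated
# (ICER * delta_effect) and approximate.
p_omc, p_opp = 0.50, 0.36
delta_effect = p_omc - p_opp        # +0.14 treatment successes
icer = -387                         # CAD per treatment success (reported)
delta_cost = icer * delta_effect    # ~ -54 CAD: OMC cheaper on average

def inb(wtp):
    """Incremental net benefit at willingness-to-pay `wtp` per success."""
    return wtp * delta_effect - delta_cost

print(inb(100) > 0)  # -> True: positive INB at CAD$100 per success
```

With a negative incremental cost and positive incremental effect, the INB stays positive at any non-negative willingness-to-pay in this point estimate; the CAD$100/CAD$607 thresholds in the abstract reflect uncertainty around that estimate.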
Lempereur, Morine; Martin-StPaul, Nicolas K; Damesin, Claire; Joffre, Richard; Ourcival, Jean-Marc; Rocheteau, Alain; Rambal, Serge
2015-08-01
Understanding whether tree growth is limited by carbon gain (source limitation) or by the direct effect of environmental factors such as water deficit or temperature (sink limitation) is crucial for improving projections of the effects of climate change on forest productivity. We studied the relationships between tree basal area (BA) variations, eddy covariance carbon fluxes, predawn water potential (Ψpd) and temperature at different timescales using an 8-yr dataset and a rainfall exclusion experiment in a Quercus ilex Mediterranean coppice. At the daily timescale, during periods of low temperature (< 5°C) and high water deficit (< -1.1 MPa), gross primary productivity and net ecosystem productivity remained positive whereas the stem increment was nil. Thus, stem increment appeared limited by drought and temperature rather than by carbon input. Annual growth was accurately predicted by the duration of BA increment during spring (Δt(t0-t1)). The onset of growth (t0) was related to winter temperatures and the summer interruption of growth (t1) to a threshold Ψpd value of -1.1 MPa. We suggest that using environmental drivers (i.e. drought and temperature) to predict stem growth phenology can contribute to an improvement in vegetation models and may change the current projections of Mediterranean forest productivity under climate change scenarios. © 2015 CNRS-ADEME New Phytologist © 2015 New Phytologist Trust.
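The growth-duration rule described above (basal-area increment runs from onset t0 until Ψpd first falls below -1.1 MPa) can be sketched as follows; the daily Ψpd series is hypothetical and the temperature-driven onset is not modeled:

```python
# Growth-duration sketch of the abstract's rule: growth runs from onset
# t0 until predawn water potential first drops below the -1.1 MPa
# threshold. The daily series below is hypothetical.
PSI_THRESHOLD = -1.1  # MPa

psi_pd = [-0.4, -0.5, -0.7, -0.9, -1.0, -1.2, -1.5]  # days since onset
t0 = 0  # onset; in the study, set by winter temperatures (not modeled)

t1 = next(day for day, psi in enumerate(psi_pd) if psi < PSI_THRESHOLD)
growth_duration = t1 - t0
print(growth_duration)  # -> 5 days in this toy series
```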
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueda, Yoshihiro, E-mail: ueda-yo@mc.pref.osaka.jp; Miyazaki, Masayoshi; Nishiyama, Kinji
2012-07-01
Purpose: To evaluate setup error and interfractional changes in tumor motion magnitude using an electronic portal imaging device in cine mode (EPID cine) during the course of stereotactic body radiation therapy (SBRT) for non-small-cell lung cancer (NSCLC) and to calculate margins to compensate for these variations. Materials and Methods: Subjects were 28 patients with Stage I NSCLC who underwent SBRT. Respiratory-correlated four-dimensional computed tomography (4D-CT) at simulation was binned into 10 respiratory phases, which provided average intensity projection CT data sets (AIP). On 4D-CT, peak-to-peak motion of the tumor (M-4DCT) in the craniocaudal direction was assessed and the tumor center (mean tumor position [MTP]) of the AIP (MTP-4DCT) was determined. At treatment, the tumor on cone beam CT was registered to that on AIP for patient setup. During three sessions of irradiation, peak-to-peak motion of the tumor (M-cine) and the mean tumor position (MTP-cine) were obtained using EPID cine and in-house software. Based on changes in tumor motion magnitude (ΔM) and patient setup error (ΔMTP), defined as differences between M-4DCT and M-cine and between MTP-4DCT and MTP-cine, a margin to compensate for these variations was calculated with Stroom's formula. Results: The means (±standard deviation: SD) of M-4DCT and M-cine were 3.1 (±3.4) and 4.0 (±3.6) mm, respectively. The means (±SD) of ΔM and ΔMTP were 0.9 (±1.3) and 0.2 (±2.4) mm, respectively. Internal target volume-planning target volume (ITV-PTV) margins to compensate for ΔM, ΔMTP, and both combined were 3.7, 5.2, and 6.4 mm, respectively. Conclusion: EPID cine is a useful modality for assessing interfractional variations of tumor motion. The ITV-PTV margins to compensate for these variations can be calculated.
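Stroom's recipe computes a margin as 2Σ + 0.7σ, with Σ the systematic and σ the random standard deviation; the abstract does not report that decomposition, so the sketch below only states the formula and checks that the combined 6.4 mm margin is consistent with a quadrature sum of the two per-source margins:

```python
import math

def stroom_margin(systematic_sd, random_sd):
    """Stroom's CTV-to-PTV margin recipe (mm): 2*Sigma + 0.7*sigma."""
    return 2 * systematic_sd + 0.7 * random_sd

# Per-source ITV-PTV margins reported in the abstract (mm)
margin_dM, margin_dMTP = 3.7, 5.2
combined = math.hypot(margin_dM, margin_dMTP)  # quadrature sum
print(round(combined, 1))  # -> 6.4, matching the reported combined margin
```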
Atmospheric response to Saharan dust deduced from ECMWF reanalysis (ERA) temperature increments
NASA Astrophysics Data System (ADS)
Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.
2003-09-01
This study focuses on the atmospheric temperature response to dust deduced from a new source of data: the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the lack of dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between the April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas: high positive correlation (>0.5), low correlation, and high negative correlation (<-0.5). The innermost positive correlation area (PCA) is a large area near the center of the Sahara desert. For some local maxima inside this area the correlation even exceeds 0.8. The outermost negative correlation area (NCA) is not uniform. It consists of some areas over the eastern and western parts of North Africa with a relatively small amount of dust. Inside those areas both positive and negative high correlations exist at pressure levels ranging from 850 to 700 hPa, with the peak values near 775 hPa. Dust-forced heating (cooling) inside the PCA (NCA) is accompanied by changes in the static instability of the atmosphere above the dust layer. The reanalysis data of the European Centre for Medium-Range Weather Forecasts (ECMWF) suggest that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity and downward (upward) airflow.
These findings are associated with the interaction between dust-forced heating/cooling and atmospheric circulation. This paper contributes to a better understanding of dust radiative processes missed in the model.
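The PCA/NCA classification above reduces to a per-grid-point Pearson correlation between the April AI series and the temperature-increment series, thresholded at ±0.5. A dependency-free sketch with hypothetical series for one grid point:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equally long series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical April AI and temperature-increment series (one grid point)
ai = [1.2, 0.8, 1.9, 2.4, 0.5]
dT = [0.30, 0.10, 0.55, 0.70, 0.05]
r = pearson_r(ai, dT)
print(r > 0.5)  # -> True: this point would fall in the PCA
```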
Metsaranta, Juha M.; Lieffers, Victor J.
2008-01-01
Background and Aims: Changes in size inequality in tree populations are often attributed to changes in the mode of competition over time. The mode of competition may also fluctuate annually in response to variation in growing conditions. Factors causing growth rate to vary can also influence competition processes, and thus influence how size hierarchies develop. Methods: Detailed data obtained by tree-ring reconstruction were used to study annual changes in size and size increment inequality in several even-aged, fire-origin jack pine (Pinus banksiana) stands in the boreal shield and boreal plains ecozones in Saskatchewan and Manitoba, Canada, by using the Gini and Lorenz asymmetry coefficients. Key Results: The inequality of size was related to variables reflecting long-term stand dynamics (e.g. stand density, mean tree size and average competition, as quantified using a distance-weighted absolute size index). The inequality of size increment was greater and more variable than the inequality of size. Inequality of size increment was significantly related to annual growth rate at the stand level, and was higher when growth rate was low. Inequality of size increment was usually due primarily to large numbers of trees with low growth rates, except during years with low growth rate when it was often due to small numbers of trees with high growth rates. The amount of competition to which individual trees were subject was not strongly related to the inequality of size increment. Conclusions: Differences in growth rate among trees during years of poor growth may form the basis for development of size hierarchies on which asymmetric competition can act. A complete understanding of the dynamics of these forests requires further evaluation of the way in which factors that influence variation in annual growth rate also affect the mode of competition and the development of size hierarchies. PMID:18089583
Energy Efficient Engine exhaust mixer model technology report addendum; phase 3 test program
NASA Technical Reports Server (NTRS)
Larkin, M. J.; Blatt, J. R.
1984-01-01
The Phase 3 exhaust mixer test program was conducted to explore the trends established during previous Phases 1 and 2. Combinations of mixer design parameters were tested. Phase 3 testing showed that the best performance achievable within tailpipe length and diameter constraints is 2.55 percent better than an optimized separate flow base line. A reduced penetration design achieved about the same overall performance level at a substantially lower level of excess pressure loss but with a small reduction in mixing. To improve reliability of the data, the hot and cold flow thrust coefficient analysis used in Phases 1 and 2 was augmented by calculating percent mixing from traverse data. Relative change in percent mixing between configurations was determined from thrust and flow coefficient increments. The calculation procedure developed was found to be a useful tool in assessing mixer performance. Detailed flow field data were obtained to facilitate calibration of computer codes.
Towards a Functionally-Formed Air Traffic System-of-Systems
NASA Technical Reports Server (NTRS)
Conway, Sheila R.; Consiglio, Maria C.
2005-01-01
Incremental improvements to the national aviation infrastructure have not resulted in sufficient increases in capacity and flexibility to meet emerging demand. Unfortunately, revolutionary changes capable of substantial and rapid increases in capacity have proven elusive. Moreover, significant changes have been difficult to implement, and the operational consequences of such change difficult to predict, due to the system's complexity. Some research suggests redistributing air traffic control functions throughout the system, but this work has largely been dismissed out of hand as impractical. However, the case for functionally-based reorganization of form can be made from a theoretical, systems perspective. This paper investigates Air Traffic Management functions and their intrinsic biases towards centralized/distributed operations, grounded in systems engineering and information technology theories. Application of these concepts to a small airport operations design is discussed. From this groundwork, a robust, scalable system transformation plan may be made in light of uncertain demand.
The Influence of Body Mass on Physical Fitness Test Performance in Male Firefighter Applicants.
Phillips, Devin B; Scarlett, Michael P; Petersen, Stewart R
2017-11-01
The influence of body mass on test performance was investigated in 414 male firefighter applicants who completed a maximal treadmill test and five task-simulation tests while dressed in fire protective ensemble. Subjects were assigned to six mass categories from less than 70 kg to more than 110 kg, in 10 kg increments (n = 69 in each). Treadmill performance was lower (P < 0.05) in the two heaviest groups. Charged hose advance time was slower in the two lightest groups. The lightest group had slower times for weighted sled pull, forcible entry, and victim rescue tests. The heaviest group was slower on the ladder climb test. Lighter subjects had a small advantage in endurance-oriented tests while higher mass appeared to improve performance slightly in strength-oriented tests. However, mass explained only 4% to 19% of the variance in performance.
ERIC Educational Resources Information Center
Texas Interagency Council on Early Childhood Intervention, Austin.
This Texas plan to improve information and referral services in the area of developmental disabilities recommends an approach which focuses on providing improvements incrementally, spacing benefits over time, and periodically reassessing direction, alternatives, and costs/benefits. The plan stresses building a network which provides greater public…
ERIC Educational Resources Information Center
Donaldson, Morgaen L.; Weiner, Jennie
2017-01-01
This case focuses on the complexity of change and improvement within schools with a particular emphasis on the role of the principal and a science teacher leader. Although current rhetoric suggests that school improvement should happen quickly and consistently, research indicates that it is difficult, context specific, and incremental. In fact,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Nihar; Abhyankar, Nikit; Park, Won Young
Improving the efficiency of air conditioners (ACs) typically involves improving the efficiency of various components such as compressors, heat exchangers, expansion valves, refrigerants, and fans. We estimate the incremental cost of improving the efficiency of room ACs based on the cost of improving the efficiency of their key components. Further, we estimate the retail price increase required to cover the cost of efficiency improvement, compare it with electricity bill savings, and calculate the payback period for consumers to recover the additional price of a more efficient AC. We assess several efficiency levels, two of which are summarized in the report. The finding that significant efficiency improvement is cost effective from a consumer perspective is robust over a wide range of assumptions. If we assume a 50% higher incremental price compared with our baseline estimate, the payback period for the efficiency level of 3.5 ISEER is 1.1 years. Given the findings of this study, establishing more stringent minimum efficiency performance criteria (one-star level) should be evaluated rigorously, considering the significant benefits to consumers, energy security, and the environment.
NASA Astrophysics Data System (ADS)
Ham, Yoo-Geun; Song, Hyo-Jong; Jung, Jaehee; Lim, Gyu-Ho
2017-04-01
This study introduces an altered version of the incremental analysis updates (IAU), called the nonstationary IAU (NIAU) method, to improve the assimilation accuracy of the IAU while retaining the continuity of the analysis. Like the IAU, the NIAU is designed to add analysis increments at every model time step to improve continuity in intermittent data assimilation. Unlike the IAU, however, the NIAU method applies time-evolved forcing, constructed with the forward operator, as corrections to the model. The NIAU solution is more accurate than that of the IAU, whose analysis is performed at the start of the time window over which the IAU forcing is added. This is because, in linear systems, the NIAU solution at the end of the assimilation interval equals that of an intermittent data assimilation method. To obtain the filtering property in the NIAU, the forward operator used to propagate the increment is reconstructed from only the dominant singular vectors. The advantages of the NIAU are illustrated using the simple 40-variable Lorenz model.
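In a linear model the distinction between intermittent insertion, IAU, and NIAU can be made concrete. Below is a minimal sketch under stated assumptions: a hypothetical 2-variable rotation model stands in for the dynamics, whereas the real method works with a full forecast model and its forward operator.

```python
import numpy as np

theta = 0.1  # hypothetical rotation angle per model step, illustration only
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def intermittent(x0, dx, n):
    """Intermittent DA: insert the full analysis increment at once, then integrate."""
    x = x0 + dx
    for _ in range(n):
        x = M @ x
    return x

def iau(x0, dx, n):
    """Classic IAU: constant forcing dx/n added at every model step."""
    x = x0.copy()
    for _ in range(n):
        x = M @ x + dx / n
    return x

def niau(x0, dx, n):
    """NIAU: the forcing is time-evolved (propagated forward) before being added."""
    x = x0.copy()
    for k in range(1, n + 1):
        x = M @ x + np.linalg.matrix_power(M, k) @ dx / n
    return x

x0 = np.array([1.0, 0.0])
dx = np.array([0.05, -0.02])
# In a linear model the NIAU end state reproduces the intermittent analysis exactly,
# while the plain IAU end state lags the dynamics:
print(np.allclose(niau(x0, dx, 8), intermittent(x0, dx, 8)))  # True
print(np.allclose(iau(x0, dx, 8), intermittent(x0, dx, 8)))   # False
```

The algebra behind the sketch: with forcing M^k dx/n at step k, the terms telescope to M^n(x0 + dx), which is exactly the intermittent solution at the end of the window.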
SAIL: Summation-bAsed Incremental Learning for Information-Theoretic Text Clustering.
Cao, Jie; Wu, Zhiang; Wu, Junjie; Xiong, Hui
2013-04-01
Information-theoretic clustering aims to exploit information-theoretic measures as the clustering criteria. A common practice on this topic is the so-called Info-Kmeans, which performs K-means clustering with KL-divergence as the proximity function. While expert efforts on Info-Kmeans have shown promising results, a remaining challenge is to deal with high-dimensional sparse data such as text corpora. Indeed, it is possible that the centroids contain many zero-value features for high-dimensional text vectors, which leads to infinite KL-divergence values and creates a dilemma in assigning objects to centroids during the iteration process of Info-Kmeans. To meet this challenge, in this paper, we propose a Summation-bAsed Incremental Learning (SAIL) algorithm for Info-Kmeans clustering. Specifically, by using an equivalent objective function, SAIL replaces the computation of KL-divergence by the incremental computation of Shannon entropy. This can avoid the zero-feature dilemma caused by the use of KL-divergence. To improve the clustering quality, we further introduce the variable neighborhood search scheme and propose the V-SAIL algorithm, which is then accelerated by a multithreaded scheme in PV-SAIL. Our experimental results on various real-world text collections have shown that, with SAIL as a booster, the clustering performance of Info-Kmeans can be significantly improved. Also, V-SAIL and PV-SAIL indeed help improve the clustering quality at a lower cost of computation.
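The "zero-feature dilemma" described above is easy to reproduce. The following minimal illustration shows why KL-divergence breaks on sparse centroids while a Shannon-entropy objective stays finite; it sketches only the motivation, not the SAIL algorithm itself, and the vectors are made up:

```python
import numpy as np

def kl(p, q):
    """KL(p || q): diverges wherever p > 0 but q == 0 (the zero-feature dilemma)."""
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log(p / q), 0.0)
    return float(terms.sum())

def shannon(p):
    """Shannon entropy: finite for any distribution, zero-valued features included."""
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, -p * np.log(p), 0.0)
    return float(terms.sum())

doc      = np.array([0.5, 0.5, 0.0])   # sparse document vector
centroid = np.array([0.7, 0.0, 0.3])   # zero-valued feature, common in text centroids

print(kl(doc, centroid))               # inf -- KL-based assignment breaks down
print(shannon((doc + centroid) / 2))   # finite, entropy of the summed mass
```

SAIL exploits exactly this contrast: by rewriting the objective so that only entropies of summed (cluster) vectors are needed, the infinite KL values never arise.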
Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search
NASA Astrophysics Data System (ADS)
Nakamura, Katsuhiko; Hoshina, Akemi
This paper discusses recent improvements and extensions to the Synapse system for inductive inference of context-free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form, A→βγ (where each of β and γ is either a terminal or a nonterminal symbol), to Extended Chomsky Normal Form, which also includes rules of the form A→B. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm of the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which bridges the missing part of the derivation tree for the positive string. The improved version also employs a novel search strategy, called serial search, in addition to the minimum rule set search. Synthesis of grammars by the serial search is faster than by the minimum set search in most cases; on the other hand, the size of the generated CFGs is generally larger, and the system can find no appropriate grammar for some CFLs by the serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and search strategies.
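The bottom-up parsing that drives Synapse's rule generation is CYK-style table filling: cells the table fails to fill indicate where candidate rules must be synthesized. A minimal recognizer sketch for plain Chomsky Normal Form (the a^n b^n grammar is illustrative, not taken from the paper):

```python
def cyk(word, rules, start="S"):
    """CYK bottom-up recognition for a grammar in Chomsky Normal Form.
    rules: list of (lhs, rhs) pairs, rhs either (terminal,) or (B, C)."""
    n = len(word)
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):                  # length-1 spans from A -> a rules
        table[i][i] = {A for A, rhs in rules if rhs == (ch,)}
    for span in range(2, n + 1):                   # longer spans, bottom up
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                  # split point
                for A, rhs in rules:
                    if (len(rhs) == 2 and rhs[0] in table[i][k]
                            and rhs[1] in table[k + 1][j]):
                        table[i][j].add(A)
    return start in table[0][n - 1]

# a^n b^n in CNF: S -> AT | AB, T -> SB, A -> a, B -> b
anbn = [("S", ("A", "T")), ("S", ("A", "B")), ("T", ("S", "B")),
        ("A", ("a",)), ("B", ("b",))]
print(cyk("aabb", anbn), cyk("abab", anbn))  # True False
```

A learner in the Synapse style would run such a recognizer on each positive sample and propose new productions for the spans that remain underivable ("bridging" the derivation tree).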
NASA Astrophysics Data System (ADS)
Blinov, N. A.; Zolotkov, V. N.; Lezin, A. Yu; Cheburkin, N. V.
1990-04-01
An analysis is made of transient stimulated scattering in a vibrationally nonequilibrium gas excited by a non-self-sustained discharge. A stability theory approach is used to describe the behavior of perturbation wave packets, yielding asymptotic expressions for the maximal increments of an instability of stimulated small-angle scattering by entropic and acoustic modes.
Performance Data Gathering and Representation from Fixed-Size Statistical Data
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Jin, Haoqiang H.; Schmidt, Melisa A.; Kutler, Paul (Technical Monitor)
1997-01-01
The two performance data types commonly used in the supercomputing community, statistics and event traces, are discussed and compared. Statistical data are much more compact but lack the probative power that event traces offer. Event traces, on the other hand, are unbounded and can easily fill up the entire file system during program execution. In this paper, we propose an innovative methodology for performance data gathering and representation that offers a middle ground. Two basic ideas are employed: the use of averages to replace recording data for each instance, and 'formulae' to represent sequences associated with communication and control flow. The user can incrementally trade off tracing overhead and trace data size against data quality. In other words, the user can limit the amount of trace data collected and, at the same time, carry out some of the analysis event traces offer using space-time views. With the help of a few simple examples, we illustrate the use of these techniques in performance tuning and compare the quality of the traces we collected with event traces. We found that the trace files thus obtained are, indeed, small, bounded, and predictable before program execution, and that the quality of the space-time views generated from these statistical data is excellent. Furthermore, experimental results showed that the proposed formulae were able to capture all the sequences associated with 11 of the 15 applications tested. The performance of the formulae can be incrementally improved by allocating more memory at runtime to learn longer sequences.
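One plausible reading of the 'formulae' idea is replacing a repetitive event sequence with a pattern and a repeat count, so storage is bounded by the pattern length rather than the trace length. The helper below is hypothetical, a crude stand-in for the paper's representation rather than its implementation:

```python
def to_formula(trace):
    """Compress a perfectly periodic event trace to (pattern, repeats).
    Falls back to the raw trace when no repeating period exists."""
    n = len(trace)
    for p in range(1, n // 2 + 1):                 # try the shortest period first
        if n % p == 0 and trace[:p] * (n // p) == trace:
            return trace[:p], n // p
    return trace, 1

# a communication loop recorded as raw events vs. as a formula
loop = ["send", "compute", "recv"] * 1000
pattern, count = to_formula(loop)
print(pattern, count)  # ['send', 'compute', 'recv'] 1000
```

Capping the candidate period length p would give the bounded, predictable-before-execution storage behavior the abstract describes.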
Energy and cost associated with ventilating office buildings in a tropical climate.
Rim, Donghyun; Schiavon, Stefano; Nazaroff, William W
2015-01-01
Providing sufficient amounts of outdoor air to occupants is a critical building function for supporting occupant health, well-being and productivity. In tropical climates, high ventilation rates require substantial amounts of energy to cool and dehumidify supply air. This study evaluates the energy consumption and associated cost for thermally conditioning outdoor air provided for building ventilation in tropical climates, considering Singapore as an example locale. We investigated the influence on energy consumption and cost of the following factors: outdoor air temperature and humidity, ventilation rate (L/s per person), indoor air temperature and humidity, air conditioning system coefficient of performance (COP), and cost of electricity. Results show that dehumidification of outdoor air accounts for more than 80% of the energy needed for building ventilation in Singapore's tropical climate. Improved system performance and/or a small increase in the indoor temperature set point would permit relatively large ventilation rates (such as 25 L/s per person) at modest or no cost increment. Overall, even in a thermally demanding tropical climate, the energy cost associated with increasing ventilation rate up to 25 L/s per person is less than 1% of the wages of an office worker in an advanced economy like Singapore's. This result implies that the benefits of increasing outdoor air ventilation rate up to 25 L/s per person--which is suggested to provide for productivity increases, lower sick building syndrome symptom prevalence, and reduced sick leave--can be much larger than the incremental cost of ventilation.
Costs of an ostomy self-management training program for cancer survivors.
Hornbrook, Mark C; Cobb, Martha D; Tallman, Nancy J; Colwell, Janice; McCorkle, Ruth; Ercolano, Elizabeth; Grant, Marcia; Sun, Virginia; Wendel, Christopher S; Hibbard, Judith H; Krouse, Robert S
2018-03-01
To measure incremental expenses to an oncologic surgical practice for delivering a community-based, ostomy nurse-led, small-group, behavior skills-training intervention to help bladder and colorectal cancer survivors understand and adjust to their ostomies and improve their health-related quality of life, as well as assist family caregivers to understand survivors' needs and provide appropriate supportive care. The intervention was a 5-session group behavior skills training in ostomy self-management following the principles of the Chronic Care Model. Faculty included Wound, Ostomy, and Continence Nurses (WOCNs) using an ostomy care curriculum. A gender-matched peer-in-time buddy was assigned to each ostomy survivor. The 4-session survivor curriculum included the following: self-management practice and solving immediate ostomy concerns; social well-being; healthy lifestyle; and a booster session. The single family caregiver session was co-led by a WOCN and an ostomy peer staff member and covered relevant caregiver and ostomate support issues. Each cohort required 8 weeks to complete the intervention. Nonlabor inputs included ostomy supplies, teaching materials, automobile mileage for WOCNs, mailing, and meeting space rental. Intervention personnel were employed by the University of Arizona. Labor expenses included salaries and fringe benefits. The total incremental expense per intervention cohort of 4 survivors was $7246, or $1812 per patient. A WOCN-led group self-help ostomy survivorship intervention provided affordable, effective care to cancer survivors with ostomies. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Kim, Kyoung-Hee; Kim, Mi-Seon; Kim, Hong-Gi; Yook, Hong-Sun
2010-04-01
The effect of gamma irradiation (0.5-2 kGy) on the physicochemical properties of peaches was investigated during 6-day storage at 20±3 °C. Gamma irradiation inactivated four pathogens in peaches, namely Botrytis cinerea, Penicillium expansum, Rhizopus stolonifer var. stolonifer and Monilinia fructicola. Hardness decreased significantly with increasing irradiation dose, whereas soluble solids and total polyphenol contents increased. 1,1-Diphenyl-2-picrylhydrazyl (DPPH) radical-scavenging activity of the irradiated peaches was higher than that of the control and also increased with irradiation dose. These results suggest that gamma irradiation of peaches improved antioxidant activity but dramatically reduced hardness throughout the entire storage period.
Jones, Andrew D
2017-01-01
The declining diversity of agricultural production and food supplies worldwide may have important implications for global diets. The primary objective of this review is to assess the nature and magnitude of the associations of agricultural biodiversity with diet quality and anthropometric outcomes in low- and middle-income countries. A comprehensive review of 5 databases using a priori exclusion criteria and application of a systematic, qualitative analysis to the findings of identified studies revealed that agricultural biodiversity has a small but consistent association with more diverse household- and individual-level diets, although the magnitude of this association varies with the extent of existing diversification of farms. Greater on-farm crop species richness is also associated with small, positive increments in young child linear stature. Agricultural diversification may contribute to diversified diets through both subsistence- and income-generating pathways and may be an important strategy for improving diets and nutrition outcomes in low- and middle-income countries. Six research priorities for future studies of the influence of agricultural biodiversity on nutrition outcomes are identified based on gaps in the research literature. PMID:29028270
Kerr, Kathleen F.; Meisner, Allison; Thiessen-Philbrook, Heather; Coca, Steven G.
2014-01-01
The field of nephrology is actively involved in developing biomarkers and improving models for predicting patients’ risks of AKI and CKD and their outcomes. However, some important aspects of evaluating biomarkers and risk models are not widely appreciated, and statistical methods are still evolving. This review describes some of the most important statistical concepts for this area of research and identifies common pitfalls. Particular attention is paid to metrics proposed within the last 5 years for quantifying the incremental predictive value of a new biomarker. PMID:24855282
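Two widely used metrics for quantifying the incremental predictive value of a new biomarker are the integrated discrimination improvement (IDI) and the category-free net reclassification improvement (NRI). A minimal sketch on made-up risk predictions (the risk values and outcomes are hypothetical):

```python
import numpy as np

def idi(p_new, p_old, y):
    """Integrated discrimination improvement: the change in discrimination slope
    (mean predicted risk in events minus non-events) between two models."""
    y = np.asarray(y, dtype=bool)
    slope = lambda p: np.asarray(p)[y].mean() - np.asarray(p)[~y].mean()
    return float(slope(p_new) - slope(p_old))

def nri(p_new, p_old, y):
    """Category-free (continuous) NRI: net proportion of events assigned higher
    risk plus net proportion of non-events assigned lower risk by the new model."""
    y = np.asarray(y, dtype=bool)
    up = np.asarray(p_new) > np.asarray(p_old)
    down = np.asarray(p_new) < np.asarray(p_old)
    return float((up[y].mean() - down[y].mean()) + (down[~y].mean() - up[~y].mean()))

y     = [0, 0, 1, 1]                   # hypothetical AKI outcomes
p_old = [0.20, 0.30, 0.60, 0.70]       # risks from the baseline model
p_new = [0.10, 0.20, 0.80, 0.90]       # risks after adding the biomarker
print(round(idi(p_new, p_old, y), 3), nri(p_new, p_old, y))  # 0.3 2.0
```

As the review cautions, both metrics have known pitfalls (e.g. sensitivity to miscalibration), so point estimates like these should be interpreted alongside calibration and discrimination analyses.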
Online and unsupervised face recognition for continuous video stream
NASA Astrophysics Data System (ADS)
Huo, Hongwen; Feng, Jufu
2009-10-01
We present a novel online face recognition approach for video streams in this paper. Our method includes two stages: pre-training and online training. In the pre-training phase, our method observes interactions, collects batches of input data, and attempts to estimate their distributions (a Box-Cox transformation is adopted here to normalize the rough estimates). In the online training phase, our method incrementally improves the classifiers' knowledge of the face space and updates it continuously through incremental eigenspace analysis. The performance achieved by our method shows its great potential in video stream processing.
Ventura, Emily; Davis, Jaimie; Byrd-Williams, Courtney; Alexander, Katharine; McClain, Arianna; Lane, Christianne Joy; Spruijt-Metz, Donna; Weigensberg, Marc; Goran, Michael
2009-04-01
To examine if reductions in added sugar intake or increases in fiber intake in response to a 16-week intervention were related to improvements in metabolic outcomes related to type 2 diabetes mellitus risk. Secondary analysis of a randomized controlled trial. Intervention classes at a lifestyle laboratory and metabolic measures at the General Clinical Research Center. Fifty-four overweight Latino adolescents (mean [SD] age, 15.5 [1] years). The intervention was a 16-week study with 3 groups: control, nutrition, or nutrition plus strength training. Body composition by dual-energy x-ray absorptiometry; visceral adipose tissue by magnetic resonance imaging; glucose and insulin incremental area under the curve by oral glucose tolerance test; insulin sensitivity, acute insulin response, and disposition index by intravenous glucose tolerance test; and dietary intake by 3-day records. Fifty-five percent of all participants decreased added sugar intake (mean decrease, 47 g/d) and 59% increased fiber intake (mean increase, 5 g/d), and percentages were similar in all intervention groups, including controls. Those who decreased added sugar intake had an improvement in glucose incremental area under the curve (-15% vs +3%; P = .049) and insulin incremental area under the curve (-33% vs -9%; P = .02). Those who increased fiber intake had an improvement in body mass index (-2% vs +2%; P = .01) and visceral adipose tissue (-10% vs no change; P = .03). Individuals who reduced added sugar intake by the equivalent of 1 can of soda per day or increased fiber intake by the equivalent of a cup of beans showed improvements in key risk factors for type 2 diabetes, specifically in insulin secretion and visceral fat. Improvements occurred independent of group assignment and were equally likely to occur in control group participants.
Helicity statistics in homogeneous and isotropic turbulence and turbulence models
NASA Astrophysics Data System (ADS)
Sahoo, Ganapati; De Pietro, Massimo; Biferale, Luca
2017-02-01
We study the statistical properties of helicity in direct numerical simulations of fully developed homogeneous and isotropic turbulence and in a class of turbulence shell models. We consider correlation functions based on combinations of vorticity and velocity increments that are not invariant under mirror symmetry. We also study the scaling properties of high-order structure functions based on the moments of the velocity increments projected on a subset of modes with either positive or negative helicity (chirality). We show that mirror symmetry is recovered at small scales, i.e., chiral terms are subleading and they are well captured by a dimensional argument plus anomalous corrections. These findings are also supported by a high Reynolds numbers study of helical shell models with the same chiral symmetry of Navier-Stokes equations.
Testing the Fraunhofer line discriminator by sensing fluorescent dye
NASA Technical Reports Server (NTRS)
Stoertz, G. E.
1969-01-01
The experimental Fraunhofer Line Discriminator (FLD) has detected increments of Rhodamine WT dye as small as 1 ppb in 1/2 meter depths. It can be inferred that increments considerably smaller than 1 ppb will be detectable in depths considerably greater than 1/2 meter. Turbidity of the water drastically reduces luminescence or even completely blocks the transmission of detectable luminescence to the FLD. Attenuation of light within the water by turbidity and by the dye itself are the major factors to be considered in interpreting FLD records and in relating luminescence coefficient to dye concentration. An airborne test in an H-19 helicopter established feasibility of operating the FLD from the aircraft power supply, and established that the rotor blades do not visibly affect the monitoring of incident solar radiation.
Uncertainties in derived temperature-height profiles
NASA Technical Reports Server (NTRS)
Minzner, R. A.
1974-01-01
Nomographs were developed for relating the uncertainty in temperature T to the uncertainty in the observed height profiles of both pressure p and density rho. The relative uncertainty delta T/T is seen to depend not only upon the relative uncertainties delta p/p or delta rho/rho, and to a small extent upon the value of T or H, but primarily upon the sampling-height increment Delta h, the height increment between successive observations of p or rho. For a fixed value of delta p/p, the value of delta T/T varies inversely with Delta h. No limit exists on the fineness of usable height resolution of T derived from densities, while a fine height resolution in pressure-height data leads to temperatures with unacceptably large uncertainties.
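The inverse dependence of delta T/T on Delta h follows from the hypsometric equation, which recovers layer-mean temperature from two pressure observations. A numeric sketch with assumed values (dry-air constants, a hypothetical 250 K layer at 100 hPa; the nomographs cover the general case):

```python
import numpy as np

g, R = 9.80665, 287.05  # gravity (m/s^2), dry-air gas constant (J/(kg K))

def layer_mean_temperature(p_bottom, p_top, dh):
    """Hypsometric equation solved for the mean temperature of a layer of
    thickness dh bounded by pressures p_bottom > p_top."""
    return g * dh / (R * np.log(p_bottom / p_top))

def relative_T_error(dh, T=250.0, rel_p_error=1e-3):
    """Relative temperature error caused by a fixed relative error in the
    upper pressure observation, for a sampling-height increment dh."""
    p_bottom = 100.0                                # hPa, arbitrary reference
    p_top = p_bottom * np.exp(-g * dh / (R * T))    # consistent top pressure
    T_true = layer_mean_temperature(p_bottom, p_top, dh)
    T_pert = layer_mean_temperature(p_bottom, p_top * (1 + rel_p_error), dh)
    return abs(T_pert - T_true) / T_true

# For a fixed delta p/p, halving Delta h roughly doubles delta T/T:
ratio = relative_T_error(500.0) / relative_T_error(1000.0)
print(round(ratio, 2))  # ~2
```

This makes the paper's point concrete: shrinking the sampling-height increment in pressure data inflates the derived temperature uncertainty in direct proportion.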
A continuous quality improvement team approach to adverse drug reaction reporting.
Flowers, P; Dzierba, S; Baker, O
1992-07-01
Cross-functional teams can generate more new ideas, concepts, and possible solutions than a department-based process alone. Working collaboratively can increase teams' knowledge of CQI approaches and the appropriate tools. CQI produces growth and development at multiple levels, resulting from involvement in the process of incremental improvement.
Hatoum-Aslan, Asma; Samai, Poulami; Maniv, Inbal; Jiang, Wenyan; Marraffini, Luciano A
2013-09-27
Small RNAs undergo maturation events that precisely determine the length and structure required for their function. CRISPRs (clustered regularly interspaced short palindromic repeats) encode small RNAs (crRNAs) that together with CRISPR-associated (cas) genes constitute a sequence-specific prokaryotic immune system for anti-viral and anti-plasmid defense. crRNAs are subject to multiple processing events during their biogenesis, and little is known about the mechanism of the final maturation step. We show that in the Staphylococcus epidermidis type III CRISPR-Cas system, mature crRNAs are measured in a Cas10·Csm ribonucleoprotein complex to yield discrete lengths that differ by 6-nucleotide increments. We looked for mutants that impact this crRNA size pattern and found that an alanine substitution of a conserved aspartate residue of Csm3 eliminates the 6-nucleotide increments in the length of crRNAs. In vitro, recombinant Csm3 binds RNA molecules at multiple sites, producing gel-shift patterns that suggest that each protein binds 6 nucleotides of substrate. In vivo, changes in the levels of Csm3 modulate the crRNA size distribution without disrupting the 6-nucleotide periodicity. Our data support a model in which multiple Csm3 molecules within the Cas10·Csm complex bind the crRNA with a 6-nucleotide periodicity to function as a ruler that measures the extent of crRNA maturation.
Malerba, M; Boni, E; Tantucci, C; Filippi, B; Romagnoni, G; Grassi, V
1996-01-01
The effects on exercise tolerance after acute administration of beta 2-agonists were investigated in 11 patients with partly reversible chronic airway obstruction after 400 micrograms of salbutamol (S) given intravenously (i.v.) and after 400 micrograms i.v. of a new selective beta 2-agonist, broxaterol (B), by a cardiopulmonary incremental exercise test. At rest, while VE increased with respect to basal conditions (C) after S (from 13.3 +/- 2.2 to 14.4 +/- 2.8 l/min; p < 0.05) and after B (from 13.6 +/- 3.1 to 15.5 +/- 3.6 l/min; p < 0.05), VO2, VCO2 and VO2/HR showed no substantial variations. A small, nonsignificant reduction of PaO2 was observed both after S (from 82.7 +/- 11.7 to 79.1 +/- 16.7 mm Hg) and B (from 81.6 +/- 10.5 to 78.0 +/- 11.0 mm Hg). The maximum workload increased neither after S (from 67.5 +/- 39.1 to 66.6 +/- 37.0 W) nor after B (from 65.7 +/- 39.3 to 60.0 +/- 35.8 W). At peak of exercise, VO2, VCO2 and VO2/HR did not change after S and B as compared with C, whereas VE remained higher after both beta 2-agonists throughout the effort. VO2 at ventilatory anaerobic threshold (AT) was significantly greater both after S (from 744 +/- 378 to 815 +/- 302 ml/min; p < 0.05) and after B (from 756 +/- 290 to 842 +/- 292 ml/min; p < 0.05). The PaO2 increase shown by these patients during effort was greater after beta 2-agonist administration, delta PaO2 from rest to peak of exercise amounting to 14.9 +/- 14.3 vs. 7.8 +/- 8.2 mm Hg after S and to 17.8 +/- 15.1 vs. 8.8 +/- 10.9 mm Hg after B, with respect to the relative baseline (p < 0.05). We conclude that beta 2-agonists, when given acutely, do not improve exercise tolerance in patients with reversible chronic airflow obstruction, although these drugs can induce a small increment of the ventilatory AT. In addition, arterial blood gases do not deteriorate at rest and are better preserved during exercise after beta 2-agonists.
Flueckiger, Peter; Longstreth, Will; Herrington, David; Yeboah, Joseph
2018-02-01
Limited data exist on the performance of the revised Framingham Stroke Risk Score (R-FSRS) and the R-FSRS in conjunction with nontraditional risk markers. We compared the R-FSRS, original FSRS, and the Pooled Cohort Equation for stroke prediction and assessed the improvement in discrimination by nontraditional risk markers. Six thousand seven hundred twelve of 6814 participants of the MESA (Multi-Ethnic Study of Atherosclerosis) were included. Cox proportional hazard, area under the curve, net reclassification improvement, and integrated discrimination increment analysis were used to assess and compare each stroke prediction risk score. Stroke was defined as fatal/nonfatal strokes (hemorrhagic or ischemic). After mean follow-up of 10.7 years, 231 of 6712 (3.4%) strokes were adjudicated (2.7% ischemic strokes). Mean stroke risks using the R-FSRS, original FSRS, and Pooled Cohort Equation were 4.7%, 5.9%, and 13.5%. The R-FSRS had the best calibration (Hosmer-Lemeshow goodness-of-fit, χ 2 =6.55; P =0.59). All risk scores were predictive of incident stroke. C statistics of R-FSRS (0.716) was similar to Pooled Cohort Equation (0.716), but significantly higher than the original FSRS (0.653; P =0.01 for comparison with R-FSRS). Adding nontraditional risk markers individually to the R-FSRS did not improve discrimination of the R-FSRS in the area under the curve analysis, but did improve category-less net reclassification improvement and integrated discrimination increment for incident stroke. The addition of coronary artery calcium to R-FSRS produced the highest category-less net reclassification improvement (0.36) and integrated discrimination increment (0.0027). Similar results were obtained when ischemic strokes were used as the outcome. The R-FSRS downgraded stroke risk but had better calibration and discriminative ability for incident stroke compared with the original FSRS. 
Nontraditional risk markers modestly improved the discriminative ability of the R-FSRS, with coronary artery calcium performing the best. © 2018 American Heart Association, Inc.
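The category-less NRI and IDI metrics used in this analysis have simple closed forms. The following Python sketch uses the standard definitions (net reclassification in the right direction for NRI; difference in mean risk change for IDI); it is illustrative only, not the study's code, and the example risks are invented:

```python
# Illustrative sketch of category-less NRI (not the study's code).
# `events` holds 1 for subjects with an incident stroke, else 0.

def categoryless_nri(risk_old, risk_new, events):
    # net share of events whose predicted risk moves up, plus net share
    # of non-events whose predicted risk moves down, under the new model
    ev_up = ev_dn = ne_up = ne_dn = n_ev = n_ne = 0
    for old, new, e in zip(risk_old, risk_new, events):
        if e:
            n_ev += 1
            ev_up += new > old
            ev_dn += new < old
        else:
            n_ne += 1
            ne_up += new > old
            ne_dn += new < old
    return (ev_up - ev_dn) / n_ev + (ne_dn - ne_up) / n_ne

def idi(risk_old, risk_new, events):
    # change in mean predicted risk among events minus that among non-events
    d_ev = [n - o for o, n, e in zip(risk_old, risk_new, events) if e]
    d_ne = [n - o for o, n, e in zip(risk_old, risk_new, events) if not e]
    return sum(d_ev) / len(d_ev) - sum(d_ne) / len(d_ne)

# Four hypothetical subjects, all reclassified in the right direction:
nri = categoryless_nri([0.10, 0.20, 0.30, 0.40], [0.20, 0.10, 0.25, 0.50], [1, 0, 0, 1])
print(nri)  # 2.0 (the maximum possible value)
```

A positive NRI or IDI indicates that the added marker shifts predicted risks in the right direction on average, even when the C statistic barely moves, which is exactly the pattern reported above.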
Cho, Iksung; Al'Aref, Subhi J; Berger, Adam; Ó Hartaigh, Bríain; Gransar, Heidi; Valenti, Valentina; Lin, Fay Y; Achenbach, Stephan; Berman, Daniel S; Budoff, Matthew J; Callister, Tracy Q; Al-Mallah, Mouaz H; Cademartiri, Filippo; Chinnaiyan, Kavitha; Chow, Benjamin J W; DeLago, Augustin; Villines, Todd C; Hadamitzky, Martin; Hausleiter, Joerg; Leipsic, Jonathon; Shaw, Leslee J; Kaufmann, Philipp A; Feuchtner, Gudrun; Kim, Yong-Jin; Maffei, Erica; Raff, Gilbert; Pontone, Gianluca; Andreini, Daniele; Marques, Hugo; Rubinshtein, Ronen; Chang, Hyuk-Jae; Min, James K
2018-03-14
The long-term prognostic benefit of coronary computed tomographic angiography (CCTA) findings of coronary artery disease (CAD) in asymptomatic populations is unknown. From the prospective multicentre international CONFIRM long-term study, we evaluated asymptomatic subjects without known CAD who underwent both coronary artery calcium scoring (CACS) and CCTA (n = 1226). Coronary computed tomographic angiography findings included the severity of coronary artery stenosis, plaque composition, and coronary segment location. Using the C-statistic and likelihood ratio tests, we evaluated the incremental prognostic utility of CCTA findings over a base model that included a panel of traditional risk factors (RFs) as well as CACS to predict long-term all-cause mortality. During a mean follow-up of 5.9 ± 1.2 years, 78 deaths occurred. Compared with the traditional RF alone (C-statistic 0.64), CCTA findings including coronary stenosis severity, plaque composition, and coronary segment location demonstrated improved incremental prognostic utility (C-statistics range 0.71-0.73, all P < 0.05; incremental χ2 range 20.7-25.5, all P < 0.001). However, no added prognostic benefit was offered by CCTA findings when added to a base model containing both traditional RF and CACS (C-statistics P > 0.05, for all). Coronary computed tomographic angiography improved prognostication of 6-year all-cause mortality beyond a set of conventional RF alone, although no further incremental value was offered when CCTA findings were added to a model incorporating RF and CACS.
Product Quality Modelling Based on Incremental Support Vector Machine
NASA Astrophysics Data System (ADS)
Wang, J.; Zhang, W.; Qin, B.; Shi, W.
2012-05-01
Incremental support vector machine (ISVM) learning is a method developed in recent years on the foundations of statistical learning theory. It is suitable for problems with sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant samples that affect learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) condition; then the distance from each margin vector to the final decision hyperplane is calculated to evaluate its importance, and margin vectors are removed when their distance exceeds a specified value; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise, but also preserve the important ones. The MISVM was tested on two public datasets and one field dataset of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method can improve prediction accuracy and training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can also extend to other process industries, such as chemical and manufacturing processes.
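The update scheme described above (keep the old support vectors, admit only KKT-violating new samples, and drop samples too far from the current hyperplane) can be sketched as follows. This is my illustration of the idea, not the authors' implementation; the threshold value and all names are assumptions, and scikit-learn's `SVC` stands in for the paper's SVM trainer:

```python
# Hedged sketch of the MISVM update step (illustrative, not the authors' code)
import numpy as np
from sklearn.svm import SVC

def misvm_update(model, X_sv, y_sv, X_new, y_new, max_dist=1.5):
    f = model.decision_function(X_new)
    violates = y_new * f < 1       # KKT violators: inside or beyond the margin
    near = np.abs(f) <= max_dist   # discard far-off (likely noisy) samples
    keep = violates & near
    X_train = np.vstack([X_sv, X_new[keep]])
    y_train = np.concatenate([y_sv, y_new[keep]])
    return SVC(kernel="linear", C=1.0).fit(X_train, y_train)

# Toy incremental run on two well-separated Gaussian blobs (labels +1 / -1)
rng = np.random.default_rng(0)
X0 = np.vstack([rng.normal(2, 0.5, (20, 2)), rng.normal(-2, 0.5, (20, 2))])
y0 = np.array([1] * 20 + [-1] * 20)
m0 = SVC(kernel="linear", C=1.0).fit(X0, y0)
X1 = np.vstack([rng.normal(2, 0.5, (20, 2)), rng.normal(-2, 0.5, (20, 2))])
y1 = np.array([1] * 20 + [-1] * 20)
m1 = misvm_update(m0, X0[m0.support_], y0[m0.support_], X1, y1)
```

Retraining on the pruned set rather than the full accumulated data is what yields the speed gain the abstract reports; the cost is the risk of discarding a sample that would later have become a support vector.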
Effects of shoe cleat position on physiology and performance of competitive cyclists.
Paton, Carl D
2009-12-01
Aerobic economy is an important factor that affects the performance of competitive cyclists. It has been suggested that placing the foot more anteriorly on the bicycle pedals may improve economy over the traditional foot position by improving pedaling efficiency. The current study examines the effects of changing the anterior-posterior pedal foot position on the physiology and performance of well-trained cyclists. In a crossover study, 10 competitive cyclists completed two maximal incremental and two submaximal tests in either their preferred (control) or a forward (arch) foot position. Maximum oxygen consumption and peak power output were determined from the incremental tests for both foot positions. On two further occasions, cyclists also completed a two-part 60-min submaximal test that required them to maintain a constant power output (equivalent to 60% of their incremental peak power) for 30 min, during which respiratory and blood lactate samples were taken at predetermined intervals. Thereafter, subjects completed a 30-min self-paced maximal effort time trial. Relative to the control, the mean changes (+/-90% confidence limits) in the arch condition were as follows: maximum oxygen consumption, -0.5% (+/-2.0%); incremental peak power output, -0.8% (+/-1.3%); steady-state oxygen consumption at 60%, -2.4% (+/-1.1%); steady-state heart rate 60%, 0.4% (+/-1.7%); lactate concentration 60%, 8.7% (+/-14.4%); and mean time trial power, -1.5% (+/-2.9%). We conclude that there was no substantial physiological or performance advantage in this group using an arch-cleat shoe position in comparison with a cyclist's normal preferred condition.
Jirapinyo, Pichamol; Abidi, Wasif M; Aihara, Hiroyuki; Zaki, Theodore; Tsay, Cynthia; Imaeda, Avlin B; Thompson, Christopher C
2017-10-01
Preclinical simulator training has the potential to decrease endoscopic procedure time and patient discomfort. This study aims to characterize the learning curve of endoscopic novices on a part-task simulator and propose a threshold score for advancement to initial clinical cases. Twenty novices with no prior endoscopic experience underwent repeated endoscopic simulator sessions using the part-task simulator. Simulator scores were collected; their inverse was averaged and fit to an exponential curve. The incremental improvement after each session was calculated. The plateau was defined as the session after which the incremental improvement in the modelled simulator score was less than 5%. Additionally, all participants filled out questionnaires regarding simulator experience after sessions 1, 5, 10, 15, and 20. A visual analog scale and NASA task load index were used to assess levels of comfort and demand. Twenty novices underwent 400 simulator sessions. Mean simulator scores at sessions 1, 5, 10, 15, and 20 were 78.5 ± 5.95, 176.5 ± 17.7, 275.55 ± 23.56, 347 ± 26.49, and 441.11 ± 38.14, respectively. The best-fit model was [time/score] = 26.1 × [session #]^(-0.615); r² = 0.99. This corresponded to an incremental improvement in score of 35% after the first session, 22% after the second, 16% after the third, and so on. Incremental improvement dropped below 5% after the 12th session, corresponding to a predicted score of 265. Simulator training was related to higher comfort maneuvering an endoscope and increased readiness for supervised clinical endoscopy, both plateauing between sessions 10 and 15. Mental demand, physical demand, and frustration levels decreased with increased simulator training. Preclinical training using an endoscopic part-task simulator appears to increase comfort level and decrease mental and physical demand associated with endoscopy. 
Based on a rigorous model, we recommend that novices complete a minimum of 12 training sessions and obtain a simulator score of at least 265 to be best prepared for clinical endoscopy.
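Taking the fitted power law above at face value, and reading "incremental improvement" as the fractional drop in the modelled time-per-score from one session to the next, the quoted percentages and the 12-session plateau can be reproduced (a sketch under that reading, not the authors' code):

```python
# Fitted model from the abstract: [time/score] = 26.1 * n ** -0.615
B, K = 26.1, -0.615

def time_per_score(n):
    return B * n ** K

def incremental_improvement(n):
    # fractional drop in time-per-score from session n to session n + 1
    return 1.0 - time_per_score(n + 1) / time_per_score(n)

print(round(incremental_improvement(1), 2))  # 0.35 -> ~35% after the first session
print(round(incremental_improvement(2), 2))  # 0.22 -> ~22% after the second
plateau = next(n for n in range(1, 100) if incremental_improvement(n) < 0.05)
print(plateau)                               # 12
```

Note that the coefficient B cancels in the ratio, so the plateau session depends only on the exponent of the fitted curve.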
Buczinski, S; Ménard, J; Timsit, E
2016-07-01
Thoracic ultrasonography (TUS) is a specific and relatively sensitive method to diagnose bronchopneumonia (BP) in dairy calves. Unfortunately, as it requires specific training and equipment, veterinarians typically base their diagnosis on thoracic auscultation (AUSC), which is rapid and easy to perform. We hypothesized that the use of TUS, in addition to AUSC, can significantly increase the accuracy of BP diagnosis. Therefore, the objectives were to (i) determine the incremental value of TUS over AUSC for diagnosis of BP in preweaned dairy calves and (ii) assess the diagnostic accuracy of AUSC. Two hundred and nine dairy calves (<1 month of age) were enrolled in this prospective cross-sectional study. All calves from a veal calf unit were examined (by independent operators) using the Wisconsin Calf Respiratory Scoring Criteria (CRSC), AUSC, and TUS. A Bayesian latent class approach was used to estimate the incremental value of TUS over AUSC (integrated discrimination improvement [IDI]) and the diagnostic accuracy of AUSC. Abnormal CRSC, AUSC, and TUS were recorded in 3.3, 53.1, and 23.9% of calves, respectively. AUSC was sensitive (72.9%; 95% Bayesian credible interval [BCI]: 50.1-96.4%), but not specific (53.3%; 95% BCI: 43.3-64.0%) for diagnosing BP. Compared to AUSC, TUS was more specific (92.9%; 95% BCI: 86.5-97.1%), but had similar sensitivity (76.5%; 95% BCI: 60.2-88.8%). The incremental value of TUS over AUSC was high (IDI = 43.7%; 95% BCI: 22.0-63.0%), significantly improving the proportions of sick and healthy calves appropriately classified. The use of TUS in addition to AUSC significantly improved the accuracy of BP diagnosis in dairy calves. Copyright © 2016 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
Validity of the alcohol purchase task: a meta-analysis.
Kiselica, Andrew M; Webber, Troy A; Bornovalova, Marina A
2016-05-01
Behavioral economists assess alcohol consumption as a function of unit price. This method allows construction of demand curves and demand indices, which are thought to provide precise numerical estimates of risk for alcohol problems. One of the more commonly used behavioral economic measures is the Alcohol Purchase Task (APT). Although the APT has shown promise as a measure of risk for alcohol problems, the construct validity and incremental utility of the APT remain unclear. This paper presents a meta-analysis of the APT literature. Sixteen studies were included in the meta-analysis. Studies were gathered via searches of the PsycInfo, PubMed, Web of Science and EconLit research databases. Random-effects meta-analyses with inverse variance weighting were used to calculate summary effect sizes for each demand index-drinking outcome relationship. Moderation of these effects by drinking status (regular versus heavy drinkers) was examined. Additionally, tests of the incremental utility of the APT indices in predicting drinking problems above and beyond measuring alcohol consumption were performed. The APT indices were correlated in the expected directions with drinking outcomes, although many effects were small in size. These effects were typically not moderated by the drinking status of the samples. Additionally, the intensity metric demonstrated incremental utility in predicting alcohol use disorder symptoms beyond measuring drinking. The Alcohol Purchase Task appears to have good construct validity, but limited incremental utility in estimating risk for alcohol problems. © 2015 Society for the Study of Addiction.
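The demand indices that APT studies derive from raw purchase-task responses (intensity, Omax, Pmax, breakpoint) are straightforward to compute. A minimal sketch with invented response data (the prices and quantities below are hypothetical, not from any study in the meta-analysis):

```python
# Hypothetical APT responses: drinks a participant reports they would buy
# at each escalating price per drink (values invented for illustration)
prices = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]
drinks = [10, 9, 8, 5, 2, 0]

intensity = drinks[0]                                  # consumption when drinks are free
expenditure = [p * q for p, q in zip(prices, drinks)]  # spend at each price
omax = max(expenditure)                                # peak expenditure
pmax = prices[expenditure.index(omax)]                 # price at peak expenditure
break_point = next(p for p, q in zip(prices, drinks) if q == 0)  # price suppressing use

print(intensity, omax, pmax, break_point)  # 10 10.0 2.0 8.0
```

Intensity (consumption at zero price) is the index the meta-analysis found to carry incremental utility; the others quantify how sharply demand falls as price rises.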
Identification of platelet refractoriness in oncohematologic patients
Ferreira, Aline Aparecida; Zulli, Roberto; Soares, Sheila; de Castro, Vagner; Moraes-Souza, Helio
2011-01-01
OBJECTIVES: To identify the occurrence and the causes of platelet refractoriness in oncohematologic patients. INTRODUCTION: Platelet refractoriness (unsatisfactory post-transfusion platelet increment) is a severe problem that impairs the treatment of oncohematologic patients and is not routinely investigated in most Brazilian services. METHODS: Forty-four episodes of platelet concentrate transfusion were evaluated in 16 patients according to the following parameters: corrected count increment, clinical conditions and detection of anti-platelet antibodies by the platelet immunofluorescence test (PIFT) and panel reactive antibodies against human leukocyte antigen class I (PRA-HLA). RESULTS: Of the 16 patients evaluated (median age: 53 years), nine (56%) were women, seven of them with a history of pregnancy. An unsatisfactory increment was observed in 43% of the transfusion events, being more frequent in transfusions of random platelet concentrates (54%). Platelet refractoriness was confirmed in three patients (19%), who presented immunologic and non-immunologic causes. Alloantibodies were identified in eight patients (50%) by the PIFT and in three (19%) by the PRA-HLA. Among alloimmunized patients, nine (64%) had a history of transfusion, and three as a result of pregnancy (43%). Of the former, two were refractory (29%). No significant differences were observed, probably as a result of the small sample size. CONCLUSION: The high rates of unsatisfactory platelet increment, refractoriness and alloimmunization observed support the need to set up protocols for the investigation of this complication in all chronically transfused patients, a fundamental requirement for adequate management. PMID:21437433
Three-dimensional architecture of macrofibrils in the human scalp hair cortex.
Harland, Duane P; Walls, Richard J; Vernon, James A; Dyer, Jolon M; Woods, Joy L; Bell, Fraser
2014-03-01
Human scalp hairs consist of a central cortex enveloped by plate-like cuticle cells. The elongate cortex cells of mature fibres are composed primarily of macrofibrils: bundles of hard-keratin intermediate filaments (IFs) chemically cross-linked within a globular protein matrix. In wool, three cell types (ortho-, meso- and paracortex) contain macrofibrils with distinctly different filament arrangements and matrix fractions, but in human hair macrofibril-cell type relationships are less clear. Here we show that hair macrofibrils all have a similar matrix fraction (∼0.4) and are typically composed of a double-twist architecture in which a central IF is surrounded by concentric rings of tangentially-angled IFs. The defining parameter is the incremental angle increase (IF-increment) between IFs of successive rings. Unlike in the wool orthocortex, hair double-twist macrofibrils show considerable inter-macrofibril variation in IF increment (0.05-0.35°/nm), and macrofibril size and IF increment are negatively correlated. Correspondingly, the angular difference between the central and outermost IFs is up to 40° in small macrofibrils, but only 5-10° in large macrofibrils. Single cells were observed containing mixtures of macrofibrils with different diameters. These new observations advance our understanding of the nano-level and cell-level organisation of human hair, with implications for the interpretation of structure with respect to the potential roles of cortex cell types in defining the mechanical properties of hair. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Hongjin; Hsieh, Sheng-Jen; Peng, Bo; Zhou, Xunfei
2016-07-01
A method that requires no knowledge of the thermal properties of coatings or substrates would be of interest for industrial applications. Supervised machine learning regression may provide a possible solution to the problem. This paper compares the performance of two regression models, artificial neural networks (ANN) and support vector machines for regression (SVM), with respect to coating thickness estimates made from surface temperature increments collected via time-resolved thermography. We describe the role of SVM in coating thickness prediction. Non-dimensional analyses are conducted to illustrate the effects of coating thickness and various other factors on surface temperature increments; these show that it is theoretically possible to correlate coating thickness with the surface temperature increment. Based on the analyses, the laser power is selected such that, during heating, the temperature increment is high enough to resolve coating thickness variation but low enough to avoid surface melting. Sixty-one paint-coated samples with coating thicknesses varying from 63.5 μm to 571 μm are used to train the models. Hyper-parameters of the models are optimized by 10-fold cross-validation. Another 28 sets of data are then collected to test the performance of the models. The study shows that SVM can provide reliable predictions of unknown data, due to its deterministic characteristics, and it works well when used with a small input data group. The SVM model generates more accurate coating thickness estimates than the ANN model.
Estimating evaporative vapor generation from automobiles based on parking activities.
Dong, Xinyi; Tschantz, Michael; Fu, Joshua S
2015-07-01
A new approach is proposed to quantify evaporative vapor generation based on real parking activity data. Compared to existing methods, two improvements are applied in this new approach to reduce uncertainties. First, evaporative vapor generation from diurnal parking events is usually calculated from an estimated average parking duration for the whole fleet, whereas in this study the vapor generation rate is calculated from the distribution of parking activities. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then combined with Wade-Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates better describe the temporal variations of vapor generation, and that the weighted vapor generation rate is 5-8% less than a calculation that does not consider parking activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Imada, Keita; Nakamura, Katsuhiko
This paper describes recent improvements to the Synapse system for incremental learning of general context-free grammars (CFGs) and definite clause grammars (DCGs) from positive and negative sample strings. An important feature of our approach is incremental learning, which is realized by a rule generation mechanism called "bridging", based on bottom-up parsing of positive samples and a search over rule sets. The sizes of the rule sets and the computation time depend on the search strategy. In addition to global search, which synthesizes minimal rule sets, and serial search, another method that synthesizes semi-optimum rule sets, we incorporate beam search into the system for synthesizing semi-minimal rule sets. The paper presents several experimental results on learning CFGs and DCGs, and we analyze the sizes of the rule sets and the computation time.
Melisa L. Holman; David L. Peterson
2006-01-01
We compared annual basal area increment (BAI) at different spatial scales among all size classes and species at diverse locations in the wet western and dry northeastern Olympic Mountains. Weak growth correlations at small spatial scales (average R = 0.084-0.406) suggest that trees are responding to local growth conditions. However, significant...
Growth and soil moisture in thinned lodgepole pine.
Walter G. Dahms
1971-01-01
A lodgepole pine levels-of-growing-stock study showed that trees growing at lower stand densities had longer crowns and grew more rapidly in diameter but did not grow significantly faster in height. Gross cubic-volume increment decreased with decreasing stand density. The decrease was small per unit of density at the higher densities but much greater at the lower...
Diameter Growth in Even- and Uneven-Aged Northern Hardwoods in New Hampshire Under Partial Cutting
William B. Leak
2004-01-01
One important concern in the conversion of even-aged stands to an uneven-aged condition through individual-tree or small-group cutting is the growth response throughout the diameter-class distribution, especially of the understory trees. Increment-core sampling of an older, uneven-aged northern hardwood stand in New Hampshire under management for about 50 years...
ERIC Educational Resources Information Center
Peng, Shanzhong; Ferreira, Fernando A. F.; Zheng, He
2017-01-01
In this study, we develop a firm-dominated incremental cooperation model. Following a critical review of the current literature and various cooperation models, we identified a number of strengths and shortcomings that form the basis for our framework. The objective of our theoretical model is to contribute to overcoming the existing gap within…
Design and construction of a novel rotary magnetostrictive motor
NASA Astrophysics Data System (ADS)
Zhou, Nanjia; Blatchley, Charles C.; Ibeh, Christopher C.
2009-04-01
Magnetostriction can be used to induce linear incremental motion, which is effective in giant magnetostrictive inchworm motors. Such motors possess the advantage of combining small-step incremental motion with large force. However, continuous rotation may be preferred in practical applications. This paper describes a novel magnetostrictive rotary motor using Terfenol-D (Tb0.3Dy0.7Fe1.9) as the driving element. The motor is constructed of two giant magnetostrictive actuators with a shell-structured flexure hinge and leaf springs. The two actuators are placed perpendicular to each other to minimize their coupling displacement. The principal design parameters of the actuators and strain amplifiers are optimally determined, and a static analysis is undertaken with finite element analysis software. The small movements of the magnetostrictive actuators are magnified about three times by oval shell-structured amplifiers. When two sinusoidal currents with a 90° phase shift are applied to the actuators, purely rotational movement is produced, tracing an orbit like a Lissajous figure on an oscilloscope, and this movement drives the rotor of the motor. A prototype has been constructed and tested.
Cromwell, Ian; van der Hoek, Kimberly; Malfair Taylor, Suzanne C; Melosky, Barbara; Peacock, Stuart
2012-06-01
Erlotinib has been approved as a third-line treatment for advanced non-small-cell lung cancer (NSCLC) in British Columbia (BC). A cost-effectiveness analysis was conducted to compare costs and effectiveness in patients who received third-line erlotinib to those in a historical patient cohort that would have been eligible had erlotinib been available. In a population of patients who have been treated with drugs for advanced NSCLC, overall survival (OS), progression-to-death survival (PTD) and probability of survival one year after end of second-line (1YS) were determined using a Kaplan-Meier survival analysis. Costs were collected retrospectively from the perspective of the BC health care system. Incremental mean OS was 90 days (0.25 LYG), and incremental mean cost was $11,102 (CDN 2009), resulting in a mean ICER of $36,838/LYG. Univariate sensitivity analysis yielded ICERs ranging from $21,300 to $51,700/LYG. Our analysis suggests that erlotinib may be an effective and cost-effective third-line treatment for advanced NSCLC compared to best supportive care. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Lift hysteresis at stall as an unsteady boundary-layer phenomenon
NASA Technical Reports Server (NTRS)
Moore, Franklin K
1956-01-01
Analysis of rotating stall of compressor blade rows requires specification of a dynamic lift curve for the airfoil section at or near stall, presumably including the effect of lift hysteresis. Consideration of the Magnus lift of a rotating cylinder suggests performing an unsteady boundary-layer calculation to find the movement of the separation points of an airfoil fixed in a stream of variable incidence. The consideration of the shedding of vorticity into the wake should yield an estimate of lift increment proportional to the time rate of change of angle of attack. This increment is the amplitude of the hysteresis loop. An approximate analysis is carried out according to the foregoing ideas for a 6:1 elliptic airfoil at the angle of attack for maximum lift. The assumptions of small perturbations from maximum lift are made, permitting neglect of distributed vorticity in the wake. The calculated hysteresis loop is counterclockwise. Finally, a discussion of the forms of hysteresis loops is presented; and, for small reduced frequency of oscillation, it is concluded that the concept of a viscous "time lag" is appropriate only for harmonic variations of angle of attack with time at mean conditions other than maximum lift.
ERIC Educational Resources Information Center
Cates, Camille
1979-01-01
Argues that incrementalism's weakness is that it is another rational approach to problem solving when what is needed is a nonrational approach--creativity. Offers guidelines for improving creativity in oneself and in the work environment. (IRT)
Leadership & Sustainability: System Thinkers in Action
ERIC Educational Resources Information Center
Fullan, Michael
2004-01-01
As agencies have pushed for greater performance and public accountability over the past two decades, some incremental improvements have been seen. All too often experience reveals that these improvements are temporary. This book provides a comprehensive examination of what leaders at all levels of the educational system can do to pave the way for…
Statistical characteristics of surrogate data based on geophysical measurements
NASA Astrophysics Data System (ADS)
Venema, V.; Bachner, S.; Rust, H. W.; Simmer, C.
2006-09-01
In this study, the statistical properties of a range of measurements are compared with those of their surrogate time series. Seven different records are studied, amongst others, historical time series of mean daily temperature, daily rain sums and runoff from two rivers, and cloud measurements. Seven different algorithms are used to generate the surrogate time series. The best-known method is the iterative amplitude adjusted Fourier transform (IAAFT) algorithm, which is able to reproduce the measured distribution as well as the power spectrum. Using this setup, the measurements and their surrogates are compared with respect to their power spectrum, increment distribution, structure functions, annual percentiles and return values. It is found that the surrogates that reproduce the power spectrum and the distribution of the measurements are able to closely match the increment distributions and the structure functions of the measurements, but this often does not hold for surrogates that only mimic the power spectrum of the measurement. However, even the best performing surrogates do not have asymmetric increment distributions, i.e., they cannot reproduce nonlinear dynamical processes that are asymmetric in time. Furthermore, we have found deviations of the structure functions on small scales.
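The IAAFT algorithm named above alternates between imposing the target power spectrum (by replacing Fourier amplitudes while keeping phases) and the target amplitude distribution (by rank ordering). A compact NumPy sketch of the standard algorithm, my illustration rather than the study's code:

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """Iterative amplitude adjusted Fourier transform surrogate of x."""
    rng = np.random.default_rng(seed)
    amp = np.abs(np.fft.rfft(x))   # target power spectrum (Fourier amplitudes)
    sorted_x = np.sort(x)          # target amplitude distribution
    s = rng.permutation(x)         # start from a random shuffle of the data
    for _ in range(n_iter):
        # step 1: impose the target spectrum, keeping the current phases
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(amp * np.exp(1j * phases), n=len(x))
        # step 2: impose the target distribution by rank ordering
        s = sorted_x[np.argsort(np.argsort(s))]
    return s

t = np.arange(256)
x = np.sin(2 * np.pi * t / 32) + 0.1 * np.random.default_rng(1).standard_normal(256)
s = iaaft(x)
```

Ending on the rank-ordering step makes the surrogate's distribution exact while its spectrum is only approximate, which matches the paper's observation that such surrogates reproduce increment distributions well but remain symmetric in time.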
The design of transfer trajectory for Ivar asteroid exploration mission
NASA Astrophysics Data System (ADS)
Qiao, Dong; Cui, Hutao; Cui, Pingyuan
2009-12-01
Growing interest in exploring small bodies such as comets and asteroids has motivated a Chinese deep-space exploration mission to the near-Earth asteroid Ivar. A design and optimization method for the transfer trajectory to asteroid Ivar is discussed in this paper. The transfer trajectory for rendezvous with Ivar is designed by means of Earth gravity assist with deep space maneuver (Delta-VEGA) technology. A Delta-VEGA transfer trajectory is realized by several trajectory segments, which connect the deep space maneuver and the swingby point. Each trajectory segment is found by solving Lambert's problem. By adjusting the deep space maneuver and the arrival time, the swingby matching condition is satisfied. To reduce the total mission velocity increments further, a procedure is developed that minimizes the total velocity increments for this transfer trajectory scheme. The trajectory optimization problem is solved with a quasi-Newton algorithm using analytic first derivatives, which are derived from the transversality conditions associated with the optimization formulation and primer vector theory. The simulation results show that this transfer trajectory scheme reduces C3 and the total velocity increments by 48.80% and 13.20%, respectively.
Interaction of the alpha-toxin of Staphylococcus aureus with the liposome membrane.
Ikigai, H; Nakae, T
1987-02-15
When the liposome membrane is exposed to the alpha-toxin of Staphylococcus aureus, fluorescence of the tryptophan residue(s) of the toxin molecule increases concomitantly with the degree of toxin-hexamer formation (Ikigai, H., and Nakae, T. (1985) Biochem. Biophys. Res. Commun. 130, 175-181). In the present study, the toxin-membrane interaction was distinguished from the hexamer formation by the fluorescence energy transfer from the tryptophan residue(s) of the toxin molecule to the dansylated phosphatidylethanolamine in phosphatidylcholine liposome. Measurement of these two parameters yielded the following results. The effect of the toxin concentration and phospholipid concentration on these two parameters showed first order kinetics. The effect of liposome size on the energy transfer and the fluorescence increment of the tryptophan residue(s) was only detectable in small liposomes. Under moderately acidic or basic conditions, the fluorescence energy transfer always preceded the fluorescence increment of the tryptophan residue(s). The fluorescence increment at 336 nm at temperatures below 20 degrees C showed a latent period, whereas the fluorescence energy transfer did not. These results were thought to indicate that when alpha-toxin damages the target membrane, the molecule interacts with the membrane first, and then undergoes oligomerization within the membrane.
Gilson, L; Mkanje, R; Grosskurth, H; Mosha, F; Picard, J; Gavyole, A; Todd, J; Mayaud, P; Swai, R; Fransen, L; Mabey, D; Mills, A; Hayes, R
A community-randomised trial was undertaken to assess the impact, cost, and cost-effectiveness of averting HIV-1 infection through improved management of sexually transmitted diseases (STDs) by primary-health-care workers in Mwanza Region, Tanzania. The impact of improved treatment services for STDs on HIV-1 incidence was assessed by comparison of six intervention communities with six matched communities. We followed up a random cohort of 12,537 adults aged 15-54 years for 2 years to record incidence of HIV-1 infection. The total and incremental costs of the intervention were estimated (ingredients approach) and used to calculate the total cost per case treated, the incremental cost per HIV-1 infection averted, and the incremental cost per disability-adjusted life-year (DALY) saved. During 2 years of follow-up, 11,632 cases of STDs were treated in the intervention health units. The baseline prevalence of HIV-1 infection was 4%. The incidence of HIV-1 infection during the 2 years was 1.16% in the intervention communities and 1.86% in the comparison communities. An estimated 252 HIV-1 infections were averted each year. The total annual cost of the intervention was US$59,060 (1993 prices), equivalent to $0.39 per head of population served. The cost per STD case treated was $10.15, of which the drug cost was $2.11. The incremental annual cost of the intervention was $54,839, equivalent to $217.62 per HIV-1 infection averted and $10.33 per DALY saved (based on Tanzanian life expectancy) or $9.45 per DALY saved (based on the assumptions of the World Development Report). In a sensitivity analysis of factors influencing cost-effectiveness, cost per DALY saved ranged from $2.51 to $47.86. Improved management of STDs in rural health units reduced the incidence of HIV-1 infection in the general population by about 40%. 
The estimated cost-effectiveness of this intervention ($10 per DALY) compares favourably with that of, for example, childhood immunisation programmes ($12-17 per DALY). Cost-effectiveness should be further improved when the intervention is applied on a larger scale. Resources should be made available for this highly cost-effective HIV control strategy.
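The headline ratios in this abstract follow from simple division; a minimal sketch recomputing them from the reported figures (all values are taken from the abstract, rounding as reported):

```python
# Recompute the Mwanza trial's incremental cost-effectiveness ratios from
# the figures reported in the abstract (US$, 1993 prices).
incremental_annual_cost = 54_839       # incremental annual cost of the intervention
infections_averted_per_year = 252      # estimated HIV-1 infections averted annually

cost_per_infection_averted = incremental_annual_cost / infections_averted_per_year
print(round(cost_per_infection_averted, 2))  # 217.62, matching the reported figure

# The reported $10.33 per DALY saved implies roughly 21 DALYs per infection averted
dalys_per_infection = cost_per_infection_averted / 10.33
print(round(dalys_per_infection, 1))
```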
Atmospheric response to Saharan dust deduced from ECMWF reanalysis increments
NASA Astrophysics Data System (ADS)
Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.
2003-04-01
This study focuses on the atmospheric temperature response to dust deduced from a new source of data - the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The increments reflect not only the lack of desert dust in the models but also a complex combination of other model errors. Over the Sahara desert the dust radiative effect is believed to be a predominant model defect, which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas with high positive correlation (> 0.5), low correlation, and high negative correlation (< -0.5). The innermost positive correlation area (PCA) is a large area near the center of the Sahara desert. For some local maxima inside this area the correlation even exceeds 0.8. The outermost negative correlation area (NCA) is not uniform. It consists of some areas over the eastern and western parts of North Africa with a relatively small amount of dust. Inside those areas both positive and negative high correlations exist at pressure levels ranging from 850 to 700 hPa, with the peak values near 775 hPa. Dust-forced heating (cooling) inside the PCA (NCA) is accompanied by changes in the static stability of the atmosphere above the dust layer. The reanalysis data of the European Centre for Medium-Range Weather Forecasts (ECMWF) suggest that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity, and downward (upward) airflow. These facts indicate an interaction between dust-forced heating/cooling and atmospheric circulation.
The April correlation results are supported by the analysis of the vertical distribution of dust concentration, derived from the 24-hour dust prediction system at Tel Aviv University (website: http://earth.nasa.proj.ac.il/dust/current/). For other months the analysis is more complicated because of the substantial increase in humidity accompanying the northward progress of the ITCZ and its significant impact on the increments.
Timmer-Bonte, Johanna N H; Adang, Eddy M M; Smit, Hans J M; Biesma, Bonne; Wilschut, Frank A; Bootsma, Gerben P; de Boo, Theo M; Tjan-Heijnen, Vivianne C G
2006-07-01
Recently, a Dutch, randomized, phase III trial demonstrated that, in small-cell lung cancer patients at risk of chemotherapy-induced febrile neutropenia (FN), the addition of granulocyte colony-stimulating factor (GCSF) to prophylactic antibiotics significantly reduced the incidence of FN in cycle 1 (24% v 10%; P = .01). We hypothesized that selecting patients at risk of FN might increase the cost-effectiveness of GCSF prophylaxis. Economic analysis was conducted alongside the clinical trial and was focused on the health care perspective. Primary outcome was the difference in mean total costs per patient in cycle 1 between both prophylactic strategies. Cost-effectiveness was expressed as costs per percent-FN-prevented. For the first cycle, the mean incremental costs of adding GCSF amounted to 681 euro (95% CI, -36 to 1,397 euro) per patient. For the entire treatment period, the mean incremental costs were substantial (5,123 euro; 95% CI, 3,908 to 6,337 euro), despite a significant reduction in the incidence of FN and related savings in medical care consumption. The incremental cost-effectiveness ratio was 50 euro per percent decrease of the probability of FN (95% CI, -2 to 433 euro) in cycle 1, and the acceptability for this willingness to pay was approximately 50%. Despite the selection of patients at risk of FN, the addition of GCSF to primary antibiotic prophylaxis did not result in cost savings. If policy makers are willing to pay 240 euro for each percent gain in effect (ie, 3,360 euro for a 14% reduction in FN), the addition of GCSF can be considered cost effective.
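The cycle-1 ICER quoted above is, to rounding, the incremental cost divided by the absolute risk reduction in febrile neutropenia; a minimal check using the rounded figures from the abstract (the reported €50 presumably uses the unrounded trial percentages):

```python
# ICER for cycle 1: incremental cost per percentage-point reduction in the
# probability of febrile neutropenia (figures as rounded in the abstract).
incremental_cost = 681.0     # euro per patient, cycle 1
risk_reduction = 24 - 10     # percentage points (24% v 10%)

icer = incremental_cost / risk_reduction
print(round(icer, 1))        # ~48.6, consistent with the reported 50 euro

# Willingness-to-pay check: 240 euro per percent over a 14-point reduction
print(240 * 14)              # 3360 euro, as quoted in the abstract
```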
Kawazoe, Nobuo; Liu, Guoxiang; Chiang, Chifa; Zhang, Yan; Aoyama, Atsuko
2015-01-01
A new public health insurance scheme has been gradually introduced in rural provinces in China since 2003. This is likely to increase the use of health services. It is known that the association between health insurance coverage and health service utilization varies among different age groups. This study aims to examine the association between extending health insurance coverage and the increase in outpatient service utilization among small children in rural China, and to identify other factors associated with outpatient service utilization. A household survey was conducted in 2 counties in north China in August 2010, targeting 107 selected households with a child aged 12–59 months. The questionnaire included modules on demographic information such as ages of children and parents, enrollment status of health insurance, the number of episodes of illness as perceived by parents, month of incidence of each episode, and outpatient service utilization at each episode. Based on the utilization at each episode of illness, a random effects logistic regression model was employed to analyze the association. It was found that eligibility for the reimbursement of outpatient medical expenses was not significantly associated with the decision to seek care or the choice of health facility. This might be due in part to the low level of reimbursement, which could discourage use by the insured, and in part to the close relationship with village clinic workers, which would encourage use by the uninsured. Three other factors were significantly associated with increased outpatient service utilization: age of the children, mother's education, and the number of children in the household. PMID:26412893
Nazir, J; Hart, W M
2014-06-01
To carry out a cost-utility analysis comparing initial treatment of patients with overactive bladder (OAB) with solifenacin 5 mg/day versus either trospium 20 mg twice a day or trospium 60 mg/day from the perspective of the German National Health Service. A decision analytic model with a 3-month cycle was developed to follow a cohort of OAB patients treated with either solifenacin or trospium during a 1-year period. Costs and utilities were accumulated as patients transitioned through the four cycles in the model. Some of the solifenacin patients were titrated from 5 mg to 10 mg/day at 3 months. Utility values were obtained from the published literature and pad use was based on a US resource utilization study. Adherence rates for individual treatments were derived from a United Kingdom general practitioner database review. The change in the mean number of urgency urinary incontinence episodes/day after 12 weeks was the main outcome measure. Baseline effectiveness values for solifenacin and trospium were calculated using the Poisson distribution. Patients who failed second-line therapy were referred to a specialist visit. Results were expressed in terms of incremental cost-utility ratios. Total annual costs for solifenacin, trospium 20 mg and trospium 60 mg were €970.01, €860.05 and €875.05 respectively. Drug use represented 43%, 28% and 29% of total costs and pad use varied between 45% and 57%. Differences between cumulative utilities were small but favored solifenacin (0.6857 vs. 0.6802 to 0.6800). The baseline incremental cost-effectiveness ratio ranged from €16,657 to €19,893 per QALY. The difference in cumulative utility favoring solifenacin was small (0.0055-0.0057 QALYs). A small absolute change in the cumulative utilities can have a marked impact on the overall incremental cost-effectiveness ratios (ICERs) and care should be taken when interpreting the results. Solifenacin would appear to be cost-effective with an ICER of no more than €20,000/QALY.
However, small differences in utility between the alternatives means that the results are sensitive to adjustments in the values of the assigned utilities, effectiveness and discontinuation rates.
Lorgelly, Paula K.; Dias, Joseph J.; Bradley, Mary J.; Burke, Frank D.
2005-01-01
OBJECTIVE: There is insufficient evidence regarding the clinical and cost-effectiveness of surgical interventions for carpal tunnel syndrome. This study evaluates the cost, effectiveness and cost-effectiveness of minimally invasive surgery compared with conventional open surgery. PATIENTS AND METHODS: 194 sufferers (208 hands) of carpal tunnel syndrome were randomly assigned to each treatment option. A self-administered questionnaire assessed the severity of patients' symptoms and functional status pre- and postoperatively. Treatment costs were estimated from resource use and hospital financial data. RESULTS: Minimally invasive carpal tunnel decompression is marginally more effective than open surgery in terms of functional status, but not significantly so. Little improvement in symptom severity was recorded for either intervention. Minimally invasive surgery was found to be significantly more costly than open surgery. The incremental cost effectiveness ratio for functional status was estimated to be 197 UK pounds, such that a one percentage point improvement in functioning costs 197 UK pounds when using the minimally invasive technique. CONCLUSIONS: Minimally invasive carpal tunnel decompression appears to be more effective but more costly. Initial analysis suggests that the additional expense for such a small improvement in function and no improvement in symptoms would not be regarded as value-for-money, such that minimally invasive carpal tunnel release is unlikely to be considered a cost-effective alternative to the traditional open surgery procedure. PMID:15720906
Polygalacturonase production by calcium alginate immobilized Enterobacter aerogenes NBO2 cells.
Darah, I; Nisha, M; Lim, Sheh-Hong
2015-03-01
Bacterial cells of Enterobacter aerogenes NBO2 were entrapped in calcium alginate beads in order to enhance polygalacturonase production compared to free cells. The optimized condition of 5% (w/v) sodium alginate concentration, agitation speed of 250 rpm, and 15 beads of calcium alginate with an inoculum size of 4% (v/v; 5.4 × 10(7) cells/ml) produced 23.48 U/ml of polygalacturonase, compared with 18.54 U/ml for free cells, an increase of about 26.6% in polygalacturonase production. Moreover, polygalacturonase production by the immobilized cells increased by 296.6% after parameter optimization compared with the unoptimized immobilized condition (5.92 U/ml). This research indicates that optimizing the physical parameters of calcium alginate bead cell immobilization significantly enhanced the production of polygalacturonase.
Nasir, Hina; Javaid, Nadeem; Sher, Muhammad; Qasim, Umar; Khan, Zahoor Ali; Alrajeh, Nabil; Niaz, Iftikhar Azim
2016-01-01
This paper makes a twofold contribution for Underwater Wireless Sensor Networks (UWSNs): a performance analysis of incremental relaying in terms of outage and error probability, and, based on that analysis, two new cooperative routing protocols. For the first contribution, a three-step procedure is carried out: a system model is presented, the number of available relays is determined, and, based on the cooperative incremental retransmission methodology, closed-form expressions for outage and error probability are derived. For the second contribution, Adaptive Cooperation in Energy (ACE) efficient depth-based routing and Enhanced ACE (E-ACE) are presented. In the proposed model, a feedback mechanism indicates success or failure of data transmission. If direct transmission is successful, there is no need for relaying by cooperative relay nodes. In case of failure, all the available relays retransmit the data one by one until the desired signal quality is achieved at the destination. Simulation results show that ACE and E-ACE significantly improve network performance, i.e., throughput, when compared with other incremental relaying protocols like Cooperative Automatic Repeat reQuest (CARQ). E-ACE and ACE achieve 69% and 63% more throughput, respectively, compared to CARQ in harsh underwater environments. PMID:27420061
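The abstract does not reproduce the closed-form derivations, but the incremental-relaying idea (the relay retransmits only when the direct link fails, and the copies are combined) can be illustrated with a hypothetical Monte Carlo sketch. The Rayleigh-fading model (exponentially distributed SNR), the SNR values, and the threshold below are my own illustrative assumptions, not the paper's:

```python
import random

random.seed(0)

def simulate_outage(mean_snr, threshold, trials=50_000, use_relay=True):
    """Estimate outage probability with/without an incremental relay.

    SNRs are drawn from an exponential distribution (Rayleigh fading).
    With incremental relaying, the relay transmits only if the direct
    copy is below threshold; the two copies are then combined.
    """
    outages = 0
    for _ in range(trials):
        direct = random.expovariate(1 / mean_snr)
        if direct >= threshold:
            continue                          # direct transmission succeeded
        if use_relay:
            relay = random.expovariate(1 / mean_snr)
            if direct + relay >= threshold:   # combined copies succeed
                continue
        outages += 1
    return outages / trials

p_direct = simulate_outage(mean_snr=5.0, threshold=3.0, use_relay=False)
p_incr = simulate_outage(mean_snr=5.0, threshold=3.0, use_relay=True)
print(p_direct, p_incr)   # relaying should lower the outage probability
```

The relay spends energy only on failed direct transmissions, which is what makes incremental relaying attractive for energy-constrained UWSNs.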
Kerr, Kathleen F; Meisner, Allison; Thiessen-Philbrook, Heather; Coca, Steven G; Parikh, Chirag R
2014-08-07
The field of nephrology is actively involved in developing biomarkers and improving models for predicting patients' risks of AKI and CKD and their outcomes. However, some important aspects of evaluating biomarkers and risk models are not widely appreciated, and statistical methods are still evolving. This review describes some of the most important statistical concepts for this area of research and identifies common pitfalls. Particular attention is paid to metrics proposed within the last 5 years for quantifying the incremental predictive value of a new biomarker. Copyright © 2014 by the American Society of Nephrology.
NASA Astrophysics Data System (ADS)
Ginzburg, V. N.; Kochetkov, A. A.; Potemkin, A. K.; Khazanov, E. A.
2018-04-01
It has been experimentally confirmed that self-cleaning of a laser beam from spatial noise during propagation in free space makes it possible to efficiently suppress the self-focusing instability without applying spatial filters. Measurements of the instability increment by two independent methods have demonstrated quantitative agreement with theory and high efficiency of small-scale self-focusing suppression. This opens new possibilities for using optical elements operating in transmission (frequency doublers, phase plates, beam splitters, polarisers, etc.) in beams with intensities on the order of a few TW cm⁻².
An electronic system for measuring thermophysical properties of wind tunnel models
NASA Technical Reports Server (NTRS)
Corwin, R. R.; Kramer, J. S.
1975-01-01
An electronic system is described which measures the surface temperature of a small portion of the surface of the model or sample at high speeds using an infrared radiometer. These data are processed along with heating-rate data from the reference heat gauge in a small computer, which prints out the desired thermophysical properties, time, surface temperature, and reference heat rate. This system allows fast and accurate property measurements over thirty temperature increments. The technique, the details of the apparatus, the procedure for making these measurements, and the results of some preliminary tests are presented.
Creation of a small high-throughput screening facility.
Flak, Tod
2009-01-01
The creation of a high-throughput screening facility within an organization is a difficult task, requiring a substantial investment of time, money, and organizational effort. Major issues to consider include the selection of equipment, the establishment of data analysis methodologies, and the formation of a group having the necessary competencies. If done properly, it is possible to build a screening system in incremental steps, adding new pieces of equipment and data analysis modules as the need grows. Based upon our experience with the creation of a small screening service, we present some guidelines to consider in planning a screening facility.
Reversion phenomena of Cu-Cr alloys
NASA Technical Reports Server (NTRS)
Nishikawa, S.; Nagata, K.; Kobayashi, S.
1985-01-01
Cu-Cr alloys which were given various aging and reversion treatments were investigated in terms of electrical resistivity and hardness, with transmission electron microscopy among the techniques employed. Some results obtained are as follows: the increment of electrical resistivity after reversion at a constant temperature decreases as the aging temperature rises. For a constant aging condition, the increment of electrical resistivity after reversion increases, and the time required for maximum reversion becomes shorter, as the reversion temperature rises. The reversion phenomena can be repeated, but their magnitude decreases rapidly with repetition. The amount of reversion at first increases with aging time, reaches a maximum, and then tends to decrease again. Hardness changes caused by reversion are very small, though the alloy tends to soften slightly. No changes in transmission electron micrographs due to the reversion treatment could be detected.
Mouri, Hideaki; Hori, Akihiro; Kawashima, Yoshihide
2004-12-01
The most elementary structures of turbulence, i.e., vortex tubes, are studied using velocity data obtained in a laboratory experiment for boundary layers with Reynolds numbers Re_λ = 295-1258. We conduct conditional averaging for enhancements of a small-scale velocity increment and obtain the typical velocity profile for vortex tubes. Their radii are of the order of the Kolmogorov length. Their circulation velocities are of the order of the root-mean-square velocity fluctuation. We also obtain the distribution of the interval between successive enhancements of the velocity increment as the measure of the spatial distribution of vortex tubes. They tend to cluster together below about the integral length and more significantly below about the Taylor microscale. These properties are independent of the Reynolds number and are hence expected to be universal.
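The conditional-averaging procedure described above can be sketched as: flag positions where the small-scale velocity increment |u(x + r) − u(x)| is unusually large, then average the increment magnitude in a window around each flagged position. The synthetic AR(1) "velocity" signal and all parameter choices below are assumptions for illustration, not the experimental data of the study:

```python
import math
import random

random.seed(1)
n, r, half_window = 10_000, 2, 20

# Synthetic correlated signal standing in for a measured velocity series
u = [0.0] * n
for i in range(1, n):
    u[i] = 0.95 * u[i - 1] + random.gauss(0.0, 1.0)

# Small-scale velocity increment magnitudes at lag r
inc = [abs(u[i + r] - u[i]) for i in range(n - r)]
rms = math.sqrt(sum(d * d for d in inc) / len(inc))
threshold = 3.0 * rms                     # "enhancement" criterion (assumed)

events = [i for i, d in enumerate(inc)
          if d > threshold and half_window <= i < len(inc) - half_window]

# Conditionally averaged profile of the increment magnitude around events;
# it should peak at the event position (k = 0)
profile = [sum(inc[i + k] for i in events) / max(len(events), 1)
           for k in range(-half_window, half_window + 1)]
print(len(events), profile[half_window] > profile[0])
```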
Identification of high shears and compressive discontinuities in the inner heliosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greco, A.; Perri, S.
2014-04-01
Two techniques, the Partial Variance of Increments (PVI) and the Local Intermittency Measure (LIM), have been applied and compared using MESSENGER magnetic field data in the solar wind at a heliocentric distance of about 0.3 AU. The spatial properties of the turbulent field at different scales, spanning the whole inertial range of magnetic turbulence down toward the proton scales, have been studied. The LIM and PVI methodologies allow us to identify portions of an entire time series where magnetic energy is mostly accumulated, and regions of intermittent bursts in the magnetic field vector increments, respectively. A statistical analysis has revealed that at small time scales and for high levels of the threshold, the bursts present in the PVI and the LIM series correspond to regions of high shear stress and high magnetic field compressibility.
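For reference, the PVI measure normalises the magnitude of the field-vector increment at a given lag by its root-mean-square value over the interval, and bursts are flagged where PVI exceeds a threshold. A minimal sketch on synthetic data (the random-walk field components and the threshold of 3 are illustrative assumptions, not MESSENGER data):

```python
import math
import random

def pvi_series(bx, by, bz, lag):
    """PVI(t, tau): |ΔB(t, tau)| normalised by its rms over the series."""
    n = len(bx) - lag
    inc = [math.sqrt((bx[i + lag] - bx[i]) ** 2 +
                     (by[i + lag] - by[i]) ** 2 +
                     (bz[i + lag] - bz[i]) ** 2) for i in range(n)]
    rms = math.sqrt(sum(d * d for d in inc) / n)
    return [d / rms for d in inc]

# Synthetic random-walk "magnetic field" components (illustrative only)
random.seed(0)
def walk(n):
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0.0, 1.0)
        out.append(x)
    return out

bx, by, bz = walk(5000), walk(5000), walk(5000)
p = pvi_series(bx, by, bz, lag=4)
bursts = [i for i, v in enumerate(p) if v > 3.0]   # common burst criterion
print(len(bursts))
```

By construction the mean of PVI² is 1, so the threshold is expressed in units of the typical increment; a Gaussian (non-intermittent) signal like this one produces very few bursts, which is exactly what makes high-PVI events in real solar-wind data interesting.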
Integrating practical, regulatory and ethical strategies for enhancing farm animal welfare.
Mellor, D J; Stafford, K J
2001-11-01
To provide an integrated view of relationships between assessment of animal welfare, societal expectations regarding animal welfare standards, the need for regulation, and two ethical strategies for promoting animal welfare, emphasising farm animals. Ideas in relevant papers and key insights were outlined and illustrated, where appropriate, by New Zealand experience with different facets of the welfare management of farm animals. An animal's welfare is good when its nutritional, environmental, health, behavioural and mental needs are met. Compromise may occur in one or more of these areas and is assessed by scientifically informed best judgement using parameters validated by directed research and objective analysis in clinical and practical settings. There is a wide range of perceptions of what constitutes good and bad welfare in society, so that animal welfare standards cannot be left to individual preferences to determine. Rather, the promotion of animal welfare is seen as requiring central regulation, but managed in a way that allows for adjustments based on new scientific knowledge of animals' needs and changing societal perceptions of what is acceptable and unacceptable treatment of animals. Concepts of 'minimal welfare', representing the threshold of cruelty, and 'acceptable welfare', representing higher, more acceptable standards than those that merely avoid cruelty, are outlined. They are relevant to economic analyses, which deal with determinants of animal welfare standards based on financial costs and the desire of the public to feel broadly comfortable about the treatment of the animals that are used to serve their needs. Ethical strategies for promoting animal welfare can be divided broadly into the 'gold standard' approach and the 'incremental improvement' approach.
The first defines the ideal that is to be required in a particular situation and will accept nothing less than that ideal, whereas the second aims to improve welfare in a step-wise fashion by setting a series of achievable goals, seeing each small advance as worthwhile progress towards the same ideal. 'Incremental improvement' is preferred. This also has application in veterinary practice where the professional commitment to maintain good welfare standards may at times conflict with financial constraints experienced by clients.
[Economic impact of nosocomial bacteraemia. A comparison of three calculation methods].
Riu, Marta; Chiarello, Pietro; Terradas, Roser; Sala, Maria; Castells, Xavier; Knobel, Hernando; Cots, Francesc
2016-12-01
The excess cost associated with nosocomial bacteraemia (NB) is used as a measure of the impact of these infections. However, some authors have suggested that traditional methods overestimate the incremental cost due to the presence of various types of bias. The aim of this study was to compare three assessment methods of NB incremental cost to correct biases in previous analyses. Patients who experienced an episode of NB between 2005 and 2007 were compared with patients grouped within the same All Patient Refined-Diagnosis-Related Group (APR-DRG) without NB. The causative organisms were grouped according to the Gram stain, and whether bacteraemia was caused by a single or multiple microorganisms, or by a fungus. Three assessment methods were compared: stratification by disease; econometric multivariate adjustment using a generalised linear model (GLM); and propensity score matching (PSM), performed to control for biases in the econometric model. The analysis included 640 admissions with NB and 28,459 without NB. The observed mean cost was €24,515 for admissions with NB and €4,851.6 for controls (without NB). Mean incremental cost was estimated at €14,735 in the stratified analysis. Gram-positive microorganisms had the lowest mean incremental cost, €10,051. In the GLM, mean incremental cost was estimated at €20,922, and adjusting with PSM, the mean incremental cost was €11,916. The three estimates showed important differences between groups of microorganisms. Using enhanced methodologies improves the adjustment in this type of study and increases the value of the results. Copyright © 2015 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
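As a schematic illustration of the PSM step (not the study's data or model): once propensity scores have been estimated, 1:1 nearest-neighbour matching pairs each bacteraemia admission with the closest-scoring control, and the incremental cost is the mean within-pair cost difference. All numbers below are invented for illustration:

```python
# Toy 1:1 nearest-neighbour propensity-score matching. Scores are assumed
# to be pre-estimated (e.g. by logistic regression); (score, cost) pairs.
cases    = [(0.82, 25_000), (0.55, 18_000), (0.40, 21_000)]   # with NB
controls = [(0.80,  6_000), (0.52,  5_500), (0.10,  4_800),
            (0.43,  5_900), (0.77,  7_200)]                    # without NB

used = set()
pairs = []
for score, cost in cases:
    # greedy nearest-neighbour match on the propensity score, without replacement
    best = min((j for j in range(len(controls)) if j not in used),
               key=lambda j: abs(controls[j][0] - score))
    used.add(best)
    pairs.append((cost, controls[best][1]))

incremental_cost = sum(case - ctrl for case, ctrl in pairs) / len(pairs)
print(round(incremental_cost))
```

Matching on the propensity score compares like with like, which is why the PSM estimate here (and in the study) tends to fall below the unadjusted case-control difference.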
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vandecaveye, Vincent, E-mail: Vincent.Vandecaveye@uzleuven.be; Dirix, Piet; De Keyzer, Frederik
2012-03-01
Purpose: To evaluate diffusion-weighted imaging (DWI) for assessment of treatment response in head and neck squamous cell carcinoma (HNSCC) three weeks after the end of chemoradiotherapy (CRT). Methods and Materials: Twenty-nine patients with HNSCC underwent magnetic resonance imaging (MRI) prior to and 3 weeks after CRT, including T2-weighted and pre- and postcontrast T1-weighted sequences and an echo-planar DWI sequence with six b values (0 to 1,000 s/mm²), from which the apparent diffusion coefficient (ADC) was calculated. ADC changes 3 weeks posttreatment compared to baseline (ΔADC) between responding and nonresponding primary lesions and adenopathies were correlated with 2-year locoregional control and compared with a Mann-Whitney test. In a blinded manner, the ΔADC was compared to conventional MRI 3 weeks post-CRT and the routinely implemented CT, on average 3 months post-CRT, which used size-related and morphological criteria. Positive and negative predictive values (PPV and NPV, respectively) were compared between the ΔADC and anatomical imaging. Results: The ΔADC of lesions with later tumor recurrence was significantly lower than that of lesions with complete remission for both primary lesions (-2.3% ± 0.3% vs. 80% ± 41%; p < 0.0001) and adenopathies (19.9% ± 32% vs. 63% ± 36%; p = 0.003). The ΔADC showed a PPV of 89% and an NPV of 100% for primary lesions and a PPV of 70% and an NPV of 96% for adenopathies per neck side. DWI improved PPV and NPV compared to anatomical imaging. Conclusion: DWI with the ΔADC 3 weeks after concluding CRT for HNSCC allows for early assessment of treatment response.
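The ΔADC used here is a simple percent change relative to the pretreatment value; a one-line sketch (the example ADC values are illustrative, in units of 10⁻³ mm²/s, not patient data):

```python
# Percent ADC change, 3 weeks post-CRT relative to baseline.
def delta_adc_percent(adc_pre, adc_post):
    return (adc_post - adc_pre) / adc_pre * 100.0

# Illustrative values: a responding lesion's ADC rising from 1.0 to 1.8
# gives a large positive change, in line with the responders above.
print(delta_adc_percent(1.0, 1.8))
```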
GPU-accelerated Kernel Regression Reconstruction for Freehand 3D Ultrasound Imaging.
Wen, Tiexiang; Li, Ling; Zhu, Qingsong; Qin, Wenjian; Gu, Jia; Yang, Feng; Xie, Yaoqin
2017-07-01
The volume reconstruction method plays an important role in improving reconstructed volumetric image quality for freehand three-dimensional (3D) ultrasound imaging. By utilizing the capability of the programmable graphics processing unit (GPU), we can achieve real-time incremental volume reconstruction at a speed of 25-50 frames per second (fps). After incremental reconstruction and visualization, hole-filling is performed on the GPU to fill remaining empty voxels. However, traditional pixel-nearest-neighbor-based hole-filling fails to reconstruct volumes with high image quality. By contrast, kernel regression provides an accurate volume reconstruction method for 3D ultrasound imaging, but at the cost of heavy computational complexity. In this paper, a fast GPU-based kernel regression method is proposed for high-quality volume reconstruction after the incremental reconstruction of freehand ultrasound. The experimental results show that improved image quality for speckle reduction and detail preservation can be obtained with the parameter setting of a kernel window size of [Formula: see text] and a kernel bandwidth of 1.0. The computational performance of the proposed GPU-based method can be over 200 times faster than that on a central processing unit (CPU), and the volume with a size of 50 million voxels in our experiment can be reconstructed within 10 seconds.
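Kernel regression in this setting means estimating each voxel as a kernel-weighted average of nearby scattered samples (Nadaraya-Watson form). A minimal one-dimensional CPU sketch with invented sample data; the bandwidth of 1.0 follows the abstract, while the window size here is an arbitrary assumption (the abstract's value did not survive extraction), and the paper's GPU implementation is not reproduced:

```python
import math

def kernel_regress(xs, ys, grid, bandwidth=1.0, window=3.0):
    """Nadaraya-Watson estimate at each grid point from scattered samples."""
    out = []
    for g in grid:
        num = den = 0.0
        for x, y in zip(xs, ys):
            if abs(x - g) > window:        # restrict to the kernel window
                continue
            w = math.exp(-((x - g) ** 2) / (2 * bandwidth ** 2))
            num += w * y
            den += w
        out.append(num / den if den else None)   # None marks an unfillable hole
    return out

xs = [0.0, 0.9, 2.1, 3.2, 4.0]    # scattered sample positions (illustrative)
ys = [0.0, 1.0, 2.0, 3.0, 4.0]    # sample intensities (illustrative)
vals = kernel_regress(xs, ys, grid=[0.5, 2.0, 3.5])
print(vals)
```

Unlike nearest-neighbor hole-filling, every sample inside the window contributes with a smoothly decaying weight, which is what yields the speckle reduction and detail preservation reported above.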
Does cerebral oxygen delivery limit incremental exercise performance?
Olin, J. Tod; Dimmen, Andrew C.; Polaner, David M.; Kayser, Bengt; Roach, Robert C.
2011-01-01
Previous studies have suggested that a reduction in cerebral oxygen delivery may limit motor drive, particularly in hypoxic conditions, where oxygen transport is impaired. We hypothesized that raising end-tidal Pco2 (PetCO2) during incremental exercise would increase cerebral blood flow (CBF) and oxygen delivery, thereby improving peak power output (Wpeak). Amateur cyclists performed two ramped exercise tests (25 W/min) in a counterbalanced order to compare the normal, poikilocapnic response against a clamped condition, in which PetCO2 was held at 50 Torr throughout exercise. Tests were performed in normoxia (barometric pressure = 630 mmHg, 1,650 m) and hypoxia (barometric pressure = 425 mmHg, 4,875 m) in a hypobaric chamber. An additional trial in hypoxia investigated effects of clamping at a lower PetCO2 (40 Torr) from ∼75 to 100% Wpeak to reduce potential influences of respiratory acidosis and muscle fatigue imposed by clamping PetCO2 at 50 Torr. Metabolic gases, ventilation, middle cerebral artery CBF velocity (transcranial Doppler), forehead pulse oximetry, and cerebral (prefrontal) and muscle (vastus lateralis) hemoglobin oxygenation (near infrared spectroscopy) were monitored across trials. Clamping PetCO2 at 50 Torr in both normoxia (n = 9) and hypoxia (n = 11) elevated CBF velocity (∼40%) and improved cerebral hemoglobin oxygenation (∼15%), but decreased Wpeak (6%) and peak oxygen consumption (11%). Clamping at 40 Torr near maximal effort in hypoxia (n = 6) also improved cerebral oxygenation (∼15%), but again limited Wpeak (5%). These findings demonstrate that increasing mass cerebral oxygen delivery via CO2-mediated vasodilation does not improve incremental exercise performance, at least when accompanied by respiratory acidosis. PMID:21921244
for a variety of energy efficiency improvements, including AFV conversions and incremental costs; loans cover up to 100% of project costs, ranging from $50,000 to $1 million, and must be repaid after one
Railroad decision support tools for track maintenance.
DOT National Transportation Integrated Search
2016-01-01
North American railroads spend billions of dollars each year on track maintenance. With expenditures of this level, incremental improvements in planning or execution of maintenance projects can result in either substantial savings or the ability to...
Aakre, Kenneth T; Valley, Timothy B; O'Connor, Michael K
2010-03-01
Lean Six Sigma process improvement methodologies have been used in manufacturing for some time. However, Lean Six Sigma process improvement methodologies also are applicable to radiology as a way to identify opportunities for improvement in patient care delivery settings. A multidisciplinary team of physicians and staff conducted a 100-day quality improvement project with the guidance of a quality advisor. By using the framework of DMAIC (define, measure, analyze, improve, and control), time studies were performed for all aspects of patient and technologist involvement. From these studies, value stream maps for the current state and for the future were developed, and tests of change were implemented. Comprehensive value stream maps showed that before implementation of process changes, an average time of 20.95 minutes was required for completion of a bone densitometry study. Two process changes (ie, tests of change) were undertaken. First, the location for completion of a patient assessment form was moved from inside the imaging room to the waiting area, enabling patients to complete the form while waiting for the technologist. Second, the patient was instructed to sit in a waiting area immediately outside the imaging rooms, rather than in the main reception area, which is far removed from the imaging area. Realignment of these process steps, with reduced technologist travel distances, resulted in a 3-minute average decrease in the patient cycle time. This represented a 15% reduction in the initial patient cycle time with no change in staff or costs. Radiology process improvement projects can yield positive results despite small incremental changes.
Small Diameter Bomb Increment II (SDB II)
2013-12-01
in 2013: Electromagnetic Environments and Effects, and Hazards of Electromagnetic Radiation to Ordnance. Reliability Growth Testing started in June... (SDB II December 2013 SAR, April 16, 2014.) Acronyms: EMC - Electromagnetic Compatibility; EMI - Electromagnetic Interference; GESP - GIG Enterprise Service Profiles; GIG - Global Information Grid.
Mechanical Limits to Size in Wave-Swept Organisms.
1983-11-10
complanata, the probability of destruction and the size-specific increase in the risk of destruction are both substantial. It is conjectured that the...barnacle, Semibalanus cariosus) the size-specific increment in the risk of destruction is small and the size limits imposed on these organisms are...constructed here provides an experimental approach to examining many potential effects of environmental stress caused by flowing water. For example, these
Hatoum-Aslan, Asma; Samai, Poulami; Maniv, Inbal; Jiang, Wenyan; Marraffini, Luciano A.
2013-01-01
Small RNAs undergo maturation events that precisely determine the length and structure required for their function. CRISPRs (clustered regularly interspaced short palindromic repeats) encode small RNAs (crRNAs) that together with CRISPR-associated (cas) genes constitute a sequence-specific prokaryotic immune system for anti-viral and anti-plasmid defense. crRNAs are subject to multiple processing events during their biogenesis, and little is known about the mechanism of the final maturation step. We show that in the Staphylococcus epidermidis type III CRISPR-Cas system, mature crRNAs are measured in a Cas10·Csm ribonucleoprotein complex to yield discrete lengths that differ by 6-nucleotide increments. We looked for mutants that impact this crRNA size pattern and found that an alanine substitution of a conserved aspartate residue of Csm3 eliminates the 6-nucleotide increments in the length of crRNAs. In vitro, recombinant Csm3 binds RNA molecules at multiple sites, producing gel-shift patterns that suggest that each protein binds 6 nucleotides of substrate. In vivo, changes in the levels of Csm3 modulate the crRNA size distribution without disrupting the 6-nucleotide periodicity. Our data support a model in which multiple Csm3 molecules within the Cas10·Csm complex bind the crRNA with a 6-nucleotide periodicity to function as a ruler that measures the extent of crRNA maturation. PMID:23935102
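The ruler model above implies that mature crRNA lengths form an arithmetic series with a common difference of 6 nt. A toy sketch of that idea (the minimal length and number of bound subunits below are illustrative, not values from the paper):

```python
# Each Csm3 subunit is proposed to protect 6 nt of crRNA, so mature
# species differ in length by 6-nt increments from a minimal length.
minimal_length_nt = 31                 # illustrative only
csm3_copies = range(4)                 # illustrative number of bound subunits
lengths = [minimal_length_nt + 6 * k for k in csm3_copies]
print(lengths)                         # [31, 37, 43, 49]
diffs = {b - a for a, b in zip(lengths, lengths[1:])}
print(diffs)                           # {6}
```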
Excimer laser therapy and narrowband ultraviolet B therapy for exfoliative cheilitis.
Bhatia, Bhavnit K; Bahr, Brooks A; Murase, Jenny E
2015-06-01
Exfoliative cheilitis is a condition of unknown etiology characterized by hyperkeratosis and scaling of the vermilion epithelium with cyclic desquamation. It remains largely refractory to treatment, including corticosteroid therapy, antibiotics, antifungals, and immunosuppressants. We sought to evaluate the safety and efficacy of excimer laser therapy and narrowband ultraviolet B therapy in female patients with refractory exfoliative cheilitis. We reviewed the medical records of two female patients who had been treated unsuccessfully for exfoliative cheilitis. We implemented excimer laser therapy, followed by hand-held narrowband UVB treatments for maintenance therapy, and followed the patients for clinical improvement and adverse effects. Both patients experienced significant clinical improvement with minimal adverse effects from excimer laser therapy at 600-700 mJ/cm2 twice weekly for several months. The most common adverse effects were bleeding and burning, which occurred at higher doses. The hand-held narrowband UVB unit was also an effective maintenance tool. Limitations include small sample size and lack of standardization of starting dose and dose increments. Excimer laser therapy is a well-tolerated and effective treatment for refractory exfoliative cheilitis with twice-weekly laser treatments of up to 700 mJ/cm2. Transitioning to the hand-held narrowband UVB device was also an effective maintenance strategy.
USDA-ARS?s Scientific Manuscript database
Materials and Methods The simulation exercise and model improvement were implemented in phases. In the first modelling activities, the model sensitivities were evaluated for CO2 concentrations varying from 360 to 720 µmol mol-1 at intervals of 90 µmol mol-1 and air temperature increments...
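A minimal sketch of the CO2 treatment grid described above (the variable name is illustrative; the temperature increments are truncated in the source and so are not reconstructed here):

```python
# CO2 treatment levels: 360 to 720 umol mol-1 in steps of 90.
co2_levels = list(range(360, 721, 90))
print(co2_levels)  # [360, 450, 540, 630, 720]
```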
ERIC Educational Resources Information Center
Burns, Matthew K.; Dean, Vincent J.; Foley, Sarah
2004-01-01
Research has consistently demonstrated that strategic preteaching activities lead to improved reading fluency, but studies examining the effect on reading comprehension have been lacking. The current study investigated the effect of teaching unknown key words as a preteaching strategy with 20 students identified as learning disabled in basic reading skills…
2018-01-01
Our first aim was to compare the anaerobic threshold (AnT) determined by the incremental protocol with the reverse lactate threshold test (RLT), investigating the effect of previous cycling experience. Secondarily, an alternative RLT application based on heart rate was proposed. Two groups (12 per group, according to cycling experience) were evaluated on a cycle ergometer. The incremental protocol started at 25 W with increments of 25 W every 3 minutes, and the AnT was calculated by the bissegmentation, onset of blood lactate accumulation, and maximal deviation methods. The RLT was applied in two phases: a) lactate priming segment; and b) reverse segment; the AnT (AnTRLT) was calculated from a second-order polynomial function. The AnT from the RLT was also calculated from heart rate (AnTRLT-HR) using a second-order polynomial function. With regard to Study 1, most of the statistical procedures indicated similarity between the AnT determined by the bissegmentation method and AnTRLT. For 83% of non-experienced and 75% of experienced subjects the bias was 4% and 2%, respectively. In Study 2, no difference was found between the AnTRLT and AnTRLT-HR. For 83% of non-experienced and 91% of experienced subjects, the bias between AnTRLT and AnTRLT-HR was similar (i.e. 6%). In summary, the AnT determined by the incremental protocol and the RLT are consistent. The AnT can be determined during the RLT via heart rate, improving its applicability. However, future studies are required to improve the agreement between variables. PMID:29534108
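The second-order polynomial step can be sketched as follows. The lactate values below are made up for illustration, and taking the threshold at the apex of the fitted parabola is one common way to implement this step, not necessarily the exact procedure used in the study:

```python
import numpy as np

# Reverse segment of an RLT: workload decreases stepwise; lactate keeps
# rising briefly, then falls. Fit a 2nd-order polynomial to lactate vs.
# workload and take the threshold at the apex of the parabola.
workload_w = np.array([180.0, 160.0, 140.0, 120.0, 100.0])  # watts
lactate = np.array([5.0, 5.6, 6.0, 5.8, 5.1])               # mmol/L, made up

a, b, c = np.polyfit(workload_w, lactate, 2)
ant_rlt = -b / (2 * a)   # apex of the parabola = estimated threshold
print(round(ant_rlt, 1)) # apex lies between 120 and 160 W for these data
```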
Levy, Jonathan I; Biton, Leiran; Hopke, Philip K; Zhang, K Max; Rector, Lisa
2017-07-01
Biomass facilities have received increasing attention as a strategy to increase the use of renewable fuels and decrease greenhouse gas emissions from the electric generation and heating sectors, but these facilities can potentially increase local air pollution and associated health effects. Comparing the economic costs and public health benefits of alternative biomass fuel, heating technology, and pollution control technology options provides decision-makers with the necessary information to make optimal choices in a given location. For a case study of a combined heat and power biomass facility in Syracuse, New York, we used stack testing to estimate emissions of fine particulate matter (PM2.5) for both the deployed technology (staged combustion pellet boiler with an electrostatic precipitator) and a conventional alternative (wood chip stoker boiler with a multicyclone). We used the atmospheric dispersion model AERMOD to calculate the contribution of either fuel-technology configuration to ambient primary PM2.5 in a 10 km × 10 km region surrounding the facility, and we quantified the incremental contribution to population mortality and morbidity. We assigned economic values to health outcomes and compared the health benefits of the lower-emitting technology with the incremental costs. In total, the incremental annualized cost of the lower-emitting pellet boiler was $190,000 greater, driven by the greater cost of the pellet fuel and pollution control technology, offset in part by reduced fuel storage costs. PM2.5 emissions were a factor of 23 lower with the pellet boiler with electrostatic precipitator, with corresponding differences in contributions to ambient primary PM2.5 concentrations. The monetary value of the public health benefits of selecting the pellet-fired boiler technology with electrostatic precipitator was $1.7 million annually, greatly exceeding the differential costs even when accounting for uncertainties.
Our analyses also showed complex spatial patterns of health benefits, given non-uniform age distributions and air pollution levels. The incremental investment in a lower-emitting staged combustion pellet boiler with an electrostatic precipitator was well justified by the population health improvements over the conventional wood chip technology with a multicyclone, even given the focus on only primary PM2.5 within a small spatial domain. Our analytical framework could be generalized to other settings to inform optimal strategies for proposed new facilities or populations. Copyright © 2017. Published by Elsevier Inc.
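The bottom-line comparison reported above is simple arithmetic (figures from the abstract, in US dollars per year):

```python
# Annualized figures from the abstract.
incremental_cost = 190_000   # extra cost of the pellet boiler + precipitator
health_benefit = 1_700_000   # monetized public health benefit
net_benefit = health_benefit - incremental_cost
benefit_cost_ratio = health_benefit / incremental_cost
print(net_benefit, round(benefit_cost_ratio, 1))  # 1510000 8.9
```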
Energy and Cost Associated with Ventilating Office Buildings in a Tropical Climate
Rim, Donghyun; Schiavon, Stefano; Nazaroff, William W.
2015-01-01
Providing sufficient amounts of outdoor air to occupants is a critical building function for supporting occupant health, well-being and productivity. In tropical climates, high ventilation rates require substantial amounts of energy to cool and dehumidify supply air. This study evaluates the energy consumption and associated cost for thermally conditioning outdoor air provided for building ventilation in tropical climates, considering Singapore as an example locale. We investigated the influence on energy consumption and cost of the following factors: outdoor air temperature and humidity, ventilation rate (L/s per person), indoor air temperature and humidity, air conditioning system coefficient of performance (COP), and cost of electricity. Results show that dehumidification of outdoor air accounts for more than 80% of the energy needed for building ventilation in Singapore’s tropical climate. Improved system performance and/or a small increase in the indoor temperature set point would permit relatively large ventilation rates (such as 25 L/s per person) at modest or no cost increment. Overall, even in a thermally demanding tropical climate, the energy cost associated with increasing ventilation rate up to 25 L/s per person is less than 1% of the wages of an office worker in an advanced economy like Singapore’s. This result implies that the benefits of increasing outdoor air ventilation rate up to 25 L/s per person — which is suggested to provide for productivity increases, lower sick building syndrome symptom prevalence, and reduced sick leave — can be much larger than the incremental cost of ventilation. PMID:25822504
NASA Astrophysics Data System (ADS)
Carette, Yannick; Vanhove, Hans; Duflou, Joost
2018-05-01
Single Point Incremental Forming is a flexible process that is well-suited for small batch production and rapid prototyping of complex sheet metal parts. The distributed nature of the deformation process and the unsupported sheet imply that controlling the final accuracy of the workpiece is challenging. To improve the process limits and the accuracy of SPIF, the use of multiple forming passes has been proposed and discussed by a number of authors. Most methods use multiple intermediate models, where the previous one is strictly smaller than the next one, while gradually increasing the workpieces' wall angles. Another method that can be used is the manufacture of a smoothed-out "base geometry" in the first pass, after which more detailed features can be added in subsequent passes. In both methods, the selection of these intermediate shapes is freely decided by the user. However, their practical implementation in the production of complex freeform parts is not straightforward. The original CAD model can be manually adjusted or completely new CAD models can be created. This paper discusses an automatic method that is able to extract the base geometry from a full STL-based CAD model in an analytical way. Harmonic decomposition is used to express the final geometry as the sum of individual surface harmonics. It is then possible to filter these harmonic contributions to obtain a new CAD model with a desired level of geometric detail. This paper explains the technique and its implementation, as well as its use in the automatic generation of multi-step geometries.
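The filtering idea can be illustrated with a regular height grid and a 2-D FFT. This is a hedged sketch of harmonic low-pass filtering in general, not the STL-based surface harmonic decomposition the paper implements; all names and cutoff values are illustrative:

```python
import numpy as np

# Express a surface as a sum of harmonics (2-D FFT on a height grid) and
# keep only low-frequency terms to obtain a smoothed "base geometry" for
# the first forming pass; the removed terms are the detail features.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
xx, yy = np.meshgrid(x, x)
surface = np.sin(xx) + 0.2 * np.sin(8 * xx) * np.cos(8 * yy)  # base + detail

spectrum = np.fft.fft2(surface)
k = np.abs(np.fft.fftfreq(n, d=1 / n))         # integer spatial frequencies
kmask = (k[:, None] <= 3) & (k[None, :] <= 3)  # keep harmonics up to order 3
base = np.fft.ifft2(spectrum * kmask).real     # low-pass: base geometry only

# The 8-cycle ripple is removed; what remains is essentially sin(xx).
print(np.max(np.abs(base - np.sin(xx))) < 0.05)  # True
```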
Araya, Ricardo; Flynn, Terry; Rojas, Graciela; Fritsch, Rosemarie; Simon, Greg
2006-08-01
The authors compared the incremental cost-effectiveness of a stepped-care, multicomponent program with usual care for the treatment of depressed women in primary care in Santiago, Chile. A cost-effectiveness study was conducted of a previous randomized controlled trial involving 240 eligible women with DSM-IV major depression who were selected from a consecutive sample of adult women attending primary care clinics. The patients were randomly allocated to usual care or a multicomponent stepped-care program led by a nonmedical health care worker. Depression-free days and health care costs derived from local sources were assessed after 3 and 6 months. A health service perspective was used in the economic analysis. Complete data were determined for 80% of the randomly assigned patients. After we adjusted for initial severity, women receiving the stepped-care program had a mean of 50 additional depression-free days over 6 months relative to patients allocated to usual care. The stepped-care program was marginally more expensive than usual care (an extra 216 Chilean pesos per depression-free day). There was a 90% probability that the incremental cost of obtaining an extra depression-free day with the intervention would not exceed 300 pesos (1.04 US dollars). The stepped-care program was significantly more effective and marginally more expensive than usual care for the treatment of depressed women in primary care. Small investments to improve depression appear to yield larger gains in poorer environments. Simple and inexpensive treatment programs tested in developing countries might provide good study models for developed countries.
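The cost-effectiveness figures combine as follows. The per-day ratio and the day count are the values reported above; the implied total is my multiplication, not a number from the paper:

```python
# Reported incremental effect and incremental cost-effectiveness ratio.
extra_depression_free_days = 50   # over 6 months, vs. usual care
icer_pesos_per_day = 216          # extra pesos per depression-free day
implied_incremental_cost = icer_pesos_per_day * extra_depression_free_days
print(implied_incremental_cost)   # 10800 pesos per patient over 6 months
```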
López-Gómez, Miguel; Hidalgo-Castellanos, Javier; Muñoz-Sánchez, J Rubén; Marín-Peña, Agustín J; Lluch, Carmen; Herrera-Cervera, José A
2017-07-01
Polyamines (PAs) such as spermidine (Spd) and spermine (Spm) are small, ubiquitous polycationic compounds that contribute to plant adaptation to salt stress. The positive effect of PAs has been associated with cross-talk with other anti-stress hormones such as brassinosteroids (BRs). In this work we studied the effects of exogenous Spd and Spm pre-treatments on the response to salt stress of the symbiotic interaction between Medicago truncatula and Sinorhizobium meliloti by analyzing parameters related to nitrogen fixation, oxidative damage, and cross-talk with BRs in the response to salinity. Exogenous PA treatments increased the foliar and nodular Spd and Spm content, which correlated with an increase in nodule biomass and nitrogenase activity. Exogenous Spm treatment partially prevented proline accumulation, which suggests that this polyamine could replace the role of this amino acid in the salt stress response. Additionally, Spd and Spm pre-treatments reduced the levels of H2O2 and lipid peroxidation under salt stress. PAs induced the expression of genes involved in BR biosynthesis, which supports cross-talk between PAs and BRs in the salt stress response of the M. truncatula-S. meliloti symbiosis. In conclusion, exogenous PAs improved the response to salinity of the M. truncatula-S. meliloti symbiosis by reducing the oxidative damage induced under salt stress conditions. In addition, in this work we provide evidence of the cross-talk between PAs and BRs in the adaptive responses to salinity. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Anderson, Christian E; Wang, Charlie Y; Gu, Yuning; Darrah, Rebecca; Griswold, Mark A; Yu, Xin; Flask, Chris A
2018-04-01
The regularly incremented phase encoding-magnetic resonance fingerprinting (RIPE-MRF) method is introduced to limit the sensitivity of preclinical MRF assessments to pulsatile and respiratory motion artifacts. As compared to previously reported standard Cartesian-MRF methods (SC-MRF), the proposed RIPE-MRF method uses a modified Cartesian trajectory that varies the acquired phase-encoding line within each dynamic MRF dataset. Phantoms and mice were scanned without gating or triggering on a 7T preclinical MRI scanner using the RIPE-MRF and SC-MRF methods. In vitro phantom longitudinal relaxation time (T1) and transverse relaxation time (T2) measurements, as well as in vivo liver assessments of artifact-to-noise ratio (ANR) and MRF-based T1 and T2 mean and standard deviation, were compared between the two methods (n = 5). RIPE-MRF showed significant ANR reductions in regions of pulsatility (P < 0.005) and respiratory motion (P < 0.0005). RIPE-MRF also exhibited improved precision in T1 and T2 measurements in comparison to the SC-MRF method (P < 0.05). The RIPE-MRF and SC-MRF methods displayed similar mean T1 and T2 estimates (difference in mean values < 10%). These results show that the RIPE-MRF method can provide effective motion artifact suppression with minimal impact on T1 and T2 accuracy for in vivo small animal MRI studies. Magn Reson Med 79:2176-2182, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Khansaritoreh, Elmira; Dulamsuren, Choimaa; Klinge, Michael; Ariunbaatar, Tumurbaatar; Bat-Enerel, Banzragch; Batsaikhan, Ganbaatar; Ganbaatar, Kherlenchimeg; Saindovdon, Davaadorj; Yeruult, Yolk; Tsogtbaatar, Jamsran; Tuya, Daramragchaa; Leuschner, Christoph; Hauck, Markus
2017-09-01
Forest fragmentation has been found to affect biodiversity and ecosystem functioning in multiple ways. We asked whether forest size and isolation in fragmented woodlands influence the climate warming sensitivity of tree growth in the southern boreal forest of the Mongolian Larix sibirica forest steppe, a naturally fragmented woodland embedded in grassland that is highly affected by warming, drought, and increasing anthropogenic forest destruction in recent times. We examined the influence of stand size and stand isolation on the growth performance of larch in forests of four different size classes located in a woodland-dominated forest-steppe area and in small forest patches in a grassland-dominated area. We found increasing climate sensitivity and decreasing first-order autocorrelation of annual stemwood increment with decreasing stand size. Stemwood increment increased with the previous year's June and August precipitation in the three smallest forest size classes, but not in the largest forests. In the grassland-dominated area, the dependence of tree growth on summer rainfall was highest. Missing-ring frequency has strongly increased since the 1970s in small, but not in large, forests. In the grassland-dominated area, the increase was much greater than in the forest-dominated landscape. Forest regeneration decreased with decreasing stand size and was scarce or absent in the smallest forests. Our results suggest that larch trees in small and isolated forest patches are far more susceptible to climate warming than those in large continuous forests, pointing to a grim future for the forests in this strongly warming region of the boreal forest, which is also under high land use pressure. © 2017 John Wiley & Sons Ltd.
Enhanced proximity warning system (EPWS) for locomotives
DOT National Transportation Integrated Search
1997-10-01
The primary focus of the Enhanced Proximity Warning System (EPWS) is to provide a cost-effective means of improving the safety of railroad operations, with the ability to be implemented incrementally, using a building-block approach. The main safety objective...
Planar isotropy of passive scalar turbulent mixing with a mean perpendicular gradient.
Danaila, L; Dusek, J; Le Gal, P; Anselmet, F; Brun, C; Pumir, A
1999-08-01
A recently proposed evolution equation [Vaienti et al., Physica D 85, 405 (1994)] for the probability density functions (PDF's) of turbulent passive scalar increments obtained under the assumptions of fully three-dimensional homogeneity and isotropy is submitted to validation using direct numerical simulation (DNS) results of the mixing of a passive scalar with a nonzero mean gradient by a homogeneous and isotropic turbulent velocity field. It is shown that this approach leads to a quantitatively correct balance between the different terms of the equation, in a plane perpendicular to the mean gradient, at small scales and at large Péclet number. A weaker assumption of homogeneity and isotropy restricted to the plane normal to the mean gradient is then considered to derive an equation describing the evolution of the PDF's as a function of the spatial scale and the scalar increments. A very good agreement between the theory and the DNS data is obtained at all scales. As a particular case of the theory, we derive a generalized form for the well-known Yaglom equation (the isotropic relation between the second-order moments for temperature increments and the third-order velocity-temperature mixed moments). This approach allows us to determine quantitatively how the integral scale properties influence the properties of mixing throughout the whole range of scales. In the simple configuration considered here, the PDF's of the scalar increments perpendicular to the mean gradient can be theoretically described once the sources of inhomogeneity and anisotropy at large scales are correctly taken into account.
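For reference, the inertial-range form of the Yaglom relation mentioned above is commonly written as follows (notation assumed here: $\delta u_{\parallel}$ is the longitudinal velocity increment over separation $r$, $\delta\theta$ the scalar increment, and $\bar{\chi}$ the mean scalar dissipation rate):

```latex
-\left\langle \delta u_{\parallel}\,(\delta\theta)^{2} \right\rangle \;=\; \frac{4}{3}\,\bar{\chi}\, r
```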
NASA Astrophysics Data System (ADS)
Chen, Zhi; Hu, Kun; Stanley, H. Eugene; Novak, Vera; Ivanov, Plamen Ch.
2006-03-01
We investigate the relationship between the blood flow velocities (BFV) in the middle cerebral arteries and beat-to-beat blood pressure (BP) recorded from a finger in healthy and post-stroke subjects during the quasisteady state after perturbation for four different physiologic conditions: supine rest, head-up tilt, hyperventilation, and CO2 rebreathing in upright position. To evaluate whether instantaneous BP changes in the steady state are coupled with instantaneous changes in the BFV, we compare dynamical patterns in the instantaneous phases of these signals, obtained from the Hilbert transform, as a function of time. We find that in post-stroke subjects the instantaneous phase increments of BP and BFV exhibit well-pronounced patterns that remain stable in time for all four physiologic conditions, while in healthy subjects these patterns are different, less pronounced, and more variable. We propose an approach based on the cross-correlation of the instantaneous phase increments to quantify the coupling between BP and BFV signals. We find that the maximum correlation strength is different for the two groups and for the different conditions. For healthy subjects the amplitude of the cross-correlation between the instantaneous phase increments of BP and BFV is small and attenuates within 3-5 heartbeats. In contrast, for post-stroke subjects, this amplitude is significantly larger and cross-correlations persist up to 20 heartbeats. Further, we show that the instantaneous phase increments of BP and BFV are cross-correlated even within a single heartbeat cycle. We compare the results of our approach with three complementary methods: direct BP-BFV cross-correlation, transfer function analysis, and phase synchronization analysis. 
Our findings provide insight into the mechanism of cerebral vascular control in healthy subjects, suggesting that this control mechanism may involve rapid adjustments (within a heartbeat) of the cerebral vessels, so that BFV remains steady in response to changes in peripheral BP.
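The phase-increment cross-correlation approach described above can be sketched with synthetic signals standing in for the BP and BFV recordings. This is an illustrative implementation under stated assumptions (a shared slow phase drift models the coupling; the analytic signal is built via FFT), not the authors' exact analysis pipeline:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal (Hilbert transform) via one-sided FFT spectrum."""
    n = x.size
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

def phase_increments(x):
    phase = np.unwrap(np.angle(analytic_signal(x)))  # instantaneous phase
    return np.diff(phase)                            # increments per sample

fs = 50.0                                    # Hz, assumed sampling rate
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
common = np.cumsum(rng.normal(0.0, 0.02, t.size))  # shared slow phase drift
bp = np.sin(2 * np.pi * 1.0 * t + common)          # ~1 Hz "blood pressure"
bfv = np.sin(2 * np.pi * 1.0 * t + common + 0.3)   # coupled "flow velocity"

dbp = phase_increments(bp)[50:-50]    # trim Hilbert edge artifacts
dbfv = phase_increments(bfv)[50:-50]
r = np.corrcoef(dbp, dbfv)[0, 1]      # zero-lag coupling strength
print(r > 0.8)                        # strongly coupled phase increments
```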
How to set the stage for a full-fledged clinical trial testing 'incremental haemodialysis'.
Casino, Francesco Gaetano; Basile, Carlo
2017-07-21
Most people who make the transition to maintenance haemodialysis (HD) therapy are treated with a fixed-dose, thrice-weekly HD (3HD/week) regimen without consideration of their residual kidney function (RKF). The RKF provides an effective and naturally continuous clearance of both small and middle molecules, plays a major role in metabolic homeostasis, nutritional status and cardiovascular health, and aids in fluid management. The RKF is associated with better patient survival and greater health-related quality of life. Its preservation is instrumental to the prescription of incremental (1HD/week to 2HD/week) HD. The recently heightened interest in incremental HD has been hindered by the current limitations of the urea kinetic model (UKM), which tends to overestimate the needed dialysis dose in the presence of a substantial RKF. A recent paper by Casino and Basile suggested a variable target model (VTM), which gives more clinical weight to the RKF and allows less frequent HD treatments at lower RKF, as opposed to the fixed target model, which is based on the erroneous assumption that renal and dialysis clearances are clinically equivalent. A randomized controlled trial (RCT) enrolling incident patients, comparing incremental HD (prescribed according to the VTM) with the standard 3HD/week schedule, and focused on hard outcomes, such as survival and health-related quality of life, is urgently needed. The first step in designing such a study is to compute the 'adequacy lines' and the associated fitting equations necessary for the most appropriate allocation of the patients to the two arms and their correct and safe follow-up. In conclusion, the potentially important clinical and financial implications of incremental HD render it highly promising and warrant RCTs. The UKM is the keystone for conducting such studies. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
Environmental monitoring at Mound: 1986 report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carfagno, D.G.; Farmer, B.M.
1987-05-11
The local environment around Mound was monitored for tritium and plutonium-238. The results are reported for 1986. Environmental media analyzed included air, water, vegetation, foodstuffs, and sediment. The average concentrations of plutonium-238 and tritium were within the DOE interim air and water Derived Concentration Guides (DCG) for these radionuclides. The average incremental concentrations of plutonium-238 and tritium oxide in air measured at all offsite locations during 1986 were 0.03% and 0.01%, respectively, of the DOE DCGs for uncontrolled areas. The average incremental concentration of plutonium-238 measured at all locations in the Great Miami River during 1986 was 0.0005% of the DOE DCG. The average incremental concentration of tritium measured at all locations in the Great Miami River during 1986 was 0.005% of the DOE DCG. The average incremental concentrations of plutonium-238 found during 1986 in surface and area drinking water were less than 0.00006% of the DOE DCG. The average incremental concentration of tritium in surface water was less than 0.005% of the DOE DCG. All tritium-in-drinking-water data are compared to the US EPA Drinking Water Standard. The average concentrations in local private and municipal drinking water systems were less than 25% and 1.5%, respectively. Although no DOE DCG is available for foodstuffs, the average concentrations are a small fraction of the water DCG (0.04%). The concentrations of sediment samples obtained at offsite surface water sampling locations were extremely low and therefore represent no adverse impact to the environment. The dose equivalent estimates for the average air, water, and foodstuff concentrations indicate that the levels are within 1% of the DOE standard of 100 mrem. None of these exceptions, however, had an adverse impact on the water quality of the Great Miami River or caused the river to exceed Ohio Stream Standards. 20 refs., 5 figs., 31 tabs.
Griffiths, Ulla K; Santos, Andreia C; Nundy, Neeti; Jacoby, Erica; Matthias, Dipika
2011-01-29
Disposable-syringe jet injectors (DSJIs) have the potential to deliver vaccines safely and affordably to millions of children around the world. We estimated the incremental costs of transitioning from needles and syringes to delivering childhood vaccines with DSJIs in Brazil, India, and South Africa. Two scenarios were assessed: (1) DSJI delivery of all vaccines at current dose and depth; (2) a change to intradermal (ID) delivery with DSJIs for hepatitis B and yellow fever vaccines, while the other vaccines are delivered by DSJIs at current dose and depth. The main advantage of ID delivery is that only a small fraction of the standard dose may be needed to obtain an immune response similar to that of subcutaneous or intramuscular injection. The cost categories included were vaccines, injection equipment, waste management, and vaccine transport. Some delivery cost items, such as training and personnel, were excluded, as were treatment cost savings resulting from a reduction in diseases transmitted by unsafe injections. In the standard dose and depth scenario, the incremental costs of introducing DSJIs per fully vaccinated child amount to US$ 0.57 in Brazil, US$ 0.65 in India and US$ 1.24 in South Africa. In the ID scenario, there are cost savings of US$ 0.11 per child in Brazil, and added costs of US$ 0.45 and US$ 0.76 per child in India and South Africa, respectively. The most important incremental cost item is the jet injector disposable syringe. The incremental costs should be evaluated against other vaccine delivery technologies that can deliver the same benefits to patients, health care workers, and the community. DSJIs deserve consideration by global and national decision-makers as a means to expand access to ID delivery and to enhance safety at marginal additional cost. Copyright © 2010 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenwood, Margaret Stautberg
2015-12-01
The goal of this research is to design an ultrasonic sensor to measure the attenuation and density of a slurry carried by a large steel pipeline (diameter up to 70 cm). The pitch-catch attenuation sensor, placed in a small section of the pipeline, contains a send unit with a focused transducer that focuses the ultrasound onto a small region of the receive unit on the opposite wall. The focused transducer consists of a section of a sphere (base ~12 cm) on the outer side of the send unit and a 500 kHz piezoelectric shell of PZT5A epoxied to it. The Rayleigh surface integral is used to calculate the pressure in steel and in water (slurry). An incremental method to plot the paths of ultrasonic rays shows that the rays focus where expected. Further, there is a region where the parallel rays are perpendicular to the wall of the receive unit. Designs for pipeline diameters of 25 cm and 71 cm show that the pressure in water at the receive transducer is about 17 times that of a pitch-catch system using 5 cm diameter disk transducers. The enhanced signal increases the sensitivity of the measurements and improves the signal-to-noise ratio.
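The incremental ray-plotting idea can be sketched as marching a ray in small steps and applying Snell's law at the steel/water interface. This is a hedged illustration, not the authors' implementation: the sound speeds are textbook values and the geometry (angle, path lengths) is made up:

```python
import math

# Longitudinal sound speeds (textbook values, m/s) and an assumed angle.
c_steel, c_water = 5900.0, 1480.0
theta_steel = math.radians(10.0)          # incidence angle in steel

# Snell's law: sin(theta_water)/c_water = sin(theta_steel)/c_steel.
theta_water = math.asin(math.sin(theta_steel) * c_water / c_steel)  # ~2.5 deg

# Incremental march: advance the ray in 1 mm steps through each medium.
step = 1e-3
x = z = 0.0
for _ in range(20):                        # 20 mm of steel wall
    x += step * math.sin(theta_steel)
    z += step * math.cos(theta_steel)
for _ in range(100):                       # 100 mm of water (slurry)
    x += step * math.sin(theta_water)
    z += step * math.cos(theta_water)

print(theta_water < theta_steel)           # refraction toward the normal
print(round(x * 1000, 1))                  # lateral offset in mm (~7.8)
```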
Jones, Andrew D
2017-10-01
The declining diversity of agricultural production and food supplies worldwide may have important implications for global diets. The primary objective of this review is to assess the nature and magnitude of the associations of agricultural biodiversity with diet quality and anthropometric outcomes in low- and middle-income countries. A comprehensive review of 5 databases using a priori exclusion criteria and application of a systematic, qualitative analysis to the findings of identified studies revealed that agricultural biodiversity has a small but consistent association with more diverse household- and individual-level diets, although the magnitude of this association varies with the extent of existing diversification of farms. Greater on-farm crop species richness is also associated with small, positive increments in young child linear stature. Agricultural diversification may contribute to diversified diets through both subsistence- and income-generating pathways and may be an important strategy for improving diets and nutrition outcomes in low- and middle-income countries. Six research priorities for future studies of the influence of agricultural biodiversity on nutrition outcomes are identified based on gaps in the research literature. © The Author(s) 2017. Published by Oxford University Press on behalf of the International Life Sciences Institute.
Peña, Raul; Ávila, Alfonso; Muñoz, David; Lavariega, Juan
2015-01-01
The recognition of clinical manifestations in both video images and physiological-signal waveforms is an important aid to improving safety and effectiveness in medical care. Physicians can rely on video-waveform (VW) observations to recognize difficult-to-spot signs and symptoms. VW observations can also reduce the number of false positive incidents and expand the recognition coverage to abnormal health conditions. The synchronization between the video images and the physiological-signal waveforms is fundamental for the successful recognition of clinical manifestations. The use of conventional equipment to synchronously acquire and display video-waveform information involves complex tasks such as video capture/compression, the acquisition/compression of each physiological signal, and video-waveform synchronization based on timestamps. This paper introduces a data hiding technique capable of both enabling embedding channels and synchronously hiding samples of physiological signals into encoded video sequences. Our data hiding technique offers large data capacity and simplifies the complexity of video-waveform acquisition and reproduction. The experimental results revealed successful embedding and full restoration of the signals' samples. Our results also demonstrated a small distortion in the video objective quality, a small increment in bit rate, and embedded cost savings of -2.6196% for high and medium motion video sequences.
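As a rough illustration of the kind of embedding channel described (the paper works on encoded video sequences, and its actual scheme is not reproduced here), a least-significant-bit sketch shows how signal samples can be hidden in frame bytes with minimal distortion. All names and the one-bit-per-byte layout are hypothetical:

```python
# Hypothetical LSB sketch of data hiding, NOT the paper's scheme:
# write each 8-bit sample into the least-significant bits of eight
# frame bytes, so each carrier byte changes by at most 1.

def embed_bits(frame_bytes, sample):
    """Hide one 8-bit sample (MSB first) in the LSBs of 8 frame bytes."""
    out = bytearray(frame_bytes)
    for i in range(8):
        bit = (sample >> (7 - i)) & 1
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def extract_bits(frame_bytes):
    """Recover the 8-bit sample from the LSBs of 8 frame bytes."""
    sample = 0
    for i in range(8):
        sample = (sample << 1) | (frame_bytes[i] & 1)
    return sample
```

Because only the lowest bit of each carrier byte changes, the objective quality of the carrier degrades very little, which is the property the paper measures for its encoded-video channel.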
Feasibility of Recruiting Families into a Heart Disease Prevention Program Based on Dietary Patterns
Schumacher, Tracy L.; Burrows, Tracy L.; Thompson, Deborah I.; Spratt, Neil J.; Callister, Robin; Collins, Clare E.
2015-01-01
Offspring of parents with a history of cardiovascular disease (CVD) inherit a similar genetic profile and share diet and lifestyle behaviors. This study aimed to evaluate the feasibility of recruiting families at risk of CVD to a dietary prevention program, determine the changes in diet achieved, and program acceptability. Families were recruited into a pilot parallel group randomized controlled trial consisting of a three month evidence-based dietary intervention, based on the Mediterranean and Portfolio diets. Feasibility was assessed by recruitment and retention rates, change in diet by food frequency questionnaire, and program acceptability by qualitative interviews and program evaluation. Twenty one families were enrolled over 16 months, with fourteen families (n = 42 individuals) completing the study. Post-program dietary changes in the intervention group included small daily increases in vegetable serves (0.8 ± 1.3) and reduced usage of full-fat milk (−21%), cheese (−12%) and meat products (−17%). Qualitative interviews highlighted beneficial changes in food purchasing habits. Future studies need more effective methods of recruitment to engage families in the intervention. Once engaged, families made small incremental improvements in their diets. Evaluation indicated that feedback on diet and CVD risk factors, dietetic counselling and the resources provided were appropriate for a program of this type. PMID:26308048
Sosnaud, Benjamin; Beckfield, Jason
2017-09-01
It has been suggested that as medicine advances and mortality declines, socioeconomic disparities in health outcomes will grow. Yet, most research on this topic uses data from affluent Western democracies, where mortality is declining in small increments. We argue that the Global South represents the ideal setting to study this issue in a context of rapid mortality decline. We evaluate two competing hypotheses: (1) there is a trade-off between population health and health inequality such that reductions in under-five mortality are linked to higher levels of social inequality in health; and (2) institutional interventions that improve under-five mortality, like the expansion of educational systems and public health expenditure, are associated with reductions in inequalities. We test these hypotheses using data on 1,369,050 births in 34 low-income countries in the Demographic and Health Surveys from 1995 to 2012. The results show little evidence of a health-for-equality trade-off and instead support the institutional hypothesis.
In silico evolution of biochemical networks
NASA Astrophysics Data System (ADS)
Francois, Paul
2010-03-01
We use computational evolution to select models of genetic networks that can be built from a predefined set of parts to achieve a certain behavior. Selection is made with the help of a fitness function that defines biological function in a quantitative way. This fitness has to be specific to a process, but general enough to find processes common to many species. Computational evolution favors models that can be built by incremental improvements in fitness rather than via multiple neutral steps or transitions through less fit intermediates. With the help of these simulations, we propose a kinetic view of evolution, in which networks are rapidly selected along a fitness gradient. This mathematics recapitulates Darwin's original insight that small changes in fitness can rapidly lead to the evolution of complex structures such as the eye, and explains the phenomenon of convergent/parallel evolution of similar structures in independent lineages. We illustrate these ideas with networks implicated in embryonic development and patterning of vertebrates and primitive insects.
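The selection dynamic described above, small fitness-improving changes accumulating along a gradient, can be caricatured with a greedy mutate-and-select loop. This toy sketch is not the authors' simulation; the fitness function and all parameters are invented for illustration:

```python
import random

# Toy illustration (not the authors' network simulations): evolution
# as a greedy climb along a fitness gradient -- a mutation is kept
# only when it raises a quantitative fitness, so the search proceeds
# by small incremental improvements.

def evolve(fitness, genome, steps=2000, sigma=0.05, seed=0):
    rng = random.Random(seed)
    best = list(genome)
    best_f = fitness(best)
    for _ in range(steps):
        cand = [g + rng.gauss(0.0, sigma) for g in best]
        f = fitness(cand)
        if f > best_f:  # keep only fitness-improving mutations
            best, best_f = cand, f
    return best, best_f

# Hypothetical smooth fitness landscape with a single peak at (1, -2).
peak = lambda g: -((g[0] - 1.0) ** 2 + (g[1] + 2.0) ** 2)
g, f = evolve(peak, [0.0, 0.0])
```

On a smooth single-peak landscape the climb is fast; the interesting biology in the paper is which network architectures make such incremental paths available at all.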
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slosman, D.; Susskind, H.; Bossuyt, A.
1986-03-01
Ventilation imaging can be improved by gating scintigraphic data with the respiratory cycle using temporal Fourier analysis (TFA) to quantify the temporal behavior of the ventilation. Sixteen consecutive images, representing equal-time increments of an average respiratory cycle, were produced by TFA in the posterior view on a pixel-by-pixel basis. An Efficiency Index (EFF), defined as the ratio of the summation of all the differences between maximum and minimum counts for each pixel to that for the entire lung during the respiratory cycle, was derived to describe the pattern of ventilation. The gated ventilation studies were carried out with Xe-127 in 12 subjects: normal lung function (4), small airway disease (2), COPD (5), and restrictive disease (1). EFF for the first three harmonics correlated linearly with FEV1 (r = 0.701, p < 0.01). This approach is suggested as a very sensitive method to quantify the extent and regional distribution of airway obstruction.
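On one reading of the Efficiency Index definition above (per-pixel count excursions over the gated cycle, summed, divided by the excursion of the whole-lung count curve), the computation is as follows. The data layout is hypothetical and this is not the study's code:

```python
# Sketch of the Efficiency Index (EFF) as defined in the abstract:
# sum over pixels of (max - min counts across the gated cycle),
# divided by (max - min) of the whole-lung count curve.
# frames[t][p] = counts in pixel p at respiratory gate t.

def efficiency_index(frames):
    n_pix = len(frames[0])
    per_pixel = sum(
        max(f[p] for f in frames) - min(f[p] for f in frames)
        for p in range(n_pix)
    )
    totals = [sum(f) for f in frames]
    return per_pixel / (max(totals) - min(totals))
```

When all pixels fill and empty in phase, the two excursions coincide and EFF = 1; regional phase differences, as produced by airway obstruction, drive EFF above 1.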
Multi-point optimization of recirculation flow type casing treatment in centrifugal compressors
NASA Astrophysics Data System (ADS)
Tun, Min Thaw; Sakaguchi, Daisaku
2016-06-01
A high pressure ratio and a wide operating range are highly desirable for a turbocharger in diesel engines. A recirculation flow type casing treatment is effective for flow range enhancement of centrifugal compressors. Two ring grooves on a suction pipe and a shroud casing wall are connected by an annular passage, and at small flow rates a stable recirculation flow forms from the downstream groove toward the upstream groove through the annular bypass. The shape of the baseline recirculation flow type casing is modified and optimized using a multi-point optimization code with a metamodel-assisted evolutionary algorithm embedding the commercial CFD code CFX from ANSYS. The numerical optimization yields an optimized casing design with improved adiabatic efficiency over a wide range of operating flow rates. A sensitivity analysis of the design parameters with respect to efficiency has been performed. It is found that the optimized casing design provides an optimized recirculation flow rate, for which the increment of entropy rise is minimized at the grooves and passages of the rotating impeller.
A 3-D enlarged cell technique (ECT) for elastic wave modelling of a curved free surface
NASA Astrophysics Data System (ADS)
Wei, Songlin; Zhou, Jianyang; Zhuang, Mingwei; Liu, Qing Huo
2016-09-01
The conventional finite-difference time-domain (FDTD) method for elastic waves suffers from staircasing error when applied to model a curved free surface because of its structured grid. In this work, an improved, stable and accurate 3-D FDTD method for elastic wave modelling on a curved free surface is developed based on the finite volume method and the enlarged cell technique (ECT). To achieve a sufficiently accurate implementation, a finite volume scheme is applied to the curved free surface to remove the staircasing error; in the meantime, to achieve the same stability as the FDTD method without reducing the time step increment, the ECT is introduced to preserve the solution stability by enlarging small irregular cells into adjacent cells under the condition of conservation of force. This method is verified by several 3-D numerical examples. Results show that the method is stable at the Courant stability limit for a regular FDTD grid, and has much higher accuracy than the conventional FDTD method.
Stability considerations of packed multi-planet systems
NASA Astrophysics Data System (ADS)
Gratia, Pierre; Lissauer, Jack
2018-04-01
I will present our first results on the outcomes of simulations of five packed, Earth-mass planets around a Sun-like star, whose initial separations in terms of their semi-major axes are determined by a multiple of their mutual Hill radius, the parameter beta. In our simulations, we vary beta between 3.5 and 9, with a special emphasis on the region around 8.5, where stability times are wildly different for small increments of beta. While the zero initial eccentricity case has been investigated before, we expand on it by allowing for initial nonzero eccentricities of one or more planets. Furthermore, we increase the simulated time by up to one order of magnitude, reaching billions of orbits. This determines more accurately the fate of systems that take a long time to go unstable. Neither of these investigations has been done before, and our findings improve our understanding of the stability of closely spaced planetary systems.
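The beta parameterisation above can be made concrete. For an equal-mass adjacent pair, the mutual Hill radius is R_H = [(m1 + m2)/(3 M_star)]^(1/3) * (a1 + a2)/2, and requiring a2 - a1 = beta * R_H gives a closed-form recurrence for the semi-major axes. A sketch (illustrative, not the simulation code):

```python
# Sketch: semi-major axes of equal-mass planets spaced by beta mutual
# Hill radii. With x = (2 * m_p / (3 * m_star)) ** (1/3), requiring
# a2 - a1 = beta * x * (a1 + a2) / 2 rearranges to
# a2 = a1 * (2 + beta * x) / (2 - beta * x).

def spaced_semimajor_axes(a0, beta, m_planet, m_star, n_planets):
    x = (2.0 * m_planet / (3.0 * m_star)) ** (1.0 / 3.0)
    axes = [a0]
    for _ in range(n_planets - 1):
        axes.append(axes[-1] * (2.0 + beta * x) / (2.0 - beta * x))
    return axes
```

For Earth-mass planets around a solar-mass star (mass ratio ~3e-6), beta around 8.5 spaces neighbours about 11% apart in semi-major axis.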
Robot-based additive manufacturing for flexible die-modelling in incremental sheet forming
NASA Astrophysics Data System (ADS)
Rieger, Michael; Störkle, Denis Daniel; Thyssen, Lars; Kuhlenkötter, Bernd
2017-10-01
The paper describes the application concept of additively manufactured dies to support the robot-based incremental sheet metal forming process ('Roboforming') for the production of sheet metal components in small batch sizes. Compared to the dieless kinematic-based generation of a shape by means of two cooperating industrial robots, the supporting robot models a die on the back of the metal sheet by using the robot-based fused layer manufacturing process (FLM). This tool chain is software-defined and preserves the high geometrical form flexibility of Roboforming while flexibly generating support structures adapted to the final part's geometry. Test series serve to confirm the feasibility of the concept by investigating the process challenges of adhesion to the sheet surface and general stability, as well as the influence on the geometric accuracy compared to the well-known forming strategies.
Recent advances in the modelling of crack growth under fatigue loading conditions
NASA Technical Reports Server (NTRS)
Dekoning, A. U.; Tenhoeve, H. J.; Henriksen, T. K.
1994-01-01
Fatigue crack growth associated with cyclic (secondary) plastic flow near a crack front is modelled using an incremental formulation. A new description of threshold behaviour under small load cycles is included. Quasi-static crack extension under high load excursions is described using an incremental formulation of the R-curve (crack growth resistance curve) concept. The integration of the equations is discussed. For constant amplitude load cycles the results are compared with existing crack growth laws. It is shown that the model also properly describes interaction effects between fatigue crack growth and quasi-static crack extension. To evaluate its more general applicability, the model is included in the NASGRO computer code for damage tolerance analysis. For this purpose the NASGRO program was provided with the CORPUS and STRIP-YIELD models for computation of the crack opening load levels. The implementation is discussed and recent results of the verification are presented.
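For context, the simplest of the "existing crack growth laws" mentioned above is the Paris law, da/dN = C (ΔK)^m, which can itself be integrated incrementally, cycle block by cycle block. The sketch below uses the centre-crack approximation ΔK = Δσ√(πa); the constants and the forward-Euler scheme are illustrative, not taken from the paper:

```python
import math

# Illustrative incremental integration of the Paris crack growth law
# da/dN = C * (dK)**m, with dK = stress_range * sqrt(pi * a)
# (centre-crack approximation). C and m are material constants;
# the values used in the test below are purely illustrative.

def grow_crack(a0, stress_range, cycles, block, C, m):
    """Advance crack length a from a0 over `cycles` load cycles,
    updating dK every `block` cycles (forward Euler)."""
    a, n = a0, 0
    while n < cycles:
        dk = stress_range * math.sqrt(math.pi * a)
        a += C * dk ** m * block
        n += block
    return a
```

Because da/dN grows with a, refining the block size (updating ΔK more often) can only increase the predicted growth; the paper's incremental formulation adds threshold and interaction effects that a plain Paris integration like this one ignores.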
NASA Astrophysics Data System (ADS)
Schwegler, Eric; Challacombe, Matt; Head-Gordon, Martin
1997-06-01
A new linear scaling method for computation of the Cartesian Gaussian-based Hartree-Fock exchange matrix is described, which employs a method numerically equivalent to standard direct SCF, and which does not enforce locality of the density matrix. With a previously described method for computing the Coulomb matrix [J. Chem. Phys. 106, 5526 (1997)], linear scaling incremental Fock builds are demonstrated for the first time. Microhartree accuracy and linear scaling are achieved for restricted Hartree-Fock calculations on sequences of water clusters and polyglycine α-helices with the 3-21G and 6-31G basis sets. Eightfold speedups are found relative to our previous method. For systems with a small ionization potential, such as graphitic sheets, the method naturally reverts to the expected quadratic behavior. Also, benchmark 3-21G calculations attaining microhartree accuracy are reported for the P53 tetramerization monomer involving 698 atoms and 3836 basis functions.
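The phrase "incremental Fock builds" above refers to constructing each SCF iteration's two-electron matrix from the change in the density matrix rather than from scratch: since the contraction is linear in the density, F_n = F_{n-1} + G(D_n - D_{n-1}), and a small, sparse difference density needs far fewer integral contributions. A dense numpy sketch of that identity follows (the paper's contribution is achieving this at linear cost, which this toy does not attempt; the Coulomb-like G is a stand-in):

```python
import numpy as np

# G(D)_{ij} = sum_{kl} (ij|kl) D_{kl}: a Coulomb-like contraction of
# the density matrix with the two-electron integral tensor `eri`.
def g_matrix(density, eri):
    return np.einsum('ijkl,kl->ij', eri, density)

# Incremental build: because G is linear in the density, updating the
# previous Fock-like matrix with the difference density reproduces a
# full rebuild exactly.
def incremental_fock(f_old, d_old, d_new, eri):
    return f_old + g_matrix(d_new - d_old, eri)
```

In a real linear-scaling code the payoff comes from screening: most contributions of the small difference density fall below threshold and are skipped.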
Schiffmann, L M; Brunold, M; Liwschitz, M; Goede, V; Loges, S; Wroblewski, M; Quaas, A; Alakus, H; Stippel, D; Bruns, C J; Hallek, M; Kashkar, H; Hacker, U T; Coutelle, O
2017-02-28
Vascular endothelial growth factor (VEGF)-targeting drugs normalise the tumour vasculature and improve access for chemotherapy. However, excessive VEGF inhibition fails to improve clinical outcome, and successive treatment cycles lead to incremental extracellular matrix (ECM) deposition, which limits perfusion and drug delivery. We show here that low-dose VEGF inhibition augmented with PDGF-R inhibition leads to superior vascular normalisation without incremental ECM deposition, thus maintaining access for therapy. Collagen IV expression was analysed in response to VEGF inhibition in liver metastases of colorectal cancer (CRC) patients, and in syngeneic (Panc02) and xenograft tumours of human colorectal cancer cells (LS174T). The xenograft tumours were treated with low (0.5 mg kg⁻¹ body weight) or high (5 mg kg⁻¹ body weight) doses of the anti-VEGF antibody bevacizumab, with or without the tyrosine kinase inhibitor imatinib. Changes in tumour growth and vascular parameters, including microvessel density, pericyte coverage, leakiness, hypoxia, perfusion, fraction of vessels with an open lumen, and type IV collagen deposition, were compared. ECM deposition was increased after standard VEGF inhibition in patients and tumour models. In contrast, treatment with low-dose bevacizumab and imatinib produced similar growth inhibition without inducing detrimental collagen IV deposition, leading to superior vascular normalisation, reduced leakiness, improved oxygenation, and more open vessels that permit perfusion and access for therapy. Low-dose bevacizumab augmented by imatinib selects a mature, highly normalised and well perfused tumour vasculature without inducing the incremental ECM deposition that normally limits the effectiveness of VEGF-targeting drugs.
An advanced artificial intelligence tool for menu design.
Khan, Abdus Salam; Hoffmann, Achim
2003-01-01
Computer-assisted menu design remains a difficult task. Usually the knowledge that aids menu design by a computer is hard-coded, and because of that a computerised menu planner cannot handle the menu design problem for an unanticipated client. To address this problem we developed a menu design tool, MIKAS (menu construction using incremental knowledge acquisition system), an artificial intelligence system that allows the incremental development of a knowledge base for menu design. We allow an incremental knowledge acquisition process in which the expert is only required to provide hints to the system in the context of actual problem instances during menu design, using menus stored in a so-called Case Base. Our system incorporates Case-Based Reasoning (CBR), an Artificial Intelligence (AI) technique developed to mimic human problem-solving behaviour. Ripple Down Rules (RDR) are a proven technique for acquiring classification knowledge directly from experts while they are using the system, and they complement CBR in a very fruitful way. This combination allows the incremental improvement of the menu design system while it is already in routine use. We believe MIKAS allows better dietary practice by leveraging a dietitian's skills and expertise. As such, MIKAS has the potential to be helpful for any institution where dietary advice is practised.
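The Ripple Down Rules structure described above can be sketched minimally: each rule carries an exception branch consulted when the rule fires and an else branch consulted when it does not, so an expert's correction attaches locally, in the context of the case that was misclassified, without disturbing prior knowledge. A toy sketch (not MIKAS itself; the dietary conditions and conclusions are hypothetical):

```python
# Minimal Ripple Down Rules sketch (illustrative, not MIKAS): a rule
# tree where `except_rule` refines a rule that fired and `else_rule`
# is tried when the rule did not fire.

class Rule:
    def __init__(self, cond, conclusion):
        self.cond, self.conclusion = cond, conclusion
        self.except_rule = None  # refinement tried when this rule fires
        self.else_rule = None    # tried when this rule does not fire

    def classify(self, case):
        if self.cond(case):
            if self.except_rule:
                refined = self.except_rule.classify(case)
                if refined is not None:
                    return refined
            return self.conclusion
        return self.else_rule.classify(case) if self.else_rule else None

# Hypothetical knowledge base: a default rule, plus an exception
# acquired from the expert in the context of a diabetic client.
root = Rule(lambda c: True, 'standard menu')
root.except_rule = Rule(lambda c: c.get('diabetic'), 'low-sugar menu')
```

Adding knowledge never edits existing rules, only attaches new exception or else nodes, which is what makes the acquisition process safely incremental during routine use.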
NASA Astrophysics Data System (ADS)
Hamedon, Zamzuri; Kuang, Shea Cheng; Jaafar, Hasnulhadi; Azhari, Azmir
2018-03-01
Incremental sheet forming is a versatile sheet metal forming process in which a sheet metal is formed into its final shape by a series of localized deformations without a specialised die. However, it still has many shortcomings that need to be overcome, such as geometric accuracy, surface roughness, formability, and forming speed. This project focuses on minimising the surface roughness of aluminium sheet and improving its thickness uniformity in incremental sheet forming via optimisation of wall angle, feed rate, and step size. In addition, the effect of wall angle, feed rate, and step size on the surface roughness and thickness uniformity of aluminium sheet was investigated. From the results, it was observed that surface roughness and thickness uniformity varied inversely due to the formation of surface waviness. An increase in feed rate and a decrease in step size produce a lower surface roughness, while uniform thickness reduction was obtained by reducing the wall angle and step size. Using Taguchi analysis, the optimum parameters for minimum surface roughness and uniform thickness reduction of aluminium sheet were determined. The findings of this project help to reduce the time needed to optimise surface roughness and thickness uniformity in incremental sheet forming.
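Taguchi analysis of the kind used above ranks parameter levels by a signal-to-noise ratio; for a response to be minimised, such as surface roughness, the standard "smaller-the-better" form is SN = -10 log10(mean(y²)). A sketch of that formula (the roughness replicates in the test are hypothetical, not the study's data):

```python
import math

# Taguchi "smaller-the-better" signal-to-noise ratio:
# SN = -10 * log10(mean(y_i ** 2)). A higher SN indicates a smaller
# (better) response, e.g. lower surface roughness.

def sn_smaller_better(values):
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))
```

In a Taguchi study, the level of each factor (wall angle, feed rate, step size) with the highest mean SN across its runs is selected as the optimum.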
The virtue of innovation: innovation through the lenses of biological evolution.
Kell, Douglas B; Lurie-Luke, Elena
2015-02-06
We rehearse the processes of innovation and discovery in general terms, using as our main metaphor the biological concept of an evolutionary fitness landscape. Incremental and disruptive innovations are seen, respectively, as successful searches carried out locally or more widely. They may also be understood as reflecting evolution by mutation (incremental) versus recombination (disruptive). We also bring a platonic view, focusing on virtue and memory. We use 'virtue' as a measure of efforts, including the knowledge required to come up with disruptive and incremental innovations, and 'memory' as a measure of their lifespan, i.e. how long they are remembered. Fostering innovation, in the evolutionary metaphor, means providing the wherewithal to promote novelty, good objective functions that one is trying to optimize, and means to improve one's knowledge of, and ability to navigate, the landscape one is searching. Recombination necessarily implies multi- or inter-disciplinarity. These principles are generic to all kinds of creativity, novel ideas formation and the development of new products and technologies.
SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.
Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P
2013-12-01
Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.
Zolpidem efficacy and safety in disorders of consciousness.
Machado, Calixto; Estévez, Mario; Rodriguez-Rojas, Rafael
2018-01-01
Sutton and Clauss presented a detailed review of the effectiveness of zolpidem, discussing recoveries from brain damage due to strokes, trauma and hypoxia. A significant finding has been the unexpected and paradoxical increment of brain activity in vegetative state/unresponsive wakefulness syndrome (VS/UWS). On the contrary, zolpidem is considered one of the best sleep inducers in normal subjects. We have studied a series of VS/UWS cases after zolpidem intake. We have demonstrated EEG activation, an increment of the BOLD signal in different brain regions, and an autonomic influence, mainly characterized by a vagolytic chronotropic effect without a significant increment of vasomotor sympathetic tone. As this autonomic imbalance might induce cardiocirculatory complications, which we did not find in any of our patients, we suggest developing future trials under control of physiological indices by bedside monitoring. However, considering that the paradoxical arousing zolpidem effect might certainly be related to brain function improvement, we agree with Sutton and Clauss that future multicentre and multinational clinical trials should be developed, but under control of physiological indices.
Considerations for Using an Incremental Scheduler for Human Exploration Task Scheduling
NASA Technical Reports Server (NTRS)
Jaap, John; Phillips, Shaun
2005-01-01
As humankind embarks on longer space missions farther from home, the requirements and environments for scheduling the activities performed on these missions are changing. As we begin to prepare for these missions it is appropriate to evaluate the merits and applicability of the different types of scheduling engines. Scheduling engines temporally arrange tasks onto a timeline so that all constraints and objectives are met and resources are not overbooked. Scheduling engines used to schedule space missions fall into three general categories: batch, mixed-initiative, and incremental. This paper presents an assessment of the engine types, a discussion of the impact of human exploration of the moon and Mars on planning and scheduling, and the applicability of the different types of scheduling engines. This paper will pursue the hypothesis that incremental scheduling engines may have a place in the new environment; they have the potential to reduce cost, to improve the satisfaction of those who execute or benefit from a particular timeline (the customers), and to allow astronauts to plan their own tasks.
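One defining job of an incremental engine as characterised above, placing a new task on an existing timeline without re-planning what is already scheduled, can be sketched as an earliest-fit search against a resource cap. Everything here (names, the single-resource model, the greedy policy) is illustrative and not drawn from the paper:

```python
# Toy sketch of incremental scheduling: insert one new task into an
# existing timeline, leaving already-scheduled tasks untouched and
# rejecting any slot that would overbook the single modelled resource.

def earliest_fit(timeline, duration, need, capacity, horizon):
    """timeline: list of (start, end, usage) for scheduled tasks.
    Returns the earliest feasible (start, end) slot, or None."""
    for start in range(horizon - duration + 1):
        end = start + duration
        ok = True
        for t in range(start, end):
            used = sum(u for s, e, u in timeline if s <= t < e)
            if used + need > capacity:
                ok = False
                break
        if ok:
            return (start, end)
    return None  # no feasible slot; a batch engine would replan everything
```

The contrast with a batch engine is the failure mode: here a rejected task leaves the approved timeline intact, which is what would let an astronaut add a task locally without waiting for a full ground replan.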
1977-01-01
are capable of adapting to turbid conditions will probably be the dominant fish in the oxbows. The stream bottom dwelling population will not be much...the structure of the benthic community. Snails (gastropods) and bivalve mollusks (pelecypods) are most abundant in the shallow areas. Stable gravel
ERIC Educational Resources Information Center
Dang, Thanh-Dung; Chen, Gwo-Dong; Dang, Giao; Li, Liang-Yi; Nurkhamid
2013-01-01
Dictionary use can improve reading comprehension and incidental vocabulary learning. Nevertheless, great extraneous cognitive load imposed by the search process may reduce or even prevent the improvement. With the help of technology, dictionary users can now instantly access the meaning list of a searched word using a mouse click. However, they…
Incremental improvements to the trout S9 biotransformation assay
In vitro substrate depletion methods have been used in conjunction with computational models to predict biotransformation impacts on chemical accumulation by fish. There is a consistent trend, however, toward overestimation of measured chemical residues resulting from controlled...
van Rossum, Peter S N; Fried, David V; Zhang, Lifei; Hofstetter, Wayne L; van Vulpen, Marco; Meijer, Gert J; Court, Laurence E; Lin, Steven H
2016-05-01
A reliable prediction of a pathologic complete response (pathCR) to chemoradiotherapy before surgery for esophageal cancer would enable investigators to study the feasibility and outcome of an organ-preserving strategy after chemoradiotherapy. So far no clinical parameters or diagnostic studies are able to accurately predict which patients will achieve a pathCR. The aim of this study was to determine whether subjective and quantitative assessment of baseline and postchemoradiation (18)F-FDG PET can improve the accuracy of predicting pathCR to preoperative chemoradiotherapy in esophageal cancer beyond clinical predictors. This retrospective study was approved by the institutional review board, and the need for written informed consent was waived. Clinical parameters along with subjective and quantitative parameters from baseline and postchemoradiation (18)F-FDG PET were derived from 217 esophageal adenocarcinoma patients who underwent chemoradiotherapy followed by surgery. The associations between these parameters and pathCR were studied in univariable and multivariable logistic regression analysis. Four prediction models were constructed and internally validated using bootstrapping to study the incremental predictive values of subjective assessment of (18)F-FDG PET, conventional quantitative metabolic features, and comprehensive (18)F-FDG PET texture/geometry features, respectively. The clinical benefit of (18)F-FDG PET was determined using decision-curve analysis. A pathCR was found in 59 (27%) patients. A clinical prediction model (corrected c-index, 0.67) was improved by adding (18)F-FDG PET-based subjective assessment of response (corrected c-index, 0.72). This latter model was slightly improved by the addition of 1 conventional quantitative metabolic feature only (i.e., postchemoradiation total lesion glycolysis; corrected c-index, 0.73), and even more by subsequently adding 4 comprehensive (18)F-FDG PET texture/geometry features (corrected c-index, 0.77). 
However, at a decision threshold of 0.9 or higher, representing a clinically relevant predictive value for pathCR at which one may be willing to omit surgery, there was no clear incremental value. Subjective and quantitative assessment of (18)F-FDG PET provides statistical incremental value for predicting pathCR after preoperative chemoradiotherapy in esophageal cancer. However, the discriminatory improvement beyond clinical predictors does not translate into a clinically relevant benefit that could change decision making. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
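The internal-validation step described above (bootstrap correction of the c-index) can be sketched as follows. This is a minimal stdlib-Python illustration of Harrell-style optimism correction, not the authors' implementation; the `fit` callback signature and all names are assumptions for the sketch.

```python
import random

def c_index(scores, outcomes):
    """Concordance: probability that a randomly chosen positive case
    (e.g., pathCR) scores higher than a random negative case (ties = 0.5)."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    conc = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return conc / (len(pos) * len(neg))

def optimism_corrected_c(X, y, fit, n_boot=100, seed=0):
    """Bootstrap optimism correction: refit the model on each bootstrap
    resample, measure how much better it looks on its own resample than
    on the original data, and subtract that average optimism from the
    apparent c-index. `fit(X, y)` must return a scoring function that
    maps one covariate row to a risk score."""
    rng = random.Random(seed)
    model = fit(X, y)
    apparent = c_index([model(x) for x in X], y)
    optimism, done = 0.0, 0
    while done < n_boot:
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        bx, by = [X[i] for i in idx], [y[i] for i in idx]
        if len(set(by)) < 2:
            continue  # a resample needs both outcomes for concordance
        m = fit(bx, by)
        boot_apparent = c_index([m(x) for x in bx], by)
        boot_tested = c_index([m(x) for x in X], y)
        optimism += boot_apparent - boot_tested
        done += 1
    return apparent - optimism / n_boot
```

An overfitting `fit` (e.g., a nearest-neighbour memorizer) yields a corrected c-index visibly below the apparent one, which is the behaviour the correction is designed to expose.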
1993-11-30
dependent field to the main toroidal field, which provides an effective increment to the acceleration rate if it has a negative time derivative during ... regions, non-uniformities in the beam develop in the drift region, scattering in the foils affects the beam entering the laser, effects due to a second ... destroyed faster by a small perturbation. Note that this analogy is adequate only when the global RT mode cannot develop; otherwise, it is the rigid pen
Sexual Assault Prevention and Response Website Analysis
2014-09-01
IRR inter-rater reliability; KPI key performance indicator; N17 U.S. Navy 21st Century Sailor Office; NASASV National Association of Services Against ... determine if progress is being made to achieve the desired goal; this is typically done by establishing key performance indicators (KPIs). After ... defining the KPIs, organizations must prioritize the potential solutions and devise a plan for making small incremental changes to accurately assess the
Nonlinear Optics and Organic Materials
1989-10-01
incrementally by making small changes in the generating optical harmonics. However, deficiencies in backbone or substituents. In this way the chemist can ... experimental determination of Otx.l = 4.5 × 10^-32 esu. ability of polymeric molecules to generate third Key parameters extracted from the UV and visible ... solubility of most active organics in negative charge at the other end, thus generating a the polymer and their tendency to segregate or migrate out
2016-11-01
systems engineering had better outcomes. For example, the Small Diameter Bomb Increment I program, which delivered within cost and schedule estimates ... its current portfolio. This portfolio has experienced cost growth of 48 percent since first full estimates and average delays in delivering initial ... stable design, building and testing of prototypes, and demonstration of mature production processes. • Realistic cost estimate: Sound cost estimates
Defense AT&L (Volume 37, Number 2, March-April 2008)
2008-04-01
environment. Operational suitability is the degree to which a system can be satisfactorily placed in field use, with consideration given to reliability ... devise the most effective test-and-evaluation strategy. Whenever possible, the program should be developed and fielded in small increments and provided ... ability to control access to design-related information and availability of technology, and it will raise grave security considerations. Do you develop
Second-degree atrioventricular block.
Zipes, D P
1979-09-01
1) While it is possible that only one type of second-degree AV block exists electrophysiologically, the available data do not justify such a conclusion and it would seem more appropriate to remain a "splitter," and advocate separation and definition of multiple mechanisms, than to be a "lumper," and embrace a unitary concept. 2) The clinical classification of type I and type II AV block, based on present scalar electrocardiographic criteria, for the most part accurately differentiates clinically important categories of patients. Such a classification is descriptive, but serves a useful function and should be preserved, taking into account the caveats mentioned above. The site of block generally determines the clinical course for the patient. For most examples of AV block, the type I and type II classification in present use is based on the site of block. Because block in the His-Purkinje system is preceded by small or nonmeasurable increments, it is called type II AV block; but the very fact that it is preceded by small increments is because it occurs in the His-Purkinje system. Similar logic can be applied to type I AV block in the AV node. Exceptions do occur. If the site of AV block cannot be distinguished with certainty from the scalar ECG, an electrophysiologic study will generally reveal the answer.
Hong, Shaodong; Fang, Wenfeng; Hu, Zhihuang; Zhou, Ting; Yan, Yue; Qin, Tao; Tang, Yanna; Ma, Yuxiang; Zhao, Yuanyuan; Xue, Cong; Huang, Yan; Zhao, Hongyun; Zhang, Li
2014-01-01
The predictive power of age at diagnosis and smoking history for ALK rearrangements and EGFR mutations in non-small-cell lung cancer (NSCLC) remains incompletely understood. In this cross-sectional study, 1160 NSCLC patients were prospectively enrolled and genotyped for EML4-ALK rearrangements and EGFR mutations. Multivariate logistic regression analysis was performed to explore the association between clinicopathological features and these two genetic aberrations. Receiver operating characteristic (ROC) curve methodology was applied to evaluate the predictive value. We showed that younger age at diagnosis was the only independent variable associated with EML4-ALK rearrangements (odds ratio (OR) per 5 years' increment, 0.68; p < 0.001), while lower tobacco exposure (OR per 5 pack-years' increment, 0.88; p < 0.001), adenocarcinoma (OR, 6.61; p < 0.001), and moderate to high differentiation (OR, 2.05; p < 0.001) were independently associated with EGFR mutations. Age at diagnosis was a very strong predictor of ALK rearrangements but poorly predicted EGFR mutations, while smoking pack-years may predict the presence of EGFR mutations and ALK rearrangements but with rather limited power. These findings should assist clinicians in assessing the likelihood of EML4-ALK rearrangements and EGFR mutations and understanding their biological implications in NSCLC. PMID:25434695
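The per-increment odds ratios above come directly from logistic regression coefficients: an OR reported "per 5 years' increment" is exp(5·β) for a per-year coefficient β. A small sketch; the β value here is back-calculated for illustration and is not taken from the study.

```python
import math

def odds_ratio_per_increment(beta_per_unit, increment):
    """Odds ratio for an `increment`-unit change in a covariate,
    given its per-unit logistic regression coefficient."""
    return math.exp(beta_per_unit * increment)

# an illustrative per-year coefficient of -0.077 yields OR ~0.68 per
# 5-year increment, the scale of effect reported for age and ALK
or_per_5yr = odds_ratio_per_increment(-0.077, 5)
```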
Supplemental fructose attenuates postprandial glycemia in Zucker fatty fa/fa rats.
Wolf, Bryan W; Humphrey, Phillip M; Hadley, Craig W; Maharry, Kati S; Garleb, Keith A; Firkins, Jeffrey L
2002-06-01
Experiments were conducted to evaluate the effects of supplemental fructose on postprandial glycemia. After overnight food deprivation, Zucker fatty fa/fa rats were given a meal glucose tolerance test. Plasma glucose response was determined for 180 min postprandially. At a dose of 0.16 g/kg body weight, fructose reduced (P < 0.05) the incremental area under the curve (AUC) by 34% when supplemented to a glucose challenge and by 32% when supplemented to a maltodextrin (a rapidly digested starch) challenge. Similarly, sucrose reduced (P = 0.0575) the incremental AUC for plasma glucose when rats were challenged with maltodextrin. Second-meal glycemic response was not affected by fructose supplementation to the first meal, whereas fructose supplementation to the second meal reduced (P < 0.05) postprandial glycemia when fructose had been supplemented to the first meal. In a dose-response study (0.1, 0.2, and 0.5 g/kg body weight), supplemental fructose reduced (P < 0.01) the peak rise in plasma glucose (linear and quadratic effects). In the final experiment, a low dose of fructose (0.075 g/kg body weight) reduced (P < 0.05) the incremental AUC by 18%. These data support the hypothesis that small amounts of oral fructose or sucrose may be useful in lowering the postprandial blood glucose response.
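The incremental AUC used above is typically computed by the trapezoidal rule on baseline-subtracted glucose values. A minimal sketch follows; it clips below-baseline values to zero, one common convention, since the study's exact method is not specified in the abstract.

```python
def incremental_auc(times, values, baseline=None):
    """Trapezoidal incremental AUC: area above the baseline (fasting)
    value, with below-baseline excursions clipped to zero."""
    if baseline is None:
        baseline = values[0]  # use the fasting (time-zero) value
    inc = [max(v - baseline, 0.0) for v in values]
    area = 0.0
    for i in range(len(times) - 1):
        area += (inc[i] + inc[i + 1]) / 2.0 * (times[i + 1] - times[i])
    return area
```

For example, a glucose excursion of +2 mmol/L at 30 min that returns to baseline by 60 min gives an incremental AUC of 60 mmol·min/L.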
Land, K C; Guralnik, J M; Blazer, D G
1994-05-01
A fundamental limitation of current multistate life table methodology-evident in recent estimates of active life expectancy for the elderly-is the inability to estimate tables from data on small longitudinal panels in the presence of multiple covariates (such as sex, race, and socioeconomic status). This paper presents an approach to such an estimation based on an isomorphism between the structure of the stochastic model underlying a conventional specification of the increment-decrement life table and that of Markov panel regression models for simple state spaces. We argue that Markov panel regression procedures can be used to provide smoothed or graduated group-specific estimates of transition probabilities that are more stable across short age intervals than those computed directly from sample data. We then join these estimates with increment-decrement life table methods to compute group-specific total, active, and dependent life expectancy estimates. To illustrate the methods, we describe an empirical application to the estimation of such life expectancies specific to sex, race, and education (years of school completed) for a longitudinal panel of elderly persons. We find that education extends both total life expectancy and active life expectancy. Education thus may serve as a powerful social protective mechanism delaying the onset of health problems at older ages.
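The link between Markov panel models and increment-decrement life tables can be illustrated with a toy three-state chain (active, dependent, dead). Given smoothed per-cycle transition probabilities, total and active life expectancy follow from iterating the chain; the probabilities below are purely illustrative, not estimates from the study.

```python
def life_expectancies(p_transition, start_state=0, max_cycles=120):
    """Total and active life expectancy (in cycles) for a 3-state
    discrete-time Markov chain: 0=active, 1=dependent, 2=dead
    (absorbing). p_transition[i][j] is the per-cycle probability
    of moving from state i to state j."""
    dist = [0.0, 0.0, 0.0]
    dist[start_state] = 1.0
    active = total = 0.0
    for _ in range(max_cycles):
        active += dist[0]              # expected time spent active
        total += dist[0] + dist[1]     # expected time spent alive
        dist = [sum(dist[i] * p_transition[i][j] for i in range(3))
                for j in range(3)]
    return total, active
```

With a 0.9 per-cycle probability of remaining active, expected active time starting active is 1/(1-0.9) = 10 cycles, a useful sanity check on the iteration.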
Evolutionary Optimization of a Quadrifilar Helical Antenna
NASA Technical Reports Server (NTRS)
Lohn, Jason D.; Kraus, William F.; Linden, Derek S.; Clancy, Daniel (Technical Monitor)
2002-01-01
Automated antenna synthesis via evolutionary design has recently garnered much attention in the research literature. Evolutionary algorithms show promise because, among search algorithms, they are able to effectively search large, unknown design spaces. NASA's Mars Odyssey spacecraft is due to reach final Martian orbit insertion in January 2002. Onboard the spacecraft is a quadrifilar helical antenna that provides telecommunications in the UHF band with landed assets, such as robotic rovers. Each helix is driven by the same signal, which is phase-delayed in 90 deg increments. A small ground plane is provided at the base. It is designed to operate in the frequency band of 400-438 MHz. Based on encouraging previous results in automated antenna design using evolutionary search, we wanted to see whether such techniques could improve upon the Mars Odyssey antenna design. Specifically, a co-evolutionary genetic algorithm is applied to optimize the gain and size of the quadrifilar helical antenna. The optimization was performed in situ in the presence of a neighboring spacecraft structure. On the spacecraft, a large aluminum fuel tank is adjacent to the antenna. Since this fuel tank can dramatically affect the antenna's performance, we leave it to the evolutionary process to see if it can exploit the fuel tank's properties advantageously. Optimizing in the presence of surrounding structures would be quite difficult for human antenna designers, and thus the actual antenna was designed for free space (with a small ground plane). In fact, when flying on the spacecraft, surrounding structures that are moveable (e.g., solar panels) may be moved during the mission in order to improve the antenna's performance.
NASA Astrophysics Data System (ADS)
Riker, J.; Watson, M.; Liu, E. J.; Chigna, G.; Purvis, M.; Naismith, A.
2016-12-01
For over ten years, the University of Bristol (U.K.) has run a field trip for master's students in Natural Hazards in the volcanically active areas of southern Guatemala, home to more than 13 million people. This trip has obvious benefits to its participants - it serves as an immersive and formative experience for students studying volcanic hazard, as well as a springboard for the work of the researchers who lead it. Over the years, it has helped to build strong collaborative ties between academic researchers at Bristol and Guatemala's geologic survey (INSIVUMEH) and emergency management agency (CONRED), facilitating the sharing of data, expertise, and monitoring equipment. The students' regular presence has also enabled infrastructure improvements at Fuego Volcano Observatory, which is itself hosted and partly staffed by the residents of Panimache, a small village just a few miles from the volcano's summit. This field trip does raise challenges, however - an influx of foreign students can draw questions from community members for whom the benefits are indirect (e.g., local job creation or infrastructure improvement) or intangible (e.g., incremental contributions to the body of knowledge regarding volcanic hazard). In this presentation, we'll share stories of our experiences of effective community collaboration in Guatemala. In the spirit of discussion, we would also like to explore the opportunities that exist to better utilise this trip, along with the energy and expertise of its participants, to maximise the positive impact on (and resilience of) local communities, particularly those in the small and largely indigenous villages that populate Fuego Volcano's flanks.
Philbin, E F; Rocco, T A; Lindenmuth, N W; Ulrich, K; McCall, M; Jenkins, P L
2000-10-15
Quality improvement and disease management programs for heart failure have improved quality of care and patient outcomes at large tertiary care hospitals. The purpose of this study was to measure the effects of a regional, multihospital, collaborative quality improvement intervention on care and outcomes in heart failure in community hospitals. This randomized controlled study included 10 acute care community hospitals in upstate New York. After a baseline period, 5 hospitals were randomly assigned to receive a multifaceted quality improvement intervention (n = 762 patients during the baseline period; n = 840 patients postintervention), while 5 were assigned to a "usual care" control (n = 640 patients during the baseline period; n = 664 patients postintervention). Quality of care was determined using explicit criteria by reviewing the charts of consecutive patients hospitalized with the primary diagnosis of heart failure during the baseline period and again in the postintervention period. Clinical outcomes included hospital length of stay and charges, in-hospital and 6-month mortality, hospital readmission, and quality of life measured after discharge. Patients had similar characteristics in the baseline and postintervention phases in the intervention and control groups. Using hospital-level analyses, the intervention had mixed effects on 5 quality-of-care markers that were not statistically significant. The mean of the average length of stay among hospitals decreased from 8.0 to 6.2 days in the intervention group, with a smaller decline in mean length of stay in the control group (7.7 to 7.0 days). The net effects of the intervention were nonsignificant changes in length of stay of -1.1 days (95% confidence interval [CI]: -2.9 to 0.7 days, P = 0.18) and in hospital charges of -$817 (95% CI: -$2560 to $926, P = 0.31). There were small and nonsignificant effects on mortality, hospital readmission, and quality of life. 
The incremental effect of regional collaboration among peer community hospitals toward the goal of quality improvement was small and limited to a slightly, but not significantly, shorter length of stay.
Air Quality Modeling Using the NASA GEOS-5 Multispecies Data Assimilation System
NASA Technical Reports Server (NTRS)
Keller, Christoph A.; Pawson, Steven; Wargan, Krzysztof; Weir, Brad
2018-01-01
The NASA Goddard Earth Observing System (GEOS) data assimilation system (DAS) has been expanded to include chemically reactive tropospheric trace gases including ozone (O3), nitrogen dioxide (NO2), and carbon monoxide (CO). This system combines model analyses from the GEOS-5 model with detailed atmospheric chemistry and observations from MLS (O3), OMI (O3 and NO2), and MOPITT (CO). We show results from a variety of assimilation test experiments, highlighting the improvements in the representation of model species concentrations by up to 50% compared to an assimilation-free control experiment. Taking into account the rapid chemical cycling of NO2 when applying the assimilation increments greatly improves assimilation skills for NO2 and provides large benefits for model concentrations near the surface. Analysis of the geospatial distribution of the assimilation increments suggests that the free-running model overestimates biomass burning emissions but underestimates lightning NOx emissions by 5-20%. We discuss the capability of the chemical data assimilation system to improve atmospheric composition forecasts through improved initial value and boundary condition inputs, particularly during air pollution events. We find that the current assimilation system meaningfully improves short-term forecasts (1-3 days). For longer-term forecasts more emphasis on updating the emissions instead of initial concentration fields is needed.
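The core idea of an assimilation increment can be shown with a scalar optimal-interpolation step: the analysis blends a model background with an observation according to their error variances, and the increment is the resulting shift. This textbook sketch is only illustrative and vastly simpler than the GEOS-5 DAS.

```python
def analysis_update(background, observation, var_b, var_o):
    """Scalar optimal-interpolation step. `var_b`/`var_o` are the
    background and observation error variances; the gain is the
    scalar Kalman gain for a direct (identity) observation."""
    gain = var_b / (var_b + var_o)
    analysis = background + gain * (observation - background)
    increment = analysis - background
    return analysis, increment
```

With equal error variances the analysis lands halfway between background and observation; a perfectly trusted background (zero variance) receives a zero increment.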
Triplett, Patrick; Dearholt, Sandra; Cooper, Mary; Herzke, John; Johnson, Erin; Parks, Joyce; Sullivan, Patricia; Taylor, Karin F; Rohde, Judith
Rising acuity levels in inpatient settings have led to growing reliance on observers and increased the cost of care. The goals were to minimize the use of observers, maintain quality and safety of care, and improve bed access without increasing cost. Nursing staff on two inpatient psychiatric units at an academic medical center pilot-tested the use of a "milieu manager" to address rising patient acuity and growing reliance on observers. Nursing cost, occupancy, discharge volume, unit closures, observer expense, and incremental nursing costs were tracked. Staff satisfaction and reported patient behavioral/safety events were assessed. The pilot initiatives ran for 8 months. Unit/bed closures fell to zero on both units. Occupancy, patient days, and discharges increased. Incremental nursing cost was offset by reduction in observer expense and by revenue from increases in occupancy and patient days. Staff work satisfaction improved and measures of patient safety were unchanged. The intervention was effective in reducing observation expense and improved occupancy and patient days while maintaining patient safety, representing a cost-effective and safe approach for management of acuity on inpatient psychiatric units.
Wallis, Thomas S. A.; Dorr, Michael; Bex, Peter J.
2015-01-01
Sensitivity to luminance contrast is a prerequisite for all but the simplest visual systems. To examine contrast increment detection performance in a way that approximates the natural environmental input of the human visual system, we presented contrast increments gaze-contingently within naturalistic video freely viewed by observers. A band-limited contrast increment was applied to a local region of the video relative to the observer's current gaze point, and the observer made a forced-choice response to the location of the target (≈25,000 trials across five observers). We present exploratory analyses showing that performance improved as a function of the magnitude of the increment and depended on the direction of eye movements relative to the target location, the timing of eye movements relative to target presentation, and the spatiotemporal image structure at the target location. Contrast discrimination performance can be modeled by assuming that the underlying contrast response is an accelerating nonlinearity (arising from a nonlinear transducer or gain control). We implemented one such model and examined the posterior over model parameters, estimated using Markov-chain Monte Carlo methods. The parameters were poorly constrained by our data; parameters constrained using strong priors taken from previous research showed poor cross-validated prediction performance. Atheoretical logistic regression models were better constrained and provided similar prediction performance to the nonlinear transducer model. Finally, we explored the properties of an extended logistic regression that incorporates both eye movement and image content features. Models of contrast transduction may be better constrained by incorporating data from both artificial and natural contrast perception settings. PMID:26057546
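The accelerating-nonlinearity account of contrast discrimination mentioned above can be sketched with a standard nonlinear-transducer form: an increment is detectable once it raises the internal response by a fixed criterion, which for an accelerating front end predicts facilitation by a small pedestal (the classic "dipper"). Parameter values below are illustrative, not the fitted posteriors from the study.

```python
def transducer(c, p=2.4, q=2.0, z=0.01):
    """Accelerating-then-compressive contrast response,
    r(c) = c^p / (c^q + z), a common nonlinear-transducer form."""
    return c ** p / (c ** q + z)

def increment_threshold(pedestal, delta_r=0.01, step=1e-4):
    """Smallest contrast increment that raises the transducer
    response by the criterion delta_r (brute-force search)."""
    base = transducer(pedestal)
    inc = 0.0
    while transducer(pedestal + inc) - base < delta_r:
        inc += step
    return inc
```

Because the response accelerates at low contrast, the increment threshold on a small pedestal is lower than at zero pedestal, reproducing the facilitation branch of the dipper function.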
Riu, Marta; Chiarello, Pietro; Terradas, Roser; Sala, Maria; Garcia-Alzorriz, Enric; Castells, Xavier; Grau, Santiago; Cots, Francesc
2017-04-01
To estimate the incremental cost of nosocomial bacteremia according to the causative focus and classified by the antibiotic sensitivity of the microorganism. Patients admitted to Hospital del Mar in Barcelona from 2005 to 2012 were included. We analyzed the total hospital costs of patients with nosocomial bacteremia caused by microorganisms with a high prevalence and, often, with multidrug resistance. A control group was defined by selecting patients without bacteremia in the same diagnosis-related group. Our hospital has a cost accounting system (full-costing) that uses activity-based criteria to estimate per-patient costs. A logistic regression was fitted to estimate the probability of developing bacteremia (propensity score) and was used for propensity-score matching adjustment. This propensity score was included in an econometric model to adjust the incremental cost of patients with bacteremia with differentiation of the causative focus and antibiotic sensitivity. The mean incremental cost was estimated at €15,526. The lowest incremental cost corresponded to bacteremia caused by multidrug-sensitive urinary infection (€6786) and the highest to primary or unknown sources of bacteremia caused by multidrug-resistant microorganisms (€29,186). This is one of the first analyses to include all episodes of bacteremia produced during hospital stays in a single study. The study included accurate information about the focus and antibiotic sensitivity of the causative organism and actual hospital costs. It provides information that could be useful to improve, establish, and prioritize prevention strategies for nosocomial infections.
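The propensity-score matching adjustment described above pairs each bacteremia patient with a control whose estimated probability of bacteremia is similar. A greedy 1:1 nearest-neighbour sketch on precomputed scores follows; the caliper value and all names are assumptions, as the abstract does not give the study's exact matching algorithm.

```python
def match_by_propensity(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on precomputed propensity
    scores. `treated` and `controls` are lists of (id, score) pairs;
    each control is used at most once, and matches farther than the
    caliper are refused."""
    available = dict(controls)
    pairs = []
    # match highest-scoring treated cases first, a common heuristic
    for tid, ts in sorted(treated, key=lambda x: x[1], reverse=True):
        best, best_d = None, caliper
        for cid, cs in available.items():
            d = abs(ts - cs)
            if d <= best_d:
                best, best_d = cid, d
        if best is not None:
            pairs.append((tid, best))
            del available[best]
    return pairs
```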
Parker, Matthew D; Jones, Lynette A; Hunter, Ian W; Taberner, A J; Nash, M P; Nielsen, P M F
2017-01-01
A triaxial force-sensitive microrobot was developed to dynamically perturb skin in multiple deformation modes, in vivo. Wiener static nonlinear identification was used to extract the linear dynamics and static nonlinearity of the force-displacement behavior of skin. Stochastic input forces were applied to the volar forearm and thenar eminence of the hand, producing probe tip perturbations in indentation and tangential extension. Wiener static nonlinear approaches reproduced the resulting displacements with variances accounted for (VAF) ranging from 94% to 97%, indicating a good fit to the data. These approaches provided VAF improvements of 0.1-3.4% over linear models. Thenar eminence stiffness measures were approximately twice those measured on the forearm. Damping was shown to be significantly higher on the palm, whereas the perturbed mass typically was lower. Coefficients of variation (CVs) for nonlinear parameters were assessed within and across individuals. Individual CVs ranged from 2% to 11% for indentation and from 2% to 19% for extension. Stochastic perturbations with incrementally increasing mean amplitudes were applied to the same test areas. Differences between full-scale and incremental reduced-scale perturbations were investigated, along with different incremental preloading schemes; no significant difference in parameters was found between the preloading schemes. Incremental schemes provided depth-dependent estimates of stiffness and damping, ranging from 300 N/m and 2 Ns/m, respectively, at the surface to 5 kN/m and 50 Ns/m at greater depths. The device and techniques used in this research have potential applications in areas such as evaluating skincare products, assessing skin hydration, or analyzing wound healing.
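The variance-accounted-for (VAF) figure quoted above is a standard goodness-of-fit measure in system identification: 100 × (1 − var(residual)/var(measured)). A minimal sketch:

```python
def vaf(measured, predicted):
    """Variance accounted for (%): 100 * (1 - var(residual) / var(measured)).
    100% means a perfect fit; lower values mean more unexplained variance."""
    n = len(measured)
    def var(x):
        m = sum(x) / n
        return sum((v - m) ** 2 for v in x) / n
    resid = [a - b for a, b in zip(measured, predicted)]
    return 100.0 * (1.0 - var(resid) / var(measured))
```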
Bevacizumab in Treatment of High-Risk Ovarian Cancer—A Cost-Effectiveness Analysis
Herzog, Thomas J.; Hu, Lilian; Monk, Bradley J.; Kiet, Tuyen; Blansit, Kevin; Kapp, Daniel S.; Yu, Xinhua
2014-01-01
Objective. The objective of this study was to evaluate a cost-effectiveness strategy of bevacizumab in a subset of high-risk advanced ovarian cancer patients with survival benefit. Methods. A subset analysis of the International Collaboration on Ovarian Neoplasms 7 trial showed that additions of bevacizumab (B) and maintenance bevacizumab (mB) to paclitaxel (P) and carboplatin (C) improved the overall survival (OS) of high-risk advanced cancer patients. Actual and estimated costs of treatment were determined from Medicare payments. The incremental cost-effectiveness ratio per life-year saved was calculated. Results. The estimated cost of PC is $535 per cycle; PCB + mB (7.5 mg/kg) is $3,760 per cycle for the first 6 cycles and then $3,225 per cycle for 12 mB cycles. Among 465 high-risk stage IIIC (>1 cm residual) or stage IV patients, previously reported OS was 28.8 months after PC versus 36.6 months after PCB + mB. With an estimated 8-month improvement in OS, the incremental cost-effectiveness ratio of B was $167,771 per life-year saved. Conclusion. In this clinically relevant subset of women with high-risk advanced ovarian cancer with overall survival benefit after bevacizumab, our economic model suggests that the incremental cost of bevacizumab was approximately $170,000 per life-year saved. PMID:24721817
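The incremental cost-effectiveness ratio is simply incremental cost divided by incremental effectiveness. Using only the per-cycle drug costs and median survivals quoted above yields a smaller figure than the published $167,771 per life-year saved, which rests on a fuller cost model; the sketch below is arithmetic illustration only.

```python
def icer(cost_new, cost_old, ly_new, ly_old):
    """Incremental cost-effectiveness ratio: extra cost per life-year saved."""
    return (cost_new - cost_old) / (ly_new - ly_old)

# per-cycle drug costs from the abstract:
cost_pc = 535 * 6                       # PC, 6 cycles
cost_pcb = 3760 * 6 + 3225 * 12         # PCB for 6 cycles + 12 mB cycles

# median OS converted from months to life-years
ratio = icer(cost_pcb, cost_pc, 36.6 / 12, 28.8 / 12)
```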
Soh, Harold; Demiris, Yiannis
2014-01-01
Human beings not only possess the remarkable ability to distinguish objects through tactile feedback but are further able to improve upon recognition competence through experience. In this work, we explore tactile-based object recognition with learners capable of incremental learning. Using the sparse online infinite Echo-State Gaussian process (OIESGP), we propose and compare two novel discriminative and generative tactile learners that produce probability distributions over objects during object grasping/palpation. To enable iterative improvement, our online methods incorporate training samples as they become available. We also describe incremental unsupervised learning mechanisms, based on novelty scores and extreme value theory, when teacher labels are not available. We present experimental results for both supervised and unsupervised learning tasks using the iCub humanoid, with tactile sensors on its five-fingered anthropomorphic hand, and 10 different object classes. Our classifiers perform comparably to state-of-the-art methods (C4.5 and SVM classifiers), and our findings indicate that tactile signals are highly relevant for making accurate object classifications. We also show that accurate "early" classifications are possible using only 20-30 percent of the grasp sequence. For unsupervised learning, our methods generate high-quality clusterings relative to the widely-used sequential k-means and self-organising map (SOM), and we present analyses into the differences between the approaches.
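The sequential k-means baseline mentioned above updates one centroid per incoming sample, which is what makes it suitable as an incremental-learning comparison. A MacQueen-style sketch (initial centroids and data are illustrative, not from the study):

```python
def sequential_kmeans(stream, centroids):
    """MacQueen-style online k-means: each incoming sample nudges its
    nearest centroid toward itself with a decreasing 1/n learning rate."""
    counts = [1] * len(centroids)
    cents = [list(c) for c in centroids]
    for x in stream:
        # index of the nearest centroid (squared Euclidean distance)
        j = min(range(len(cents)),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(x, cents[k])))
        counts[j] += 1
        lr = 1.0 / counts[j]
        cents[j] = [c + lr * (a - c) for c, a in zip(cents[j], x)]
    return cents
```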
Dynamics of catalytic tubular microjet engines: Dependence on geometry and chemical environment
NASA Astrophysics Data System (ADS)
Li
2011-12-01
Strain-engineered tubular microjet engines with various geometric dimensions hold interesting autonomous motions in an aqueous fuel solution when propelled by catalytic decomposition of hydrogen peroxide to oxygen and water. The catalytically-generated oxygen bubbles expelled from microtubular cavities propel the microjet step by step in discrete increments. We focus on the dynamics of our tubular microjets in one step and build up a body deformation model to elucidate the interaction between tubular microjets and the bubbles they produce. The average microjet velocity is calculated analytically based on our model and the obtained results demonstrate that the velocity of the microjet increases linearly with the concentration of hydrogen peroxide. The geometric dimensions of the microjet, such as length and radius, also influence its dynamic characteristics significantly. A close consistency between experimental and calculated results is achieved despite a small deviation due to the existence of an approximation in the model. The results presented in this work improve our understanding regarding catalytic motions of tubular microjets and demonstrate the controllability of the microjet which may have potential applications in drug delivery and biology.
Electronic supplementary information (ESI) available: I. Video of the catalytic motion of a typical microjet moving in a linear way. II. Detailed numerical analyses: Reynolds number calculation, displacement of the microjet and the bubble after separation, and example of experimental velocity calculation. See DOI: 10.1039/c1nr10840a
Estrada-Urbina, Juan; Cruz-Alonso, Alejandro; Santander-González, Martha; Vázquez-Durán, Alma
2018-01-01
In this research, quasi-spherical-shaped zinc oxide nanoparticles (ZnO NPs) were synthesized by a simple cost-competitive aqueous precipitation method. The engineered NPs were characterized using several validation methodologies: UV–Vis spectroscopy, diffuse reflection UV–Vis, spectrofluorometry, transmission electron microscopy (TEM), nanoparticle tracking analysis (NTA), and Fourier transform infrared (FTIR) spectroscopy with attenuated total reflection (ATR). A procedure was established to coat a landrace of red maize using gelatinized maize starch. Each maize seed was treated with 0.16 mg ZnO NPs (~7.7 × 10^9 particles). The standard germination (SG) and accelerated aging (AA) tests indicated that ZnO NP-treated maize seeds presented better physiological quality (higher percentage of normal seedlings) and sanitary quality (lower percentage of seeds contaminated by microorganisms) as compared to controls. The application of ZnO NPs also improved seedling vigor, correlated to shoot length, shoot diameter, root length, and number of secondary roots. Furthermore, shoots and roots of the ZnO NP-treated maize seeds showed a marked increment in the main active FTIR band areas, most notably for the vibrations associated with peptide-protein, lipid, lignin, polysaccharide, hemicellulose, cellulose, and carbohydrate. From these results, it is concluded that ZnO NPs have potential for applications in peasant agriculture to improve the quality of small-scale farmers’ seeds and, as a result, preserve germplasm resources. PMID:29673162
Corso, Phaedra S.; Ingels, Justin B.; Kogan, Steven M.; Foster, E. Michael; Chen, Yi-Fu; Brody, Gene H.
2013-01-01
Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95% confidence interval) incremental difference was $2149 ($397, $3901). With the probabilistic sensitivity analysis approach, the incremental difference was $2583 ($778, $4346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention. PMID:23299559
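The probabilistic-sensitivity-analysis approach described above can be sketched as a Monte Carlo draw over per-component cost distributions. Gamma distributions fitted by method of moments are used here as a common choice for skewed cost data; the actual distributions used in the study are not specified in the abstract.

```python
import random

def psa_total_cost(components, n_draws=10000, seed=1):
    """Probabilistic sensitivity analysis: draw each cost component from
    a gamma distribution matched to its (mean, sd), sum the draws, and
    report the mean and a 95% interval of total programmatic cost."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_draws):
        total = 0.0
        for mean, sd in components:
            shape = (mean / sd) ** 2        # method-of-moments gamma fit
            scale = sd ** 2 / mean
            total += rng.gammavariate(shape, scale)
        totals.append(total)
    totals.sort()
    return (sum(totals) / n_draws,
            totals[int(0.025 * n_draws)],   # 2.5th percentile
            totals[int(0.975 * n_draws)])   # 97.5th percentile
```

The width of the resulting interval is how this approach expresses the variability that the abstract contrasts with multiple imputation.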
Net Reclassification Indices for Evaluating Risk-Prediction Instruments: A Critical Review
Kerr, Kathleen F.; Wang, Zheyu; Janes, Holly; McClelland, Robyn L.; Psaty, Bruce M.; Pepe, Margaret S.
2014-01-01
Net reclassification indices have recently become popular statistics for measuring the prediction increment of new biomarkers. We review the various types of net reclassification indices and their correct interpretations. We evaluate the advantages and disadvantages of quantifying the prediction increment with these indices. For pre-defined risk categories, we relate net reclassification indices to existing measures of the prediction increment. We also consider statistical methodology for constructing confidence intervals for net reclassification indices and evaluate the merits of hypothesis testing based on such indices. We recommend that investigators using net reclassification indices should report them separately for events (cases) and nonevents (controls). When there are two risk categories, the components of net reclassification indices are the same as the changes in the true-positive and false-positive rates. We advocate use of true- and false-positive rates and suggest it is more useful for investigators to retain the existing, descriptive terms. When there are three or more risk categories, we recommend against net reclassification indices because they do not adequately account for clinically important differences in shifts among risk categories. The category-free net reclassification index is a new descriptive device designed to avoid pre-defined risk categories. However, it suffers from many of the same problems as other measures such as the area under the receiver operating characteristic curve. In addition, the category-free index can mislead investigators by overstating the incremental value of a biomarker, even in independent validation data. When investigators want to test a null hypothesis of no prediction increment, the well-established tests for coefficients in the regression model are superior to the net reclassification index. 
If investigators want to use net reclassification indices, confidence intervals should be calculated using bootstrap methods rather than published variance formulas. The preferred single-number summary of the prediction increment is the improvement in net benefit. PMID:24240655
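For the two-category case described above, the net reclassification index components reduce to changes in the true- and false-positive rates; a minimal sketch of that identity (the function name, toy data, and threshold are illustrative, not from the review):

```python
import numpy as np

def two_category_nri(y, old_risk, new_risk, threshold):
    """Event/non-event NRI with a single risk threshold.

    With two categories, the event component equals the change in the
    true-positive rate and the non-event component equals minus the
    change in the false-positive rate.
    """
    y = np.asarray(y, dtype=bool)
    old_hi = np.asarray(old_risk) >= threshold
    new_hi = np.asarray(new_risk) >= threshold
    event_nri = new_hi[y].mean() - old_hi[y].mean()        # = delta TPR
    nonevent_nri = old_hi[~y].mean() - new_hi[~y].mean()   # = -delta FPR
    return event_nri, nonevent_nri
```

Reporting the two components separately, as the review recommends, is then just a matter of returning them rather than their sum.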
Beyond Making the Case, Creating the Space for Innovation.
Boston-Fleischhauer, Carol
2016-06-01
Disruptive innovation results in something different from incremental change. Instead of focusing on improving an existing process, product, or service through performance improvement, disruptive innovation disregards status quo service or work and uses very different approaches to change, seeking to design the product, process, or service from the consumer's perspective rather than the provider's. By its very nature, disruptive innovation provokes organizational, professional, and cultural controversy.
Method and apparatus for continuous electrophoresis
Watson, Jack S.
1992-01-01
A method and apparatus for conducting continuous separation of substances by electrophoresis are disclosed. The process involves electrophoretic separation combined with Couette flow in a thin volume defined by opposing surfaces. By alternating the polarity of the applied potential and producing reciprocating short rotations of at least one of the surfaces relative to the other, small increments of separation accumulate to cause substantial, useful segregation of electrophoretically separable components in a continuous flow system.
NASA Astrophysics Data System (ADS)
Khodaei, Mohammad; Fathi, Mohammadhossein; Meratian, Mahmood; Savabi, Omid
2018-05-01
Both reducing the elastic modulus and improving biological fixation to the bone are possible by using porous scaffolds. In the present study, porous titanium scaffolds containing different porosities were fabricated using the space holder method. Pore distribution, phase composition, and mechanical properties of the titanium scaffolds were studied by scanning electron microscopy (SEM), X-ray diffraction (XRD), and cold compression testing. The compression test results were then compared to the Gibson-Ashby model. Both the experimentally measured and the analytically calculated elastic moduli of the porous titanium scaffolds decreased with increasing porosity, and the agreement between the measured and calculated values also increased with increasing porosity.
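The Gibson-Ashby comparison above rests on a scaling law of the form E = C · E_s · (1 − p)², where p is porosity and E_s is the modulus of the dense solid; a minimal sketch, with the dense-titanium modulus and prefactor as illustrative assumptions rather than the study's fitted values:

```python
def gibson_ashby_modulus(porosity, E_solid=110.0, C=1.0):
    """Gibson-Ashby estimate of the elastic modulus (GPa) of a porous solid.

    E_solid ~ 110 GPa approximates dense titanium; C is a geometry-dependent
    prefactor, taken as 1.0 here for illustration.
    """
    relative_density = 1.0 - porosity
    return C * E_solid * relative_density ** 2
```

The quadratic dependence on relative density is what makes the calculated modulus fall steeply as porosity increases, matching the trend reported above.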
Parent Praise to 1-3 Year-Olds Predicts Children’s Motivational Frameworks 5 Years Later
Gunderson, Elizabeth A.; Gripshover, Sarah J.; Romero, Carissa; Dweck, Carol S.; Goldin-Meadow, Susan; Levine, Susan C.
2013-01-01
In laboratory studies, praising children’s effort encourages them to adopt incremental motivational frameworks—they believe ability is malleable, attribute success to hard work, enjoy challenges, and generate strategies for improvement. In contrast, praising children’s inherent abilities encourages them to adopt fixed-ability frameworks. Does the praise parents spontaneously give children at home show the same effects? Although parents’ early praise of inherent characteristics was not associated with children’s later fixed-ability frameworks, parents’ praise of children’s effort at 14-38 months (N=53) did predict incremental frameworks at 7-8 years, suggesting that causal mechanisms identified in experimental work may be operating in home environments. PMID:23397904
Incremental Costs and Performance Benefits of Various Features of Concrete Pavements
DOT National Transportation Integrated Search
2004-04-01
Various design features (such as dowel bars, tied shoulders, or drainable bases) may be added to a portland cement concrete (PCC) pavement design to improve its overall performance by maintaining a higher level of serviceability or by extending its s...
Administration Planning for Tomorrow.
ERIC Educational Resources Information Center
Murray, Albert
After defining administrative planning and outlining deficits and gains of the past 20 years in American schooling, this address underlines the necessity for educational restructuring. Specifically, educational leaders need to: (1) gather data determining the status quo and suggest incremental improvements; (2) address new solvable challenges and…
Dong, Hengjin; Buxton, Martin
2006-01-01
The objective of this study is to apply a Markov model to compare cost-effectiveness of total knee replacement (TKR) using computer-assisted surgery (CAS) with that of TKR using a conventional manual method in the absence of formal clinical trial evidence. A structured search was carried out to identify evidence relating to the clinical outcome, cost, and effectiveness of TKR. Nine Markov states were identified based on the progress of the disease after TKR. Effectiveness was expressed by quality-adjusted life years (QALYs). The simulation was carried out initially for 120 cycles of a month each, starting with 1,000 TKRs. A discount rate of 3.5 percent was used for both cost and effectiveness in the incremental cost-effectiveness analysis. Then, a probabilistic sensitivity analysis was carried out using a Monte Carlo approach with 10,000 iterations. Computer-assisted TKR was a long-term cost-effective technology, but the QALYs gained were small. After the first 2 years, computer-assisted TKR was dominant, as it was both cheaper and produced more QALYs. The incremental cost-effectiveness ratio (ICER) was sensitive to the "effect of CAS," to the CAS extra cost, and to the utility of the state "Normal health after primary TKR," but it was not sensitive to utilities of other Markov states. Both probabilistic and deterministic analyses produced similar cumulative serious or minor complication rates and complex or simple revision rates. They also produced similar ICERs. Compared with conventional TKR, computer-assisted TKR is a cost-saving technology in the long term and may offer small additional QALYs. The "effect of CAS" is to reduce revision rates and complications through more accurate and precise alignment, and although the conclusions from the model, even when allowing for a full probabilistic analysis of uncertainty, are clear, the "effect of CAS" on the rate of revisions awaits long-term clinical evidence.
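The cohort simulation described (monthly cycles, 3.5% annual discounting of costs and QALYs) can be sketched as follows; the three states, transition probabilities, costs, and utilities here are hypothetical placeholders, not the study's nine-state inputs:

```python
import numpy as np

def run_markov(P, state_costs, state_utils, start, cycles=120, annual_disc=0.035):
    """Discounted cost and QALY totals for a Markov cohort over monthly cycles."""
    m_disc = (1 + annual_disc) ** (1 / 12) - 1   # monthly discount rate
    dist = np.asarray(start, dtype=float)        # cohort distribution over states
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        df = 1.0 / (1 + m_disc) ** t             # discount factor for cycle t
        total_cost += df * dist @ state_costs
        total_qaly += df * dist @ state_utils / 12  # annual utility accrued monthly
        dist = dist @ P                          # advance cohort one cycle
    return total_cost, total_qaly

# Hypothetical 3-state example: well, revision, dead (absorbing)
P = np.array([[0.995, 0.003, 0.002],
              [0.850, 0.130, 0.020],
              [0.000, 0.000, 1.000]])
costs = np.array([10.0, 400.0, 0.0])   # per-cycle costs per state
utils = np.array([0.85, 0.60, 0.0])    # annual utilities per state
cost, qaly = run_markov(P, costs, utils, [1.0, 0.0, 0.0])
```

Running two such models (with and without the "effect of CAS" on revision rates) and differencing the outputs yields the incremental cost per QALY reported in analyses like this one.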
Beetroot juice does not enhance altitude running performance in well-trained athletes.
Arnold, Josh Timothy; Oliver, Samuel James; Lewis-Jones, Tammy Maria; Wylie, Lee John; Macdonald, Jamie Hugo
2015-06-01
We hypothesized that acute dietary nitrate (NO3⁻) provided as a concentrated beetroot juice supplement would improve endurance running performance of well-trained runners in normobaric hypoxia. Ten male runners (mean (SD): sea level maximal oxygen uptake, 66 (7) mL·kg⁻¹·min⁻¹; 10 km personal best, 36 (2) min) completed incremental exercise to exhaustion at 4000 m and a 10-km treadmill time-trial at 2500 m simulated altitude on separate days after supplementation with ∼7 mmol NO3⁻ and a placebo at 2.5 h before exercise. Oxygen cost, arterial oxygen saturation, heart rate, and ratings of perceived exertion (RPE) were determined during the incremental exercise test. Differences between treatments were determined using means [95% confidence intervals], paired sample t tests, and a probability of individual response analysis. NO3⁻ supplementation increased plasma nitrite concentration (NO3⁻, 473 (226) nmol·L⁻¹ vs. placebo, 61 (37) nmol·L⁻¹, P < 0.001) but did not alter time to exhaustion during the incremental test (NO3⁻, 402 (80) s vs. placebo, 393 (62) s, P = 0.5) or time to complete the 10-km time-trial (NO3⁻, 2862 (233) s vs. placebo, 2874 (265) s, P = 0.6). Further, no practically meaningful beneficial effect on time-trial performance was observed, as the 11 [-60 to 38] s improvement was less than the a priori determined minimum important difference (51 s), and only 3 runners experienced a "likely, probable" performance improvement. NO3⁻ also did not alter oxygen cost, arterial oxygen saturation, heart rate, or RPE. Acute dietary NO3⁻ supplementation did not consistently enhance running performance of well-trained athletes in normobaric hypoxia.
Opportunities for improving the efficiency of paediatric HIV treatment programmes
Revill, Paul A.; Walker, Simon; Mabugu, Travor; Nathoo, Kusum J.; Mugyenyi, Peter; Kekitinwa, Adeodata; Munderi, Paula; Bwakura-Dangarembizi, Mutsawashe; Musiime, Victor; Bakeera-Kitaka, Sabrina; Nahirya-Ntege, Patricia; Walker, A. Sarah; Sculpher, Mark J.; Gibb, Diana M.
2015-01-01
Objectives: To conduct two economic analyses addressing whether to: routinely monitor HIV-infected children on antiretroviral therapy (ART) clinically or with laboratory tests; continue or stop cotrimoxazole prophylaxis when children become stabilized on ART. Design and methods: The ARROW randomized trial investigated alternative strategies to deliver paediatric ART and cotrimoxazole prophylaxis in 1206 Ugandan/Zimbabwean children. Incremental cost-effectiveness and value of implementation analyses were undertaken. Scenario analyses investigated whether laboratory monitoring (CD4+ tests for efficacy monitoring; haematology/biochemistry for toxicity) could be tailored and targeted to be delivered cost-effectively. Cotrimoxazole use was examined in malaria-endemic and non-endemic settings. Results: Using all trial data, clinical monitoring delivered similar health outcomes to routine laboratory monitoring, but at a reduced cost, so was cost-effective. Continuing cotrimoxazole improved health outcomes at reduced costs. Restricting routine CD4+ monitoring to after 52 weeks following ART initiation and removing toxicity testing was associated with an incremental cost-effectiveness ratio of $6084 per quality-adjusted life-year (QALY) across all age groups, but was much lower for older children (12+ years at initiation; incremental cost-effectiveness ratio = $769/QALY). Committing resources to improve cotrimoxazole implementation appears cost-effective. A healthcare system that could pay $600/QALY should be willing to spend up to $12.0 per patient-year to ensure continued provision of cotrimoxazole. Conclusion: Clinically driven monitoring of ART is cost-effective in most circumstances. Routine laboratory monitoring is generally not cost-effective at current prices, except possibly CD4+ testing amongst adolescents initiating ART. 
Committing resources to ensure continued provision of cotrimoxazole in health facilities is more likely to represent an efficient use of resources. PMID:25396263
Bertoldi, Eduardo G; Stella, Steffan F; Rohde, Luis E; Polanczyk, Carisi A
2016-05-01
Several tests exist for diagnosing coronary artery disease, with varying accuracy and cost. We sought to provide cost-effectiveness information to aid physicians and decision-makers in selecting the most appropriate testing strategy. We used the state-transitions (Markov) model from the Brazilian public health system perspective with a lifetime horizon. Diagnostic strategies were based on exercise electrocardiography (Ex-ECG), stress echocardiography (ECHO), single-photon emission computed tomography (SPECT), computed tomography coronary angiography (CTA), or stress cardiac magnetic resonance imaging (C-MRI) as the initial test. Systematic review provided input data for test accuracy and long-term prognosis. Cost data were derived from the Brazilian public health system. Diagnostic test strategy had a small but measurable impact in quality-adjusted life-years gained. Switching from Ex-ECG to CTA-based strategies improved outcomes at an incremental cost-effectiveness ratio of 3100 international dollars per quality-adjusted life-year. ECHO-based strategies resulted in cost and effectiveness almost identical to CTA, and SPECT-based strategies were dominated because of their much higher cost. Strategies based on stress C-MRI were most effective, but the incremental cost-effectiveness ratio vs CTA was higher than the proposed willingness-to-pay threshold. Invasive strategies were dominant in the high pretest probability setting. Sensitivity analysis showed that results were sensitive to costs of CTA, ECHO, and C-MRI. Coronary CT is cost-effective for the diagnosis of coronary artery disease and should be included in the Brazilian public health system. Stress ECHO has a similar performance and is an acceptable alternative for most patients, but invasive strategies should be reserved for patients at high risk. © 2016 Wiley Periodicals, Inc.
SST algorithms in ACSPO reanalysis of AVHRR GAC data from 2002-2013
NASA Astrophysics Data System (ADS)
Petrenko, B.; Ignatov, A.; Kihai, Y.; Zhou, X.; Stroup, J.
2014-05-01
In response to a request from the NOAA Coral Reef Watch Program, the NOAA SST Team initiated reprocessing of 4 km resolution GAC data from AVHRRs flown onboard NOAA and MetOp satellites. The objective is to create a long-term Level 2 Advanced Clear-Sky Processor for Oceans (ACSPO) SST product, consistent with NOAA operations. ACSPO-Reanalysis (RAN) is used as input in the NOAA geo-polar blended Level 4 SST and potentially other Level 4 SST products. In the first stage of reprocessing (reanalysis 1, or RAN1), data from NOAA-15, -16, -17, -18, -19, and Metop-A and -B, from 2002-present have been processed with ACSPO v2.20, and matched up with quality-controlled in situ data from the in situ Quality Monitor (iQuam) version 1. The ~12-year time series of matchups was used to develop and explore the SST retrieval algorithms, with emphasis on minimizing spatial biases in retrieved SSTs, closely reproducing the magnitudes of true SST variations, and maximizing temporal, spatial, and inter-platform stability of retrieval metrics. Two types of SST algorithms were considered: conventional SST regressions and recently developed incremental regressions. The conventional equations were adopted in the EUMETSAT OSI-SAF formulation, which, according to our previous analyses, provides relatively small regional biases and a well-balanced combination of precision and sensitivity in its class. Incremental regression equations were specifically elaborated to automatically correct for model-minus-observation biases, which are always present when RTM simulations are employed. Improved temporal stability was achieved by recalculating SST coefficients from matchups on a daily basis, with a ±45-day window around the current date. This presentation describes the candidate SST algorithms considered for the next round of ACSPO reanalysis, RAN2.
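The daily recalculation of coefficients from matchups within a ±45-day window might look like the following sketch; the two-predictor linear regression form, variable names, and data are assumptions for illustration, not the ACSPO equations:

```python
import numpy as np

def daily_coeffs(days, X, y, target_day, window=45):
    """Fit regression coefficients using only matchups within +/-window days.

    days: day index of each matchup; X: predictor matrix (n, p);
    y: in situ minus first-guess SST (or retrieved target); returns
    [intercept, coefficients...] for the target day's window.
    """
    mask = np.abs(days - target_day) <= window
    Xw = np.column_stack([np.ones(mask.sum()), X[mask]])  # add intercept column
    beta, *_ = np.linalg.lstsq(Xw, y[mask], rcond=None)
    return beta
```

Sliding this window forward one day at a time yields a daily coefficient time series, which is what gives the retrievals their temporal stability.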
Constructing Space-Time Views from Fixed Size Statistical Data: Getting the Best of both Worlds
NASA Technical Reports Server (NTRS)
Schmidt, Melisa; Yan, Jerry C.
1997-01-01
Many performance monitoring tools are currently available to the super-computing community. The performance data gathered and analyzed by these tools fall under two categories: statistics and event traces. Statistical data is much more compact but lacks the probative power event traces offer. Event traces, on the other hand, can easily fill up the entire file system during execution, such that the instrumented execution may have to be terminated halfway through. In this paper, we propose an innovative methodology for performance data gathering and representation that offers a middle ground. The user can trade off tracing overhead and trace data size vs. data quality incrementally. In other words, the user will be able to limit the amount of trace collected and, at the same time, carry out some of the analysis event traces offer using space-time views for the entire execution. Two basic ideas are employed: the use of averages to replace recording data for each instance, and formulae to represent sequences associated with communication and control flow. With the help of a few simple examples, we illustrate the use of these techniques in performance tuning and compare the quality of the traces we collected vs. event traces. We found that the trace files thus obtained are indeed small, bounded, and predictable before program execution, and that the quality of the space-time views generated from these statistical data is excellent. Furthermore, experimental results showed that the formulae proposed were able to capture 100% of all the sequences associated with 11 of the 15 applications tested. The performance of the formulae can be incrementally improved by allocating more memory at run-time to learn longer sequences.
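The first idea, replacing per-instance records with running averages, can be sketched with an online mean/variance update (Welford's method; an assumed implementation detail rather than the authors' exact scheme). The trace cost becomes one fixed-size record per event type instead of one record per event:

```python
class RunningStat:
    """Fixed-size summary of an event type: count, mean, and variance,
    updated online so no individual event record needs to be stored."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations (Welford)

    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n else 0.0
```

One such object per (event type, processor) pair is bounded and predictable before execution, which is exactly the property claimed for the trace files above.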
Jarbol, Dorte Ejg; Bech, Mickael; Kragstrup, Jakob; Havelund, Troels; Schaffalitzky de Muckadell, Ove B
2006-01-01
An economic evaluation was performed of empirical antisecretory therapy versus test for Helicobacter pylori in the management of dyspepsia patients presenting in primary care. A randomized trial in 106 general practices in the County of Funen, Denmark, was designed to include prospective collection of clinical outcome measures and resource utilization data. Dyspepsia patients (n = 722) presenting in general practice with more than 2 weeks of epigastric pain or discomfort were managed according to one of three initial management strategies: (i) empirical antisecretory therapy, (ii) testing for Helicobacter pylori, or (iii) empirical antisecretory therapy, followed by Helicobacter pylori testing if symptoms improved. Cost-effectiveness and incremental cost-effectiveness ratios of the strategies were determined. The mean proportion of days without dyspeptic symptoms during the 1-year follow-up was 0.59 in the group treated with empirical antisecretory therapy, 0.57 in the H. pylori test-and-eradicate group, and 0.53 in the combination group. After 1 year, 23 percent, 26 percent, and 22 percent, respectively, were symptom-free. Applying the proportion of days without dyspeptic symptoms, the cost-effectiveness for empirical treatment, H. pylori test and the combination were 12,131 Danish kroner (DKK), 9,576 DKK, and 7,301 DKK, respectively. The incremental cost-effectiveness going from the combination strategy to empirical antisecretory treatment or H. pylori test alone was 54,783 DKK and 39,700 DKK per additional proportion of days without dyspeptic symptoms. Empirical antisecretory therapy confers a small insignificant benefit but costs more than strategies based on test for H. pylori and is probably not a cost-effective strategy for the management of dyspepsia in primary care.
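The incremental cost-effectiveness ratios reported above follow the standard definition, extra cost divided by extra effect; a minimal sketch with illustrative numbers (not the trial's per-patient inputs):

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect
    (here, per additional proportion of days without symptoms)."""
    d_effect = effect_new - effect_old
    if d_effect == 0:
        raise ValueError("equal effectiveness: ICER is undefined")
    return (cost_new - cost_old) / d_effect

# e.g. a strategy costing 1000 with effect 0.60 vs one costing 800 with 0.55
ratio = icer(1000.0, 0.60, 800.0, 0.55)   # 200 extra cost / 0.05 extra effect
```

A strategy with higher cost and lower effect than a comparator is simply dominated, and no ratio needs to be computed.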
Cost-effectiveness of a quality improvement collaborative for obstetric and newborn care in Niger.
Broughton, Edward; Saley, Zakari; Boucar, Maina; Alagane, Dondi; Hill, Kathleen; Marafa, Aicha; Asma, Yaroh; Sani, Karimou
2013-01-01
The purpose of this paper is to describe a quality improvement collaborative conducted in 33 Nigerien facilities to improve maternal and newborn care outcomes by increasing compliance with high-impact, evidence-based care standards. Intervention costs and cost-effectiveness were examined, and costs to the Niger Health Ministry (MoH) were estimated if it were to scale up the intervention to additional sites. Facility-based maternal care outcomes and costs from pre-collaborative baseline monitoring data in participating facilities from January to May 2006 were compared with outcomes and costs from the same facilities from June 2008 to September 2008. Cost data were collected from project accounting records. The MoH costs were determined from interviews with clinic managers and quality improvement teams. Effectiveness data were obtained from facilities' records. The average delivery cost decreased from $35 before to $28 after the collaborative. The USAID/HCI project's incremental cost was $2.43 per delivery. The collaborative's incremental cost-effectiveness was $147 per disability-adjusted life year averted. If the MoH spreads the intervention to other facilities, substantial cost savings and improved health outcomes can be expected. The intervention achieved significant positive health benefits for a low cost. The Niger MoH can expect approximately a 50 per cent return on its investment if it implements the collaborative in new facilities. The improvement collaborative approach can improve health and save health care resources. This is one of the first studies known to examine collaborative quality improvement and economic efficiency in a developing country.
Online Bimanual Manipulation Using Surface Electromyography and Incremental Learning.
Strazzulla, Ilaria; Nowak, Markus; Controzzi, Marco; Cipriani, Christian; Castellini, Claudio
2017-03-01
The paradigm of simultaneous and proportional myocontrol of hand prostheses is gaining momentum in the rehabilitation robotics community. As opposed to the traditional surface electromyography classification schema, in simultaneous and proportional control the desired force/torque at each degree of freedom of the hand/wrist is predicted in real time, giving the individual a more natural experience, reducing cognitive effort, and improving dexterity in daily-life activities. In this study we apply such an approach in a realistic manipulation scenario, using 10 non-linear incremental regression machines to predict the desired torques for each motor of two robotic hands. The prediction is enforced using two sets of surface electromyography electrodes and an incremental, non-linear machine learning technique called Incremental Ridge Regression with Random Fourier Features. Nine able-bodied subjects were engaged in a functional test with the aim of evaluating the performance of the system. The robotic hands were mounted on two hand/wrist orthopedic splints worn by healthy subjects and controlled online. An average completion rate of more than 95% was achieved in single-handed tasks and 84% in bimanual tasks. On average, 5 min of retraining were necessary in a total session duration of about 1 h and 40 min. This work marks a starting point for the study of bimanual manipulation with prostheses and will be continued through experiments in unilateral and bilateral upper limb amputees, thus increasing its scientific value.
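Ridge regression on Random Fourier Features, updated incrementally, can be sketched as follows, assuming the usual RFF construction (random cosine projections approximating an RBF kernel) and a one-sample update of accumulated normal equations; this is a reconstruction of the named technique, not the authors' exact implementation:

```python
import numpy as np

class IncrementalRFFRidge:
    """Sketch of incremental ridge regression on Random Fourier Features."""

    def __init__(self, n_inputs, n_features=200, gamma=2.0, lam=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        # Random projection approximating the RBF kernel exp(-gamma*||x-x'||^2)
        self.W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(n_inputs, n_features))
        self.b = rng.uniform(0.0, 2.0 * np.pi, n_features)
        self.D = n_features
        self.A = lam * np.eye(n_features)   # accumulates Phi^T Phi + lam*I
        self.c = np.zeros(n_features)       # accumulates Phi^T y

    def _phi(self, x):
        return np.sqrt(2.0 / self.D) * np.cos(np.asarray(x) @ self.W + self.b)

    def partial_fit(self, x, y):
        p = self._phi(x)
        self.A += np.outer(p, p)   # one-sample update: no stored training set
        self.c += y * p
        return self

    def predict(self, x):
        w = np.linalg.solve(self.A, self.c)
        return float(self._phi(x) @ w)
```

The incremental update is what enables the short in-session retraining described above: new samples refine A and c without refitting from scratch.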
Incremental Support Vector Machine Framework for Visual Sensor Networks
NASA Astrophysics Data System (ADS)
Awad, Mariette; Jiang, Xianhua; Motai, Yuichi
2006-12-01
Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of least square SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single camera sensing especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system which makes it even more attractive for distributed sensor networks communication.
An economic analysis of midwifery training programmes in South Kalimantan, Indonesia.
Walker, Damian; McDermott, Jeanne M; Fox-Rushby, Julia; Tanjung, Marwan; Nadjib, Mardiati; Widiatmoko, Dono; Achadi, Endang
2002-01-01
In order to improve the knowledge and skills of midwives at health facilities and those based in villages in South Kalimantan, Indonesia, three in-service training programmes were carried out during 1995-98. A scheme used for both facility and village midwives included training at training centres, peer review and continuing education. One restricted to village midwives involved an internship programme in district hospitals. The incremental cost-effectiveness of these programmes was assessed from the standpoint of the health care provider. It was estimated that the first scheme could be expanded to increase the number of competent midwives based in facilities and villages in South Kalimantan by 1% at incremental costs of US$ 764.6 and US$ 1175.7 respectively, and that replication beyond South Kalimantan could increase the number of competent midwives based in facilities and villages by 1% at incremental costs of US$ 1225.5 and US$ 1786.4 per midwife respectively. It was also estimated that the number of competent village midwives could be increased by 1% at an incremental cost of US$ 898.1 per intern if replicated elsewhere, and at a cost of US$ 146.2 per intern for expanding the scheme in South Kalimantan. It was not clear whether the training programmes were more or less cost-effective than other safe motherhood interventions because the nature of the outcome measures hindered comparison.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1996-01-01
An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
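The incremental ('delta') form replaces a direct solve of Ax = b with repeated corrections M Δx = b − Ax, where M is a cheap approximate operator (in the paper, the spatially split approximate factorization); a minimal sketch with a Jacobi-style approximation on a toy system, purely for illustration:

```python
import numpy as np

def incremental_iterate(A, b, M, x0, tol=1e-10, max_iter=500):
    """Delta/correction-form iteration: solve M*dx = residual, then x += dx.

    The exact matrix A appears only through the residual b - A@x, so an
    ill-conditioned A is never solved directly; M need only be easy to invert.
    """
    x = x0.copy()
    for _ in range(max_iter):
        r = b - A @ x                 # residual of the current iterate
        if np.linalg.norm(r) < tol:
            break
        dx = np.linalg.solve(M, r)    # cheap approximate solve: the "delta" step
        x += dx
    return x

# Toy example: diagonal (Jacobi-like) approximation M for a small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M = np.diag(np.diag(A))
x = incremental_iterate(A, b, M, np.zeros(2))
```

At convergence the residual vanishes, so the answer is independent of the choice of M; M only controls how fast the corrections shrink.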
Huang, Jui-Tzu; Cheng, Hao-Min; Yu, Wen-Chung; Lin, Yao-Ping; Sung, Shih-Hsien; Wang, Jiun-Jr; Wu, Chung-Li; Chen, Chen-Huan
2017-11-29
The excess pressure integral (XSPI), derived from analysis of the arterial pressure curve, may be a significant predictor of cardiovascular events in high-risk patients. We comprehensively investigated the prognostic value of XSPI for predicting long-term mortality in end-stage renal disease patients undergoing regular hemodialysis. A total of 267 uremic patients (50.2% female; mean age 54.2±14.9 years) receiving regular hemodialysis for more than 6 months were enrolled. Cardiovascular parameters were obtained by echocardiography and applanation tonometry. Calibrated carotid arterial pressure waveforms were analyzed according to the wave-transmission and reservoir-wave theories. Multivariable Cox proportional hazard models were constructed to account for age, sex, diabetes mellitus, albumin, body mass index, and hemodialysis treatment adequacy. Incremental utility of the parameters for risk stratification was assessed by net reclassification improvement. During a median follow-up of 15.3 years, 124 deaths (46.4%) occurred. Baseline XSPI was significantly predictive of all-cause mortality (hazard ratio per 1 SD 1.40, 95% confidence interval 1.15-1.70, P=0.0006) and cardiovascular mortality (1.47, 1.18-1.84, P=0.0006) after accounting for the covariates. The addition of XSPI to the base prognostic model significantly improved prediction of both all-cause mortality (net reclassification improvement=0.1549, P=0.0012) and cardiovascular mortality (net reclassification improvement=0.1535, P=0.0033). XSPI was superior to carotid-pulse wave velocity, forward and backward wave amplitudes, and left ventricular ejection fraction in terms of overall independent and incremental prognostic value. In end-stage renal disease patients undergoing regular hemodialysis, XSPI was significantly predictive of long-term mortality and demonstrated incremental value over conventional prognostic factors. © 2017 The Authors. 
Published on behalf of the American Heart Association, Inc., by Wiley.
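The categorical net reclassification improvement used in this study has a simple arithmetic core: it rewards upward reclassification of subjects who had the event and downward reclassification of those who did not. A minimal sketch follows; the event and non-event totals (124 and 143) follow the cohort sizes above, but the movement counts are hypothetical.

```python
def net_reclassification_improvement(up_events, down_events, n_events,
                                     up_nonevents, down_nonevents, n_nonevents):
    """Categorical NRI: net upward movement among events plus net downward
    movement among non-events (illustrative form of the standard formula)."""
    event_term = (up_events - down_events) / n_events
    nonevent_term = (down_nonevents - up_nonevents) / n_nonevents
    return event_term + nonevent_term

# Hypothetical reclassification counts, not taken from the study:
nri = net_reclassification_improvement(20, 10, 124, 15, 25, 143)
```

A positive NRI indicates that adding the new marker (here, XSPI) moves subjects into more appropriate risk categories on balance.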
Finite element solutions for crack-tip behavior in small-scale yielding
NASA Technical Reports Server (NTRS)
Tracey, D. M.
1976-01-01
The subject considered is the stress and deformation fields in a cracked elastic-plastic power law hardening material under plane strain tensile loading. An incremental plasticity finite element formulation is developed for accurate analysis of the complete field problem including the extensively deformed near tip region, the elastic-plastic region, and the remote elastic region. The formulation has general applicability and was used to solve the small scale yielding problem for a set of material hardening exponents. Distributions of stress, strain, and crack opening displacement at the crack tip and through the elastic-plastic zone are presented as a function of the elastic stress intensity factor and material properties.
Betts, Keith A; Griffith, Jenny; Friedman, Alan; Zhou, Zheng-Yi; Signorovitch, James E; Ganguli, Arijit
2016-01-01
Apremilast was recently approved for the treatment of active psoriatic arthritis (PsA). However, no studies compare apremilast with methotrexate or biologic therapies, so its relative comparative efficacy remains unknown. This study compared the response rates and incremental costs per responder associated with methotrexate, apremilast, and biologics for the treatment of active PsA. A systematic literature review was performed to identify phase 3 randomized controlled clinical trials of approved biologics, methotrexate, and apremilast in the methotrexate-naïve PsA population. Using Bayesian methods, a network meta-analysis was conducted to indirectly compare rates of achieving a ≥20% improvement in American College of Rheumatology component scores (ACR20). The number needed to treat (NNT) and the incremental costs per ACR20 responder (2014 US$) relative to placebo were estimated for each of the therapies. Three trials (MIPA for methotrexate, PALACE-4 for apremilast, and ADEPT for adalimumab) met all inclusion criteria. The NNTs relative to placebo were 2.63 for adalimumab, 6.69 for apremilast, and 8.31 for methotrexate. Among methotrexate-naïve PsA patients, the 16-week incremental costs per ACR20 responder were $3622 for methotrexate, $26,316 for adalimumab, and $45,808 for apremilast. The incremental costs per ACR20 responder were $222,488 for apremilast vs. methotrexate. Among methotrexate-naïve PsA patients, adalimumab was found to have the lowest NNT for one additional ACR20 response and methotrexate was found to have the lowest incremental costs per ACR20 responder. There was no statistical evidence of greater efficacy for apremilast vs. methotrexate. A head-to-head trial between apremilast and methotrexate is recommended to confirm this finding.
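Both summary measures in this abstract follow directly from trial response rates and drug costs: NNT is the reciprocal of the response-rate difference versus placebo, and incremental cost per responder is the cost difference divided by that same response-rate difference. The ACR20 rates and the 16-week cost figure below are hypothetical values chosen only for illustration (they happen to reproduce the reported adalimumab NNT of 2.63).

```python
def nnt(p_treatment, p_placebo):
    """Number needed to treat for one additional responder vs placebo."""
    return 1.0 / (p_treatment - p_placebo)

def cost_per_responder(cost_treatment, cost_placebo, p_treatment, p_placebo):
    """Incremental cost per additional responder vs placebo."""
    return (cost_treatment - cost_placebo) / (p_treatment - p_placebo)

# Hypothetical ACR20 response rates and 16-week costs, illustration only:
n = nnt(0.58, 0.20)
c = cost_per_responder(10000.0, 0.0, 0.58, 0.20)
```

Note that a therapy can have the lowest NNT (largest effect) while another has the lowest cost per responder, exactly the adalimumab/methotrexate split reported above.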
Lee, We-Kang; Su, Yi-An; Song, Tzu-Jiun; Chiu, Yao-Chu; Lin, Ching-Hung
2014-01-01
The Iowa Gambling Task (IGT) developed by Bechara et al. in 1994 is used to diagnose patients with ventromedial prefrontal cortex (VMPFC) lesions, and it has become a landmark in research on decision making. According to Bechara et al., the manipulation of progressive increments of monetary value can normalize the performance of patients with VMPFC lesions; thus, they developed a computerized version of the IGT. However, the empirical results showed that patients' performances did not improve as a result of this manipulation, which suggested that patients with VMPFC lesions performed myopically for future consequences. Using the original version of the IGT, some IGT studies have demonstrated that increments of monetary value significantly influence the performance of normal subjects in the IGT. However, other research has resulted in inconsistent findings. In this study, we used the computerized IGT (1X-IGT) and manipulated the value contrast of progressive increments (by designing the 10X-IGT, in which the progressive increments are 10 times as large) to investigate the influence of value contrast on the performance of normal subjects. The resulting empirical observations indicated that the value contrast (1X- vs. 10X-IGT) of the progressive increment had no effect on the performance of normal subjects. This study also provides a discussion of the issue of value in IGT-related studies. Moreover, we found the "prominent deck B phenomenon" in both versions of the IGT, which indicated that the normal subjects were guided mostly by the gain-loss frequency, rather than by the monetary value contrast. In sum, the behavioral performance of normal subjects demonstrated a low correlation with changes in monetary value, even in the 10X-IGT.
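The "prominent deck B phenomenon" discussed above turns on the contrast between expected value and gain-loss frequency in the standard IGT payoff schedule (Bechara et al., 1994). A minimal sketch of that schedule shows why: deck B pairs a negative expected value with a 90% win frequency.

```python
# Per-10-trial payoff structure of the standard IGT decks (Bechara et al., 1994):
# (gain per pick, total loss per 10 picks, number of loss trials per 10 picks)
decks = {
    "A": (100, 1250, 5),
    "B": (100, 1250, 1),
    "C": (50, 250, 5),
    "D": (50, 250, 1),
}

def expected_net_per_10(deck):
    """Net monetary outcome over a block of 10 picks from one deck."""
    gain, total_loss, _ = decks[deck]
    return gain * 10 - total_loss

def win_frequency(deck):
    """Fraction of picks with no loss, i.e. the deck's gain frequency."""
    _, _, n_losses = decks[deck]
    return (10 - n_losses) / 10
```

Subjects guided by gain-loss frequency favor decks B and D (one loss per 10 picks) even though B's expected value is as bad as A's, which is the behavioral signature reported in both the 1X- and 10X-IGT.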
[Cost-effectiveness ratio of using rapid tests for malaria diagnosis in the Peruvian Amazon].
Rosas Aguirre, Angel Martín; Llanos Zavalaga, Luis Fernando; Trelles de Belaunde, Miguel
2009-05-01
To determine the cost-effectiveness ratios of three options for diagnosing malaria at the local health provider level in 50 communities in the Peruvian Amazon. Calculation of the incremental cost-effectiveness ratios of three options for diagnosing malaria (not using rapid tests, using rapid tests, and accessing microscopy) in patients presenting with fever in 50 communities near Iquitos in the Peruvian Amazon, communities with limited access to microscopy that depend on a network of local health providers. The incremental costs and effects of the two latter options were calculated and compared with the first option (currently in use). By dividing the incremental costs by the incremental effects, the incremental cost-effectiveness ratio was calculated. Using rapid tests would save the Ministry of Health of Peru: US$191 for each new case of Plasmodium falciparum malaria treated promptly and appropriately; US$31 per new case of P. vivax malaria treated promptly and appropriately; US$1,051 per case of acute malaria averted; and US$17,655 for each death avoided. Access to microscopy by all the communities would generate an additional cost of: US$198 per new case of P. falciparum malaria treated promptly and appropriately; US$31 per new case of P. vivax malaria treated promptly and appropriately; US$1,086 per case of acute malaria averted; and US$18,255 for each death avoided. The use of rapid tests by local health providers can improve the effectiveness of malaria diagnosis in patients with fever in the 50 communities studied, at a cost lower than the current method. The recommendation is to expand the use of rapid tests among the health providers in communities similar to those studied.
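The incremental cost-effectiveness ratio used throughout this study is simply the difference in cost between two options divided by the difference in effect. A one-function sketch with hypothetical figures (not the study's underlying data):

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect.

    A negative ICER with greater effect means the new option is cost saving.
    """
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical program costs (US$) and effects (deaths avoided), illustration only:
ratio = icer(50000.0, 32345.0, 2.0, 1.0)
```

The study's per-outcome figures (e.g. US$17,655 per death avoided for rapid tests) are ICERs of this form computed against the current no-rapid-test practice.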
Philips, Zoë; Whynes, David K; Avis, Mark
2006-02-01
This paper describes an experiment to test the construct validity of contingent valuation, by eliciting women's valuations for the NHS cervical cancer screening programme. It is known that, owing to low levels of knowledge of cancer and screening in the general population, women over-estimate both the risk of disease and the efficacy of screening. The study is constructed as a randomised experiment, in which one group is provided with accurate information about cervical cancer screening, whilst the other is not. The first hypothesis supporting construct validity, that controls who perceive greater benefits from screening will offer higher valuations, is substantiated. Both groups are then provided with objective information on an improvement to the screening programme, and are asked to value the improvement as an increment to their original valuations. The second hypothesis supporting construct validity, that controls who perceive the benefits of the programme to be high already will offer lower incremental valuations, is also substantiated. Copyright 2005 John Wiley & Sons, Ltd.
The p-version of the finite element method in incremental elasto-plastic analysis
NASA Technical Reports Server (NTRS)
Holzer, Stefan M.; Yosibash, Zohar
1993-01-01
Whereas the higher-order versions of the finite element method (the p- and hp-versions) are fairly well established as highly efficient methods for monitoring and controlling the discretization error in linear problems, little has been done to exploit their benefits in elasto-plastic structural analysis. Aspects of incremental elasto-plastic finite element analysis which are particularly amenable to improvement by the p-version are discussed. These theoretical considerations are supported by several numerical experiments. First, an example for which an analytical solution is available is studied. It is demonstrated that the p-version performs very well even in cycles of elasto-plastic loading and unloading, not only compared to the traditional h-version but also with respect to the exact solution. Finally, an example of considerable practical importance - the analysis of a cold-worked lug - is presented which demonstrates how the modeling tools offered by higher-order finite element techniques can contribute to an improved approximation of practical problems.
Hospital economics of the hospitalist.
Gregory, Douglas; Baigelman, Walter; Wilson, Ira B
2003-06-01
To determine the economic impact on the hospital of a hospitalist program and to develop insights into the relative economic importance of variables such as reductions in mean length of stay and cost, improvements in throughput (patients discharged per unit time), payer methods of reimbursement, and the cost of the hospitalist program. The primary data source was Tufts-New England Medical Center in Boston. Patient demographics, utilization, cost, and revenue data were obtained from the hospital's cost accounting system and medical records. The hospitalist admitted and managed all patients during a six-week period on the general medical unit of Tufts-New England Medical Center. Reimbursement, cost, length of stay, and throughput outcomes during this period were contrasted with patients admitted to the unit in the same period in the prior year, in the preceding period, and in the following period. The hospitalist group compared with the control group demonstrated: length of stay reduced to 2.19 days from 3.45 days (p<.001); total hospital costs per admission reduced to 1,775 dollars from 2,332 dollars (p<.001); costs per day increased to 811 dollars from 679 dollars (p<.001); no differences for readmission within 30 days of discharge to extended care facilities. The hospital's expected incremental profitability with the hospitalist was -1.44 dollars per admission excluding incremental throughput effects, and it was most sensitive to changes in the ratio of per diem to case rate reimbursement. Incremental throughput with the hospitalist was estimated at 266 patients annually with an associated incremental profitability of 1.3 million dollars. Hospital interventions designed to reduce length of stay, such as the hospitalist, should be evaluated in terms of cost, throughput, and reimbursement effects. Excluding throughput effects, the hospitalist program was not economically viable due to the influence of per diem reimbursement. 
Throughput improvements occasioned by the hospitalist program with high baseline occupancy levels are substantial and tend to favor a hospitalist program.
LaPar, Damien J; Crosby, Ivan K; Rich, Jeffrey B; Fonner, Edwin; Kron, Irving L; Ailawadi, Gorav; Speir, Alan M
2013-11-01
The financial burden of postoperative morbidity after cardiac operations remains ill defined. This study evaluated the costs associated with the performance of coronary artery bypass grafting (CABG) with and without aortic valve replacement (AVR) and determined the incremental costs associated with major postoperative complications. A total of 65,534 regional patients undergoing CABG (n = 55,167) ± AVR (n = 10,367) were evaluated from 2001 to 2011. Patient-related, hospital-related, and procedure-related cost data were analyzed by use of Medicare-based cost reports. Hierarchical multivariable regression modeling was used to estimate risk-adjusted incremental cost differences in postoperative complications. The mean age was 64 years, and women accounted for 31% of patients. CABG + AVR patients had higher rates of overall complication (40% vs 35%, p < 0.001) and operative mortality (5% vs 3%, p < 0.001) than did CABG patients. CABG + AVR patients also accrued increased median postoperative lengths of stay (7 vs 5 days, p < 0.001) and total costs ($26,527 vs $24,475, p < 0.001). After mortality risk adjustment, significant positive relationships existed between total costs and major postoperative complications. Interestingly, the highest incremental costs among CABG patients included newly instituted hemodialysis ($71,833), deep sternal wound infection ($56,003), and pneumonia ($50,025). Among CABG + AVR patients, these complications along with perioperative myocardial infarction ($68,917) dominated costs. Postoperative complications after CABG ± AVR are associated with significantly increased incremental costs. The most costly complications include newly instituted hemodialysis, infectious complications, and perioperative myocardial infarction. Identification of the most common and the most costly complications provides opportunities to target improvement in patient quality and the delivery of cost-effective care. Copyright © 2013 The Society of Thoracic Surgeons. 
Published by Elsevier Inc. All rights reserved.
Brazier, Peter; Schauer, Uwe; Hamelmann, Eckard; Holmes, Steve; Pritchard, Clive; Warner, John O
2016-01-01
Chronic asthma is a significant burden for individual sufferers, adversely impacting their quality of working and social life, as well as being a major cost to the National Health Service (NHS). Temperature-controlled laminar airflow (TLA) therapy provides asthma patients at BTS/SIGN step 4/5 an add-on treatment option that is non-invasive and has been shown in clinical studies to improve quality of life for patients with poorly controlled allergic asthma. The objective of this study was to quantify the cost-effectiveness of TLA (Airsonett AB) technology as an add-on to standard asthma management drug therapy in the UK. The main performance measure of interest is the incremental cost per quality-adjusted life year (QALY) for patients using TLA in addition to usual care versus usual care alone. The incremental cost of TLA use is based on an observational clinical study monitoring the incidence of exacerbations with treatment valued using NHS cost data. The clinical effectiveness, used to derive the incremental QALY data, is based on a randomised double-blind placebo-controlled clinical trial comprising participants with an equivalent asthma condition. For a clinical cohort of asthma patients as a whole, the incremental cost-effectiveness ratio (ICER) is £8998 per QALY gained, that is, within the £20 000/QALY cost-effectiveness benchmark used by the National Institute for Health and Care Excellence (NICE). Sensitivity analysis indicates that ICER values range from £18 883/QALY for the least severe patients through to TLA being dominant, that is, cost saving as well as improving quality of life, for individuals with the most severe and poorly controlled asthma. Based on our results, Airsonett TLA is a cost-effective addition to treatment options for BTS/SIGN step 4/5 patients. For high-risk individuals with more severe and less well controlled asthma, the use of TLA therapy to reduce incidence of hospitalisation would be a cost saving to the NHS.
Huang, Jian-Wen; Cheng, Ya-Shan; Ko, Tzu-Ping; Lin, Cheng-Yen; Lai, Hui-Lin; Chen, Chun-Chi; Ma, Yanhe; Zheng, Yingying; Huang, Chun-Hsiang; Zou, Peijian; Liu, Je-Ruei; Guo, Rey-Ting
2012-04-01
1,3-1,4-β-D-Glucanase has been widely used as a feed additive to help non-ruminant animals digest plant fibers, with the potential to increase nutrition turnover rate and reduce sanitary problems. Engineering of enzymes for better thermostability is of great importance because it can not only broaden their industrial applications, but also facilitate exploring the mechanisms of enzyme stability from a structural point of view. To obtain an enzyme with higher thermostability and specific activity, structure-based rational design was carried out in this study. Eleven mutants of Fibrobacter succinogenes 1,3-1,4-β-D-glucanase were constructed in an attempt to improve the enzyme properties. In particular, the crude proteins expressed in Pichia pastoris were examined first to ensure that protein production meets the needs of industrial fermentation. The crude protein of the V18Y mutant showed a 2 °C increment in Tm, and W203Y showed a ∼30% increment in specific activity. To further investigate the structure-function relationship, some mutants were expressed and purified from P. pastoris and Escherichia coli. Notably, the specific activity of purified W203Y expressed in E. coli was 63% higher than that of the wild-type protein. The double mutant V18Y/W203Y showed the same increments in Tm and specific activity as the single mutants did. When expressed and purified from E. coli, V18Y/W203Y showed a similar pattern of thermostability increment and 75% higher specific activity. Furthermore, the apo-form and substrate complex structures of V18Y/W203Y were solved by X-ray crystallography. Analyzing the protein structures of V18Y/W203Y helps elucidate how the mutations could enhance protein stability and enzyme activity.
Tucker, F. Lee
2012-01-01
Modern breast imaging, including magnetic resonance imaging, provides an increasingly clear depiction of breast cancer extent, often with suboptimal pathologic confirmation. Pathologic findings guide management decisions, and small increments in reported tumor characteristics may rationalize significant changes in therapy and staging. Pathologic techniques to grossly examine resected breast tissue have changed little during this era of improved breast imaging and still rely primarily on gross inspection and specimen palpation. Only limited imaging information is typically conveyed to pathologists, usually in the form of wire-localization images from breast-conserving procedures. Conventional techniques of specimen dissection and section submission destroy the three-dimensional integrity of the breast anatomy and tumor distribution. These traditional methods of breast specimen examination impose unnecessary limitations on correlation with imaging studies, measurement of cancer extent, multifocality, and margin distance. Improvements in pathologic diagnosis, reporting, and correlation of breast cancer characteristics can be achieved by integrating breast imagers into the specimen examination process and by the use of large-format sections which preserve local anatomy. This paper describes the successful creation of a large-format pathology program to routinely serve all patients in a busy interdisciplinary breast center associated with a community-based nonprofit health system in the United States. PMID:23316372
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoeschele, Marc; Weitzel, Elizabeth; Backman, Christine
This project completed a modeling evaluation of a hybrid gas water heater that combines a reduced capacity tankless unit with a downsized storage tank. This product would meet a significant market need by providing a higher efficiency gas water heater solution for retrofit applications while maintaining compatibility with the 1/2 inch gas lines and standard B vents found in most homes. The TRNSYS simulation tool was used to model a base case 0.60 EF atmospheric gas storage water heater, a 0.82 EF non-condensing gas tankless water heater, an existing (high capacity) hybrid unit on the market, and an alternative hybrid unit with lower storage volume and reduced gas input requirements. Simulations were completed under a 'peak day' sizing scenario with 183 gpd hot water loads in a Minnesota winter climate case. Full-year simulations were then completed in three climates (ranging from Phoenix to Minneapolis) for three hot water load scenarios (36, 57, and 96 gpd). Model projections indicate that the alternative hybrid offers an average 4.5% efficiency improvement relative to the 0.60 EF gas storage unit across all scenarios modeled. The alternative hybrid water heater evaluated does show promise, but the current low cost of natural gas across much of the country and the relatively small incremental efficiency improvement pose challenges in initially building market demand for the product.
Suggested Best Practice for seismic monitoring and characterization of non-conventional reservoirs
NASA Astrophysics Data System (ADS)
Malin, P. E.; Bohnhoff, M.; terHeege, J. H.; Deflandre, J. P.; Sicking, C.
2017-12-01
High rates of induced seismicity and gas leakage in non-conventional production have become a growing issue of public concern. This has resulted in calls for independent monitoring before, during and after reservoir production. To date no uniform practice for it exists and few reservoirs are locally monitored at all. Nonetheless, local seismic monitoring is a pre-requisite for detecting small earthquakes, increases of which can foreshadow damaging ones and indicate gas leaks. Appropriately designed networks, including seismic reflection studies, can be used to collect these data along with Seismic Emission Tomography (SET) data, the latter significantly helping reservoir characterization and exploitation. We suggest a step-by-step procedure for implementing such networks. We describe various field kits, installations, and workflows, all aimed at avoiding damaging seismicity, serving as indicators of well stability, and improving reservoir exploitation. In step 1, a single downhole seismograph is recommended for establishing baseline seismicity before development. Subsequent steps are used to decide cost-effective ways of monitoring treatments, production, and abandonment. We include suggestions for monitoring of disposal and underground storage. We also describe how repeated SET observations improve reservoir management as well as regulatory monitoring. Moreover, SET acquisition can be included at incremental cost in active surveys or temporary passive deployments.
Jain, Anil K; Feng, Jianjiang
2011-01-01
Latent fingerprint identification is of critical importance to law enforcement agencies in identifying suspects: Latent fingerprints are inadvertent impressions left by fingers on surfaces of objects. While tremendous progress has been made in plain and rolled fingerprint matching, latent fingerprint matching continues to be a difficult problem. Poor quality of ridge impressions, small finger area, and large nonlinear distortion are the main difficulties in latent fingerprint matching compared to plain or rolled fingerprint matching. We propose a system for matching latent fingerprints found at crime scenes to rolled fingerprints enrolled in law enforcement databases. In addition to minutiae, we also use extended features, including singularity, ridge quality map, ridge flow map, ridge wavelength map, and skeleton. We tested our system by matching 258 latents in the NIST SD27 database against a background database of 29,257 rolled fingerprints obtained by combining the NIST SD4, SD14, and SD27 databases. The minutiae-based baseline rank-1 identification rate of 34.9 percent was improved to 74 percent when extended features were used. In order to evaluate the relative importance of each extended feature, these features were incrementally used in the order of their cost in marking by latent experts. The experimental results indicate that singularity, ridge quality map, and ridge flow map are the most effective features in improving the matching accuracy.
Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir
2016-05-01
Improvement of the efficiency of photovoltaic systems based on new maximum power point tracking (MPPT) algorithms is the most promising solution due to its low cost and easy implementation without equipment updating. Many MPPT methods with fixed step size have been developed. However, when atmospheric conditions change rapidly, the performance of conventional algorithms is reduced. In this paper, a new variable step size Incremental Conductance (IC) MPPT algorithm is proposed. Modeling and simulation of the conventional Incremental Conductance method and the proposed method under different operational conditions are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a Flyback converter and a control circuit using a dsPIC30F4011. Both simulation and experimental results are presented. A comparative study between the proposed variable step size and fixed step size IC MPPT methods under similar operating conditions is presented. The obtained results demonstrate the efficiency of the proposed MPPT algorithm in terms of speed of MPP tracking and accuracy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
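One update of a variable-step Incremental Conductance loop can be sketched as follows. The maximum-power condition dP/dV = 0 is equivalent to dI/dV = -I/V, and scaling the voltage step by |dP/dV| gives the variable step size, fast far from the MPP, small near it. The `scale` constant and the exact step law are illustrative assumptions, not the paper's design.

```python
def ic_mppt_step(v, i, v_prev, i_prev, v_ref, scale=0.05):
    """One update of a variable-step Incremental Conductance MPPT loop.

    v, i: present panel voltage and current; v_prev, i_prev: previous sample;
    v_ref: reference voltage passed to the converter controller.
    """
    dv = v - v_prev
    di = i - i_prev
    # dP/dV = d(VI)/dV = I + V * dI/dV; used to scale the step size
    dp_dv = i + v * (di / dv) if dv != 0 else 0.0
    step = scale * abs(dp_dv)
    if dv == 0:
        if di > 0:
            v_ref += step
        elif di < 0:
            v_ref -= step
    else:
        if di / dv > -i / v:      # left of the MPP: raise the operating voltage
            v_ref += step
        elif di / dv < -i / v:    # right of the MPP: lower it
            v_ref -= step
    return v_ref
```

With a fixed step the loop trades tracking speed against steady-state oscillation; scaling by |dP/dV| relaxes that trade-off, which is the paper's motivation.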
Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy
NASA Astrophysics Data System (ADS)
Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong
2011-06-01
With the use of adaptive optics (AO), ocular aberrations can be compensated to obtain high-resolution images of the living human retina. However, the wavefront correction is not perfect due to wavefront measurement error and hardware restrictions. Thus, it is necessary to use a deconvolution algorithm to recover the retinal images. In this paper, a blind deconvolution technique called the Incremental Wiener filter is used to restore adaptive optics confocal scanning laser ophthalmoscope (AOSLO) images. The point-spread function (PSF) measured by the wavefront sensor is used only as an initial value for our algorithm. We also realize the Incremental Wiener filter on a graphics processing unit (GPU) in real time. When the image size is 512 × 480 pixels, six iterations of our algorithm take only about 10 ms. Retinal blood vessels as well as cells in retinal images are restored by our algorithm, and the PSFs are also revised. Retinal images with and without adaptive optics are both restored. The results show that the Incremental Wiener filter reduces noise and improves image quality.
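The core of any Wiener restoration is a single frequency-domain step. The sketch below shows that classical, fixed-PSF step in 1-D; the paper's Incremental Wiener filter additionally refines the PSF across iterations, which is not reproduced here, and the toy signal and PSF are invented for illustration.

```python
import numpy as np

def wiener_restore(blurred, psf, k=0.01):
    """Classical frequency-domain Wiener restoration (1-D sketch).

    F_hat = conj(H) / (|H|^2 + k) * G, where H is the PSF spectrum,
    G the blurred-image spectrum, and k a noise-regularization constant.
    """
    H = np.fft.fft(psf, n=len(blurred))
    G = np.fft.fft(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft(F_hat))

# Toy example: blur a spike train with a narrow Gaussian PSF, then restore.
x = np.zeros(64)
x[[20, 40]] = 1.0
psf = np.exp(-0.5 * (np.arange(64) - 2) ** 2)  # narrow PSF near the origin
psf /= psf.sum()
blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))
restored = wiener_restore(blurred, psf, k=1e-3)
```

The constant k damps frequencies where |H| is small, which is what keeps the inverse filter from amplifying noise; the blind/incremental variant alternates this step with a PSF update.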
Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance
Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao
2018-01-01
Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve people's health. Considering data scale and time shifts of time series, in this paper we introduce two incremental fuzzy clustering algorithms based on a Dynamic Time Warping (DTW) distance. By adopting Single-Pass and Online patterns, our algorithms can handle large-scale time series data by splitting it into a set of chunks which are processed sequentially. Besides, our algorithms use DTW to measure the distance between pairs of time series and achieve higher clustering accuracy because DTW can determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to some existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high quality clusters and were better than all the competitors in terms of clustering accuracy. PMID:29795600
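The DTW distance at the heart of these algorithms is a standard dynamic program over all monotone alignments of the two series; a minimal 1-D sketch:

```python
import numpy as np

def dtw_distance(s, t):
    """Dynamic Time Warping distance between two 1-D series, O(len(s)*len(t))."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # the optimal alignment may match, stretch, or compress at each step
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Time-shifted copies of the same pattern align perfectly under DTW:
a = [0, 0, 1, 2, 1, 0, 0]
b = [0, 1, 2, 1, 0, 0, 0]
```

This invariance to local stretching and compression is exactly why the abstract argues DTW yields higher clustering accuracy than a pointwise distance on shifted series.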
Incremental Scheduling Engines for Human Exploration of the Cosmos
NASA Technical Reports Server (NTRS)
Jaap, John; Phillips, Shaun
2005-01-01
As humankind embarks on longer space missions farther from home, the requirements and environments for scheduling the activities performed on these missions are changing. As we begin to prepare for these missions it is appropriate to evaluate the merits and applicability of the different types of scheduling engines. Scheduling engines temporally arrange tasks onto a timeline so that all constraints and objectives are met and resources are not overbooked. Scheduling engines used to schedule space missions fall into three general categories: batch, mixed-initiative, and incremental. This paper presents an assessment of the engine types, a discussion of the impact of human exploration of the moon and Mars on planning and scheduling, and the applicability of the different types of scheduling engines. This paper will pursue the hypothesis that incremental scheduling engines may have a place in the new environment; they have the potential to reduce cost, to improve the satisfaction of those who execute or benefit from a particular timeline (the customers), and to allow astronauts to plan their own tasks and those of their companion robots.
Incremental Scheduling Engines: Cost Savings through Automation
NASA Technical Reports Server (NTRS)
Jaap, John; Phillips, Shaun
2005-01-01
As humankind embarks on longer space missions farther from home, the requirements and environments for scheduling the activities performed on these missions are changing. As we begin to prepare for these missions it is appropriate to evaluate the merits and applicability of the different types of scheduling engines. Scheduling engines temporally arrange tasks onto a timeline so that all constraints and objectives are met and resources are not overbooked. Scheduling engines used to schedule space missions fall into three general categories: batch, mixed-initiative, and incremental. This paper presents an assessment of the engine types, a discussion of the impact of human exploration of the moon and Mars on planning and scheduling, and the applicability of the different types of scheduling engines. This paper will pursue the hypothesis that incremental scheduling engines may have a place in the new environment; they have the potential to reduce cost, to improve the satisfaction of those who execute or benefit from a particular timeline (the customers), and to allow astronauts to plan their own tasks and those of their companion robots.
Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance.
Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao
2018-01-01
Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve people's health. Considering the scale and time shifts of time series data, in this paper we introduce two incremental fuzzy clustering algorithms based on the Dynamic Time Warping (DTW) distance. By adopting Single-Pass and Online patterns, our algorithms can handle large-scale time series data by splitting it into a set of chunks that are processed sequentially. Besides, our algorithms use DTW to measure the distance between pairs of time series, which encourages higher clustering accuracy because DTW can determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to several existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and outperform all the competitors in terms of clustering accuracy.
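The DTW distance the abstract relies on is standard dynamic programming. As an illustrative sketch only (not the paper's implementation, which couples DTW with incremental fuzzy C-medoids), the classic O(nm) recurrence can be written as:

```python
# Minimal sketch of the classic Dynamic Time Warping (DTW) distance:
# O(n*m) dynamic programming with absolute difference as the local cost.
# Illustrative only; the paper pairs DTW with incremental fuzzy clustering.

def dtw_distance(a, b):
    """Return the DTW distance between two numeric sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # a[i-1] matched again
                                 cost[i][j - 1],      # b[j-1] matched again
                                 cost[i - 1][j - 1])  # one-to-one match
    return cost[n][m]
```

Because the warping path may stretch or compress segments, two series that differ only by a local time shift (e.g. `[0, 0, 1]` vs. `[0, 1]`) get a distance of zero, which is what makes DTW attractive for clustering time-shifted series.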
Life Adaptation Skills Training (LAST) for persons with depression: A randomized controlled study.
Chen, Yun-Ling; Pan, Ay-Woan; Hsiung, Ping-Chuan; Chung, Lyinn; Lai, Jin-Shei; Shur-Fen Gau, Susan; Chen, Tsyr-Jang
2015-10-01
To investigate the efficacy of the "Life Adaptation Skills Training (LAST)" program for persons with depression. Sixty-eight subjects with depressive disorder were recruited from psychiatric outpatient clinics in Taipei city and randomly assigned to either an intervention group (N=33) or a control group (N=35). The intervention group received 24 sessions of the LAST program together with 24 supportive phone contacts; the control group received only the 24 phone contacts. The primary outcome measure was the World Health Organization Quality of Life-BREF-Taiwan version. Secondary outcome measures included the Occupational self-assessment, the Mastery scale, the Social support questionnaire, the Beck anxiety inventory, the Beck depression inventory-II, and the Beck scale for suicide ideation. A mixed-effects linear model was applied to analyze the incremental efficacy of the LAST program, and partial eta squared (ηp(2)) was used to examine within- and between-group effect sizes. Subjects who participated in the LAST program showed significant incremental improvements, with moderate to large between-group effect sizes, in their level of anxiety (-5.45±2.34, p<0.05; ηp(2)=0.083) and level of suicidal ideation (-3.09±1.11, p<0.01; ηp(2)=0.157) compared to the control group. The reduction in suicidal ideation was maintained for three months after the end of the intervention (-3.44±1.09, p<0.01), with a moderate between-group effect size (ηp(2)=0.101). Both groups showed significant improvement in overall QOL, overall health, physical QOL, psychological QOL, level of anxiety, and level of depression. The within-group effect sizes reached large effects in the intervention group (ηp(2)=0.328-0.544) and were larger than those of the control group. Limitations include the small sample size, the high dropout rate, the lower compliance rate in the intervention group, and the lack of a true control group.
The occupation-based LAST program, which focuses on lifestyle rearrangement and coping skills enhancement, could significantly improve the level of anxiety and suicidal ideations for persons with depression. Copyright © 2015 Elsevier B.V. All rights reserved.
Repp, B H
1999-04-01
The detectability of a deviation from metronomic timing--of a small local increment in interonset interval (IOI) duration--in a musical excerpt is subject to positional biases, or "timing expectations," that are closely related to the expressive timing (sequence of IOI durations) typically produced by musicians in performance (Repp, 1992b, 1998c, 1998d). Experiment 1 replicated this finding with some changes in procedure and showed that the perception-performance correlation is not the result of formal musical training or availability of a musical score. Experiments 2 and 3 used a synchronization task to examine the hypothesis that participants' perceptual timing expectations are due to systematic modulations in the period of a mental timekeeper that also controls perceptual-motor coordination. Indeed, there was systematic variation in the asynchronies between taps and metronomically timed musical event onsets, and this variation was correlated both with the variations in IOI increment detectability (Experiment 1) and with the typical expressive timing pattern in performance. When the music contained local IOI increments (Experiment 2), they were almost perfectly compensated for on the next tap, regardless of their detectability in Experiment 1, which suggests a perceptual-motor feedback mechanism that is sensitive to subthreshold timing deviations. Overall, the results suggest that aspects of perceived musical structure influence the predictions of mental timekeeping mechanisms, thereby creating a subliminal warping of experienced time.
Numerical simulation of high speed incremental forming of aluminum alloy
NASA Astrophysics Data System (ADS)
Giuseppina, Ambrogio; Teresa, Citrea; Luigino, Filice; Francesco, Gagliardi
2013-12-01
In this study, an innovative process is analyzed with the aim of satisfying industrial requirements such as process flexibility, differentiation and customization of products, cost reduction, minimization of execution time, and sustainable production. The attention is focused on the incremental forming process, nowadays used in fields such as rapid prototyping, the medical sector, the architectural industry, aerospace and marine applications, and the production of molds and dies. Incremental forming consists of deforming only a small region of the workpiece with a punch driven by a NC machine. SPIF is the considered variant of the process, in which the punch produces local deformation without dies and molds; consequently, the final product geometry can be changed through the control of an actuator without requiring a set of different tools. The drawback of this process is its slowness. The aim of this study is to assess the feasibility of incremental forming at high speeds. An experimental campaign was performed on a high-speed CNC lathe to test process feasibility and the influence of speed on material formability, mainly for aluminum alloys. The first results show that the material presents the same performance as in conventional-speed incremental forming and, in some cases, better behavior due to the temperature field. An accurate numerical simulation has been performed to investigate the material behavior during the high-speed process, substantially confirming the experimental evidence.
Is incremental hemodialysis ready to return on the scene? From empiricism to kinetic modelling.
Basile, Carlo; Casino, Francesco Gaetano; Kalantar-Zadeh, Kamyar
2017-08-01
Most people who make the transition to maintenance dialysis therapy are treated with a fixed dose thrice-weekly hemodialysis regimen without considering their residual kidney function (RKF). The RKF provides effective and naturally continuous clearance of both small and middle molecules, plays a major role in metabolic homeostasis, nutritional status, and cardiovascular health, and aids in fluid management. The RKF is associated with better patient survival and greater health-related quality of life, although these effects may be confounded by patient comorbidities. Preservation of the RKF requires a careful approach, including regular monitoring, avoidance of nephrotoxins, gentle control of blood pressure to avoid intradialytic hypotension, and an individualized dialysis prescription including the consideration of incremental hemodialysis. There is currently no standardized method for applying incremental hemodialysis in practice. Infrequent (once- to twice-weekly) hemodialysis regimens are often used arbitrarily, without knowing which patients would benefit the most from them or how to escalate the dialysis dose as RKF declines over time. The recently heightened interest in incremental hemodialysis has been hindered by the current limitations of the urea kinetic models (UKM) which tend to overestimate the dialysis dose required in the presence of substantial RKF. This is due to an erroneous extrapolation of the equivalence between renal urea clearance (Kru) and dialyser urea clearance (Kd), correctly assumed by the UKM, to the clinical domain. In this context, each ml/min of Kd clears the urea from the blood just as 1 ml/min of Kru does. By no means should such kinetic equivalence imply that 1 ml/min of Kd is clinically equivalent to 1 ml/min of urea clearance provided by the native kidneys. 
A recent paper by Casino and Basile suggested a variable target model (VTM) as opposed to the fixed model, because the VTM gives more clinical weight to the RKF and allows less frequent hemodialysis treatments at lower RKF. The potentially important clinical and financial implications of incremental hemodialysis render it highly promising and warrant randomized controlled trials.
Rapidly-Indexing Incremental-Angle Encoder
NASA Technical Reports Server (NTRS)
Christon, Philip R.; Meyer, Wallace W.
1989-01-01
Optoelectronic system measures relative angular position of shaft or other device to be turned, also measures absolute angular position after device turned through small angle. Relative angular position measured with fine resolution by optoelectronically counting finely- and uniformly-spaced light and dark areas on encoder disk as disk turns past position-sensing device. Also includes track containing coarsely- and nonuniformly-spaced light and dark areas, angular widths varying in proportion to absolute angular position. This second track provides gating and indexing signal.
Solution of elastic-plastic stress analysis problems by the p-version of the finite element method
NASA Technical Reports Server (NTRS)
Szabo, Barna A.; Actis, Ricardo L.; Holzer, Stefan M.
1993-01-01
The solution of small strain elastic-plastic stress analysis problems by the p-version of the finite element method is discussed. The formulation is based on the deformation theory of plasticity and the displacement method. Practical realization of controlling discretization errors for elastic-plastic problems is the main focus. Numerical examples which include comparisons between the deformation and incremental theories of plasticity under tight control of discretization errors are presented.
Edwardson, Nicholas; Bolin, Jane N; McClellan, David A; Nash, Philip P; Helduser, Janet W
2016-04-01
Demand for a wide array of colorectal cancer screening strategies continues to outpace supply. One strategy to reduce this deficit is to dramatically increase the number of primary care physicians who are trained and supportive of performing office-based colonoscopies or flexible sigmoidoscopies. This study evaluates the clinical and economic implications of training primary care physicians via family medicine residency programs to offer colorectal cancer screening services as an in-office procedure. Using previously established clinical and economic assumptions from existing literature and budget data from a local grant (2013), incremental cost-effectiveness ratios are calculated that incorporate the costs of a proposed national training program and subsequent improvements in patient compliance. Sensitivity analyses are also conducted. Baseline assumptions suggest that the intervention would produce 2394 newly trained residents who could perform 71,820 additional colonoscopies or 119,700 additional flexible sigmoidoscopies after ten years. Despite high costs associated with the national training program, incremental cost-effectiveness ratios remain well below standard willingness-to-pay thresholds under base case assumptions. Interestingly, the status quo hierarchy of preferred screening strategies is disrupted by the proposed intervention. A national overhaul of family medicine residency programs offering training for colorectal cancer screening yields satisfactory incremental cost-effectiveness ratios. However, the model places high expectations on primary care physicians to improve current compliance levels in the US. Copyright © 2016 Elsevier Inc. All rights reserved.
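The incremental cost-effectiveness ratios reported in studies like this one follow the standard definition: incremental cost divided by incremental effect. A minimal sketch with purely hypothetical numbers (not the study's data):

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra unit of
# health effect when moving from a comparator to a new intervention.
# All numbers below are hypothetical, for illustration only.

def icer(cost_new, effect_new, cost_old, effect_old):
    """Return cost per additional unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical: a training program costs $500,000 and averts 120 cases;
# the status quo costs $200,000 and averts 100 cases.
ratio = icer(500_000, 120, 200_000, 100)
print(ratio)  # 15000.0 dollars per additional case averted
```

An intervention is conventionally judged cost-effective when its ICER falls below a willingness-to-pay threshold, which is the comparison the abstract makes.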
Karamagi, Esther; Kigonya, Angella; Lawino, Anna; Marquez, Lani; Lunsford, Sarah Smith; Twinomugisha, Albert
2018-01-01
Background Uganda is working to increase voluntary medical male circumcision (VMMC) to prevent HIV infection. To support VMMC quality improvement, this study compared three methods of disseminating information to facilities on how to improve VMMC quality: M, providing a written manual; MH, providing the manual plus a handover meeting in which clinicians shared advice on implementing key changes and participated in group discussion; and MHC, providing the manual, the handover meeting, and three site visits in which a coach provided individualized guidance and mentoring on improvement. We determined the different effects these had on compliance with indicators of quality of care. Methods This controlled pre-post intervention study randomized health facility groups to receive M, MH, or MHC. Observations of VMMC performance determined compliance with quality indicators. Intervention costs per patient receiving VMMC were used in a decision-tree cost-effectiveness model to calculate the incremental cost per additional patient treated in compliance with indicators of informed consent, history taking, anesthesia administration, and post-operative instructions. Results The most intensive method (MHC) cost $28.83 per patient and produced the biggest gains in history taking (35% improvement), anesthesia administration (20% improvement), and post-operative instructions (37% improvement). The least intensive method (M; $1.13 per patient) was most efficient because it produced small gains for a very low cost. The handover meeting (MH) was the most expensive among the three interventions but did not have a corresponding positive effect on quality. Conclusion Health workers in facilities that received the VMMC improvement manual and participated in the handover meeting and coaching visits showed more improvement in VMMC quality indicators than those in the other two intervention groups.
Providing the manual alone cost the least but was also the least effective in achieving improvements. The MHC intervention is recommended for broader implementation to improve VMMC quality in Uganda. PMID:29672578
Systems Thinking for Transformational Change in Health
ERIC Educational Resources Information Center
Willis, Cameron D.; Best, Allan; Riley, Barbara; Herbert, Carol P.; Millar, John; Howland, David
2014-01-01
Incremental approaches to introducing change in Canada's health systems have not sufficiently improved the quality of services and outcomes. Further progress requires 'large system transformation', considered to be the systematic effort to generate coordinated change across organisations sharing a common vision and goal. This essay draws on…
Cycle-Based Budgeting Toolkit: A Primer
ERIC Educational Resources Information Center
Yan, Bo
2016-01-01
At the core, budgeting is about distributing and redistributing limited financial resources for continuous improvement. Incremental budgeting is limited in achieving the goal due to lack of connection between outcomes and budget decisions. Zero-based budgeting fills the gap, but is cumbersome to implement, especially for large urban school…
DOT National Transportation Integrated Search
1980-06-01
Volume 3 contains the application of the three-dimensional (3-D) finite element program, Automatic Dynamic Incremental Nonlinear Analysis (ADINA), which was designed to replace the traditional 2-D plane strain analysis, to a specific location. The lo...
Introduction to the newly released GRIN-Global
USDA-ARS?s Scientific Manuscript database
The Germplasm Resources Information Network (GRIN) is an information management system that curates data for the USDA-ARS genetic resource collections. GRIN was developed in the late 1980s and has been incrementally improved over the past 35 years. A major revision was recently deployed to service t...
Merit Goods, Education Public Policy--India at Cross Roads
ERIC Educational Resources Information Center
Misra, Satya Narayan; Ghadai, Sanjaya Ku.
2015-01-01
Merit Goods have always received handsome attention and allocation from countries which have witnessed a congruence between high significant economic growth and Human Development Index (HDI). The Emerging Market Economies (EMEs) have become significant manufacturing hubs by universalizing education and improving their Incremental Capital Output…
NASA Astrophysics Data System (ADS)
Dong, Zhengcheng; Fang, Yanjun; Tian, Meng; Kong, Zhengmin
The hierarchical structure known as the k-core is common in various complex networks, and a real network has successive layers from the 1-core layer (the peripheral layer) to the km-core layer (the core layer). The nodes within the core layer have been proved to be the most influential spreaders, but there is little work on how the depth of the k-core layers (the value of km) affects robustness against cascading failures in interdependent networks. First, following preferential attachment, a novel method is proposed to generate a scale-free network with successive k-core layers (KCBA network), and the KCBA network is validated to be more realistic than the traditional BA network. Then, with KCBA interdependent networks, the effect of the depth of the k-core layers is investigated. Considering the load-based model, the loss of capacity on nodes is adopted to quantify robustness instead of the number of nodes still functional at the end. We conduct two attacking strategies, i.e. the RO-attack (randomly remove only one node) and the RF-attack (randomly remove a fraction of nodes). Results show that the robustness of KCBA networks not only depends on the depth of the k-core layers but is also slightly influenced by the initial load. Under RO-attack, networks with fewer k-core layers are more robust when the initial load is small. Under RF-attack, robustness improves with small km, but the improvement weakens as the initial load increases. In a word, the lower the depth is, the more robust the networks will be.
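The k-core layers the paper builds on can be computed by iterative peeling. A minimal sketch of the standard decomposition (illustrative only, not the paper's KCBA network generator):

```python
# Minimal k-core decomposition by iterative peeling: repeatedly remove a
# minimum-degree node; the running maximum of removal degrees gives each
# node's core number. Illustrative sketch, not the paper's KCBA construction.
from collections import defaultdict

def core_numbers(edges):
    """Map each node to its core number (largest k with the node in the k-core)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    core, k = {}, 0
    while degree:
        v = min(degree, key=degree.get)  # peel a minimum-degree node
        k = max(k, degree[v])            # core numbers are monotone along the peel
        core[v] = k
        for u in adj[v]:
            if u in degree:
                degree[u] -= 1
        del degree[v]
    return core
```

For a triangle with one pendant node, the pendant lands in the 1-core layer and the triangle nodes in the 2-core layer, so the depth km is 2 for that graph.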
Ngabonziza, Jean Claude Semuto; Ssengooba, Willy; Mutua, Florence; Torrea, Gabriela; Dushime, Augustin; Gasana, Michel; Andre, Emmanuel; Uwamungu, Schifra; Nyaruhirira, Alaine Umubyeyi; Mwaengo, Dufton; Muvunyi, Claude Mambo
2016-11-08
Tuberculosis control program of Rwanda is currently phasing in light emitting diode-fluorescent microscopy (LED-FM) as an alternative to Ziehl-Neelsen (ZN) smear microscopy. This, alongside the newly introduced Xpert (Cepheid, Sunnyvale, CA, USA), is expected to improve the diagnosis of tuberculosis and the detection of rifampicin resistance in patients at health facilities. We assessed the accuracy of smear microscopy and the incremental sensitivity of Xpert at tuberculosis laboratories in Rwanda. This was a cross-sectional study involving four laboratories performing ZN and four performing LED-FM microscopy, comprising four intermediate laboratories (ILs) and four peripheral laboratories (PLs). After smear microscopy, the leftover portions of a single early-morning sputum sample from each of 648 participants were tested using Xpert and mycobacterial culture as the reference standard. The sensitivity of each test was compared, and the incremental sensitivity of Xpert after a negative smear was assessed. A total of 96 presumptive pulmonary tuberculosis participants were culture positive for M. tuberculosis. At the PLs, the overall sensitivity of ZN was 55.1 % (40.2-69.3 %), of LED-FM 37 % (19.4-57.6 %), and of Xpert 77.6 % (66.6-86.4 %), whereas at the ILs the corresponding values were 58.3 % (27.7-84.8 %) for ZN, 62.5 % (24.5-91.5 %) for LED-FM, and 90 % (68.3-98.8 %) for Xpert. The sensitivity of all tests was significantly higher among HIV-negative individuals (all tests p < 0.05). The overall incremental sensitivity of Xpert over smear microscopy was 32.3 % (p < 0.0001). The incremental sensitivity of Xpert was statistically significant for both smear methods at the PLs (32.9 %; p = 0.001) but not at the ILs (30 %; p = 0.125). Our findings from the early implementation of LED-FM did not reveal a significant increment in sensitivity compared to the method being phased out (ZN).
This study showed a significant incremental sensitivity for Xpert from both smear methods at peripheral centers where majority of TB patients are diagnosed. Overall our findings support the recommendation for Xpert as an initial diagnostic test in adults and children presumed to have TB.
Pinder, Margaret; Conteh, Lesong; Jeffries, David; Jones, Caroline; Knudsen, Jakob; Kandeh, Balla; Jawara, Musa; Sicuri, Elisa; D'Alessandro, Umberto; Lindsay, Steve W
2016-06-03
In malaria-endemic areas, residents of modern houses have less malaria than those living in traditional houses. This study will determine if modern housing provides incremental protection against clinical malaria over the current best practice of long-lasting insecticidal nets (LLINs) and prompt treatment in The Gambia, determine the incremental cost-effectiveness of the interventions, and analyze the housing market in The Gambia. A two-armed, household, cluster-randomized, controlled study will be conducted to assess whether improved housing and LLINs combine to provide better protection against clinical malaria in children than LLINs alone in The Gambia. The unit of randomization will be the household, defined as a house and its occupants. A total of 800 households will be enrolled and will receive LLINs, and 400 will receive improved housing before clinical follow-up. One child aged 6 months to 13 years will be enrolled from each household and followed for clinical malaria using active case detection to estimate malaria incidence for two malaria transmission seasons. Episodes of clinical malaria will be the primary endpoint. Study children will be surveyed at the end of each transmission season to estimate the prevalence of Plasmodium falciparum infection, parasite density, and the prevalence of anemia. Exposure to malaria parasites will be assessed using light traps, followed by detection of Anopheles gambiae species and sporozoite infection. Ancillary economic and social science studies will undertake a cost-effectiveness analysis and use qualitative and participatory methods to explore the acceptability of the housing modifications and to design strategies for scaling-up housing interventions. The study is the first of its kind to measure the efficacy of housing on reducing clinical malaria, assess the incremental cost-effectiveness of improved housing, and identify mechanisms for scaling up housing interventions. 
Trial findings will help inform policy makers on improved housing for malaria control in sub-Saharan Africa. ISRCTN Registry, ISRCTN02622179 . Registered on 23 September 2014.
Sawada, Takahiro; Tsubata, Hideo; Hashimoto, Naoko; Takabe, Michinori; Miyata, Taishi; Aoki, Kosuke; Yamashita, Soichiro; Oishi, Shogo; Osue, Tsuyoshi; Yokoi, Kiminobu; Tsukishiro, Yasue; Onishi, Tetsuari; Shimane, Akira; Taniguchi, Yasuyo; Yasaka, Yoshinori; Ohara, Takeshi; Kawai, Hiroya; Yokoyama, Mitsuhiro
2016-08-26
Recent experimental studies have revealed that n-3 fatty acids, such as eicosapentaenoic acid (EPA) regulate postprandial insulin secretion, and correct postprandial glucose and lipid abnormalities. However, the effects of 6-month EPA treatment on postprandial hyperglycemia and hyperlipidemia, insulin secretion, and concomitant endothelial dysfunction remain unknown in patients with impaired glucose metabolism (IGM) and coronary artery disease (CAD). We randomized 107 newly diagnosed IGM patients with CAD to receive either 1800 mg/day of EPA (EPA group, n = 53) or no EPA (n = 54). Cookie meal testing (carbohydrates: 75 g, fat: 28.5 g) and endothelial function testing using fasting-state flow-mediated dilatation (FMD) were performed before and after 6 months of treatment. The primary outcome of this study was changes in postprandial glycemic and triglyceridemic control and secondary outcomes were improvement of insulin secretion and endothelial dysfunction. After 6 months, the EPA group exhibited significant improvements in EPA/arachidonic acid, fasting triglyceride (TG), and high-density lipoprotein cholesterol (HDL-C). The EPA group also exhibited significant decreases in the incremental TG peak, area under the curve (AUC) for postprandial TG, incremental glucose peak, AUC for postprandial glucose, and improvements in glycometabolism categorization. No significant changes were observed for hemoglobin A1c and fasting plasma glucose levels. The EPA group exhibited a significant increase in AUC-immune reactive insulin/AUC-plasma glucose ratio (which indicates postprandial insulin secretory ability) and significant improvements in FMD. Multiple regression analysis revealed that decreases in the TG/HDL-C ratio and incremental TG peak were independent predictors of FMD improvement in the EPA group. EPA corrected postprandial hypertriglyceridemia, hyperglycemia and insulin secretion ability. 
This amelioration of several metabolic abnormalities was accompanied by recovery of concomitant endothelial dysfunction in newly diagnosed IGM patients with CAD. Clinical Trial Registration UMIN Registry number: UMIN000011265 ( https://www.upload.umin.ac.jp/cgi-open-bin/ctr/ctr.cgi?function=brows&action=brows&type=summary&recptno=R000013200&language=E ).
Percolator: Scalable Pattern Discovery in Dynamic Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choudhury, Sutanay; Purohit, Sumit; Lin, Peng
We demonstrate Percolator, a distributed system for graph pattern discovery in dynamic graphs. In contrast to conventional mining systems, Percolator advocates efficient pattern mining schemes that (1) support pattern detection with keywords; (2) integrate incremental and parallel pattern mining; and (3) support analytical queries such as trend analysis. The core idea of Percolator is to dynamically decide and verify a small fraction of patterns and their instances that must be inspected in response to buffered updates in dynamic graphs, with a total mining cost independent of graph size. We demonstrate a) the feasibility of incremental pattern mining by walking through each component of Percolator, b) the efficiency and scalability of Percolator over the sheer size of real-world dynamic graphs, and c) how the user-friendly GUI of Percolator interacts with users to support keyword-based queries that detect, browse and inspect trending patterns. We also demonstrate two use cases of Percolator, in social media trend analysis and academic collaboration analysis, respectively.
Xiao, Hui-Jie; Wei, Zi-Gang; Wang, Qing; Zhu, Xiao-Bo
2012-12-01
Based on the theory of harmonious development of ecological economy, a total of 13 evaluation indices were selected from the ecological, economic, and social sub-systems of the Yanqi River watershed in Huairou District, Beijing. The selected indices were normalized using trapezoid functions, and their weights were determined by the analytic hierarchy process. The eco-economic benefits of the watershed were then evaluated with the weighted composite index method. From 2004 to 2011, the ecological, economic, and social benefits of the Yanqi River watershed all increased to some extent; the ecological benefit increased most, from 0.210 in 2004 to 0.255 in 2011, an increment of 21.5%. The overall eco-economic benefit of the watershed increased from 0.734 in 2004 to 0.840 in 2011, an increment of 14.2%. At present, the watershed has reached the stage of an advanced ecosystem, exhibiting beneficial circulation and harmonious development of ecology, economy, and society.
Characteristics of a Sensitive Well Showing Pre-Earthquake Water-Level Changes
NASA Astrophysics Data System (ADS)
King, Chi-Yu
2018-04-01
Water-level data recorded at a sensitive well next to a fault in central Japan between 1989 and 1998 showed many coseismic water-level drops and a large (60 cm) and long (6-month) pre-earthquake drop before a rare local earthquake of magnitude 5.8 on 17 March 1997, as well as five smaller pre-earthquake drops during a 7-year period prior to this earthquake. The pre-earthquake changes were previously attributed to leakage through the fault-gouge zone caused by small but broad-scale crustal-stress increments. These increments now appear to be induced by some large slow-slip events. The coseismic changes are attributed to seismic shaking-induced fissures in the adjacent aquitards, in addition to leakage through the fault. The well's high sensitivity is attributed to its tapping a highly permeable aquifer, which is connected to the fractured side of the fault, and to its near-critical condition for leakage, especially during the 7 years before the magnitude 5.8 earthquake.
NASA Astrophysics Data System (ADS)
Danesh-Yazdi, Mohammad; Botter, Gianluca; Foufoula-Georgiou, Efi
2017-05-01
Lack of hydro-bio-chemical data at subcatchment scales necessitates adopting an aggregated system approach for estimating water and solute transport properties, such as residence and travel time distributions, at the catchment scale. In this work, we show that within-catchment spatial heterogeneity, as expressed in spatially variable discharge-storage relationships, can be appropriately encapsulated within a lumped time-varying stochastic Lagrangian formulation of transport. This time (variability) for space (heterogeneity) substitution yields mean travel times (MTTs) that are not significantly biased to the aggregation of spatial heterogeneity. Despite the significant variability of MTT at small spatial scales, there exists a characteristic scale above which the MTT is not impacted by the aggregation of spatial heterogeneity. Extensive simulations of randomly generated river networks reveal that the ratio between the characteristic scale and the mean incremental area is on average independent of river network topology and the spatial arrangement of incremental areas.
NASA Astrophysics Data System (ADS)
Sgambitterra, Emanuele; Piccininni, Antonio; Guglielmi, Pasquale; Ambrogio, Giuseppina; Fragomeni, Gionata; Villa, Tomaso; Palumbo, Gianfranco
2018-05-01
Cranial implants are custom prostheses characterized by quite high geometrical complexity and small thickness; at the same time, aesthetic and mechanical requirements have to be met. Titanium alloys are largely adopted for such prostheses, as they can be processed via different manufacturing technologies. In the present work, cranial prostheses were manufactured by Super Plastic Forming (SPF) and Single Point Incremental Forming (SPIF). To assess the mechanical performance of the cranial prostheses, drop tests under different load conditions were conducted on flat samples to investigate the effect of the blank thickness. Numerical simulations were also run for comparison purposes. The mechanical performance of the cranial implants manufactured by SPF and SPIF could be predicted using drop test data and information about the thickness evolution of the formed parts: the SPIFed prosthesis was found to have a lower maximum deflection and a higher maximum force, while the SPFed prostheses showed a lower absorbed energy.
Micromagnetic simulation study of magnetization reversal in torus-shaped permalloy nanorings
NASA Astrophysics Data System (ADS)
Mishra, Amaresh Chandra; Giri, R.
2017-09-01
Using micromagnetic simulation, the magnetization reversal of soft permalloy rings of torus shape with major radius R varying within 20-100 nm has been investigated. The minor radius r of the torus rings was increased from 5 nm up to a maximum value rmax such that R - rmax = 10 nm. Micromagnetic simulation of the in-plane hysteresis curves of these nanorings revealed that in the case of very thin rings (r ≤ 10 nm) the remanent state is an onion state, whereas for all other rings the remanent state is a vortex state. The area of the hysteresis loop was found to decrease gradually as r increases. The normalized area under the hysteresis loops (AN) increases initially with r, attains a maximum at a certain value r = r0, and decreases thereafter. This value r0 increases as R decreases; as a result, the peak feature is hardly visible in the case of smaller rings (rings having small R).
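As a generic illustration (not the authors' micromagnetic code), the area enclosed by a simulated M(H) loop, which scales with the hysteresis loss, can be computed from the traced loop points with the shoelace formula:

```python
def loop_area(h, m):
    """Area enclosed by a closed hysteresis loop traced by (H, M) points.

    Uses the shoelace formula; the loop is closed implicitly between the
    last and first points. h and m are equal-length sequences.
    """
    n = len(h)
    s = 0.0
    for i in range(n):
        j = (i + 1) % n
        s += h[i] * m[j] - h[j] * m[i]
    return abs(s) / 2.0

# Hypothetical square loop with corners at (+/-1, +/-1): enclosed area is 4
h = [-1.0, 1.0, 1.0, -1.0]
m = [-1.0, -1.0, 1.0, 1.0]
print(loop_area(h, m))  # → 4.0
```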
Improving BP control through electronic communications: an economic evaluation.
Fishman, Paul A; Cook, Andrea J; Anderson, Melissa L; Ralston, James D; Catz, Sheryl L; Carrell, David; Carlson, James; Green, Beverly B
2013-09-01
Web-based collaborative approaches to managing chronic illness show promise for both improving health outcomes and increasing the efficiency of the healthcare system. The objective was to analyze the cost-effectiveness of the Electronic Communications and Home Blood Pressure Monitoring to Improve Blood Pressure Control (e-BP) study, a randomized controlled trial that used a patient-shared electronic medical record, home blood pressure (BP) monitoring, and web-based pharmacist care to improve BP control (<140/90 mm Hg). Incremental cost-effectiveness analysis was conducted from a health plan perspective. Cost-effectiveness of home BP monitoring and web-based pharmacist care was estimated for percent change in patients with controlled BP and cost per mm Hg change in diastolic and systolic BP relative to usual care and home BP monitoring alone. A 1% improvement in the number of patients with controlled BP using home BP monitoring and web-based pharmacist care (the e-BP program) costs $16.65 (95% confidence interval: 15.37-17.94) relative to home BP monitoring and web training alone. Each mm Hg reduction in systolic and diastolic BP achieved through the e-BP program costs $65.29 (59.91-70.67) relative to home BP monitoring and web tools only. Life expectancy was increased at an incremental cost of $1850 (1635-2064) and $2220 (1745-2694) per year of life saved for men and women, respectively. Web-based collaborative care can be used to achieve BP control at a relatively low cost. Future research should examine the cost impact of potential long-term clinical improvements.
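The incremental cost-effectiveness arithmetic behind such figures reduces to dividing the cost difference by the effect difference. A minimal sketch with hypothetical numbers (not the study's actual cost data):

```python
def icer(cost_new, effect_new, cost_comparator, effect_comparator):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_comparator) / (effect_new - effect_comparator)

# Hypothetical: intervention costs $1000 per patient and controls BP in 60% of
# patients, versus $900 and 54% for the comparator -> cost per 1-point gain
# in the percentage of patients with controlled BP
print(round(icer(1000.0, 60.0, 900.0, 54.0), 2))  # → 16.67
```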
Single Point Incremental Forming to increase material knowledge and production flexibility
NASA Astrophysics Data System (ADS)
Habraken, A. M.
2016-08-01
Nowadays, manufactured pieces can be divided into two groups: mass production and production of a low number of parts. Within the second group (prototyping or small batch production), an emerging solution relies on Incremental Sheet Forming (ISF). ISF refers to processes where the plastic deformation occurs by repeated contact with a relatively small tool. More specifically, many publications over the past decade investigate Single Point Incremental Forming (SPIF), where the final shape is determined only by the tool movement. This manufacturing process is characterized by the forming of sheets by means of a CNC-controlled generic tool stylus, with the sheets clamped by a non-workpiece-specific clamping system, in the absence of a partial or full die. The advantages are the absence of dedicated tooling and often enhanced formability; however, the process poses challenges in terms of process control and accuracy assurance. Note that the most commonly used materials in incremental forming are aluminum and steel alloys; however, other alloys are also used, especially for medical industry applications, such as cobalt and chromium alloys, stainless steel and titanium alloys. Some scientists have applied incremental forming to PVC plates and others to sandwich panels composed of propylene with mild steel and of aluminum metallic foams with aluminum sheet metal. Micro incremental forming of thin foils has also been developed. Starting from the scatter in the results of Finite Element (FE) simulations when one tries to predict the tool force (see the SPIF benchmark of the 2014 Numisheet conference), we will see how SPIF and even micro SPIF (the process applied to thin metallic sheet with a few grains within the thickness) allow investigating the material behavior. This lecture will focus on the identification of constitutive laws, on the SPIF forming mechanisms and formability, as well as on the failure mechanism.
Different hypotheses have been proposed to explain SPIF formability; they will be listed, although the lecture will focus on the use of SPIF to identify material parameters of a well-chosen constitutive law. Results of FE simulations with damage models will be investigated to better understand the relation between the particular stress and strain states in the material during SPIF and the material degradation leading to localization or fracture. Last but not least, as industry does not wait for academic scientists to provide a deep and complete understanding of how a process works before adopting it, the lecture will review some applications. Examples in fields as different as automotive guards, engine heat shields, gas turbines, electronic sensors, shower basins, medical components (patient-fitted organic shapes) and architecture demonstrate that the integration of SPIF within industry is increasingly a reality. Note that this plenary lecture is the result of the research performed by the author at the University of Liege (Belgium) and in Aveiro (Portugal) with the team of R. de Souza during the PhD theses of C. Henrard, J. Sena and C. Guzman, and of different research projects. It is also a synthesis of the knowledge gathered during her interactions with many research teams, such as those of J.R. Duflou from KU Leuven in Belgium, J. Cao from Northwestern University in the USA, M. Bambach from BTU Cottbus-Senftenberg in Germany, and J. Jeswiet from Queen's University, Kingston, Canada, who are currently working together on a state-of-the-art paper. The micro SPIF knowledge relies on contacts with S. Thibaud from the University of Franche-Comte.
MARS Gravity-Assist to Improve Missions towards Main-Belt Asteroids
NASA Astrophysics Data System (ADS)
Casalino, Lorenzo; Colasurdo, Guido
Main-belt asteroids are one of the keys to the investigation of the processes that led to the formation of the solar system. Solar electric propulsion (SEP) with ion thrusters is a mature technology for the exploration of the solar system. NASA is currently planning the DAWN mission towards two asteroids of the main belt, with rendezvous with Vesta in 2010 and Ceres in 2014. A mission to an asteroid of the main belt requires a large velocity increment (ΔV), and the use of high-specific-impulse thrusters, such as ion thrusters, provides a large improvement of the payload and, consequently, of the scientific return of the mission. The optimization of this kind of trajectory is a non-trivial task, since many local optima exist and performance can be improved by increasing the trip time and the number of revolutions around the Sun, in order to use the propellant only in the most favorable positions (namely, perihelia, aphelia and nodes) along the trajectory. Mars is midway between the Earth and the main belt; even though its gravity is quite small, a gravity assist from Mars can remarkably improve the trajectory performance and is considered in this paper. The authors use an indirect optimization procedure based on the theory of optimal control. The motion inside the planets' (Earth and Mars) spheres of influence is neglected; the equations of motion are therefore integrated only in the heliocentric reference frame, whereas the flyby is treated as a discontinuity of the spacecraft's velocity. The paper analyzes trajectories which exploit chemical propulsion to escape from the Earth; a variable-power, constant-specific-impulse propulsion system is assumed. The optimization procedure provides departure, flyby and arrival dates, the hyperbolic excess velocity on leaving the Earth's sphere of influence, which must be provided by the chemical propulsion system, and the mass at rendezvous, when the trip time is assigned.
As far as the thrust magnitude is concerned, either full-thrust arcs or coast arcs are required, and the procedure provides the times to switch the engine on and off. In some cases Mars' gravity is too low, and the spacecraft would pass too close to (or even inside) the planet's surface to obtain a useful velocity change. A comparison of direct flight and Mars-gravity-assist trajectories is carried out in the paper. The characteristics and theoretical performance of the optimal trajectories are determined as a function of the trip time. Numerical results show that in many cases the energy and inclination increment, which is provided by Mars' gravity, can significantly reduce the propellant requirements and increase the spacecraft mass at rendezvous.
48 CFR 3452.232-71 - Incremental funding.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Incremental funding. 3452....232-71 Incremental funding. As prescribed in 3432.705-2, insert the following provision in solicitations if a cost-reimbursement contract using incremental funding is contemplated: Incremental Funding...
48 CFR 3452.232-71 - Incremental funding.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 7 2014-10-01 2014-10-01 false Incremental funding. 3452....232-71 Incremental funding. As prescribed in 3432.705-2, insert the following provision in solicitations if a cost-reimbursement contract using incremental funding is contemplated: Incremental Funding...
48 CFR 3452.232-71 - Incremental funding.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 7 2012-10-01 2012-10-01 false Incremental funding. 3452....232-71 Incremental funding. As prescribed in 3432.705-2, insert the following provision in solicitations if a cost-reimbursement contract using incremental funding is contemplated: Incremental Funding...
48 CFR 3452.232-71 - Incremental funding.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Incremental funding. 3452....232-71 Incremental funding. As prescribed in 3432.705-2, insert the following provision in solicitations if a cost-reimbursement contract using incremental funding is contemplated: Incremental Funding...
Microstructure and mechanical properties of plasma sprayed HA/YSZ/Ti-6Al-4V composite coatings.
Khor, K A; Gu, Y W; Pan, D; Cheang, P
2004-08-01
Plasma sprayed hydroxyapatite (HA) coatings on titanium alloy substrates have been used extensively due to their excellent biocompatibility and osteoconductivity. However, the erratic bond strength between HA and Ti alloy has raised concern over the long-term reliability of the implant. In this paper, HA/yttria-stabilized zirconia (YSZ)/Ti-6Al-4V composite coatings that possess superior mechanical properties to conventional plasma sprayed HA coatings were developed. Ti-6Al-4V powders coated with fine YSZ and HA particles were prepared through a unique ceramic slurry mixing method. The so-formed composite powder was employed as feedstock for plasma spraying of the HA/YSZ/Ti-6Al-4V coatings. The influence of net plasma energy, plasma spray standoff distance, and post-spray heat treatment on microstructure, phase composition and mechanical properties was investigated. Coatings prepared under the optimum plasma spray conditions exhibited a well-defined splat structure. An HA/YSZ/Ti-6Al-4V solid solution was formed during plasma spraying, which was beneficial for the improvement of mechanical properties. There was no evidence of Ti oxidation from the successful processing of YSZ- and HA-coated Ti-6Al-4V composite powders. A small amount of CaO, apart from HA, ZrO2 and Ti, was present in the composite coatings. The microhardness, Young's modulus, fracture toughness and bond strength increased significantly with the addition of YSZ. Post-spray heat treatment at 600 °C and 700 °C for up to 12 h was found to further improve the mechanical properties of the coatings. After the post-spray heat treatment, a 17.6% increment in Young's modulus (E) and a 16.3% increment in Vickers hardness were achieved.
The strengthening mechanisms of HA/YSZ/Ti-6Al-4V composite coatings were related to the dispersion strengthening by homogeneous distribution of YSZ particles in the matrix, the good mechanical properties of Ti-6Al-4V and the formation of solid solution among HA, Ti alloy and YSZ components.
Menon, J; Mishra, P
2018-04-01
We determined incremental health care resource utilization, incremental health care expenditures, incremental absenteeism, and incremental absenteeism costs associated with osteoarthritis. The Medical Expenditure Panel Survey (MEPS) for 2011 was used as the data source. Individuals 18 years or older and employed during 2011 were eligible for inclusion in the sample for analyses. Individuals with osteoarthritis were identified based on ICD-9-CM codes. Incremental health care resource utilization included annual hospitalizations, hospital days, emergency room visits and outpatient visits. Incremental health expenditures included annual inpatient, outpatient, emergency room, medication, miscellaneous and annual total expenditures. Of the total sample, 1354 individuals were diagnosed with osteoarthritis and were compared to individuals without osteoarthritis. Incremental resource utilization, expenditures, absenteeism and absenteeism costs were estimated using regression models, adjusting for age, gender, region, marital status, insurance coverage, comorbidities, anxiety, asthma, hypertension and hyperlipidemia. Regression models revealed incremental mean annual resource use associated with osteoarthritis of 0.07 hospitalizations, equal to 7 additional hospitalizations per 100 osteoarthritic patients annually, and 3.63 outpatient visits, equal to 363 additional visits per 100 osteoarthritic patients annually. Mean annual incremental total expenditures associated with osteoarthritis were $2046. Annually, mean incremental expenditures were largest for inpatient expenditures at $826, followed by mean incremental outpatient expenditures of $659 and mean incremental medication expenditures of $325. Mean annual incremental absenteeism was 2.2 days, and mean annual incremental absenteeism costs were $715.74. Total direct expenditures were estimated at $41.7 billion.
Osteoarthritis was associated with significant incremental health care resource utilization, expenditures, absenteeism and absenteeism costs. Copyright © 2017 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
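The per-100-patient figures quoted in such abstracts are simple rescalings of the per-patient incremental means:

```python
def per_100_patients(incremental_mean_per_patient):
    """Convert a per-patient incremental annual rate to additional events per 100 patients."""
    return incremental_mean_per_patient * 100.0

print(round(per_100_patients(0.07)))  # 0.07 hospitalizations/patient → 7 per 100
print(round(per_100_patients(3.63)))  # 3.63 outpatient visits/patient → 363 per 100
```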
Wilker, Elissa H; Martinez-Ramirez, Sergi; Kloog, Itai; Schwartz, Joel; Mostofsky, Elizabeth; Koutrakis, Petros; Mittleman, Murray A; Viswanathan, Anand
2016-06-30
Long-term exposure to ambient air pollution has been associated with impaired cognitive function and vascular disease in older adults, but little is known about these associations among people with concerns about memory loss. To examine associations between exposures to fine particulate matter and residential proximity to major roads and markers of small vessel disease. From 2004-2010, 236 participants in the Massachusetts Alzheimer's Disease Research Center Longitudinal Cohort participated in neuroimaging studies. Residential proximity to major roads and estimated 2003 residential annual average of fine particulate air pollution (PM2.5) were linked to measures of brain parenchymal fraction (BPF), white matter hyperintensities (WMH), and cerebral microbleeds. Associations were modeled using linear and logistic regression and adjusted for clinical and lifestyle factors. In this population (median age [interquartile range] = 74 [12], 57% female) living in a region with median 2003 PM2.5 annual average below the current Environmental Protection Agency (EPA) standard, there were no associations between living closer to a major roadway or for a 2 μg/m3 increment in PM2.5 and smaller BPF, greater WMH volume, or a higher odds of microbleeds. However, a 2 μg/m3 increment in PM2.5 was associated with -0.19 (95% Confidence Interval (CI): -0.37, -0.005) lower natural log-transformed WMH volume. Other associations had wide confidence intervals. In this population, where median 2003 estimated PM2.5 levels were below the current EPA standard, we observed no pattern of association between residential proximity to major roads or 2003 average PM2.5 and greater burden of small vessel disease or neurodegeneration.
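A coefficient on a natural-log-transformed outcome, such as the −0.19 for WMH volume above, can be read as an approximate percent change via exponentiation. This is a standard interpretation sketched with no study data beyond the reported coefficient:

```python
import math

def pct_change_from_log_coef(beta):
    """Percent change in the raw outcome implied by a coefficient on its natural log."""
    return (math.exp(beta) - 1.0) * 100.0

# A -0.19 change in ln(WMH volume) corresponds to roughly 17% lower volume
print(round(pct_change_from_log_coef(-0.19), 1))  # → -17.3
```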
Filament wound data base development, revision 1, appendix A
NASA Technical Reports Server (NTRS)
Sharp, R. Scott; Braddock, William F.
1985-01-01
Data are presented in tabular form for the High Performance Nozzle Increments, Filament Wound Case (FWC) Systems Tunnel Increments, Steel Case Systems Tunnel Increments, FWC Stiffener Rings Increments, Steel Case Stiffener Rings Increments, FWC External Tank (ET) Attach Ring Increments, Steel Case ET Attach Ring Increments, and Data Tape 8. The High Performance Nozzle Increments are also presented in graphical form. The tabular data consist of six-component force and moment coefficients as they vary with angle of attack at a specific Mach number and roll angle. The six coefficients are normal force, pitching moment, side force, yawing moment, axial force, and rolling moment. The graphical data for the High Performance Nozzle Increments consist of a plot of a coefficient increment as a function of angle of attack at a specific Mach number and at a roll angle of 0 deg.
Graph Based Models for Unsupervised High Dimensional Data Clustering and Network Analysis
2015-01-01
The graph-based clustering algorithms proposed improve time efficiency significantly for large-scale datasets, with applications including plume detection in hyper-spectral video data. In the last chapter, we also propose an incremental reseeding approach.
Colbourn, Tim; Pulkki-Brännström, Anni-Maria; Nambiar, Bejoy; Kim, Sungwook; Bondo, Austin; Banda, Lumbani; Makwenda, Charles; Batura, Neha; Haghparast-Bidgoli, Hassan; Hunter, Rachael; Costello, Anthony; Baio, Gianluca; Skordis-Worrall, Jolene
2015-01-01
Understanding the cost-effectiveness and affordability of interventions to reduce maternal and newborn deaths is critical to persuading policymakers and donors to implement at scale. The effectiveness of community mobilisation through women's groups and health facility quality improvement, both aiming to reduce maternal and neonatal mortality, was assessed by a cluster randomised controlled trial conducted in rural Malawi in 2008-2010. In this paper, we calculate intervention cost-effectiveness and model the affordability of the interventions at scale. Bayesian methods are used to estimate the incremental cost-effectiveness of the community and facility interventions on their own (CI, FI), and together (FICI), compared to current practice in rural Malawi. Effects are estimated with Monte Carlo simulation using the combined full probability distributions of intervention effects on stillbirths, neonatal deaths and maternal deaths. Cost data was collected prospectively from a provider perspective using an ingredients approach and disaggregated at the intervention (not cluster or individual) level. Expected Incremental Benefit, Cost-effectiveness Acceptability Curves and Expected Value of Information (EVI) were calculated using a threshold of $780 per disability-adjusted life-year (DALY) averted, the per capita gross domestic product of Malawi in 2013 international $. The incremental cost-effectiveness of CI, FI, and combined FICI was $79, $281, and $146 per DALY averted respectively, compared to current practice. FI is dominated by CI and FICI. Taking into account uncertainty, both CI and combined FICI are highly likely to be cost effective (probability 98% and 93%, EVI $210,423 and $598,177 respectively). Combined FICI is incrementally cost effective compared to either intervention individually (probability 60%, ICER $292, EIB $9,334,580 compared to CI). Future scenarios also found FICI to be the optimal decision. 
Scaling-up to the whole of Malawi, CI is of greatest value for money, potentially averting 13.0% of remaining annual DALYs from stillbirths, neonatal and maternal deaths for the equivalent of 6.8% of current annual expenditure on maternal and neonatal health in Malawi. Community mobilisation through women's groups is a highly cost-effective and affordable strategy to reduce maternal and neonatal mortality in Malawi. Combining community mobilisation with health facility quality improvement is more effective, more costly, but also highly cost-effective and potentially affordable in this context.
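The acceptability-curve logic can be sketched with a toy Monte Carlo: draw cost and effect pairs, compute the net monetary benefit at the willingness-to-pay threshold, and report the fraction of draws that are cost-effective. The independent normal distributions and all numbers below are hypothetical simplifications (the study sampled the combined full probability distributions of intervention effects):

```python
import random

def prob_cost_effective(mean_cost, sd_cost, mean_dalys, sd_dalys,
                        threshold, n=20_000, seed=1):
    """Fraction of Monte Carlo draws for which an intervention is cost-effective.

    Draws (total cost, DALYs averted) from independent normal distributions
    and checks whether the net monetary benefit threshold * DALYs - cost
    is positive.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        cost = rng.gauss(mean_cost, sd_cost)
        dalys = rng.gauss(mean_dalys, sd_dalys)
        if threshold * dalys - cost > 0:
            hits += 1
    return hits / n

# Hypothetical inputs: an intervention costing about $79 per DALY averted,
# evaluated at the $780/DALY willingness-to-pay threshold used in the study
p = prob_cost_effective(mean_cost=79_000.0, sd_cost=20_000.0,
                        mean_dalys=1_000.0, sd_dalys=300.0, threshold=780.0)
print(p > 0.9)  # → True
```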
Renewable Electricity Futures Study | Energy Analysis | NREL
reductions in electric sector greenhouse gas emissions and water use. The direct incremental cost associated with high renewable generation is comparable to published cost estimates of other clean energy scenarios. Improvement in the cost and performance of renewable technologies is the most impactful lever for
ERIC Educational Resources Information Center
Wisconsin State Dept. of Public Instruction, Madison.
To demonstrate career education methods and procedures, seven Wisconsin school districts were chosen to participate in an incremental improvement project. Project activities were conducted district-wide and by individual districts. District-wide dissemination activities involved promoting demonstration packets and assessment/evaluation (AE)…
Expanding Instructional Horizons: A Case Study of Teacher Team-Outside Expert Partnerships
ERIC Educational Resources Information Center
Ermeling, Bradley A.; Yarbo, Jessica
2016-01-01
Background: Despite increasing popularity and mounting evidence for teacher collaboration as a lever for school improvement, reported changes in teaching associated with collaboration are often subtle and incremental, rarely involving substantial shifts in instructional practice called for by advocates of deeper learning and next-generation…
ERIC Educational Resources Information Center
Boettcher, Judith V.
2009-01-01
When it comes to remote collaboration in eLearning, it's all about where people are now vs. where people are going. A 1999 innovation model presented by R.S. Rosenbloom describes three stages of development for any new technology: (1) imitation; (2) incremental improvement; and (3) transformation. Mapping these stages onto the current state of…
Characterizing Semiconductor Alloys for Infrared Sensors
NASA Technical Reports Server (NTRS)
Lehoczky, B. S. L.; Szofran, F. R.; Martin, B. G.
1986-01-01
Report presents results of continuing program aimed at characterizing mercury/cadmium/tellurium alloys and eventually developing improved methods of preparing alloys for use as infrared sensors. Work covered by report includes series of differential thermal analysis (DTA) measurements of alloy compositions with x varied from 0 to 1 in 0.1 increments.
Business Model Innovation: A Blueprint for Higher Education
ERIC Educational Resources Information Center
Flanagan, Christine
2012-01-01
Business model innovation is one of the most challenging components of 21st-century leadership. Making incremental improvements to a business model--creating new efficiencies, expanding into adjacent markets--is hard enough. Developing and experimenting with new business models that truly transform how an institution delivers value (while…
Integrated Strategic Planning and Analysis Network Increment 5 (ISPAN Inc 5)
2016-03-01
… achieve FDD (Full Deployment Decision) in August 2018. ISPAN Inc 5 is envisioned as a follow-on to ISPAN Inc 4 in order to respond to USSTRATCOM requirements for improved…
Choice in Schooling: A Case for Tuition Vouchers.
ERIC Educational Resources Information Center
Kirkpatrick, David W.
The educational reform movement produced only incremental improvements in student achievement, prompting a need for greater focus on structural and cultural aspects of school organization. Parental choice is the necessary element for successful school reform in the future. The public educational system that has evolved in America is widely…
Improvement of the Power Control Unit for Ion Thruster to Cope with Milli-Newton Range RIT
NASA Astrophysics Data System (ADS)
Ceruti, Luca; Polli, Aldo; Galantini, Paolo
2014-08-01
The recent development and testing activities of a miniaturized Radio-Frequency Ion Thruster, with the relevant ancillary elements, in the range of 10 to 100 micro-Newtons, joined with past flight heritage in the milli-Newton range (RIT-10 for Artemis), show an appealing capability of this electrical propulsion technology to support thrust in a wide range of space applications, from very fine attitude control up to deorbiting of small-medium satellites. As expected, this implies that the mentioned ancillary elements (mainly the Radio-Frequency Generator and the Power Control Unit) require adaptation to the different requirements imposed by different missions and thrust ranges. Regarding the Power Control Unit, different power levels, controllability requirements and spacecraft interfaces impose non-negligible adaptations, leading to a significant increase in development activities and the associated (non-recurring) costs. With the main purpose of minimizing such impacts and providing reliable equipment, Selex ES has for several years been devoting maximum attention to the incremental innovation of existing designs in order to maximize their reuse.
Advances in Gene Therapy for Hemophilia.
Nathwani, Amit C; Davidoff, Andrew M; Tuddenham, Edward G D
2017-11-01
Gene therapy provides hope for a cure for patients with hemophilia by establishing continuous endogenous expression of factor VIII or factor IX following transfer of a functional gene copy to replace the hemophilic patient's own defective gene. Hemophilia may be considered a "low-hanging fruit" for gene therapy because a small increment in blood factor levels (≥2% of normal) significantly improves the bleeding tendency from severe to moderate, eliminating most spontaneous bleeds. After decades of research, the first trial to provide clear evidence of efficiency after gene transfer in patients with hemophilia B using adeno-associated virus vectors was reported by the authors' group in 2011. This has been followed by unprecedented activity in this area, with the commencement of seven new early-phase trials involving >55 patients with hemophilia A or hemophilia B. These studies have, in large part, generated promising clinical data that lay a strong foundation for gene therapy to move forward rapidly to market authorization. This review discusses the data from the authors' studies and emerging results from other gene therapy trials in both hemophilia A and B.
Novel Synthetic Antimicrobial Peptides against Streptococcus mutans
He, Jian; Eckert, Randal; Pharm, Thanh; Simanian, Maurice D.; Hu, Chuhong; Yarbrough, Daniel K.; Qi, Fengxia; Anderson, Maxwell H.; Shi, Wenyuan
2007-01-01
Streptococcus mutans, a common oral pathogen and the causative agent of dental caries, has persisted and even thrived on the tooth surface despite constant removal and eradication efforts. In this study, we generated a number of synthetic antimicrobial peptides against this bacterium via construction and screening of several structurally diverse peptide libraries where the hydrophobicity and charge within each library was varied incrementally in order to generate a collection of peptides with different biochemical characteristics. From these libraries, we identified multiple peptides with robust killing activity against S. mutans. To further improve their effectiveness, the most bactericidal peptides from each library were synthesized together as one molecule, in various combinations, with and without a flexible peptide linker between each antimicrobial region. Many of these “fusion” peptides had enhanced killing activities in comparison with those of the original nonconjoined molecules. The results presented here illustrate that small libraries of biochemically constrained peptides can be used to generate antimicrobial peptides against S. mutans, several of which may be likely candidates for the development of anticaries agents. PMID:17296741
Cheng, Jiang; Dai, Zhongjun; Chen, Bing; Ji, Ran; Yang, Xin; Hu, Rong; Zhu, Jiang; Li, Lu
2016-12-01
In this work, we report on a simple non-injection synthesis route for the preparation of well-dispersed monocrystalline Cu2ZnSnS4 (CZTS) nanoparticles (NPs). The nanocrystal morphology was investigated by scanning and transmission electron microscopy, and its phase composition was studied by X-ray diffraction and Raman analyses. Cu2ZnSnS4 nanoparticles prepared using ethanolamine and diethanolamine as chemical stabilizers showed a high purity and a suitable size for polymer solar cell applications. The fabricated CZTS NPs are shown to be easily dispersed in a polymer/fullerene aromatic solution as well as in the hybrid photovoltaic active layer. Thanks to the increase in the light absorption and electrical conductivity of the active layer, solar cells with a small amount of CZTS nanoparticles exhibited a clear enhancement of the photovoltaic performance. The short-circuit current density increased from 9.90 to 10.67 mA/cm2, corresponding to an improvement in the power conversion efficiency (PCE) from 3.30 to 3.65%.
Value of coronary computed tomography as a prognostic tool.
Contractor, Tahmeed; Parekh, Maansi; Ahmed, Shameer; Martinez, Matthew W
2012-08-01
Coronary computed tomography angiography (CCTA) has become an important part of our armamentarium for noninvasive diagnosis of coronary artery disease (CAD). Emerging technologies have produced lower radiation dose, improved spatial and temporal resolution, as well as information about coronary physiology. Although the prognostic role of coronary artery calcium scoring is known, similar evidence for CCTA has only recently emerged. Initial, small studies in various patient populations have indicated that CCTA-identified CAD may have a prognostic value. These findings were confirmed in a recent analysis of the international, prospective Coronary CT Angiography Evaluation For Clinical Outcomes: An International Multicenter (CONFIRM) registry. An incremental increase in mortality was found with a worse severity of CAD on a per-patient, per-vessel, and per-segment basis. In addition, age-, sex-, and ethnicity-based differences in mortality were also found. Whether changing our management algorithms based on these findings will affect outcomes is unclear. Large prospective studies utilizing targeted management strategies for obstructive and nonobstructive CAD are required to incorporate these recent findings into our daily practice. © 2012 Wiley Periodicals, Inc.
Son, Dong-Jin; Kim, Woo-Yeol; Yun, Chan-Young; Kim, Dae-Gun; Chang, Duk; Sunwoo, Young; Hong, Ki-Ho
2017-07-05
An electrolysis process using copper electrodes was combined with a ceramic membrane (pore size 0.1-0.2 μm) into a system for the treatment of sewage from decentralized small communities. The system was operated at an HRT of 0.1 hour, a voltage of 24 V, and a TMP of 0.05 MPa. It achieved average removals of organics, nitrogen, phosphorus, and solids of up to 80%, 52%, 92%, and 100%, respectively. Removal of organics and nitrogen increased markedly in proportion to increases in influent loading. Phosphorus and solids were effectively eliminated by both electro-coagulation and membrane filtration, and residual particulate constituents were successfully removed by the membrane process. A system combining electrolysis with a ceramic membrane would be a compact, reliable, and flexible option for the treatment of sewage from decentralized small communities.
Integrating small satellite communication in an autonomous vehicle network: A case for oceanography
NASA Astrophysics Data System (ADS)
Guerra, André G. C.; Ferreira, António Sérgio; Costa, Maria; Nodar-López, Diego; Aguado Agelet, Fernando
2018-04-01
Small satellites and autonomous vehicles have evolved greatly over the last few decades. Hundreds of small satellites with increasing functionality have been launched in recent years. Likewise, numerous autonomous vehicles have been built, with decreasing costs and shrinking form factors and payloads. Here we focus on combining these two multifaceted assets in an incremental way, with the ultimate goal of reducing the logistical expense of remote oceanographic operations. The first goal is to create a highly reliable and constantly available communication link for a network of autonomous vehicles, taking advantage of the small satellite's lower cost and higher flexibility with respect to conventional spacecraft. We have developed a test platform as a proving ground for this network by integrating a satellite software-defined radio on an unmanned air vehicle, creating a system of systems, and several tests have been run successfully over land. As soon as the satellite is fully operational, we will start to move towards a cooperative network of autonomous vehicles and small satellites, with application to maritime operations, both in situ and remote sensing.
The Arctic Regional Communications Small SATellite (ARCSAT)
NASA Technical Reports Server (NTRS)
Casas, Joseph; Kress, Martin; Sims, William; Spehn, Stephen; Jaeger, Talbot; Sanders, Devon
2013-01-01
Traditional satellite missions are extremely complex and expensive to design, build, test, launch, and operate. Consequently, many complementary operational, exploration, and research satellite missions are being formulated around formations of small, distributed, inexpensive, simple-to-launch, yet highly capable satellites, a growing part of future space community capabilities. The Arctic Regional Communications small SATellite (ARCSAT) initiative would launch a mini-satellite "mothership" into polar or Sun-synchronous low Earth orbit (LEO). Once on orbit, the mothership would perform orbital insertion of four internally stored, independently maneuverable nanosatellites, each containing electronically steerable antennas and reconfigurable software-defined radios. Unlike traditional geostationary satellite communication systems built on larger, more complex spacecraft, this LEO communications system will initially comprise a five-satellite formation that can later be incrementally expanded for additional data coverage. ARCSAT will provide significant enabling capabilities in the Arctic for autonomous voice and data communications relay, Maritime Domain Awareness (MDA), data extraction from unattended sensors, and terrestrial Search & Rescue (SAR) beacon detection missions throughout the "data starved desert" of the Arctic Region.
Houchen, Linzy; Watt, Amye; Boyce, Sally; Singh, Sally
2012-07-01
People with chronic heart failure (CHF) experience acute exacerbations of their symptoms. These episodes are costly to patients and the health service. The study used a single-group, pretest-posttest design. Seventeen patients with left ventricular systolic dysfunction (LVSD) started rehabilitation within 4 weeks of hospital discharge. The 6-week rehabilitation programme included exercise and self-management education. The hospital anxiety and depression scale (HADS) and the incremental and endurance shuttle walking tests (ISWT/ESWT) were assessed at baseline and after rehabilitation. The number and duration of any CHF admissions in the year before and the year after rehabilitation were also recorded. Improvements in the ISWT, ESWT, and depression were, mean (95% confidence interval [CI]), 60.6 (36.0-85.2) metres, 356.0 (173.0-539.0) seconds (both p≤0.001), and -1.0 (-1.8 to -0.2) points (p<0.05), respectively. HADS anxiety improvements failed to reach significance. At 1 year, there was a significant decrease in CHF-related hospitalisations, mean change (95% CI) -0.8 (-1.1 to -0.4), p≤0.001, and in CHF bed days, -13.0 (-24.4 to -1.6), p<0.05. Early rehabilitation significantly improved exercise capacity and depression and reduced CHF-associated health care utilisation in patients who had recently been hospitalised. The intervention was safe. However, the sample size was small and results were not compared to a control group; the effects of natural recovery are therefore unknown.
Femoral neck radiography: effect of flexion on visualization.
Garry, Steven C; Jhangri, Gian S; Lambert, Robert G W
2005-06-01
To determine whether flexion improves radiographic visualization of the femoral neck when the femur is externally rotated. Five human femora, with varying neck-shaft and anteversion angles, were measured and immobilized. The degree of flexion required to bring the femoral neck horizontal was measured while varying the rotation. Next, one bone was radiographed in 16 positions, varying rotation in 15-degree increments and flexion in 10-degree increments. Radiographs were presented in randomized, blinded fashion to 15 staff radiologists for scoring of femoral neck visualization. Following this, all 5 bones were radiographed in 4 positions of rotation and at 0 degrees and 20 degrees of flexion, and the blinded, randomized review of radiographs was repeated. Comparisons between angles and rotations were made using the Mann-Whitney test. The flexion angle required to bring the long axis of the femoral neck horizontal correlated directly with the degree of external rotation (p < 0.05). Visualization of the femoral neck in the extended position progressively deteriorated from 15 degrees internal rotation to 30 degrees external rotation (p < 0.01). However, when 20 degrees of flexion was applied to bones in external rotation, visualization significantly improved at 15 degrees (p < 0.05) and 30 degrees (p < 0.01). Flexion of the externally rotated femur can bring the femoral neck into horizontal alignment, and a relatively small amount (20 degrees) of flexion can significantly improve radiographic visualization. This manoeuvre could be useful for radiography of the femoral neck when initial radiographs are inadequate because of external rotation of the leg.
DOE Office of Scientific and Technical Information (OSTI.GOV)
We incorporate real-time component information from equipment condition assessment (ECA) through the development of enhanced risk monitors (ERMs) for active components in advanced reactor (AR) and advanced small modular reactor (SMR) designs. Time-dependent failure probabilities from prognostic health management (PHM) systems are incorporated to dynamically update the risk metric of interest. This information augments the data used for supervisory control and plant-wide coordination of multiple modules by providing the incremental risk incurred due to aging and to demands placed on components that support mission requirements.
Laminar Boundary Layer Stability Measurements at Mach 7 Including Wall Temperature Effects
1977-11-01
Diagnostics were done by a four-probe rake bearing a pitot tube, a To probe, and two hot films - a boundary layer hot film probe (BLHF) and a freestream hot...tunnel. Inset shows hot film probe close to the surface with second probe positioned higher to sample simultaneously the freestream turbulence (pitot...rake away from the surface by small increments and taking readings at each stop. The readings were simultaneously recorded and reduced by the Tunnel B
Pharmacogenomics: where will it take us?
Felcone, Linda Hull
2004-07-01
Until now, drug research has focused on discovering blockbusters to treat millions of patients. Pharmacogenomics, a multidisciplinary effort arising from the Human Genome Project, strives to deliver "personalized medicine." Researchers use genetic information to understand disease pathways and create drugs designed for small, likely-to-respond populations. The path from research to finished drugs is as logistically complex as landing a human on the moon, but don't expect a giant leap; progress will come throughout the next couple of decades via incremental steps.
Computer Processing Of Tunable-Diode-Laser Spectra
NASA Technical Reports Server (NTRS)
May, Randy D.
1991-01-01
Tunable-diode-laser spectrometer measuring transmission spectrum of gas operates under control of computer, which also processes measurement data. Measurements in three channels processed into spectra. Computer controls current supplied to tunable diode laser, stepping it through small increments of wavelength while processing spectral measurements at each step. Program includes library of routines for general manipulation and plotting of spectra, least-squares fitting of direct-transmission and harmonic-absorption spectra, and deconvolution for determination of laser linewidth and for removal of instrumental broadening of spectral lines.
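The stepping scheme described above can be sketched as a simple acquisition loop. This is a hypothetical illustration, not the spectrometer's actual software; `set_current` and `read_channels` are stand-in names for hardware interfaces.

```python
# Hypothetical sketch of the control loop described above: the computer
# steps the diode-laser current (and hence wavelength) in small increments
# and records every measurement channel at each step. set_current and
# read_channels are illustrative stand-ins for instrument I/O.

def scan(set_current, read_channels, i_start, i_stop, i_step):
    """Step current from i_start to i_stop inclusive, one reading per step."""
    spectra = []
    current = i_start
    while current <= i_stop:
        set_current(current)                        # command the laser driver
        spectra.append((current, read_channels()))  # one sample per channel
        current = round(current + i_step, 9)        # avoid float drift
    return spectra
```

Driven with stub functions in place of hardware calls, the loop yields one record per current increment, which downstream routines could fit or deconvolve as the abstract describes.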
The influence of atmospheric stratification on scatterometer data
NASA Technical Reports Server (NTRS)
Louis, Jean-Francois; Hoffman, Ross N.
1989-01-01
The effects of atmospheric stratification, and of the stability of that stratification, on scatterometer measurements of surface winds over the ocean were investigated using the boundary layer model developed by Louis (1979). A variational analysis method is proposed which allows direct assimilation of scatterometer data. It is shown that the effect of the stability of atmospheric stratification on the wind increment is relatively small. However, the effect is systematic, and neglecting it would consistently underestimate the winds in stable regions.
Technology Challenges in Solid Energetic Materials for Micro Propulsion Applications
2009-11-01
thruster is a relatively new class of micro propulsion system for micro spacecraft, though there are many other potential uses in power generation...micro spacecraft, micro satellites (10 to 100 kg), nano satellites (1 to 10 kg), and pico satellites (0.1 to 1 kg). These small-scale satellites will...rocket thruster, assuming that it is used for the attitude control of a 10 kg spacecraft with 1 m/s velocity increment to maneuver around an object in
Incremental Refinement of Facade Models with Attribute Grammar from 3D Point Clouds
NASA Astrophysics Data System (ADS)
Dehbi, Y.; Staat, C.; Mandtler, L.; Plümer, L.
2016-06-01
Data acquisition using unmanned aerial vehicles (UAVs) has received increasing attention in recent years. Especially in the field of building reconstruction, the incremental interpretation of such data is a demanding task. In this context, formal grammars play an important role in the top-down identification and reconstruction of building objects. Up to now, available approaches have expected offline data in order to parse an a priori known grammar. For mapping on demand, an on-the-fly reconstruction based on UAV data is required, and an incremental interpretation of the data stream is inevitable. This paper presents an incremental parser of grammar rules for automatic 3D building reconstruction. The parser enables model refinement based on new observations with respect to a weighted attribute context-free grammar (WACFG). The falsification or rejection of hypotheses is supported as well. The parser can handle and adapt parse trees acquired from previous interpretations or predictions. Parse trees derived so far are updated iteratively using transformation rules, and a diagnostic step searches for mismatches between current and new nodes. Prior knowledge on facades is incorporated, given by probability densities as well as architectural patterns. Since normal distributions cannot always be assumed, the derivation of location and shape parameters of building objects is based on a kernel density estimation (KDE). While the level of detail is continuously improved, geometrical, semantic, and topological consistency is ensured.
Socioeconomic development and secular trend in height in China.
Zong, Xin-Nan; Li, Hui; Wu, Hua-Hong; Zhang, Ya-Qin
2015-12-01
The objective of this study was to examine the effect of socioeconomic development on the secular trend in height among children and adolescents in China. Body height and spermarcheal/menarcheal ages were obtained from two periodic large-scale nationally representative surveys in China between 1975 and 2010. Chinese socioeconomic development indicators were obtained from the United Nations world population prospects. The effects of plausible determinants were assessed by partial least-squares regression. The average height of children and adolescents improved in tandem with socioeconomic development, without any tendency to plateau. The height increment was larger around puberty than at earlier or later ages. Partial least-squares regressions with gross national income, life expectancy, and spermarcheal/menarcheal age accounted for 88.3% to 98.3% of the height increment for males and 82.9% to 97.3% for females in adolescence. Further, analysis of the variable importance for projection confirmed that the contributions of gross national income and life expectancy to the height increment were significant in childhood and adolescence, and that the contribution of spermarcheal/menarcheal age was superior to both in adolescence. We conclude that the positive secular trend in height in China was significantly associated with socioeconomic status (GNI as indicator) and medical and health conditions (life expectancy as indicator). Earlier onset of spermarche and menarche played an important role in the larger increment over time of height at puberty for a population. Copyright © 2015 Elsevier B.V. All rights reserved.
Ostensson, Ellinor; Fröberg, Maria; Hjerpe, Anders; Zethraeus, Niklas; Andersson, Sonia
2010-10-01
To assess the cost-effectiveness of using human papillomavirus testing (HPV triage) in the management of women with minor cytological abnormalities in Sweden. An economic analysis based on a clinical trial, complemented with data from published meta-analyses on the accuracy of HPV triage. The study takes the perspective of the Swedish healthcare system. The Swedish population-based cervical cancer screening program. A decision analytic model was constructed to evaluate the cost-effectiveness of HPV triage compared to repeat cytology and to immediate colposcopy with biopsy, stratifying by index cytology (ASCUS = atypical squamous cells of undetermined significance, and LSIL = low-grade squamous intraepithelial lesion) and age (23-60 years, <30 years and ≥30 years). Main outcome measures were costs, incremental cost, incremental effectiveness, and incremental cost per additional high-grade lesion (CIN2+) detected. For women with ASCUS ≥30 years, HPV triage is the least costly alternative, whereas immediate colposcopy with biopsy provides the most effective option at an incremental cost-effectiveness ratio (ICER) of SEK 2,056 per additional case of CIN2+ detected. For LSIL (all age groups) and ASCUS (23-60 years and <30 years), HPV triage is dominated by immediate colposcopy and biopsy. Model results were sensitive to changes in the HPV test cost. With improved HPV testing techniques at lower costs, HPV triage could become a cost-effective alternative for follow-up of minor cytological abnormalities. Today, immediate colposcopy with biopsy is a cost-effective alternative compared to HPV triage and repeat cytology.
Ruffing, Stephanie; Wach, F. -Sophie; Spinath, Frank M.; Brünken, Roland; Karbach, Julia
2015-01-01
Recent research has revealed that learning behavior is associated with academic achievement at the college level, but the impact of specific learning strategies on academic success as well as gender differences therein are still not clear. Therefore, the aim of this study was to investigate gender differences in the incremental contribution of learning strategies over general cognitive ability in the prediction of academic achievement. The relationship between these variables was examined by correlation analyses. A set of t-tests was used to test for gender differences in learning strategies, whereas structural equation modeling as well as multi-group analyses were applied to investigate the incremental contribution of learning strategies for male and female students’ academic performance. The sample consisted of 461 students (mean age = 21.2 years, SD = 3.2). Correlation analyses revealed that general cognitive ability as well as the learning strategies effort, attention, and learning environment were positively correlated with academic achievement. Gender differences were found in the reported application of many learning strategies. Importantly, the prediction of achievement in structural equation modeling revealed that only effort explained incremental variance (10%) over general cognitive ability. Results of multi-group analyses showed no gender differences in this prediction model. This finding provides further knowledge regarding gender differences in learning research and the specific role of learning strategies for academic achievement. The incremental assessment of learning strategy use as well as gender-differences in their predictive value contributes to the understanding and improvement of successful academic development. PMID:26347698
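The notion of "incremental variance over general cognitive ability" can be illustrated with a pair of nested regressions: fit achievement on ability alone, then on ability plus effort, and take the gain in R². The sketch below uses synthetic data (only the sample size, n = 461, is taken from the abstract) and plain least squares rather than the study's structural equation model.

```python
# Illustrative sketch of incremental variance explained: the R^2 gain when
# an "effort" predictor is added to a model already containing general
# cognitive ability. Data are synthetic, not from the cited study.
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 461  # sample size reported in the abstract
ability = rng.normal(size=n)
effort = rng.normal(size=n)
gpa = 0.5 * ability + 0.4 * effort + rng.normal(scale=0.8, size=n)

r2_base = r_squared(ability.reshape(-1, 1), gpa)          # ability only
r2_full = r_squared(np.column_stack([ability, effort]), gpa)  # + effort
incremental_r2 = r2_full - r2_base  # variance explained over ability alone
```

Because the models are nested, R² can only increase when a predictor is added; the incremental share is what the abstract reports as 10% for effort.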
Li, Zong-Tao; Wu, Tie-Jun; Lin, Can-Long; Ma, Long-Hua
2011-01-01
A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity, and attitude updating operations are carried out in a single-speed structure: all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Unlike existing algorithms, the updating rates of the coning and sculling compensations are independent of the number of gyro incremental-angle samples and the number of accelerometer incremental-velocity samples. When the output sampling rate of the inertial sensors remains constant, the algorithm therefore allows the updating rate of the coning and sculling compensation to be increased, using more gyro incremental-angle and accelerometer incremental-velocity samples, in order to improve system accuracy. To implement the new strapdown algorithm in a single FPGA chip, a parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform using fighter flight data. The parallel strapdown algorithm on the FPGA platform greatly decreases the execution time of the algorithm relative to the existing implementation on a DSP platform, meeting the real-time and high-precision requirements of the system in a highly dynamic environment. PMID:22164058
[Assessing the economic impact of adverse events in Spanish hospitals by using administrative data].
Allué, Natalia; Chiarello, Pietro; Bernal Delgado, Enrique; Castells, Xavier; Giraldo, Priscila; Martínez, Natalia; Sarsanedas, Eugenia; Cots, Francesc
2014-01-01
To evaluate the incidence and costs of adverse events registered in an administrative dataset in Spanish hospitals from 2008 to 2010. A retrospective study was carried out that estimated the incremental cost per episode, depending on the presence of adverse events. Costs were obtained from the database of the Spanish Network of Hospital Costs. This database contains data from 12 hospitals that have costs per patient records based on activities and clinical records. Adverse events were identified through the Patient Safety Indicators (validated in the Spanish Health System) created by the Agency for Healthcare Research and Quality together with indicators of the EuroDRG European project. This study included 245,320 episodes with a total cost of 1,308,791,871€. Approximately 17,000 patients (6.8%) experienced an adverse event, representing 16.2% of the total cost. Adverse events, adjusted by diagnosis-related groups, added a mean incremental cost of between €5,260 and €11,905. Six of the 10 adverse events with the highest incremental cost were related to surgical interventions. The total incremental cost of adverse events was € 88,268,906, amounting to an additional 6.7% of total health expenditure. Assessment of the impact of adverse events revealed that these episodes represent significant costs that could be reduced by improving the quality and safety of the Spanish Health System. Copyright © 2013 SESPAS. Published by Elsevier Espana. All rights reserved.
Poupard, Laurent; Court-Fortune, Isabelle; Pichot, Vincent; Chouchou, Florian; Barthélémy, Jean-Claude; Roche, Frédéric
2011-12-01
Several studies have correlated the ratio of the very low frequency power spectral density of heart rate increment (%VLFI) with obstructive sleep apnoea syndrome (OSAS). However, patients with impaired heart rate variability may exhibit large variations of heart rate increment (HRI) spectral pattern and alter the screening accuracy of the method. To overcome this limitation, the present study uses the high-frequency increment (HFI) peak in the HRI spectrum, which corresponds to the respiratory influence on RR variations over the frequency range 0.2 to 0.4 Hz. We evaluated 288 consecutive patients referred for snoring, observed nocturnal breathing cessation and/or daytime sleepiness. Patients were classified as OSAS if their apnoea plus hypopnoea index (AHI) during polysomnography exceeded 15 events per hour. Synchronized electrocardiogram Holter monitoring allowed HRI analysis. Using a %VLFI threshold >2.4% for identifying the presence of OSAS, sensitivity for OSAS was 74.9%, specificity 51%, positive predictive value 54.9% and negative predictive value 71.7% (33 false negative subjects). Using threshold for %VLFI >2.4% and HFI peak position >0.4 Hz, negative predictive value increased to 78.2% while maintaining specificity at 50.6%. Among 11 subjects with %VLFI <2.4% and HFI peak >0.4 Hz, nine demonstrated moderate to severe OSAS (AHI >30). HFI represents a minimal physiological criterion for applying %VLFI by ensuring that heart rate variations are band frequency limited.
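The predictive values quoted above follow from sensitivity, specificity, and disease prevalence via Bayes' rule. The sketch below approximately reproduces the reported figures, assuming a prevalence of roughly 44%; that prevalence is an assumption chosen for consistency with the reported PPV/NPV, not a number stated in the abstract.

```python
# How predictive values arise from sensitivity, specificity, and prevalence
# (Bayes' rule). Sensitivity 0.749 and specificity 0.51 are the reported
# figures; the 0.44 prevalence is an illustrative assumption.

def predictive_values(sens, spec, prevalence):
    """Positive and negative predictive values for a binary test."""
    tp = sens * prevalence              # true positive fraction
    fn = (1 - sens) * prevalence        # false negative fraction
    tn = spec * (1 - prevalence)        # true negative fraction
    fp = (1 - spec) * (1 - prevalence)  # false positive fraction
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(0.749, 0.51, 0.44)
# ppv comes out near the reported 54.9%, npv near the reported 71.7%
```

This dependence on prevalence is why the same sensitivity and specificity would yield different predictive values in a population with less (or more) suspected OSAS.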
NASA Technical Reports Server (NTRS)
Helfenstein, P.; Parmentier, E. M.
1985-01-01
This study considers the global patterns of fracture that would result from nonsynchronous rotation of a tidally distorted planetary body. The incremental horizontal stresses in a thin elastic or viscous shell due to a small displacement of the axis of maximum tidal elongation are derived, and the resulting stress distributions are applied to interpret the observed pattern of fracture lineaments on Europa. The observed pattern of lineaments can be explained by nonsynchronous rotation if these features formed by tension fracturing and dike emplacement. Tension fracturing can occur for a small displacement of the tidal axis, so that the resulting lineaments may be consistent with other evidence suggesting a young age for the surface.
Real-time garbage collection for list processing
NASA Technical Reports Server (NTRS)
Shuler, R. L., Jr. (Inventor)
1986-01-01
In a list processing system, small reference counters are maintained in conjunction with memory cells for the purpose of identifying memory cells that become available for re-use. The counters are updated as references to the cells are created and destroyed, and when a counter of a cell is decremented to logical zero the cell is immediately returned to a list of free cells. In those cases where a counter must be incremented beyond the maximum value that can be represented in a small counter, the cell is restructured so that the additional reference count can be represented. The restructuring involves allocating an additional cell, distributing counter, tag, and pointer information among the two cells, and linking both cells appropriately into the existing list structure.
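The small-counter scheme can be sketched as follows. This is a simplified illustration of the idea, not the patented implementation; in the original, counter, tag, and pointer fields share the memory cell, whereas here the names (`Cell`, `add_ref`, `drop_ref`, `MAX_SMALL`) and the separate fields are illustrative.

```python
# Simplified sketch of small-reference-counter garbage collection: each
# cell carries a tiny counter; a decrement to zero frees the cell
# immediately, and an increment past the counter's maximum is absorbed by
# restructuring into an extension cell that holds the overflow count.

MAX_SMALL = 3  # largest value the small counter can represent

class Cell:
    def __init__(self):
        self.count = 0        # small per-cell reference counter
        self.overflow = None  # extension cell allocated on counter overflow

free_list = []  # cells immediately available for re-use

def add_ref(cell):
    if cell.count < MAX_SMALL:
        cell.count += 1
    else:
        # Restructure: represent the extra references in an extension cell.
        if cell.overflow is None:
            cell.overflow = Cell()
        cell.overflow.count += 1

def drop_ref(cell):
    if cell.overflow is not None and cell.overflow.count > 0:
        cell.overflow.count -= 1
    else:
        cell.count -= 1
        if cell.count == 0:
            free_list.append(cell)  # returned to the free list at once
```

The appeal of the scheme is that reclamation cost is spread evenly across pointer operations, which is what makes it "real-time" in contrast to stop-the-world mark-and-sweep collection.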
2013-03-01
and improvement efforts with DOD, F-35 program, and contractor officials. We toured the aircraft manufacturing plant, obtained production and supply...members of the contractor's work force and DOD plant representatives. We evaluated DOD's restructuring actions and impacts on the program, tracked...Testing of the final increment is expected to begin in 2014 and continue through 2016. Initial operational test and evaluation (IOT&E) is
SMOS brightness temperature assimilation into the Community Land Model
NASA Astrophysics Data System (ADS)
Rains, Dominik; Han, Xujun; Lievens, Hans; Montzka, Carsten; Verhoest, Niko E. C.
2017-11-01
SMOS (Soil Moisture and Ocean Salinity mission) brightness temperatures at a single incident angle are assimilated into the Community Land Model (CLM) across Australia to improve soil moisture simulations. To this end, the data assimilation system DasPy is coupled to the local ensemble transform Kalman filter (LETKF) as well as to the Community Microwave Emission Model (CMEM). Brightness temperature climatologies are precomputed to enable the assimilation of brightness temperature anomalies, making use of 6 years of SMOS data (2010-2015). Mean correlation R with in situ measurements increases moderately from 0.61 to 0.68 (11 %) for upper soil layers if the root zone is included in the updates. A reduced improvement of 5 % is achieved if the assimilation is restricted to the upper soil layers. Root-zone simulations improve by 7 % when updating both the top layers and root zone, and by 4 % when only updating the top layers. Mean increments and increment standard deviations are compared for the experiments. The long-term assimilation impact is analysed by looking at a set of quantiles computed for soil moisture at each grid cell. Within hydrological monitoring systems, extreme dry or wet conditions are often defined via their relative occurrence, adding great importance to assimilation-induced quantile changes. Although such records are still limited for now, longer L-band radiometer time series will become available, and model output improved by assimilating these data will be more usable for extreme-event statistics.
Vijayapushpam, T; Subba Rao, G M; Antony, Grace Maria; Rao, D Raghunatha
2008-06-01
Nutrition education for student volunteers can enhance their skills, and they can act as change agents in the community. There is a dearth of data from India on the effectiveness of different communication tools in providing nutrition education to student volunteers. This study aims to examine the comparative effectiveness of two different methods of communication--lectures in the classroom aided by print material, and a televised version of a local folk-dance form--for providing nutrition education to student community volunteers in a South Indian state. Interventions were conducted during two mega-camps of student volunteers (camps 1 and 2) with 70 and 137 participants, respectively. Their knowledge levels were tested at baseline. Camp 1 received the lecture intervention and camp 2 the televised folk-dance intervention. Knowledge scores were measured before and after the intervention in each camp, and the two camps were compared for significant improvements in knowledge. At baseline, the knowledge levels of students in both camps were comparable. Significant improvement in knowledge was observed in both camps after intervention (p < .05). Although there was no significant difference between the camps in improvement in knowledge, a significant difference was observed when only the positive increments (improvement over baseline) were compared. The televised version of the folk-dance form was better in bringing about positive increment.
Ito, Kouta; Shrank, William H; Avorn, Jerry; Patrick, Amanda R; Brennan, Troyen A; Antman, Elliot M; Choudhry, Niteesh K
2012-01-01
Objective: To evaluate the comparative cost-effectiveness of interventions to improve adherence to evidence-based medications among postmyocardial infarction (MI) patients. Data Sources/Study Setting: Cost-effectiveness analysis. Study Design: We developed a Markov model simulating a hypothetical cohort of 65-year-old post-MI patients who were prescribed secondary prevention medications. We evaluated mailed education, disease management, polypill use, and combinations of these interventions. The analysis was performed from a societal perspective over a lifetime horizon. The main outcome was the incremental cost-effectiveness ratio (ICER), measured as cost per quality-adjusted life year (QALY) gained. Data Collection/Extraction Methods: Model inputs were extracted from published literature. Principal Findings: Compared with usual care, only mailed education both improved health outcomes and reduced spending. Mailed education plus disease management, disease management, polypill use, polypill use plus mailed education, and polypill use plus disease management cost $74,600, $69,200, $133,000, $113,000, and $142,900 per QALY gained, respectively. In an incremental analysis, only mailed education had an ICER of less than $100,000 per QALY and was therefore the optimal strategy. Polypill use, particularly when combined with mailed education, could be cost effective, and potentially cost saving, if its price decreased to less than $100 per month. Conclusions: Mailed education and a polypill, once available, may be cost-saving strategies for improving post-MI medication adherence. PMID:22998129
Ito, Kouta; Shrank, William H; Avorn, Jerry; Patrick, Amanda R; Brennan, Troyen A; Antman, Elliot M; Choudhry, Niteesh K
2012-12-01
To evaluate the comparative cost-effectiveness of interventions to improve adherence to evidence-based medications among postmyocardial infarction (MI) patients. Cost-effectiveness analysis. We developed a Markov model simulating a hypothetical cohort of 65-year-old post-MI patients who were prescribed secondary prevention medications. We evaluated mailed education, disease management, polypill use, and combinations of these interventions. The analysis was performed from a societal perspective over a lifetime horizon. The main outcome was the incremental cost-effectiveness ratio (ICER), measured as cost per quality-adjusted life year (QALY) gained. Model inputs were extracted from published literature. Compared with usual care, only mailed education both improved health outcomes and reduced spending. Mailed education plus disease management, disease management, polypill use, polypill use plus mailed education, and polypill use plus disease management cost $74,600, $69,200, $133,000, $113,000, and $142,900 per QALY gained, respectively. In an incremental analysis, only mailed education had an ICER of less than $100,000 per QALY and was therefore the optimal strategy. Polypill use, particularly when combined with mailed education, could be cost effective, and potentially cost saving, if its price decreased to less than $100 per month. Mailed education and a polypill, once available, may be cost-saving strategies for improving post-MI medication adherence. © Health Research and Educational Trust.
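The ICER that drives the rankings above is the ratio of the cost difference to the effectiveness difference between two strategies. A minimal sketch with illustrative numbers (not the study's model inputs):

```python
def icer(cost_new, cost_base, qaly_new, qaly_base):
    """Incremental cost-effectiveness ratio: extra dollars per QALY gained.

    Returns the string "dominant" when the new strategy is both cheaper
    and more effective (i.e., cost saving), as mailed education was
    relative to usual care."""
    d_cost = cost_new - cost_base
    d_qaly = qaly_new - qaly_base
    if d_cost < 0 and d_qaly > 0:
        return "dominant"  # better outcomes at lower cost
    if d_qaly == 0:
        raise ValueError("no incremental effectiveness")
    return d_cost / d_qaly

# Illustrative comparisons only:
ratio = icer(110_000, 100_000, 10.1, 10.0)      # ~ $100,000 per QALY
verdict = icer(95_000, 100_000, 10.05, 10.0)    # cheaper and better
```

A strategy is then typically deemed cost effective when its ICER falls below a willingness-to-pay threshold (the study used $100,000 per QALY).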
Effects of organizational context on Lean implementation in five hospital systems.
Harrison, Michael I; Paez, Kathryn; Carman, Kristin L; Stephens, Jennifer; Smeeding, Lauren; Devers, Kelly J; Garfinkel, Steven
2016-01-01
Despite broad agreement among researchers about the value of examining how context shapes implementation of improvement programs and projects, limited attention has been paid to contextual effects on implementation of Lean. To help reduce gaps in knowledge of effects of intraorganizational context, we researched Lean implementation initiatives in five organizations and examined 12 of their Lean rapid improvement projects. All projects aimed at improving clinical care delivery. On the basis of the literature on Lean, innovation, and quality improvement, we developed a framework of factors likely to affect Lean implementation and outcomes. Drawing on the framework, we conducted semistructured interviews and applied qualitative codes to the transcribed interviews. Available documents, data, and observations supplemented the interviews. We constructed case studies of Lean implementation in each organization, compared implementation across organizations, and compared the 12 projects. Intraorganizational characteristics affecting organization-wide Lean initiatives and often also shaping project outcomes included CEO commitment to Lean and active support for it, prior organizational capacity for quality improvement-based performance improvement, alignment of the Lean initiative with the organizational mission, dedication of resources and experts to Lean, staff training before and during projects, establishment of measurable and relevant project targets, planning of project sequences that enhance staff capabilities and commitment without overburdening them, and ensuring communication between project members and other affected staff. Dependence of projects on inputs of new information technology was a barrier to project success. Incremental implementation of Lean produced reported improvements in operational efficiency and occasionally in care quality. 
However, even under the relatively favorable circumstances prevailing in our study sites, incremental implementation did not readily change organizational culture. This study should alert researchers, managers, and teachers of management to ways that contexts shape Lean implementation and may affect other types of process redesign and quality improvement.
Operational and financial impact of physician screening in the ED.
Soremekun, Olanrewaju A; Biddinger, Paul D; White, Benjamin A; Sinclair, Julia R; Chang, Yuchiao; Carignan, Sarah B; Brown, David F M
2012-05-01
Physician screening is one of many front-end interventions being implemented to improve emergency department (ED) efficiency. We aimed to quantify the operational and financial impact of this intervention at an urban tertiary academic center. We conducted a 2-year before-after analysis of a physician screening system at an urban tertiary academic center with 90,000 annual visits. The financial impact consisted of the ED and inpatient revenue generated from the incremental capacity and the reduction in left-without-being-seen (LWBS) rates. The ED and inpatient margin contributions as well as capital expenditure were based on available published data. We summarized the financial impact using the net present value (NPV) of future cash flows, performing sensitivity analysis on the assumptions. Operational outcome measures were ED length of stay and the percentage of LWBS patients. During the first year, we estimate the contribution margin of the screening system to be $2.71 million and the incremental operational cost to be $1.86 million. Estimated capital expenditure for the system was $1.2 million. The NPV of this investment was $2.82 million, and the time to break even from the initial investment was 13 months. Operationally, despite a 16.7% increase in patient volume and no decrease in boarding hours, there was a 7.4% decrease in ED length of stay and a reduction in LWBS from 3.3% to 1.8%. In addition to improving operational measures, the implementation of a physician screening program in the ED allowed for an incremental increase in patient care capacity, leading to an overall positive financial impact. Copyright © 2012 Elsevier Inc. All rights reserved.
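The NPV and break-even figures reported above come from standard discounted cash flow arithmetic. A minimal sketch with generic numbers (not the study's figures or its exact sensitivity analysis):

```python
from math import ceil

def npv(rate, cashflows):
    """Net present value: cashflows[0] occurs now, later entries are
    annual and are discounted at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def months_to_break_even(capex, monthly_margin):
    """Undiscounted months until cumulative margin repays the capital cost."""
    return ceil(capex / monthly_margin)

# Generic illustration: a $1.2M capital outlay followed by $0.9M of net
# annual contribution margin for 5 years, discounted at 5%.
value = npv(0.05, [-1_200_000] + [900_000] * 5)
```

Varying the discount rate and the margin assumptions, as the study's sensitivity analysis did, simply means re-evaluating `npv` over a range of inputs.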
National Health Insurance or Incremental Reform: Aim High, or at Our Feet?
Himmelstein, David U.; Woolhandler, Steffie
2003-01-01
Single-payer national health insurance could cover the uninsured and upgrade coverage for most Americans without increasing costs; savings on insurance overhead and other bureaucracy would fully offset the costs of improved care. In contrast, proposed incremental reforms are projected to cover a fraction of the uninsured, at great cost. Moreover, even these projections are suspect; reforms of the past quarter century have not stemmed the erosion of coverage. Despite incrementalists’ claims of pragmatism, they have proven unable to shepherd meaningful reform through the political system. While national health insurance is often dismissed as ultra left by the policy community, it is dead center in public opinion. Polls have consistently shown that at least 40%, and perhaps 60%, of Americans favor such reform. PMID:12511395
Scalable Prediction of Energy Consumption using Incremental Time Series Clustering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmhan, Yogesh; Noor, Muhammad Usman
2013-10-09
Time series datasets are a canonical form of high-velocity Big Data, often generated by pervasive sensors such as those found in smart infrastructure. Performing predictive analytics on time series data can be computationally complex and requires approximation techniques. In this paper, we motivate this problem using a real application from the smart grid domain. We propose an incremental clustering technique, along with a novel affinity score for determining cluster similarity, which helps reduce the prediction error for cumulative time series within a cluster. We evaluate this technique, along with optimizations, using real datasets from smart meters totaling ~700,000 data points, and show the efficacy of our techniques in improving the prediction error of time series data within polynomial time.
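The core of incremental clustering is that each new series either joins its nearest cluster, whose centroid is updated in place, or starts a new cluster. A minimal sketch of that idea; the distance threshold here is a simple stand-in for the paper's affinity score, and all names are illustrative:

```python
import math

def euclidean(a, b):
    """Plain Euclidean distance between two equal-length series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def add_series(series, centroids, counts, max_dist):
    """Assign `series` to the nearest centroid (running-mean update),
    or open a new cluster when no centroid is within `max_dist`."""
    if centroids:
        dists = [euclidean(series, c) for c in centroids]
        k = min(range(len(dists)), key=dists.__getitem__)
        if dists[k] <= max_dist:
            counts[k] += 1
            n = counts[k]
            # Incremental mean: c += (x - c) / n, no history kept
            centroids[k] = [c + (x - c) / n
                            for c, x in zip(centroids[k], series)]
            return k
    centroids.append(list(series))
    counts.append(1)
    return len(centroids) - 1

centroids, counts = [], []
for s in [[0, 0, 0], [0.2, 0, 0], [10, 10, 10]]:
    add_series(s, centroids, counts, max_dist=1.0)
```

Because centroids are updated from running statistics, each arriving series is processed in time proportional to the number of clusters, which is what makes the approach viable for high-velocity meter data.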
Veillette, Marc; Avalos Ramirez, Antonio; Heitz, Michèle
2012-01-01
An evaluation of the effect of ammonium on the performance of two up-flow inorganic packed-bed biofilters treating methane was conducted. The air flow rate was set to 3.0 L min(-1) for an empty bed residence time of 6.0 min. The biofilter was fed with a methane concentration of 0.30% (v/v). The ammonium concentration in the nutrient solution was increased by small increments (from 0.01 to 0.025 gN-NH(4)(+) L(-1)) for one biofilter and by large increments of 0.05 gN-NH(4)(+) L(-1) in the other biofilter. The total concentration of nitrogen was kept constant at 0.5 gN-NH(4)(+) L(-1) throughout the experiment by balancing ammonium with nitrate. For both biofilters, the methane elimination capacity, carbon dioxide production, nitrogen bed retention, and biomass content decreased as the ammonium concentration in the nutrient solution increased. The biofilter with smaller ammonium increments featured a higher elimination capacity and carbon dioxide production rate, which varied from 4.9 to 14.3 g m(-3) h(-1) and from 11.5 to 30 g m(-3) h(-1), respectively. Denitrification was observed, as some values of the nitrate production rate were negative for ammonium concentrations below 0.2 gN-NH(4)(+) L(-1). A Michaelis-Menten-type model fitted the ammonium elimination rate and the nitrate production rate.
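The Michaelis-Menten form mentioned above describes a rate that saturates as concentration grows. A minimal sketch of the functional form; the parameter values below are illustrative, not the study's fitted coefficients:

```python
def michaelis_menten(s, v_max, k_m):
    """Saturating rate law: v = v_max * s / (k_m + s).

    At s == k_m the rate is exactly half of v_max; as s grows large,
    the rate approaches v_max asymptotically."""
    return v_max * s / (k_m + s)

# Illustrative: a rate curve over a range of substrate concentrations
rates = [michaelis_menten(s, v_max=15.0, k_m=0.1)
         for s in (0.05, 0.1, 0.5)]
```

Fitting such a model to measured elimination or production rates amounts to estimating `v_max` and `k_m` from the data, typically by nonlinear least squares.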
Slow growth rates of Amazonian trees: Consequences for carbon cycling
Vieira, Simone; Trumbore, Susan; Camargo, Plinio B.; Selhorst, Diogo; Chambers, Jeffrey Q.; Higuchi, Niro; Martinelli, Luiz Antonio
2005-01-01
Quantifying age structure and tree growth rate of Amazonian forests is essential for understanding their role in the carbon cycle. Here, we use radiocarbon dating and direct measurement of diameter increment to document unexpectedly slow growth rates for trees from three locations spanning the Brazilian Amazon basin. Central Amazon trees, averaging only ≈1 mm/year diameter increment, grow half as fast as those from areas with more seasonal rainfall to the east and west. Slow growth rates mean that trees can attain great ages; across our sites we estimate 17-50% of trees with diameter >10 cm have ages exceeding 300 years. Whereas a few emergent trees that make up a large portion of the biomass grow faster, small trees that are more abundant grow slowly and attain ages of hundreds of years. The mean age of carbon in living trees (60-110 years) is within the range of, or slightly longer than, the mean residence time calculated from C inventory divided by annual C allocation to wood growth (40-100 years). Faster C turnover is observed in stands with overall higher rates of diameter increment and a larger fraction of the biomass in large, fast-growing trees. As a consequence, forests can recover biomass relatively quickly after disturbance, whereas recovering species composition may take many centuries. Carbon cycle models that apply a single turnover time for carbon in forest biomass do not account for variations in life strategy and therefore may overestimate the carbon sequestration potential of Amazon forests. PMID:16339903
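The age and residence-time estimates above rest on simple ratios: age follows from diameter divided by increment, and mean residence time from carbon stock divided by annual carbon allocation. A naive worked illustration of both (constant growth rate assumed, numbers chosen only to echo the abstract's scales):

```python
def age_from_increment(diameter_mm, increment_mm_per_year):
    """Naive age estimate assuming a constant diameter growth rate."""
    return diameter_mm / increment_mm_per_year

def mean_residence_time(c_inventory, annual_c_allocation):
    """Mean residence time of carbon: stock divided by annual input
    (same units for both, e.g. Mg C ha^-1 and Mg C ha^-1 yr^-1)."""
    return c_inventory / annual_c_allocation

# A 30 cm diameter tree growing ~1 mm/year in diameter:
age = age_from_increment(300.0, 1.0)  # 300 years
```

The constant-rate assumption is of course crude, which is why the study pairs the arithmetic with radiocarbon dating; but it shows directly why millimeter-scale increments imply century-scale ages.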
ERIC Educational Resources Information Center
Callahan, Emily H.; Gillis, Jennifer M.; Romanczyk, Raymond G.; Mattson, Richard E.
2011-01-01
Many treatment programs for individuals with an autism spectrum disorder (ASD) target social skills, and there is growing attention directed toward the development of specific interventions to improve social skills and social interactions in this population (Hestenes & Carroll, 2000; Strain & Hoyson, 2000). However, there are limited tools…
Stretching the Higher Education Dollar: How Innovation Can Improve Access, Equity, and Affordability
ERIC Educational Resources Information Center
Kelly, Andrew P., Ed.; Carey, Kevin, Ed.
2013-01-01
In this provocative volume, higher education experts explore innovative ways that colleges and universities can unbundle the various elements of the college experience while assessing costs and benefits and realizing savings. "Stretching the Higher Education Dollar" traces the reform continuum from incremental to more ambitious efforts.…
A Comparison of Two Flashcard Drill Methods Targeting Word Recognition
ERIC Educational Resources Information Center
Volpe, Robert J.; Mule, Christina M.; Briesch, Amy M.; Joseph, Laurice M.; Burns, Matthew K.
2011-01-01
Traditional drill and practice (TD) and incremental rehearsal (IR) are two flashcard drill instructional methods previously noted to improve word recognition. The current study sought to compare the effectiveness and efficiency of these two methods, as assessed by next day retention assessments, under 2 conditions (i.e., opportunities to respond…
Empirically Derived Optimal Growth Equations For Hardwoods and Softwoods in Arkansas
Don C. Bragg
2002-01-01
Accurate growth projections are critical to reliable forest models, and ecologically based simulators can improve silvicultural predictions because of their sensitivity to change and their capacity to produce long-term forecasts. Potential relative increment (PRI) optimal diameter growth equations for loblolly pine, shortleaf pine, sweetgum, and white oak were fit to...
Process level improvements in the CMAQ system have been made to WRF meteorology, national ammonia emission profiles, and CMAQ ammonia air-surface exchange. An incremental study was conducted to quantify the impact of individual and combined changes on modeled inorganic depositio...