Flexible multiply towpreg and method of production therefor
NASA Technical Reports Server (NTRS)
Muzzy, John D. (Inventor); Varughese, Babu (Inventor)
1992-01-01
This invention relates to an improved flexible towpreg and a method of production therefor. The improved flexible towpreg comprises a plurality of towpreg plies which comprise reinforcing filaments and matrix forming material; the reinforcing filaments being substantially wetout by the matrix forming material such that the towpreg plies are substantially void-free composite articles, and the towpreg plies having an average thickness less than about 100 microns. The method of production for the improved flexible towpreg comprises the steps of spreading the reinforcing filaments to expose individually substantially all of the reinforcing filaments; coating the reinforcing filaments with the matrix forming material in a manner causing interfacial adhesion of the matrix forming material to the reinforcing filaments; forming the towpreg plies by heating the matrix forming material contacting the reinforcing filaments until the matrix forming material liquefies and coats the reinforcing filaments; and cooling the towpreg plies in a manner such that substantial cohesion between neighboring towpreg plies is prevented until the matrix forming material solidifies.
Enhancement of mechanical properties of 123 superconductors
Balachandran, Uthamalingam
1995-01-01
A composition and method of preparing YBa2Cu3O7-x superconductor are disclosed. Addition of tin-oxide-containing compounds to YBCO superconductors results in substantial improvement of fracture toughness and other mechanical properties without affecting Tc. Additions of about 5-20% give rise to substantially improved mechanical properties.
Multilayer insulation blanket, fabricating apparatus and method
Gonczy, John D.; Niemann, Ralph C.; Boroski, William N.
1992-01-01
An improved multilayer insulation blanket for insulating cryogenic structures operating at very low temperatures is disclosed. An apparatus and method for fabricating the improved blanket are also disclosed. In the improved blanket, each successive layer of insulating material is greater in length and width than the preceding layer so as to accommodate thermal contraction of the layers closest to the cryogenic structure. The fabricating apparatus has a rotatable cylindrical mandrel having an outer surface of fixed radius that is substantially arcuate, preferably convex, in cross-section. The method of fabricating the improved blanket comprises (a) winding a continuous sheet of thermally reflective material around the circumference of the mandrel to form multiple layers, (b) binding the layers along two lines substantially parallel to the edges of the circumference of the mandrel, (c) cutting the layers along a line parallel to the axle of the mandrel, and (d) removing the bound layers from the mandrel.
Method of fabricating a multilayer insulation blanket
Gonczy, John D.; Niemann, Ralph C.; Boroski, William N.
1993-01-01
An improved multilayer insulation blanket for insulating cryogenic structures operating at very low temperatures is disclosed. An apparatus and method for fabricating the improved blanket are also disclosed. In the improved blanket, each successive layer of insulating material is greater in length and width than the preceding layer so as to accommodate thermal contraction of the layers closest to the cryogenic structure. The fabricating apparatus has a rotatable cylindrical mandrel having an outer surface of fixed radius that is substantially arcuate, preferably convex, in cross-section. The method of fabricating the improved blanket comprises (a) winding a continuous sheet of thermally reflective material around the circumference of the mandrel to form multiple layers, (b) binding the layers along two lines substantially parallel to the edges of the circumference of the mandrel, (c) cutting the layers along a line parallel to the axle of the mandrel, and (d) removing the bound layers from the mandrel.
Report: EPA’s Method for Calculating Air Toxics Emissions for Reporting Results Needs Improvement
Report #2004-P-00012, March 31, 2004. Although the methods by which air toxics emissions are estimated have improved substantially, unvalidated assumptions and other limitations underlying the National Toxics Inventory (NTI) continue to impact its use as a Government Performance and Results Act (GPRA) performance measure.
Highly porous ceramic oxide aerogels having improved flexibility
NASA Technical Reports Server (NTRS)
Guo, Haiquan (Inventor); Meador, Mary Ann B. (Inventor); Nguyen, Baochau N. (Inventor)
2012-01-01
Ceramic oxide aerogels having improved flexibility are disclosed. Preferred embodiments exhibit high modulus and other strength properties despite their improved flexibility. The gels may be polymer cross-linked via organic polymer chains to further improve strength properties, without substantially detracting from the improved flexibility. Methods of making such aerogels are also disclosed.
Improved method for producing small hollow spheres
Rosencwaig, A.; Koo, J.C.; Dressler, J.L.
An improved method and apparatus for producing small hollow spheres of glass having an outer diameter ranging from about 100 µm to about 500 µm with a substantially uniform wall thickness in the range of about 0.5 to 20 µm are described. The method involves introducing aqueous droplets of a glass-forming solution into a long vertical drop oven or furnace having varying temperature regions.
Strengthening of certain types of arch dams at broad sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaganov, G. M.; Volkov, V. I.; Uchevatkin, A. A.
2012-01-15
The problem of strengthening defective and damaged arch dams is formulated, and methodical calculations are performed to substantiate a set of structural and production measures permitting substantial improvement in the stress-strain state and an increase in the safety factor of the structure. Feasibility of practical implementation of the results is foreseen.
Method and apparatus for measuring the NMR spectrum of an orientationally disordered sample
Pines, Alexander; Samoson, Ago
1990-01-01
An improved NMR probe and method are described which substantially improve the resolution of NMR measurements made on powdered or amorphous or otherwise orientationally disordered samples. The apparatus mechanically varies the orientation of the sample such that the time average of two or more sets of spherical harmonic functions is zero.
Method for production of sorghum hybrids with selected flowering times
Mullet, John E.; Rooney, William L.
2016-08-30
Methods and composition for the production of sorghum hybrids with selected and different flowering times are provided. In accordance with the invention, a substantially continual and high-yield harvest of sorghum is provided. Improved methods of seed production are also provided.
Method and apparatus for operating an improved thermocline storage unit
Copeland, R.J.
1982-09-30
A method and apparatus for operating a thermocline storage unit in which an insulated barrier member is provided substantially at the interface region between the hot and cold liquids in the storage tank. The barrier member physically and thermally separates the hot and cold liquids, substantially preventing any diffusing or mixing between them and substantially preventing any heat transfer therebetween. The barrier member follows the rise and fall of the interface region between the liquids as the tank is charged and discharged. Two methods of maintaining it in the interface region are disclosed. With the structure and operation of the present invention, and in particular the significant reduction in diffusing or mixing between the hot and cold liquids as well as the significant reduction in the thermal heat transfer between them, the performance of the storage tank is improved. More specifically, the stability of the interface region or thermocline is enhanced and the thickness of the thermocline is reduced, producing a corresponding increase in the steepness of the temperature gradient across the thermocline and a more efficiently operating thermocline storage unit.
Pines, Alexander; Samoson, Ago
1990-01-01
An improved NMR apparatus and method are described which substantially improve the resolution of NMR measurements made on powdered or amorphous or otherwise orientationally disordered samples. The apparatus spins the sample about an axis. The angle of the axis is mechanically varied such that the time average of two or more Legendre polynomials is zero.
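The zero-average conditions in the NMR abstracts above are concrete enough to check numerically: single-axis spinning at the magic angle zeroes the second Legendre polynomial P2(cos θ), while reorienting between two axes can zero the time averages of both P2 and P4 simultaneously. A minimal sketch; the solver, starting guesses, and function names are my own, not from the patents:

```python
import math

def p2(x):
    """Second Legendre polynomial."""
    return (3 * x * x - 1) / 2

def p4(x):
    """Fourth Legendre polynomial."""
    return (35 * x**4 - 30 * x * x + 3) / 8

# Single-axis case: the magic angle is where P2(cos theta) = 0.
MAGIC_ANGLE_DEG = math.degrees(math.acos(1 / math.sqrt(3)))  # ~54.74 degrees

def das_angles(c1=0.8, c2=0.2, iters=50):
    """Two-angle case with equal dwell time at each axis: solve
    p2(c1) + p2(c2) = 0 and p4(c1) + p4(c2) = 0 for the direction
    cosines c1, c2 by a 2-D Newton iteration."""
    for _ in range(iters):
        f1 = p2(c1) + p2(c2)
        f2 = p4(c1) + p4(c2)
        # Jacobian of (f1, f2) w.r.t. (c1, c2): dp2/dx = 3x, dp4/dx = 17.5x^3 - 7.5x
        j11, j12 = 3 * c1, 3 * c2
        j21, j22 = 17.5 * c1**3 - 7.5 * c1, 17.5 * c2**3 - 7.5 * c2
        det = j11 * j22 - j12 * j21
        c1 -= (f1 * j22 - f2 * j12) / det
        c2 -= (-f1 * j21 + f2 * j11) / det
    return math.degrees(math.acos(c1)), math.degrees(math.acos(c2))
```

With equal dwell times the Newton iteration lands on the well-known angle pair near 37.4° and 79.2°, alongside the familiar 54.74° magic angle for the single-axis case.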
ERIC Educational Resources Information Center
Stewart, Kelise K.; Carr, James E.; Brandt, Charles W.; McHenry, Meade M.
2007-01-01
The present study evaluated the effects of both a traditional lecture and the conservative dual-criterion (CDC) judgment aid on the ability of 6 university students to visually inspect AB-design line graphs. The traditional lecture reliably failed to improve visual inspection accuracy, whereas the CDC method substantially improved the performance…
USDA-ARS?s Scientific Manuscript database
The development of genomic selection methodology, with accompanying substantial gains in reliability for low-heritability traits, may dramatically improve the feasibility of genetic improvement of dairy cow health. Many methods for genomic analysis have now been developed, including the “Bayesian Al...
Are we in the dark ages of environmental toxicology?
McCarty, L S
2013-12-01
Environmental toxicology is judged to be in a "dark ages" period due to longstanding limitations in the implementation of the simple conceptual model that is the basis of current aquatic toxicity testing protocols. Fortunately, the environmental regulatory revolution of the last half-century is not substantially compromised, as past regulatory guidance was designed to deal with limited amounts of relatively poor-quality toxicity data. However, as regulatory objectives have substantially increased in breadth and depth, aquatic toxicity data derived with old testing methods are no longer adequate. In the near term, explicit model description and routine assumption validation should be mandatory. Updated testing methods could provide some improvements in toxicological data quality. A thorough reevaluation of toxicity testing objectives and methods, resulting in substantially revised standard testing methods, plus a comprehensive scheme for classification of modes/mechanisms of toxic action, should be the long-term objective.
Improved numerical methods for turbulent viscous recirculating flows
NASA Technical Reports Server (NTRS)
Turan, A.; Vandoormaal, J. P.
1988-01-01
The performance of discrete methods for the prediction of fluid flows can be enhanced by improving the convergence rate of solvers and by increasing the accuracy of the discrete representation of the equations of motion. This report evaluates the gains in solver performance that are available when various acceleration methods are applied. Various discretizations are also examined and two are recommended because of their accuracy and robustness. Insertion of the improved discretization and solver accelerator into a TEACH mode, that has been widely applied to combustor flows, illustrates the substantial gains to be achieved.
Microporous alumina ceramic membranes
Anderson, M.A.; Sheng, Guangyao
1993-05-04
Several methods are disclosed for the preparation of microporous alumina ceramic membranes. For the first time, porous alumina membranes are made which have mean pore sizes less than 100 Angstroms and substantially no pores larger than that size. The methods are based on improved sol-gel techniques.
Method for preparing 6-β-halopenicillanic acids
Hansen, Erik I.; Kran-Nielsen, Mogens P.; Von Daehne, Welf
1989-01-01
The present invention relates to a new and improved method for the preparation of a compound of formula I (chemical structure not reproduced here), in which R stands for halogen, giving rise to high yields of substantially pure 6β-halopenicillanic acids, obtained in one step.
FACTORING TO FIT OFF DIAGONALS.
imply an upper bound on the number of factors. When applied to somatotype data, the method improved substantially on centroid solutions and indicated a reinterpretation of earlier factoring studies. (Author)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anthony Leonard; Phillippe Chatelain; Michael Rebel
Heavy ground vehicles, especially those involved in long-haul freight transportation, consume a significant part of our nation's energy supply. It is therefore of utmost importance to improve their efficiency, both to reduce emissions and to decrease reliance on imported oil. At highway speeds, more than half of the power consumed by a typical semi truck goes into overcoming aerodynamic drag, a fraction which increases with speed and crosswind. Thanks to better tools and increased awareness, recent years have seen substantial aerodynamic improvements by the truck industry, such as tractor/trailer height matching, radiator area reduction, and swept fairings. However, there remains substantial room for improvement as understanding of turbulent fluid dynamics grows. The group's research effort focused on vortex particle methods, a novel approach for computational fluid dynamics (CFD). Where common CFD methods solve or model the Navier-Stokes equations on a grid which stretches from the truck surface outward, vortex particle methods solve the vorticity equation on a Lagrangian basis of smooth particles and do not require a grid. The group worked to advance the state of the art in vortex particle methods, improving their ability to handle the complicated, high-Reynolds-number flow around heavy vehicles. Specific challenges addressed include strategies to accurately capture vorticity generation and the resultant forces at the truck wall, handling the aerodynamics of spinning bodies such as tires, application of the method to the GTS model, computation-time reduction through improved integration methods, a closest-point transform for particle methods in complex geometries, and work on large eddy simulation (LES) turbulence modeling.
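The grid-free idea in the entry above is easiest to see in two dimensions, where vorticity is carried by point/blob vortices and each particle is convected by the velocity the others induce on it (a Biot-Savart sum). A toy illustration, not the project's code; the blob regularization constant and function names are my own assumptions:

```python
import math

def biot_savart_2d(positions, strengths, delta=0.05):
    """Velocity induced at each particle by all the others (2-D blob vortices).

    Uses a smoothing core of size `delta` in the denominator to regularize
    the singular point-vortex kernel K(r) = (-y, x) / (2*pi*|r|^2).
    """
    n = len(positions)
    vel = []
    for i in range(n):
        ux = uy = 0.0
        xi, yi = positions[i]
        for j in range(n):
            if i == j:
                continue
            dx, dy = xi - positions[j][0], yi - positions[j][1]
            r2 = dx * dx + dy * dy + delta * delta
            g = strengths[j] / (2 * math.pi * r2)
            ux += -g * dy  # counterclockwise swirl for positive strength
            uy += g * dx
        vel.append((ux, uy))
    return vel

def step(positions, strengths, dt):
    """One forward-Euler convection step: each particle moves with the
    velocity induced at its own location."""
    v = biot_savart_2d(positions, strengths)
    return [(x + dt * vx, y + dt * vy) for (x, y), (vx, vy) in zip(positions, v)]
```

Production vortex particle solvers add fast summation (treecodes or FMM) to tame the O(n²) cost of this double loop, plus viscous diffusion and wall vorticity generation; none of that is shown here.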
Probe for high resolution NMR with sample reorientation
Pines, Alexander; Samoson, Ago
1990-01-01
An improved NMR probe and method are described which substantially improve the resolution of NMR measurements made on powdered or amorphous or otherwise orientationally disordered samples. The apparatus mechanically varies the orientation of the sample such that the time average of two or more sets of spherical harmonic functions is zero.
Combining Heterogeneous Correlation Matrices: Simulation Analysis of Fixed-Effects Methods
ERIC Educational Resources Information Center
Hafdahl, Adam R.
2008-01-01
Monte Carlo studies of several fixed-effects methods for combining and comparing correlation matrices have shown that two refinements improve estimation and inference substantially. With rare exception, however, these simulations have involved homogeneous data analyzed using conditional meta-analytic procedures. The present study builds on…
Quantitative cardiac SPECT reconstruction with reduced image degradation due to patient anatomy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsui, B.M.W.; Zhao, X.D.; Gregoriou, G.K.
1994-12-01
Patient anatomy has complicated effects on cardiac SPECT images. The authors investigated reconstruction methods which substantially reduced these effects for improved image quality. A 3D mathematical cardiac-torso (MCAT) phantom which models the anatomical structures in the thorax region was used in the study. The phantom was modified to simulate variations in patient anatomy including regions of natural thinning along the myocardium, body size, diaphragmatic shape, gender, and size and shape of breasts for female patients. Distributions of attenuation coefficients and Tl-201 uptake in different organs in a normal patient were also simulated. Emission projection data were generated from the phantoms, including effects of attenuation and detector response. The authors have observed the attenuation-induced artifacts caused by patient anatomy in the conventional FBP reconstructed images. Accurate attenuation compensation using iterative reconstruction algorithms and attenuation maps substantially reduced the image artifacts and improved quantitative accuracy. They conclude that reconstruction methods which accurately compensate for non-uniform attenuation can substantially reduce image degradation caused by variations in patient anatomy in cardiac SPECT.
Calamur, Narasimhan; Carrera, Martin E.; Devlin, David J.; Archuleta, Tom
2000-01-01
The present invention relates to an improved method and apparatus for separating one or more condensable compounds from a mixture of two or more gases of differing volatilities by capillary fractionation in a membrane-type apparatus, and a method of forming porous structures therefor. More particularly, the invention includes methods of forming and using an apparatus consisting, at least in part, of a porous structure having capillary-type passages extending between a plurality of small openings on the first side and larger openings on a second side of the structure, the passages being adapted to permit a condensed liquid to flow therethrough substantially by capillary forces, whereby vapors from the mixture are condensed, at least in part, and substantially in and adjacent to the openings on the first side, and are caused to flow in a condensed liquid state, substantially in the absence of vapor, from the openings on the first side to the openings on the second side.
Borman, Andrew M; Fraser, Mark; Linton, Christopher J; Palmer, Michael D; Johnson, Elizabeth M
2010-06-01
Here, we present a significantly improved version of our previously published method for the extraction of fungal genomic DNA from pure cultures using Whatman FTA filter paper matrix technology. This modified protocol is extremely rapid, significantly more cost effective than our original method, and importantly, substantially reduces the problem of potential cross-contamination between sequential filters when employing FTA technology.
Improving Your Exploratory Factor Analysis for Ordinal Data: A Demonstration Using FACTOR
ERIC Educational Resources Information Center
Baglin, James
2014-01-01
Exploratory factor analysis (EFA) methods are used extensively in the field of assessment and evaluation. Due to EFA's widespread use, common methods and practices have come under close scrutiny. A substantial body of literature has been compiled highlighting problems with many of the methods and practices used in EFA, and, in response, many…
Brenner, Hermann; Jansen, Lina
2016-02-01
Monitoring cancer survival is a key task of cancer registries, but timely disclosure of progress in long-term survival remains a challenge. We introduce and evaluate a novel method, denoted "boomerang method," for deriving more up-to-date estimates of long-term survival. We applied three established methods (cohort, complete, and period analysis) and the boomerang method to derive up-to-date 10-year relative survival of patients diagnosed with common solid cancers and hematological malignancies in the United States. Using the Surveillance, Epidemiology and End Results 9 database, we compared the most up-to-date age-specific estimates that might have been obtained with the database including patients diagnosed up to 2001 with 10-year survival later observed for patients diagnosed in 1997-2001. For cancers with little or no increase in survival over time, the various estimates of 10-year relative survival potentially available by the end of 2001 were generally rather similar. For malignancies with strongly increasing survival over time, including breast and prostate cancer and all hematological malignancies, the boomerang method provided estimates that were closest to later observed 10-year relative survival in 23 of the 34 groups assessed. The boomerang method can substantially improve up-to-dateness of long-term cancer survival estimates in times of ongoing improvement in prognosis.
Method for providing improved solid fuels from agglomerated subbituminous coal
Janiak, Jerzy S.; Turak, Ali A.; Pawlak, Wanda; Ignasiak, Boleslaw L.
1989-01-01
A method is provided for separating agglomerated subbituminous coal and the heavy bridging liquid used to form the agglomerates. The separation is performed by contacting the agglomerates with inert gas or steam at a temperature in the range of 250.degree. to 350.degree. C. at substantially atmospheric pressure.
IMPROVED TEMPERATURE STABILITY OF SULFUR DIOXIDE SAMPLES COLLECTED BY THE FEDERAL REFERENCE METHOD
This report describes an examination of the reagents present in the SO2 Federal Reference Method (FRM) to determine if any change in reagent concentration or condition could bring about substantial, if not complete, retardation of the effect of temperature on the stability of col...
Photovoltaic cell with thin CdS layer
Jordan, John F.; Albright, Scot P.
1994-01-18
An improved photovoltaic panel and method of forming a photovoltaic panel are disclosed for producing a high efficiency CdS/CdTe photovoltaic cell. The photovoltaic panel of the present invention is initially formed with a substantially thick CdS layer, and the effective thickness of the CdS layer is substantially reduced during regrowth to both form larger diameter CdTe crystals and substantially reduce the effective thickness of the CdS layer. This invention was made with Government support under Subcontract No. ZL-7-06031-3 awarded by the Department of Energy. The Government has certain rights in this invention.
ERIC Educational Resources Information Center
Steward, Michelle D.; Martin, Gregory S.; Burns, Alvin C.; Bush, Ronald F.
2010-01-01
This study introduces marketing educators to the Madeline Hunter Direct Instruction Model (HDIM) as an approach to significantly and substantially improve student learning through course-embedded assessment. The effectiveness of the method is illustrated in three different marketing courses taught by three different marketing professors. The…
A Method of Stimulating Original Thinking in College Students.
ERIC Educational Resources Information Center
Robertson, Malcolm H.
An attempt was made to substantiate the hypotheses that subjects receiving sensory deprivation would show more improvement in originality than those exposed to a normal sensory environment, and that subjects receiving 4 hours of isolation would show more improvement in originality than those receiving 2 hours of isolation. About 60 volunteer,…
Lynn, Joanne
2011-04-01
The methods for healthcare reform are strikingly underdeveloped, with much reliance on political power. A methodology that combined methods from sources such as clinical trials, experience-based wisdom, and improvement science could be among the aims of the upcoming work in the USA on comparative effectiveness and on the agenda of the Center for Medicare and Medicaid Innovation in the Centers for Medicare and Medicaid Services. Those working in quality improvement have an unusual opportunity to generate substantial input into these processes through professional organisations such as the Academy for Healthcare Improvement and dominant leadership organisations such as the Institute for Healthcare Improvement.
Improved COD Measurements for Organic Content in Flowback Water with High Chloride Concentrations.
Cardona, Isabel; Park, Ho Il; Lin, Lian-Shin
2016-03-01
An improved method was used to determine chemical oxygen demand (COD) as a measure of organic content in water samples containing high chloride content. A contour plot of COD percent error over the Cl(-) versus Cl(-):COD domain showed that COD errors increased with Cl(-):COD. Substantial errors (>10%) could occur in low Cl(-):COD regions (<300) for samples with low (<10 g/L) and high (>25 g/L) chloride concentrations. Applying the method to flowback water samples resulted in COD concentrations ranging from 130 to 1060 mg/L, which were substantially lower than previously reported values for flowback water samples from Marcellus Shale (228 to 21,900 mg/L). It is likely that overestimations of COD in the previous studies occurred as a result of chloride interferences. Pretreatment with mercuric sulfate, use of a low-strength digestion solution, and use of the contour plot to correct COD measurements are feasible steps to significantly improve the accuracy of COD measurements.
Quantifying Reporting Timeliness to Improve Outbreak Control
Swaan, Corien; van Steenbergen, Jim; Kretzschmar, Mirjam
2015-01-01
The extent to which reporting delays should be reduced to gain substantial improvement in outbreak control is unclear. We developed a model to quantitatively assess reporting timeliness. Using reporting speed data for 6 infectious diseases in the notification system in the Netherlands, we calculated the proportion of infections produced by index and secondary cases until the index case is reported. We assumed interventions that immediately stop transmission. Reporting delays render useful only those interventions that stop transmission from index and secondary cases. We found that current reporting delays are adequate for hepatitis A and B control. However, reporting delays should be reduced by a few days to improve measles and mumps control, by at least 10 days to improve shigellosis control, and by at least 5 weeks to substantially improve pertussis control. Our method provides quantitative insight into the required reporting delay reductions needed to achieve outbreak control and other transmission prevention goals. PMID:25625374
Meaningful improvement in gait speed in hip fracture recovery.
Alley, Dawn E; Hicks, Gregory E; Shardell, Michelle; Hawkes, William; Miller, Ram; Craik, Rebecca L; Mangione, Kathleen K; Orwig, Denise; Hochberg, Marc; Resnick, Barbara; Magaziner, Jay
2011-09-01
To estimate meaningful improvements in gait speed observed during recovery from hip fracture and to evaluate the sensitivity and specificity of gait speed changes in detecting change in self-reported mobility. Secondary longitudinal data analysis from two randomized controlled trials. Twelve hospitals in the Baltimore, Maryland, area. Two hundred seventeen women admitted with hip fracture. Usual gait speed and self-reported mobility (ability to walk 1 block and climb 1 flight of stairs) measured 2 and 12 months after fracture. Effect size-based estimates of meaningful differences were 0.03 for small differences and 0.09 for substantial differences. Depending on the anchor (stairs vs walking) and method (mean difference vs regression), anchor-based estimates ranged from 0.10 to 0.17 m/s for small meaningful improvements and 0.17 to 0.26 m/s for substantial meaningful improvement. Optimal gait speed cut points yielded low sensitivity (0.39-0.62) and specificity (0.57-0.76) for improvements in self-reported mobility. Results from this sample of women recovering from hip fracture provide only limited support for the 0.10-m/s cut point for substantial meaningful change previously identified in community-dwelling older adults experiencing declines in walking abilities. Anchor-based estimates and cut points derived from receiver operating characteristic curve analysis suggest that greater improvements in gait speed may be required for substantial perceived mobility improvement in female hip fracture patients. Furthermore, gait speed change performed poorly in discriminating change in self-reported mobility. Estimates of meaningful change in gait speed may differ based on the direction of change (improvement vs decline) or between patient populations. © 2011, Copyright the Authors. Journal compilation © 2011, The American Geriatrics Society.
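The sensitivity and specificity of a gait-speed change cut point against a self-reported anchor can be computed directly; the data below are invented for illustration and are not the trial's measurements.

```python
# Sketch: sensitivity and specificity of a gait-speed change cut point
# for detecting self-reported mobility improvement. Data are invented.

def sens_spec(speed_changes, improved_flags, cut_point):
    """Classify 'improved' when gait-speed change >= cut_point (m/s)."""
    pairs = list(zip(speed_changes, improved_flags))
    tp = sum(1 for d, y in pairs if d >= cut_point and y)
    fn = sum(1 for d, y in pairs if d < cut_point and y)
    tn = sum(1 for d, y in pairs if d < cut_point and not y)
    fp = sum(1 for d, y in pairs if d >= cut_point and not y)
    return tp / (tp + fn), tn / (tn + fp)

changes  = [0.05, 0.12, 0.20, -0.02, 0.15, 0.08, 0.30, 0.01]
improved = [False, True, True, False, False, True, True, False]
sens, spec = sens_spec(changes, improved, 0.10)
```

Sweeping `cut_point` over a grid and tracing (1 - specificity, sensitivity) yields the receiver operating characteristic curve the abstract refers to.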
Apparatus and method for the electrolytic production of metals
Sadoway, Donald R.
1991-01-01
Improved electrolytic cells and methods for producing metals by electrolytic reduction of a compound dissolved in a molten electrolyte are disclosed. In the improved cells and methods, a protective surface layer is formed upon at least one electrode in the electrolytic reduction cell and, optionally, upon the lining of the cell. This protective surface layer comprises a material that, at the operating conditions of the cell: (a) is not substantially reduced by the metal product; (b) is not substantially reactive with the cell electrolyte to form materials that are reactive with the metal product; and, (c) has an electrochemical potential that is more electronegative than that of the compound undergoing electrolysis to produce the metal product of the cell. The protective surface layer can be formed upon an electrode metal layer comprising a material, the oxide of which also satisfies the protective layer selection criteria. The protective layer material can also be used on the surface of a cell lining.
Ma, Yan; Zhang, Wei; Lyman, Stephen; Huang, Yihe
2018-06-01
To identify the most appropriate imputation method for missing data in the HCUP State Inpatient Databases (SID) and assess the impact of different missing data methods on racial disparities research. HCUP SID. A novel simulation study compared four imputation methods (random draw, hot deck, joint multiple imputation [MI], conditional MI) for missing values for multiple variables, including race, gender, admission source, median household income, and total charges. The simulation was built on real data from the SID to retain their hierarchical data structures and missing data patterns. Additional predictive information from the U.S. Census and American Hospital Association (AHA) database was incorporated into the imputation. Conditional MI prediction was equivalent or superior to the best performing alternatives for all missing data structures and substantially outperformed each of the alternatives in various scenarios. Conditional MI substantially improved statistical inferences for racial health disparities research with the SID. © Health Research and Educational Trust.
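As one illustration of the compared approaches, hot-deck imputation fills a missing value by drawing at random from observed "donor" records in the same stratum. A minimal sketch with invented records, not the HCUP SID implementation:

```python
# Minimal hot-deck imputation sketch: a missing value is replaced by a
# value drawn at random from observed donor records in the same
# stratum. Illustrates the general method, not the paper's pipeline.
import random

def hot_deck_impute(records, target, strata_keys, seed=0):
    rng = random.Random(seed)
    donors = {}
    for r in records:
        if r[target] is not None:
            cell = tuple(r[k] for k in strata_keys)
            donors.setdefault(cell, []).append(r[target])
    filled = []
    for r in records:
        r = dict(r)  # do not mutate the input
        if r[target] is None:
            cell = tuple(r[k] for k in strata_keys)
            r[target] = rng.choice(donors[cell])
        filled.append(r)
    return filled

records = [
    {"gender": "F", "age_band": "65+", "race": "white"},
    {"gender": "F", "age_band": "65+", "race": None},
    {"gender": "M", "age_band": "65+", "race": "black"},
]
imputed = hot_deck_impute(records, "race", ["gender", "age_band"])
```

Conditional multiple imputation, the method the study found superior, differs in that it fits a predictive model per variable and draws several completed data sets rather than a single donor value.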
NASA Technical Reports Server (NTRS)
Seng, G. T.; Otterson, D. A.
1983-01-01
Two high performance liquid chromatographic (HPLC) methods have been developed for the determination of saturates, olefins and aromatics in petroleum and shale derived mid-distillate fuels. In one method, the fuel to be analyzed is reacted with sulfuric acid to remove a substantial portion of the aromatics, which provides a reacted fuel fraction for use in group-type quantitation. The second involves the removal of a substantial portion of the saturates fraction from the HPLC system to permit the determination of olefin concentrations as low as 0.3 volume percent, and to improve the accuracy and precision of olefin determinations. Each method was evaluated using model compound mixtures and real fuel samples.
Curry, Leslie A; Brault, Marie A; Linnander, Erika L; McNatt, Zahirah; Brewster, Amanda L; Cherlin, Emily; Flieger, Signe Peterson; Ting, Henry H; Bradley, Elizabeth H
2018-01-01
Background Hospital organisational culture affects patient outcomes including mortality rates for patients with acute myocardial infarction; however, little is known about whether and how culture can be positively influenced. Methods This is a 2-year, mixed-methods interventional study in 10 US hospitals to foster improvements in five domains of organisational culture: (1) learning environment, (2) senior management support, (3) psychological safety, (4) commitment to the organisation and (5) time for improvement. Outcomes were change in culture, uptake of five strategies associated with lower risk-standardised mortality rates (RSMR) and RSMR. Measures included a validated survey at baseline and at 12 and 24 months (n=223; average response rate 88%); in-depth interviews (n=393 interviews with 197 staff); and RSMR data from the Centers for Medicare and Medicaid Services. Results We observed significant changes (p<0.05) in culture between baseline and 24 months in the full sample, particularly in learning environment (p<0.001) and senior management support (p<0.001). Qualitative data indicated substantial shifts in these domains as well as psychological safety. Six of the 10 hospitals achieved substantial improvements in culture, and four made less progress. The use of evidence-based strategies also increased significantly (per hospital average of 2.4 strategies at baseline to 3.9 strategies at 24 months; p<0.05). The six hospitals that demonstrated substantial shifts in culture also experienced significantly greater reductions in RSMR than the four hospitals that did not shift culture (reduced RSMR by 1.07 percentage points vs 0.23 percentage points; p=0.03) between 2011–2014 and 2012–2015. Conclusions Investing in strategies to foster an organisational culture that supports high performance may help hospitals in their efforts to improve clinical outcomes. PMID:29101292
ERIC Educational Resources Information Center
Walberg, Herbert J.
2010-01-01
This book summarizes the major research findings that show how to substantially increase student achievement. This book draws on a number of investigators who have statistically synthesized many studies. A new education method showing superior results in 90% of the studies concerning it has more credibility than a method that shows results in only…
An improved semi-implicit method for structural dynamics analysis
NASA Technical Reports Server (NTRS)
Park, K. C.
1982-01-01
A semi-implicit algorithm is presented for direct time integration of the structural dynamics equations. The algorithm avoids the factoring of the implicit difference solution matrix and mitigates the unacceptable accuracy losses which plagued previous semi-implicit algorithms. This substantial accuracy improvement is achieved by augmenting the solution matrix with two simple diagonal matrices of the order of the integration truncation error.
Structural reanalysis via a mixed method. [using Taylor series for accuracy improvement
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lowder, H. E.
1975-01-01
A study is made of the approximate structural reanalysis technique based on the use of Taylor series expansion of response variables in terms of design variables in conjunction with the mixed method. In addition, comparisons are made with two reanalysis techniques based on the displacement method. These techniques are the Taylor series expansion and the modified reduced basis. It is shown that the use of the reciprocals of the sizing variables as design variables (which is the natural choice in the mixed method) can result in a substantial improvement in the accuracy of the reanalysis technique. Numerical results are presented for a space truss structure.
1991-08-01
being used in both current and long-range research programs that are expected to make the Army more effective in matching the requirements for first- and... make substantial improvements to the existing selection and classification system. IMPROVING THE SELECTION, CLASSIFICATION, AND UTILIZATION OF...basis for new methods of allocating personnel, and making near-real-time decisions on the best match between characteristics of an individual enlistee
Gentile, T. R.; Nacher, P. J.; Saam, B.; Walker, T. G.
2018-01-01
This article reviews the physics and technology of producing large quantities of highly spin-polarized 3He nuclei using spin-exchange (SEOP) and metastability-exchange (MEOP) optical pumping. Both technical developments and deeper understanding of the physical processes involved have led to substantial improvements in the capabilities of both methods. For SEOP, the use of spectrally narrowed lasers and K-Rb mixtures has substantially increased the achievable polarization and polarizing rate. For MEOP, nearly lossless compression allows for rapid production of polarized 3He, and operation in high magnetic fields has likewise significantly increased the pressure at which this method can be performed and revealed new phenomena. Both methods have benefitted from development of storage methods that allow for spin-relaxation times of hundreds of hours, and specialized precision methods for polarimetry. SEOP and MEOP are now widely applied for spin-polarized targets, neutron spin filters, magnetic resonance imaging, and precision measurements. PMID:29503479
ERIC Educational Resources Information Center
Botagariyev, Tulegen A.; Kubiyeva, Svetlana S.; Baizakova, Venera E.; Mambetov, Nurolla; Tulegenov, Yerkin K.; Aralbayev, Alpysbay S.; Kairgozhin, Dulat U.
2016-01-01
The purpose of this study was to determine the effectiveness of the existing model of teaching physical training in secondary schools and to analyze a game-like method introduced to improve the physical fitness of students. The authors substantiated the use of a game-like method during physical training classes, the implementation of which should create…
Alsop, Eric B; Raymond, Jason
2013-01-01
Oligonucleotide signatures, especially tetranucleotide signatures, have been used as a method for homology binning by exploiting an organism's inherent biases towards the use of specific oligonucleotide words. Tetranucleotide signatures have been especially useful in environmental metagenomics samples, as many of these samples contain organisms from poorly classified phyla which cannot be easily identified using traditional homology methods, including NCBI BLAST. This study examines oligonucleotide signatures across 1,424 completed genomes from across the tree of life, substantially expanding upon previous work. A comprehensive analysis of mononucleotide through nonanucleotide word lengths suggests that longer word lengths substantially improve the classification of DNA fragments across a range of sizes of relevance to high throughput sequencing. We find that, at present, heptanucleotide signatures represent an optimal balance between prediction accuracy and computational time for resolving taxonomy using both genomic and metagenomic fragments. We directly compare the ability of tetranucleotide and heptanucleotide word lengths (tetranucleotide signatures are the current standard for oligonucleotide word usage analyses) for taxonomic binning of metagenome reads. We present evidence that heptanucleotide word lengths consistently provide more taxonomic resolving power, particularly in distinguishing between closely related organisms that are often present in metagenomic samples. This implies that longer oligonucleotide word lengths should replace tetranucleotide signatures for most analyses. Finally, we show that the application of longer word lengths to metagenomic datasets leads to more accurate taxonomic binning of DNA scaffolds and has the potential to substantially improve taxonomic assignment and assembly of metagenomic data.
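The underlying computation is a normalized k-mer frequency vector compared between sequences; k=4 gives tetranucleotide signatures and k=7 heptanucleotide signatures. A minimal sketch follows; the cosine comparison is a simple illustrative choice, not necessarily the paper's exact classifier:

```python
# Sketch of oligonucleotide-signature binning: compute a normalized
# k-mer frequency vector per sequence and compare sequences by cosine
# similarity. k=4 -> tetranucleotide, k=7 -> heptanucleotide.
from itertools import product
from math import sqrt

def kmer_signature(seq, k):
    counts = {"".join(w): 0 for w in product("ACGT", repeat=k)}
    for i in range(len(seq) - k + 1):
        word = seq[i:i + k]
        if word in counts:          # skips words with ambiguous bases
            counts[word] += 1
    total = sum(counts.values()) or 1
    return {w: c / total for w, c in counts.items()}

def cosine(sig_a, sig_b):
    dot = sum(sig_a[w] * sig_b[w] for w in sig_a)
    na = sqrt(sum(v * v for v in sig_a.values()))
    nb = sqrt(sum(v * v for v in sig_b.values()))
    return dot / (na * nb)

# A fragment would be binned with the reference genome whose
# signature it most resembles:
sig = kmer_signature("ACGTACGTACGTAC", 4)
```

Note the trade-off the abstract describes: the signature has 4^k entries, so heptanucleotide vectors (16,384 entries) cost more memory and time than tetranucleotide ones (256 entries).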
Smith, Lindsey P; Hua, Jenna; Seto, Edmund; Du, Shufa; Zang, Jiajie; Zou, Shurong; Popkin, Barry M; Mendez, Michelle A
2014-01-01
This paper addresses the need for diet assessment methods that capture the rapidly changing beverage consumption patterns in China. The objective of this study was to develop a 3-day smartphone-assisted 24-hour recall to improve the quantification of beverage intake amongst young Chinese adults (n=110) and validate, in a small subset (n=34), the extent to which the written record and smartphone-assisted recalls adequately estimated total fluid intake, using 24-hour urine samples. The smartphone-assisted method showed improved validity compared with the written record-assisted method, when comparing reported total fluid intake to total urine volume. However, participants reported consuming fewer beverages on the smartphone-assisted method compared with the written record-assisted method, primarily due to decreased consumption of traditional zero-energy beverages (i.e. water, tea) in the smartphone-assisted method. It is unclear why participants reported fewer beverages in the smartphone-assisted method than the written record-assisted method. One possibility is that participants found the smartphone method too cumbersome, and responded by decreasing beverage intake. These results suggest that smartphone-assisted 24-hour recalls perform comparably but do not appear to substantially improve beverage quantification compared with the current written record-based approach. In addition, we piloted a beverage screener to identify consumers of episodically consumed SSBs. As expected, a substantially higher proportion of consumers reported consuming SSBs on the beverage screener compared with either recall type, suggesting that a beverage screener may be useful in characterizing consumption of episodically consumed beverages in China's dynamic food and beverage landscape.
Ozeki, Tetsuya; Tagami, Tatsuaki
2013-01-01
The development of drug nanoparticles has attracted substantial attention because of their potential to improve the dissolution rate and oral availability of poorly water-soluble drugs. This review summarizes the recent articles that discussed nanoparticle-based oral drug delivery systems. The preparation methods were categorized as top-down and bottom-up methods, which are common methods for preparing drug nanoparticles. In addition, methods of handling drug nanoparticles (e.g., one-step preparation of nanocomposites which are microparticles containing drug nanoparticles) were introduced for the effective preservation of drug nanoparticles. The carrier-based preparation of drug nanoparticles was also introduced as a potentially promising oral drug delivery system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Namikawa, Toshiya
We present here a new method for delensing B modes of the cosmic microwave background (CMB) using a lensing potential reconstructed from the same realization of the CMB polarization (CMB internal delensing). The B-mode delensing is required to improve sensitivity to primary B modes generated by, e.g., the inflationary gravitational waves, axionlike particles, modified gravity, primordial magnetic fields, and topological defects such as cosmic strings. However, the CMB internal delensing suffers from substantial biases due to correlations between observed CMB maps to be delensed and that used for reconstructing a lensing potential. Since the bias depends on realizations, we construct a realization-dependent (RD) estimator for correcting these biases by deriving a general optimal estimator for higher-order correlations. The RD method is less sensitive to simulation uncertainties. Compared to the previous ℓ-splitting method, we find that the RD method corrects the biases without substantial degradation of the delensing efficiency.
Automated Analysis of Renewable Energy Datasets ('EE/RE Data Mining')
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, Brian; Elmore, Ryan; Getman, Dan
This poster illustrates methods to substantially improve the understanding of renewable energy data sets and the depth and efficiency of their analysis through the application of statistical learning methods ('data mining') in the intelligent processing of these often large and messy information sources. The six examples apply methods for anomaly detection, data cleansing, and pattern mining to time-series data (measurements from metering points in buildings) and spatiotemporal data (renewable energy resource datasets).
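A representative anomaly-detection step of the kind applied to metered building time series can be sketched with a simple deviation test. This is a generic illustration, not NREL's actual processing pipeline:

```python
# Illustrative anomaly detection for metered time-series data: flag
# points more than z standard deviations from the series mean.
# A generic sketch, not the poster's actual data-mining method.
from statistics import mean, stdev

def find_anomalies(series, z=3.0):
    mu, sigma = mean(series), stdev(series)
    return [i for i, x in enumerate(series)
            if sigma > 0 and abs(x - mu) > z * sigma]

# A meter reading with one spurious spike at index 5:
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 55.0, 10.0, 10.1]
spikes = find_anomalies(readings, z=2.0)
```

In practice a rolling window or seasonal baseline replaces the global mean, since building loads have strong daily and weekly cycles.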
How to reduce injuries to residual trees during stand management activities.
Paul E. Aho; Gary Fiddler; Gregory M. Filip
1983-01-01
Losses of trees and tree volume that result from decay initiated by mechanical injuries during stand management activities in the western United States are substantial. They can be reduced through improved logging methods and careful planning of other forest management activities.
Infrared upconversion for astronomy
NASA Technical Reports Server (NTRS)
Boyd, R. W.
1977-01-01
The basic theory of upconversion is presented, along with a brief historical summary of upconversion techniques. Upconverters were used in astronomical studies, but have met with only modest success. Upconversion will become a useful detection method for astronomy only if substantial, but perhaps foreseeable, improvements can be realized.
NASA Technical Reports Server (NTRS)
Crimi, P.
1974-01-01
A method for analyzing unsteady airfoil stall was refined by including nonlinear effects in the representation of the inviscid flow. Certain other aspects of the potential-flow model were reexamined and the effects of varying Reynolds number on stall characteristics were investigated. Refinement of the formulation improved the representation of the flow and chordwise pressure distribution below stall, but substantial quantitative differences between computed and measured results are still evident for sinusoidal pitching through stall. Agreement is substantially improved by assuming the growth rate of the dead-air region at the onset of leading-edge stall is of the order of the component of the free stream normal to the airfoil chordline. The method predicts the expected increase in the resistance to stalling with increasing Reynolds number. Results indicate that a given airfoil can undergo both trailing-edge and leading-edge stall under unsteady conditions.
NASA Astrophysics Data System (ADS)
Wong, Jaime G.; Rosi, Giuseppe A.; Rouhi, Amirreza; Rival, David E.
2017-10-01
Particle tracking velocimetry (PTV) produces high-quality temporal information that is often neglected when computing spatial gradients. A method is presented here to utilize this temporal information in order to improve the estimation of spatial gradients for spatially unstructured Lagrangian data sets. Starting with an initial guess, this method penalizes any gradient estimate where the substantial derivative of vorticity along a pathline is not equal to the local vortex stretching/tilting. Furthermore, given an initial guess, this method can proceed on an individual pathline without any further reference to neighbouring pathlines. The equivalence of the substantial derivative and vortex stretching/tilting is based on the vorticity transport equation, where viscous diffusion is neglected. By minimizing the residual of the vorticity-transport equation, the proposed method is first tested to reduce error and noise on a synthetic Taylor-Green vortex field dissipating in time. Furthermore, when the proposed method is applied to high-density experimental data collected with 'Shake-the-Box' PTV, noise within the spatial gradients is significantly reduced. In the particular test case investigated here of an accelerating circular plate captured during a single run, the method acts to delineate the shear layer and vortex core, as well as resolve the Kelvin-Helmholtz instabilities, which were previously unidentifiable without the use of ensemble averaging. The proposed method shows promise for improving PTV measurements that require robust spatial gradients while retaining the unstructured Lagrangian perspective.
Method of dehydroxylating a hydroxylated material and method of making a mesoporous film
Domansky, Karel [Richland, WA; Fryxell, Glen E [Kennewick, WA; Liu, Jun [West Richland, WA; Kohler, Nathan J [Richland, WA; Baskaran, Suresh [Kennewick, WA
2002-05-07
The present invention is a method of dehydroxylating a silica surface that is hydroxylated having the steps of exposing the silica surface separately to a silicon organic compound and a dehydroxylating gas. Exposure to the silicon organic compound can be in liquid, gas or solution phase, and exposure to a dehydroxylating gas is typically at elevated temperatures. In one embodiment, the improvement of the dehydroxylation procedure is the repetition of the soaking and dehydroxylating gas exposure. In another embodiment, the improvement is the use of an inert gas that is substantially free of hydrogen. In yet another embodiment, the present invention is the combination of the two-step dehydroxylation method with a surfactant templating method of making a mesoporous film.
NASA Astrophysics Data System (ADS)
Mason, J. M.; Fahy, F. J.
1986-10-01
The effectiveness of tuned Helmholtz resonators connected to the partition cavity in double-leaf partitions utilized in situations requiring low weight structures with high transmission loss is investigated as a method of improving sound transmission loss. This is demonstrated by a simple theoretical model and then experimentally verified. Results show that substantial improvements may be obtained at and around the mass-air-mass frequency for a total resonator volume 15 percent of the cavity volume.
Microbial fuel cell with improved anode
Borole, Abhijeet P.
2010-04-13
The present invention relates to a method for preparing a microbial fuel cell, wherein the method includes: (i) inoculating an anodic liquid medium in contact with an anode of the microbial fuel cell with one or more types of microorganisms capable of functioning by an exoelectrogenic mechanism; (ii) establishing a biofilm of the microorganisms on and/or within the anode along with a substantial absence of planktonic forms of the microorganisms by substantial removal of the planktonic microorganisms during forced flow and recirculation conditions of the anodic liquid medium; and (iii) subjecting the microorganisms of the biofilm to a growth stage by incorporating one or more carbon-containing nutritive compounds in the anodic liquid medium during biofilm formation or after biofilm formation on the anode has been established.
Application of Hands-On Simulation Games to Improve Classroom Experience
ERIC Educational Resources Information Center
Hamzeh, Farook; Theokaris, Christina; Rouhana, Carel; Abbas, Yara
2017-01-01
While many construction companies claim substantial productivity and profit gains when applying lean construction principles, it remains a challenge to teach these principles in a classroom. Lean construction emphasises collaborative processes and integrated delivery practices. Consequently, new teaching methods that nurture such values should…
FOCUSING ON CHILDREN’S INHALATION DOSIMETRY AND HEALTH EFFECTS FOR RISK ASSESSMENT: AN INTRODUCTION
Substantial effort has been invested in improving children’s health risk assessment in recent years. However, the body of scientific evidence in support of children’s health assessment is constantly advancing, requiring continual updating of risk assessment methods. Children’s i...
Polymer nanocomposites for lithium battery applications
Sandi-Tapia, Giselle; Gregar, Kathleen Carrado
2006-07-18
A single ion-conducting nanocomposite of a substantially amorphous polyethylene ether and a negatively charged synthetic smectite clay useful as an electrolyte. Excess SiO2 improves conductivity and, when combined with synthetic hectorite, forms superior membranes for batteries. A method of making membranes is also disclosed.
Numerical analysis of mixing enhancement for micro-electroosmotic flow
NASA Astrophysics Data System (ADS)
Tang, G. H.; He, Y. L.; Tao, W. Q.
2010-05-01
Micro-electroosmotic flow is usually slow with negligible inertial effects, and diffusion-based mixing can be problematic. To gain an improved understanding of electroosmotic mixing in microchannels, a numerical study has been carried out for channels patterned with wall blocks, and channels patterned with heterogeneous surfaces. The lattice Boltzmann method has been employed to obtain the external electric field, the electric potential distribution in the electrolyte, the flow field, and the species concentration distribution within the same framework. The simulation results show that wall blocks and heterogeneous surfaces can significantly disturb the streamlines by fluid folding and stretching, leading to apparently substantial improvements in mixing. However, the results show that the introduction of such features can substantially reduce the mass flow rate and thus effectively prolong the available mixing time when the flow passes through the channel. This is a non-negligible factor in the effectiveness of the observed improvements in mixing efficiency. Compared with the heterogeneous surface distribution, the wall block cases can achieve more effective enhancement in the same mixing time. In addition, the field synergy theory is extended to analyze the mixing enhancement in electroosmotic flow. The distribution of the local synergy angle in the channel aids in evaluating the effectiveness of the enhancement method.
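Mixing efficiency of the kind evaluated here is commonly quantified from the variance of the species concentration across a channel cross-section. A minimal sketch of that generic definition, not tied to the paper's lattice Boltzmann solver:

```python
# Generic mixing-index sketch: 1 minus the normalized standard
# deviation of concentration across a cross-section, so that
# 0 = fully segregated and 1 = fully mixed. This is a common textbook
# definition, not the paper's specific metric.
from math import sqrt

def mixing_index(concentrations, c_mixed=0.5):
    n = len(concentrations)
    var = sum((c - c_mixed) ** 2 for c in concentrations) / n
    var0 = c_mixed * (1.0 - c_mixed)  # variance of fully segregated streams
    return 1.0 - sqrt(var / var0)

unmixed = [0.0, 0.0, 1.0, 1.0]  # two side-by-side segregated streams
mixed   = [0.5, 0.5, 0.5, 0.5]  # uniform concentration
```

Evaluating this index at successive cross-sections along the channel shows how quickly wall blocks or heterogeneous surfaces drive the flow toward the fully mixed state.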
Improved method for implicit Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, F. B.; Martin, W. R.
2001-01-01
The Implicit Monte Carlo (IMC) method has been used for over 30 years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Reference [2] provided an exact error analysis of IMC for 0-D problems and demonstrated that IMC can exhibit substantial errors when timesteps are large. These temporal errors are inherent in the method and are in addition to spatial discretization errors and approximations that address nonlinearities (due to variation of physical constants). In Reference [3], IMC and four other methods were analyzed in detail and compared on both theoretical grounds and the accuracy of numerical tests. As discussed there, two alternative schemes for solving the radiative transfer equations, the Carter-Forest (C-F) method and the Ahrens-Larsen (A-L) method, do not exhibit the errors found in IMC; for 0-D, both of these methods are exact for all time, while for 3-D, A-L is exact for all time and C-F is exact within a timestep. These methods can yield substantially superior results to IMC.
Zhou, Xu; Wang, Qilin; Jiang, Guangming; Zhang, Xiwang; Yuan, Zhiguo
2014-12-01
Improvement of sludge dewaterability is crucial for reducing the costs of sludge disposal in wastewater treatment plants. This study presents a novel method based on combined conditioning with zero-valent iron (ZVI) and hydrogen peroxide (HP) at pH 2.0 to improve dewaterability of a full-scale waste activated sludge (WAS). The combination of ZVI (0-750 mg/L) and HP (0-750 mg/L) at pH 2.0 substantially improved the WAS dewaterability due to Fenton-like reactions. The highest improvement in WAS dewaterability was attained at 500 mg ZVI/L and 250 mg HP/L, when the capillary suction time of the WAS was reduced by approximately 50%. Particle size distribution indicated that the sludge flocs were decomposed after conditioning. Economic analysis showed that combined conditioning with ZVI and HP was a more economically favorable method for improving WAS dewaterability than the classical Fenton reaction based method initiated by ferrous salts and HP. Copyright © 2014 Elsevier Ltd. All rights reserved.
Research methodology in Dentistry: Part I – The essentials and relevance of research
Krithikadatta, Jogikalmat
2012-01-01
The need for scientific evidence should be the basis of clinical practice. The field of restorative dentistry and endodontics is evolving at a rapid pace, with the introduction of several materials, instruments, and equipment. However, there is minimal information on their relevance in clinical practice. Material and laboratory research is critical; however, its translation into clinical practice is not substantiated enough by clinical research. This four-part review series focuses on methods to improve evidence-based practice by improving methods to integrate laboratory and clinical research. PMID:22368327
Method For Enhanced Gas Monitoring In High Density Flow Streams
Von Drasek, William A.; Mulderink, Kenneth A.; Marin, Ovidiu
2005-09-13
A method for conducting laser absorption measurements in high-temperature process streams having high levels of particulate matter is disclosed. An impinger is positioned substantially parallel to the laser beam propagation path and at an upstream position relative to the laser beam. Beam-shielding pipes shield the beam from the surrounding environment. Measurement is conducted only in the gap between the two shielding pipes, where the beam propagates through the process gas. The impinger reduces particle presence in the measurement beam, resulting in an improved signal-to-noise ratio (SNR) and improved sensitivity and dynamic range of the measurement.
Methods for recalibration of mass spectrometry data
Tolmachev, Aleksey V [Richland, WA; Smith, Richard D [Richland, WA
2009-03-03
Disclosed are methods for recalibrating mass spectrometry data that provide improvement in both mass accuracy and precision by adjusting for experimental variance in parameters that have a substantial impact on mass measurement accuracy. Optimal coefficients are determined using correlated pairs of mass values compiled by matching sets of measured and putative mass values that minimize overall effective mass error and mass error spread. Coefficients are subsequently used to correct mass values for peaks detected in the measured dataset, providing recalibration thereof. Sub-ppm mass measurement accuracy has been demonstrated on a complex fungal proteome after recalibration, providing improved confidence for peptide identifications.
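The coefficient-fitting step described above can be sketched as a least-squares fit over matched measured/putative mass pairs. This is a minimal illustration assuming a simple linear error model with invented mass values, not the patent's actual parameterization:

```python
import numpy as np

# Matched pairs: measured peak masses vs. putative (database) masses.
# Values are invented for illustration only.
measured = np.array([500.12, 750.21, 1000.25, 1250.33])
putative = np.array([500.10, 750.15, 1000.20, 1250.25])

# Fit coefficients (a, b) minimizing the residual putative - (a*measured + b).
A = np.vstack([measured, np.ones_like(measured)]).T
(a, b), *_ = np.linalg.lstsq(A, putative, rcond=None)

# Apply the coefficients to recalibrate the measured peaks.
recalibrated = a * measured + b
ppm_before = 1e6 * np.abs(measured - putative) / putative
ppm_after = 1e6 * np.abs(recalibrated - putative) / putative
print(ppm_after.mean() < ppm_before.mean())  # True
```

In practice the method fits richer parameterizations (e.g. dependence on signal intensity or position) chosen to minimize both the overall effective mass error and its spread, but the fit-then-correct structure is the same.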
Evaluation of new techniques for the calculation of internal recirculating flows
NASA Technical Reports Server (NTRS)
Van Doormaal, J. P.; Turan, A.; Raithby, G. D.
1987-01-01
The performance of discrete methods for the prediction of fluid flows can be enhanced by improving the convergence rate of solvers and by increasing the accuracy of the discrete representation of the equations of motion. This paper evaluates the gains in solver performance that are available when various acceleration methods are applied. Various discretizations are also examined and two are recommended because of their accuracy and robustness. Insertion of the improved discretization and solver accelerator into a TEACH code, that has been widely applied to combustor flows, illustrates the substantial gains that can be achieved.
Discovering regions of the bovine genome associated with variation in the immune response
USDA-ARS?s Scientific Manuscript database
Infectious disease of livestock continues to be a cause of substantial economic loss and adverse welfare. Breeding for disease resistant livestock could improve both the economic burden and animal welfare. Using genetic linkage and association methods, we aim to identify key genes and pathways that ...
Using Cognitive Load Theory to Tailor Instruction to Levels of Accounting Students' Expertise
ERIC Educational Resources Information Center
Blayney, Paul; Kalyuga, Slava; Sweller, John
2015-01-01
Tailoring of instructional methods to learner levels of expertise may reduce extraneous cognitive load and improve learning. Contemporary technology-based learning environments have the potential to substantially enable learner-adapted instruction. This paper investigates the effects of adaptive instruction based on using the isolated-interactive…
Substantial effort has been invested in improving children’s health risk assessment in recent years. However, the body of scientific evidence in support of children’s health assessment is constantly advancing requiring continual updating of risk assessment methods. Children’s i...
Analytical concepts for health management systems of liquid rocket engines
NASA Technical Reports Server (NTRS)
Williams, Richard; Tulpule, Sharayu; Hawman, Michael
1990-01-01
Substantial improvement in health management systems performance can be realized by implementing advanced analytical methods of processing existing liquid rocket engine sensor data. In this paper, such techniques ranging from time series analysis to multisensor pattern recognition to expert systems to fault isolation models are examined and contrasted. The performance of several of these methods is evaluated using data from test firings of the Space Shuttle main engines.
ERIC Educational Resources Information Center
Nadal, Gloria Claveria; Lancis, Carlos Sanchez
1997-01-01
Notes that the application of databases to the study of the history of a language is a method that allows for substantial improvement in investigative quality. Illustrates this with the example of the application of this method to two studies of the history of Spanish developed in the Language and Information Seminar of the Independent University…
Andrew N. Gray; Thomas R. Whittier; David L. Azuma
2014-01-01
A substantial portion of the carbon (C) emitted by human activity is apparently being stored in forest ecosystems in the Northern Hemisphere, but the magnitude and cause are not precisely understood. Current official estimates of forest C flux are based on a combination of field measurements and other methods. The goal of this study was to improve on existing methods...
Accurate and reproducible functional maps in 127 human cell types via 2D genome segmentation
Hardison, Ross C.
2017-01-01
Abstract The Roadmap Epigenomics Consortium has published whole-genome functional annotation maps in 127 human cell types by integrating data from studies of multiple epigenetic marks. These maps have been widely used for studying gene regulation in cell type-specific contexts and predicting the functional impact of DNA mutations on disease. Here, we present a new map of functional elements produced by applying a method called IDEAS on the same data. The method has several unique advantages and outperforms existing methods, including that used by the Roadmap Epigenomics Consortium. Using five categories of independent experimental datasets, we compared the IDEAS and Roadmap Epigenomics maps. While the overall concordance between the two maps is high, the maps differ substantially in the prediction details and in their consistency of annotation of a given genomic position across cell types. The annotation from IDEAS is uniformly more accurate than the Roadmap Epigenomics annotation and the improvement is substantial based on several criteria. We further introduce a pipeline that improves the reproducibility of functional annotation maps. Thus, we provide a high-quality map of candidate functional regions across 127 human cell types and compare the quality of different annotation methods in order to facilitate biomedical research in epigenomics. PMID:28973456
NASA Astrophysics Data System (ADS)
Karimi, Hamed; Rosenberg, Gili; Katzgraber, Helmut G.
2017-10-01
We present and apply a general-purpose, multistart algorithm for improving the performance of low-energy samplers used for solving optimization problems. The algorithm iteratively fixes the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are smaller and less connected, and samplers tend to give better low-energy samples for these problems. The algorithm is trivially parallelizable since each start in the multistart algorithm is independent, and could be applied to any heuristic solver that can be run multiple times to give a sample. We present results for several classes of hard problems solved using simulated annealing, path-integral quantum Monte Carlo, parallel tempering with isoenergetic cluster moves, and a quantum annealer, and show that the success metrics and the scaling are improved substantially. When combined with this algorithm, the quantum annealer's scaling was substantially improved for native Chimera graph problems. In addition, with this algorithm the scaling of the time to solution of the quantum annealer is comparable to the Hamze-de Freitas-Selby algorithm on the weak-strong cluster problems introduced by Boixo et al. Parallel tempering with isoenergetic cluster moves was able to consistently solve three-dimensional spin glass problems with 8000 variables when combined with our method, whereas without our method it could not solve any.
Curry, Leslie A; Brault, Marie A; Linnander, Erika L; McNatt, Zahirah; Brewster, Amanda L; Cherlin, Emily; Flieger, Signe Peterson; Ting, Henry H; Bradley, Elizabeth H
2018-03-01
Hospital organisational culture affects patient outcomes including mortality rates for patients with acute myocardial infarction; however, little is known about whether and how culture can be positively influenced. This is a 2-year, mixed-methods interventional study in 10 US hospitals to foster improvements in five domains of organisational culture: (1) learning environment, (2) senior management support, (3) psychological safety, (4) commitment to the organisation and (5) time for improvement. Outcomes were change in culture, uptake of five strategies associated with lower risk-standardised mortality rates (RSMR) and RSMR. Measures included a validated survey at baseline and at 12 and 24 months (n=223; average response rate 88%); in-depth interviews (n=393 interviews with 197 staff); and RSMR data from the Centers for Medicare and Medicaid Services. We observed significant changes (p<0.05) in culture between baseline and 24 months in the full sample, particularly in learning environment (p<0.001) and senior management support (p<0.001). Qualitative data indicated substantial shifts in these domains as well as psychological safety. Six of the 10 hospitals achieved substantial improvements in culture, and four made less progress. The use of evidence-based strategies also increased significantly (per hospital average of 2.4 strategies at baseline to 3.9 strategies at 24 months; p<0.05). The six hospitals that demonstrated substantial shifts in culture also experienced significantly greater reductions in RSMR than the four hospitals that did not shift culture (reduced RSMR by 1.07 percentage points vs 0.23 percentage points; p=0.03) between 2011-2014 and 2012-2015. Investing in strategies to foster an organisational culture that supports high performance may help hospitals in their efforts to improve clinical outcomes. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. 
An improved Newton iteration for the generalized inverse of a matrix, with applications
NASA Technical Reports Server (NTRS)
Pan, Victor; Schreiber, Robert
1990-01-01
The purpose here is to clarify and illustrate the potential for using variants of Newton's method to solve problems of practical interest on highly parallel computers. The authors show how to accelerate the method substantially and how to modify it successfully to cope with ill-conditioned matrices. The authors conclude that Newton's method can be of value for some interesting computations, especially in parallel and other computing environments in which matrix products are especially easy to work with.
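The underlying iteration (the classical Newton-Schulz scheme for the Moore-Penrose generalized inverse) can be sketched as follows. The starting guess and fixed iteration count are standard textbook choices for illustration, not necessarily the authors' accelerated variant:

```python
import numpy as np

def newton_pinv(A, iters=50):
    """Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k), which
    converges quadratically to the Moore-Penrose pseudoinverse of A
    from a suitable starting guess; the main cost per step is matrix
    products, which is why the method suits parallel machines."""
    A = np.asarray(A, dtype=float)
    # Classical safe start: X0 = A^T / (||A||_1 * ||A||_inf)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

A = np.array([[2.0, 0.0], [1.0, 3.0], [0.0, 1.0]])  # 3x2, full column rank
print(np.allclose(newton_pinv(A), np.linalg.pinv(A)))  # True
```

Because each step squares the error, a few tens of iterations reach machine precision; acceleration schemes like those in the paper reduce the iteration count further.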
Method of chaotic mixing and improved stirred tank reactors
Muzzio, F.J.; Lamberto, D.J.
1999-07-13
The invention provides a method and apparatus for efficiently achieving a homogeneous mixture of fluid components by introducing said components having a Reynolds number of between about ≤1 and about 500 into a vessel and continuously perturbing the mixing flow by altering the flow speed and mixing time until homogeneity is reached. This method prevents the components from aggregating into non-homogeneous segregated regions within said vessel during mixing and substantially reduces the time for the admixed components to reach homogeneity. 19 figs.
Method of chaotic mixing and improved stirred tank reactors
Muzzio, Fernando J.; Lamberto, David J.
1999-01-01
The invention provides a method and apparatus for efficiently achieving a homogeneous mixture of fluid components by introducing said components having a Reynolds number of between about ≤1 and about 500 into a vessel and continuously perturbing the mixing flow by altering the flow speed and mixing time until homogeneity is reached. This method prevents the components from aggregating into non-homogeneous segregated regions within said vessel during mixing and substantially reduces the time for the admixed components to reach homogeneity.
Frailty Models for Familial Risk with Application to Breast Cancer.
Gorfine, Malka; Hsu, Li; Parmigiani, Giovanni
2013-12-01
In evaluating familial risk for disease we have two main statistical tasks: assessing the probability of carrying an inherited genetic mutation conferring higher risk; and predicting the absolute risk of developing diseases over time, for those individuals whose mutation status is known. Despite substantial progress, much remains unknown about the role of genetic and environmental risk factors, about the sources of variation in risk among families that carry high-risk mutations, and about the sources of familial aggregation beyond major Mendelian effects. These sources of heterogeneity contribute substantial variation in risk across families. In this paper we present simple and efficient methods for accounting for this variation in familial risk assessment. Our methods are based on frailty models. We implemented them in the context of generalizing Mendelian models of cancer risk, and compared our approaches to others that do not consider heterogeneity across families. Our extensive simulation study demonstrates that when predicting the risk of developing a disease over time conditional on carrier status, accounting for heterogeneity results in a substantial improvement in the area under the curve of the receiver operating characteristic. On the other hand, the improvement for carriership probability estimation is more limited. We illustrate the utility of the proposed approach through the analysis of BRCA1 and BRCA2 mutation carriers in the Washington Ashkenazi Kin-Cohort Study of Breast Cancer.
Method Of Making Closed End Ceramic Fuel Cell Tubes
Borglum, Brian P.
2002-04-30
A method of manufacturing closed end ceramic fuel cell tubes with improved properties and higher manufacturing yield is disclosed. The method involves bonding an unfired cap to a hollow unfired tube to form a compound joint. The assembly is then fired to net shape without subsequent machining. The resultant closed end tube is superior in that it provides a leak-tight seal and its porosity is substantially identical to that of the tube wall. The higher manufacturing yield associated with the present method decreases overall fuel cell cost significantly.
DFTB3: Extension of the self-consistent-charge density-functional tight-binding method (SCC-DFTB).
Gaus, Michael; Cui, Qiang; Elstner, Marcus
2012-04-10
The self-consistent-charge density-functional tight-binding method (SCC-DFTB) is an approximate quantum chemical method derived from density functional theory (DFT) based on a second-order expansion of the DFT total energy around a reference density. In the present study we combine earlier extensions and improve them consistently with, first, an improved Coulomb interaction between atomic partial charges, and second, the complete third-order expansion of the DFT total energy. These modifications lead us to the next generation of the DFTB methodology called DFTB3, which substantially improves the description of charged systems containing elements C, H, N, O, and P, especially regarding hydrogen binding energies and proton affinities. As a result, DFTB3 is particularly applicable to biomolecular systems. Remaining challenges and possible solutions are also briefly discussed.
CMB internal delensing with general optimal estimator for higher-order correlations
Namikawa, Toshiya
2017-05-24
We present here a new method for delensing B modes of the cosmic microwave background (CMB) using a lensing potential reconstructed from the same realization of the CMB polarization (CMB internal delensing). B-mode delensing is required to improve sensitivity to primary B modes generated by, e.g., inflationary gravitational waves, axionlike particles, modified gravity, primordial magnetic fields, and topological defects such as cosmic strings. However, CMB internal delensing suffers from substantial biases due to correlations between the observed CMB maps to be delensed and those used for reconstructing the lensing potential. Since the bias depends on realizations, we construct a realization-dependent (RD) estimator for correcting these biases by deriving a general optimal estimator for higher-order correlations. The RD method is less sensitive to simulation uncertainties. Compared to the previous ℓ-splitting method, we find that the RD method corrects the biases without substantial degradation of the delensing efficiency.
Forensic Child Sexual Abuse Evaluations: Assessing Subjectivity and Bias in Professional Judgements
ERIC Educational Resources Information Center
Everson, Mark D.; Sandoval, Jose Miguel
2011-01-01
Objectives: Evaluators examining the same evidence often arrive at substantially different conclusions in forensic assessments of child sexual abuse (CSA). This study attempts to identify and quantify subjective factors that contribute to such disagreements so that interventions can be devised to improve the reliability of case decisions. Methods:…
ERIC Educational Resources Information Center
Sencibaugh, Joseph M.
2005-01-01
This paper examines research studies that focus on interventions commonly used with students who are learning disabled, and identifies effective methods that produce substantial benefits for reading comprehension. This paper synthesizes previous observation studies by conducting a meta-analysis of strategies used to improve the reading…
ERIC Educational Resources Information Center
Sencibaugh, Joseph M.
2007-01-01
This paper examines research studies that focus on interventions commonly used with students who are learning disabled, and identifies effective methods that produce substantial benefits for reading comprehension. This paper synthesizes previous observation studies by conducting a meta-analysis of strategies used to improve the reading…
USDA-ARS?s Scientific Manuscript database
As new research is conducted and new methods for solving problems are developed, the USDA-ARS has a program that allocates substantial funding to ensure these improved strategies and techniques are adopted by those who can benefit from them. These programs are called Area-wide demonstrations. A partn...
NASA Astrophysics Data System (ADS)
Kozynchenko, Alexander I.; Kozynchenko, Sergey A.
2017-03-01
In the paper, the problem of improving the efficiency of the particle-particle-particle-mesh (P3M) algorithm in computing the inter-particle electrostatic forces is considered. The particle-mesh (PM) part of the algorithm is modified in such a way that the space field equation is solved by direct summation of potentials over the ensemble of particles lying not too close to a reference particle. For this purpose, a specific matrix "pattern" containing pre-calculated potential values is introduced to describe the spatial field distribution of a single point charge. This approach reduces the set of arithmetic operations performed in the innermost nested loop to an addition and an assignment and therefore decreases the running time substantially. A simulation model developed in C++ supports this approach, showing accuracy acceptable in particle-beam calculations together with improved speed performance.
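The pattern-lookup idea can be sketched roughly as follows. The 2D geometry, mesh size, and units are invented for illustration, and a real P3M code would add the short-range particle-particle correction omitted here:

```python
import numpy as np

# Pre-tabulate the potential of a unit point charge at all integer
# mesh offsets once (the "pattern" of pre-calculated values).
N = 32                         # mesh size (illustrative)
k = 1.0                        # Coulomb constant in arbitrary units
dx = np.arange(-N, N + 1)
DX, DY = np.meshgrid(dx, dx, indexing="ij")
r = np.hypot(DX, DY)
with np.errstate(divide="ignore"):
    pattern = np.where(r > 0, k / r, 0.0)

def mesh_potential(charges, mesh=N):
    """Accumulate the mesh potential by adding a shifted slice of the
    pre-calculated pattern per particle: the inner loop reduces to
    addition and assignment, with no per-node distance arithmetic."""
    phi = np.zeros((mesh, mesh))
    for ix, iy, q in charges:  # particles snapped to mesh nodes
        phi += q * pattern[N - ix:2 * N - ix, N - iy:2 * N - iy]
    return phi

phi = mesh_potential([(8, 8, 1.0), (20, 20, -1.0)])
```

The pattern is computed once per mesh geometry; each particle then costs only an array shift-and-add, which is where the speedup over recomputing `k / r` per node pair comes from.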
System and method for improving performance of a fluid sensor for an internal combustion engine
Kubinski, David [Canton, MI; Zawacki, Garry [Livonia, MI
2009-03-03
A system and method for improving sensor performance of an on-board vehicle sensor, such as an exhaust gas sensor, while sensing a predetermined substance in a fluid flowing through a pipe include a structure for extending into the pipe and having at least one inlet for receiving fluid flowing through the pipe and at least one outlet generally opposite the at least one inlet, wherein the structure redirects substantially all fluid flowing from the at least one inlet to the sensor to provide a representative sample of the fluid to the sensor before returning the fluid through the at least one outlet.
PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences.
Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong; Warnow, Tandy
2015-05-01
We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate--slightly better than SATé trees, but with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory.
Bounds on the minimum number of recombination events in a sample history.
Myers, Simon R; Griffiths, Robert C
2003-01-01
Recombination is an important evolutionary factor in many organisms, including humans, and understanding its effects is an important task facing geneticists. Detecting past recombination events is thus important; this article introduces statistics that give a lower bound on the number of recombination events in the history of a sample, on the basis of the patterns of variation in the sample DNA. Such lower bounds are appropriate, since many recombination events in the history are typically undetectable, so the true number of historical recombinations is unobtainable. The statistics can be calculated quickly by computer and improve upon the earlier bound of Hudson and Kaplan (1985). A method is developed to combine bounds on local regions in the data to produce more powerful improved bounds. The method is flexible with respect to different models of recombination occurrence. The approach gives recombination event bounds between all pairs of sites, to help identify regions with more detectable recombinations, and these bounds can be viewed graphically. Under coalescent simulations, there is a substantial improvement over the earlier method (of up to a factor of 2) in the expected number of recombination events detected by one of the new minima, across a wide range of parameter values. The method is applied to data from a region within the lipoprotein lipase gene and the amount of detected recombination is substantially increased. Further, there is strong clustering of detected recombination events in an area near the center of the region. A program implementing these statistics, which was used for this article, is available from http://www.stats.ox.ac.uk/mathgen/programs.html. PMID:12586723
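The earlier Hudson-Kaplan bound R_m that these statistics improve upon can be sketched directly from the four-gamete test; the haplotype data below are invented for illustration:

```python
from itertools import combinations

def four_gamete_incompatible(col_a, col_b):
    """Under the infinite-sites model (no recurrent mutation), two
    biallelic sites are incompatible with a recombination-free history
    iff all four gametes 00, 01, 10, 11 occur in the sample."""
    return len(set(zip(col_a, col_b))) == 4

def hudson_kaplan_rm(haplotypes):
    """Hudson-Kaplan lower bound R_m: the maximum number of disjoint
    open intervals (i, j) whose endpoint sites fail the four-gamete
    test; each such interval must contain at least one recombination."""
    n_sites = len(haplotypes[0])
    cols = [[h[s] for h in haplotypes] for s in range(n_sites)]
    intervals = [(i, j) for i, j in combinations(range(n_sites), 2)
                 if four_gamete_incompatible(cols[i], cols[j])]
    # Greedy interval scheduling: sort by right endpoint, keep an
    # interval whenever it is disjoint from the last one chosen.
    intervals.sort(key=lambda ij: ij[1])
    rm, last_end = 0, -1
    for i, j in intervals:
        if i >= last_end:
            rm += 1
            last_end = j
    return rm

haps = ["0000", "0101", "1010", "1111", "0110"]
print(hudson_kaplan_rm(haps))  # 3
```

The article's statistics sharpen this by combining bounds over overlapping local regions; the sketch above shows only the classical baseline they are compared against.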
Resonance-induced sensitivity enhancement method for conductivity sensors
NASA Technical Reports Server (NTRS)
Tai, Yu-Chong (Inventor); Shih, Chi-yuan (Inventor); Li, Wei (Inventor); Zheng, Siyang (Inventor)
2009-01-01
Methods and systems for improving the sensitivity of a variety of conductivity sensing devices, in particular capacitively-coupled contactless conductivity detectors. A parallel inductor is added to the conductivity sensor, and the sensor with the parallel inductor is operated at a resonant frequency of the equivalent circuit model. At the resonant frequency, parasitic capacitances that are either in series or in parallel with the conductance (and possibly a series resistance) are substantially removed from the equivalent circuit, leaving a purely resistive impedance. An appreciably higher sensor sensitivity results. Experimental verification shows that sensitivity improvements on the order of 10,000-fold are possible. Examples of detecting particulates with high precision by application of the apparatus and methods of operation are described.
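The resonance idea can be sketched with a simplified equivalent circuit, a resistance R in parallel with a parasitic capacitance Cp and the added inductor L. The element values are hypothetical, and the real detector model also includes series coupling capacitances omitted here:

```python
import numpy as np

# Hypothetical element values for illustration (not from the patent).
R = 1e6        # solution resistance, ohms
Cp = 10e-12    # parasitic parallel capacitance, farads
L = 1e-3       # added parallel inductor, henries

f0 = 1 / (2 * np.pi * np.sqrt(L * Cp))  # resonance of L with Cp

def impedance(f):
    """Admittance of R || Cp || L: at f0 the inductive and capacitive
    susceptances cancel, leaving Y = 1/R, a purely resistive impedance."""
    w = 2 * np.pi * f
    Y = 1 / R + 1j * w * Cp + 1 / (1j * w * L)
    return 1 / Y

Z = impedance(f0)
print(abs(Z.imag) < 1e-6 * abs(Z.real))  # True: reactance cancelled
print(np.isclose(abs(Z), R))             # True: |Z| recovers R
```

Because the measured impedance at resonance reduces to the conductance alone, small conductivity changes are no longer masked by the parasitic capacitive path, which is the source of the sensitivity gain.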
Improving traditional balancing methods for high-speed rotors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ling, J.; Cao, Y.
1996-01-01
This paper introduces frequency response functions, analyzes the relationships between the frequency response functions and influence coefficients theoretically, and derives corresponding mathematical equations for high-speed rotor balancing. The relationships between the imbalance masses on the rotor and the frequency response functions are also analyzed based upon the modal balancing method, and the equations relating the static and dynamic imbalance masses to the frequency response function are obtained. Experiments on a high-speed rotor balancing rig were performed to verify the theory, and the experimental data agree satisfactorily with the analytical solutions. The improvement on the traditional balancing method proposed in this paper will substantially reduce the number of rotor startups required during the balancing process of rotating machinery.
Improving IQ measurement in intellectual disabilities using true deviation from population norms
2014-01-01
Background Intellectual disability (ID) is characterized by global cognitive deficits, yet the very IQ tests used to assess ID have limited range and precision in this population, especially for more impaired individuals. Methods We describe the development and validation of a method of raw z-score transformation (based on general population norms) that ameliorates floor effects and improves the precision of IQ measurement in ID, using the Stanford Binet 5 (SB5) in fragile X syndrome (FXS; n = 106), the leading inherited cause of ID, and in individuals with idiopathic autism spectrum disorder (ASD; n = 205). We compared the distributional characteristics and Q-Q plots from the standardized scores with the deviation z-scores. Additionally, we examined the relationship between both scoring methods and multiple criterion measures. Results We found evidence that substantial and meaningful variation in cognitive ability on standardized IQ tests among individuals with ID is lost when converting raw scores to standardized scaled, index and IQ scores. Use of the deviation z-score method rectifies this problem, and accounts for significant additional variance in criterion validation measures, above and beyond the usual IQ scores. Additionally, individual and group-level cognitive strengths and weaknesses are recovered using deviation scores. Conclusion Traditional methods for generating IQ scores in lower functioning individuals with ID are inaccurate and inadequate, leading to erroneously flat profiles. However, assessment of cognitive abilities is substantially improved by measuring true deviation in performance from standardization sample norms. This work has important implications for standardized test development, clinical assessment, and research for which IQ is an important measure of interest in individuals with neurodevelopmental disorders and other forms of cognitive impairment. PMID:26491488
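The core transformation can be sketched in a few lines. The norm values below are hypothetical, not actual SB5 norms:

```python
# Deviation z-score sketch: instead of converting a raw score to a
# floored standardized score, express it as its deviation from the
# general-population raw-score mean in SD units, preserving variation
# below the standardized test floor.
def deviation_z(raw, norm_mean, norm_sd):
    return (raw - norm_mean) / norm_sd

# Hypothetical subtest norms for one age group (illustration only).
norm_mean, norm_sd = 30.0, 6.0

# Two individuals who would both sit at the standardized floor
# remain distinguishable on the deviation scale.
print(deviation_z(12, norm_mean, norm_sd))  # -3.0
print(deviation_z(6, norm_mean, norm_sd))   # -4.0
```

A standardized scaled score would assign both individuals the same floor value, which is exactly the loss of variance the deviation scores recover.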
Trust regions in Kriging-based optimization with expected improvement
NASA Astrophysics Data System (ADS)
Regis, Rommel G.
2016-06-01
The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
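The Expected Improvement criterion that TRIKE maximizes within each trust region is the standard closed form under the Kriging (Gaussian) predictive distribution; a minimal sketch:

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Standard EI for minimization, given a Gaussian predictive
    distribution N(mu, sigma^2) at a candidate point and the best
    observed value f_min:
        EI = (f_min - mu) * Phi(z) + sigma * phi(z),  z = (f_min - mu)/sigma
    where Phi and phi are the standard normal CDF and PDF."""
    if sigma <= 0:
        return max(f_min - mu, 0.0)   # no predictive uncertainty
    z = (f_min - mu) / sigma
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    return (f_min - mu) * Phi + sigma * phi

# At a point predicted equal to the incumbent with unit uncertainty,
# EI reduces to phi(0) = 1/sqrt(2*pi).
print(round(expected_improvement(0.0, 1.0, 0.0), 4))  # 0.3989
```

TRIKE's contribution is not the EI formula itself but where it is maximized: the search is confined to a trust region that expands or shrinks depending on the ratio of actual improvement to EI, in the spirit of classical trust-region methods.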
Using Genotype Abundance to Improve Phylogenetic Inference
Mesin, Luka; Victora, Gabriel D; Minin, Vladimir N; Matsen, Frederick A
2018-01-01
Abstract Modern biological techniques enable very dense genetic sampling of unfolding evolutionary histories, and thus frequently sample some genotypes multiple times. This motivates strategies to incorporate genotype abundance information in phylogenetic inference. In this article, we synthesize a stochastic process model with standard sequence-based phylogenetic optimality, and show that tree estimation is substantially improved by doing so. Our method is validated with extensive simulations and an experimental single-cell lineage tracing study of germinal center B cell receptor affinity maturation. PMID:29474671
Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.
2017-01-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could substantially bias distribution estimates if not corrected for.
Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
Efficient genotype compression and analysis of large genetic variation datasets
Layer, Ryan M.; Kindlon, Neil; Karczewski, Konrad J.; Quinlan, Aaron R.
2015-01-01
Genotype Query Tools (GQT) is a new indexing strategy that expedites analyses of genome variation datasets in VCF format based on sample genotypes, phenotypes and relationships. GQT’s compressed genotype index minimizes decompression for analysis, and performance relative to existing methods improves with cohort size. We show substantial (up to 443-fold) performance gains over existing methods and demonstrate GQT’s utility for exploring massive datasets involving thousands to millions of genomes. PMID:26550772
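GQT's real index rotates the genotype matrix and compresses it with word-aligned bitmaps; the toy sketch below only illustrates the core idea that per-variant bitmasks of non-reference samples turn genotype queries into bitwise operations:

```python
def build_index(genotypes_per_variant):
    """For each variant, pack a bitmask of which samples carry a non-reference
    allele (0 = hom-ref, 1 = het, 2 = hom-alt); genotype-set queries then
    reduce to bitwise operations on integers."""
    index = []
    for genos in genotypes_per_variant:
        mask = 0
        for i, g in enumerate(genos):
            if g > 0:
                mask |= 1 << i
        index.append(mask)
    return index

def count_variants_in_all(index, sample_ids):
    """Count variants at which every sample in sample_ids is non-reference."""
    query = 0
    for s in sample_ids:
        query |= 1 << s
    return sum(1 for mask in index if mask & query == query)
```

The real GQT index adds run-length compression so most bitwise work happens on compressed words, which is why its advantage grows with cohort size.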
Fritz, Gregory M.; Weihs, Timothy P.; Grzyb, Justin A.
2016-07-05
An energetic composite having a plurality of reactive particles each having a reactive multilayer construction formed by successively depositing reactive layers on a rod-shaped substrate having a longitudinal axis, dividing the reactive-layer-deposited rod-shaped substrate into a plurality of substantially uniform longitudinal segments, and removing the rod-shaped substrate from the longitudinal segments, so that the reactive particles have a controlled, substantially uniform, cylindrically curved or otherwise rod-contoured geometry which facilitates handling and improves its packing fraction, while the reactant multilayer construction controls the stability, reactivity and energy density of the energetic composite.
Fritz, Gregory M; Knepper, Robert Allen; Weihs, Timothy P; Gash, Alexander E; Sze, John S
2013-04-30
An energetic composite having a plurality of reactive particles each having a reactive multilayer construction formed by successively depositing reactive layers on a rod-shaped substrate having a longitudinal axis, dividing the reactive-layer-deposited rod-shaped substrate into a plurality of substantially uniform longitudinal segments, and removing the rod-shaped substrate from the longitudinal segments, so that the reactive particles have a controlled, substantially uniform, cylindrically curved or otherwise rod-contoured geometry which facilitates handling and improves its packing fraction, while the reactant multilayer construction controls the stability, reactivity and energy density of the energetic composite.
Histogram equalization with Bayesian estimation for noise robust speech recognition.
Suh, Youngjoo; Kim, Hoirin
2018-02-01
The histogram equalization approach is an efficient feature normalization technique for noise robust automatic speech recognition. However, it suffers from performance degradation when some fundamental conditions are not satisfied in the test environment. To remedy these limitations of the original histogram equalization methods, a class-based histogram equalization approach has been proposed. Although this approach showed substantial performance improvement under noise environments, it still suffers from performance degradation due to the overfitting problem when test data are insufficient. To address this issue, the proposed histogram equalization technique employs the Bayesian estimation method in the test cumulative distribution function estimation. It was reported in a previous study conducted on the Aurora-4 task that the proposed approach provided substantial performance gains in speech recognition systems based on the acoustic modeling of the Gaussian mixture model-hidden Markov model. In this work, the proposed approach was examined in speech recognition systems with deep neural network-hidden Markov model (DNN-HMM), the current mainstream speech recognition approach, where it also showed meaningful performance improvement over the conventional maximum likelihood estimation-based method. The fusion of the proposed features with the mel-frequency cepstral coefficients provided additional performance gains in DNN-HMM systems, which otherwise suffer from performance degradation in the clean test condition.
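The underlying (non-Bayesian) histogram-equalization mapping sends each test value x to F_ref^{-1}(F_test(x)); a minimal order-statistics sketch is below, with the caveat that the proposed method replaces the raw empirical test CDF with a Bayesian estimate to curb overfitting on short test utterances:

```python
import bisect

def equalize(test, ref):
    """Histogram equalization of a 1-D feature stream: replace each test
    value with the reference quantile at its empirical test-CDF position,
    so the normalized test data matches the reference distribution."""
    ref_sorted = sorted(ref)
    test_sorted = sorted(test)
    n, m = len(test), len(ref_sorted)
    out = []
    for x in test:
        # empirical CDF of x under the test data (mid-rank convention)
        p = (bisect.bisect_right(test_sorted, x) - 0.5) / n
        # inverse CDF (quantile) of the reference distribution
        k = min(max(int(p * m), 0), m - 1)
        out.append(ref_sorted[k])
    return out
```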
Social preferences toward energy generation with woody biomass from public forests in Montana, USA
Robert M. Campbell; Tyron J. Venn; Nathaniel M. Anderson
2016-01-01
In Montana, USA, there are substantial opportunities for mechanized thinning treatments on public forests to reduce the likelihood of severe and damaging wildfires and improve forest health. These treatments produce residues that can be used to generate renewable energy and displace fossil fuels. The choice modeling method is employed to examine the marginal...
USDA-ARS?s Scientific Manuscript database
Unintended effects to food quality and composition occur no matter the method of plant improvement. The existence of unintended effects is perhaps not the most important point within the discussion, but rather the identity and significance of the compositional changes observed. Here we report the ex...
Moore, Albert S.; Verhoff, Francis H.
1980-01-01
The present invention is directed to an improved wet air oxidation system and method for reducing the chemical oxygen demand (COD) of waste water from scrubbers of coal gasification plants, with this COD reduction being sufficient to effectively eliminate waste water as an environmental pollutant. The improvement of the present invention is provided by heating the air used in the oxidation process to a temperature substantially equal to the temperature in the oxidation reactor before compressing or pressurizing the air. The compression of the already hot air further heats the air which is then passed in heat exchange with gaseous products of the oxidation reaction for "superheating" the gaseous products prior to the use thereof in turbines as the driving fluid. The superheating of the gaseous products significantly minimizes condensation of gaseous products in the turbine so as to provide a substantially greater recovery of mechanical energy from the process than heretofore achieved.
Guadalupe, Zenaida; Soldevilla, Alberto; Sáenz-Navajas, María-Pilar; Ayestarán, Belén
2006-04-21
A multiple-step analytical method was developed to improve the analysis of polymeric phenolics in red wines. With a common initial step based on the fractionation of wine phenolics by gel permeation chromatography (GPC), different analytical techniques were used: high-performance liquid chromatography-diode array detection (HPLC-DAD), HPLC-mass spectrometry (MS), capillary zone electrophoresis (CZE) and spectrophotometry. This method proved to be valid for analyzing different families of phenolic compounds, such as monomeric phenolics and their derivatives, polymeric pigments and proanthocyanidins. The analytical characteristics of fractionation by GPC were studied and the method was fully validated, yielding satisfactory statistical results. GPC fractionation substantially improved the analysis of polymeric pigments by CZE, in terms of response, repeatability and reproducibility. It also represented an improvement in the traditional vanillin assay used for proanthocyanidin (PA) quantification. Astringent proanthocyanidins were also analyzed using a simple combined method that allowed these compounds, for which only general indexes were available, to be quantified.
High performance p-type thermoelectric materials and methods of preparation
NASA Technical Reports Server (NTRS)
Caillat, Thierry (Inventor); Borshchevsky, Alexander (Inventor); Fleurial, Jean-Pierre (Inventor)
2005-01-01
The present invention is embodied in high performance p-type thermoelectric materials having enhanced thermoelectric properties and the methods of preparing such materials. In one aspect of the invention, p-type semiconductors of formula Zn4-xAxSb3-yBy wherein 0≤x≤4, A is a transition metal, B is a pnicogen, and 0≤y≤3 are formed for use in manufacturing thermoelectric devices with substantially enhanced operating characteristics and improved efficiency. Two methods of preparing p-type Zn4Sb3 and related alloys of the present invention include a crystal growth method and a powder metallurgy method.
Parrinello, Christina M.; Grams, Morgan E.; Couper, David; Ballantyne, Christie M.; Hoogeveen, Ron C.; Eckfeldt, John H.; Selvin, Elizabeth; Coresh, Josef
2016-01-01
Background Equivalence of laboratory tests over time is important for longitudinal studies. Even a small systematic difference (bias) can result in substantial misclassification. Methods We selected 200 Atherosclerosis Risk in Communities Study participants attending all 5 study visits over 25 years. Eight analytes were re-measured in 2011–13 from stored blood samples from multiple visits: creatinine, uric acid, glucose, total cholesterol, HDL-cholesterol, LDL-cholesterol, triglycerides, and high-sensitivity C-reactive protein. Original values were recalibrated to re-measured values using Deming regression. Differences >10% were considered to reflect substantial bias, and correction equations were applied to affected analytes in the total study population. We examined trends in chronic kidney disease (CKD) pre- and post-recalibration. Results Repeat measures were highly correlated with original values (Pearson’s r>0.85 after removing outliers [median 4.5% of paired measurements]), but 2 of 8 analytes (creatinine and uric acid) had differences >10%. Original values of creatinine and uric acid were recalibrated to current values using correction equations. CKD prevalence differed substantially after recalibration of creatinine (visits 1, 2, 4 and 5 pre-recalibration: 21.7%, 36.1%, 3.5%, 29.4%; post-recalibration: 1.3%, 2.2%, 6.4%, 29.4%). For HDL-cholesterol, the current direct enzymatic method differed substantially from magnesium dextran precipitation used during visits 1–4. Conclusions Analytes re-measured in samples stored for ~25 years were highly correlated with original values, but two of the 8 analytes showed substantial bias at multiple visits. Laboratory recalibration improved reproducibility of test results across visits and resulted in substantial differences in CKD prevalence. We demonstrate the importance of consistent recalibration of laboratory assays in a cohort study. PMID:25952043
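Deming regression, the recalibration tool used here, fits a line while allowing measurement error in both variables; a minimal closed-form sketch (delta is the assumed ratio of the two error variances, with delta = 1 giving orthogonal regression):

```python
import math

def deming(x, y, delta=1.0):
    """Deming regression of y on x with error-variance ratio delta.
    Returns (intercept, slope) of the recalibration line."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - ybar) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / (n - 1)
    # closed-form maximum-likelihood slope for errors in both variables
    slope = (syy - delta * sxx + math.sqrt(
        (syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return ybar - slope * xbar, slope
```

Applying the fitted intercept and slope to original values yields the recalibrated values; a slope far from 1 or intercept far from 0 flags the >10% bias the study corrects for.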
NASA Astrophysics Data System (ADS)
Liu, Bilan; Qiu, Xing; Zhu, Tong; Tian, Wei; Hu, Rui; Ekholm, Sven; Schifitto, Giovanni; Zhong, Jianhui
2016-03-01
Subject-specific longitudinal DTI study is vital for investigation of pathological changes of lesions and disease evolution. Spatial Regression Analysis of Diffusion tensor imaging (SPREAD) is a non-parametric permutation-based statistical framework that combines spatial regression and resampling techniques to achieve effective detection of localized longitudinal diffusion changes within the whole brain at individual level without a priori hypotheses. However, boundary blurring and dislocation limit its sensitivity, especially towards detecting lesions of irregular shapes. In the present study, we propose an improved SPREAD method (iSPREAD) by incorporating a three-dimensional (3D) nonlinear anisotropic diffusion filtering method, which provides edge-preserving image smoothing through a nonlinear scale space approach. The statistical inference based on iSPREAD was evaluated and compared with the original SPREAD method using both simulated and in vivo human brain data. Results demonstrated that the sensitivity and accuracy of the SPREAD method has been improved substantially by adapting nonlinear anisotropic filtering. iSPREAD identifies subject-specific longitudinal changes in the brain with improved sensitivity, accuracy, and enhanced statistical power, especially when the spatial correlation is heterogeneous among neighboring image pixels in DTI.
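The nonlinear anisotropic diffusion idea can be illustrated in one dimension with a Perona–Malik-style filter (iSPREAD applies the 3D analogue): the conduction coefficient collapses where the local gradient is large, so edges survive while small fluctuations are smoothed. The parameters kappa and lam below are illustrative, not the study's settings:

```python
import math

def anisotropic_diffusion_1d(signal, n_iter=20, kappa=0.1, lam=0.2):
    """Edge-preserving smoothing: diffusion between neighbors is damped by
    exp(-(gradient/kappa)^2), so large jumps (edges) are left intact while
    sub-kappa fluctuations diffuse away. Endpoints are held fixed."""
    u = list(signal)
    for _ in range(n_iter):
        new = u[:]
        for i in range(1, len(u) - 1):
            de = u[i + 1] - u[i]                 # forward difference
            dw = u[i - 1] - u[i]                 # backward difference
            ce = math.exp(-(de / kappa) ** 2)    # conduction coefficient (east)
            cw = math.exp(-(dw / kappa) ** 2)    # conduction coefficient (west)
            new[i] = u[i] + lam * (ce * de + cw * dw)
        u = new
    return u
```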
Relative Effectiveness of Worker Safety and Health Training Methods
Burke, Michael J.; Sarpy, Sue Ann; Smith-Crowe, Kristin; Chan-Serafin, Suzanne; Salvador, Rommel O.; Islam, Gazi
2006-01-01
Objectives. We sought to determine the relative effectiveness of different methods of worker safety and health training aimed at improving safety knowledge and performance and reducing negative outcomes (accidents, illnesses, and injuries). Methods. Ninety-five quasi-experimental studies (n=20991) were included in the analysis. Three types of intervention methods were distinguished on the basis of learners’ participation in the training process: least engaging (lecture, pamphlets, videos), moderately engaging (programmed instruction, feedback interventions), and most engaging (training in behavioral modeling, hands-on training). Results. As training methods became more engaging (i.e., requiring trainees’ active participation), workers demonstrated greater knowledge acquisition, and reductions were seen in accidents, illnesses, and injuries. All methods of training produced meaningful behavioral performance improvements. Conclusions. Training involving behavioral modeling, a substantial amount of practice, and dialogue is generally more effective than other methods of safety and health training. The present findings challenge the current emphasis on more passive computer-based and distance training methods within the public health workforce. PMID:16380566
PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences
Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong
2015-01-01
Abstract We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate—slightly better than SATé trees, but with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory. PMID:25549288
Silicon ribbon growth by a capillary action shaping technique
NASA Technical Reports Server (NTRS)
Schwuttke, G. H.; Ciszek, T. F.; Kran, A.; Yang, K.
1977-01-01
The crystal-growth method under investigation is a capillary action shaping technique. Meniscus shaping for the desired ribbon geometry occurs at the vertex of a wettable die. As ribbon growth depletes the melt meniscus, capillary action supplies replacement material. The configuration of the technique used in our initial studies is shown. The crystal-growth method has been applied to silicon ribbons; it was found that substantial improvements in ribbon surface quality could be achieved with a higher melt meniscus than that attainable with the EFG technique.
NASA Technical Reports Server (NTRS)
Gong, Jian; Volakis, John L.; Nurnberger, Michael W.
1995-01-01
This semi-annual report describes progress up to mid-January 1995. The report contains five sections all dealing with the modeling of spiral and patch antennas recessed in metallic platforms. Of significance is the development of decomposition schemes which separate the different regions of the antenna volume. Substantial effort was devoted to improving the feed model in the context of the finite element method (FEM). Finally, an innovative scheme for truncating finite element meshes is presented.
Corrosion prevention of magnesium surfaces via surface conversion treatments using ionic liquids
Qu, Jun; Luo, Huimin
2016-09-06
A method for conversion coating a magnesium-containing surface, the method comprising contacting the magnesium-containing surface with an ionic liquid compound under conditions that result in decomposition of the ionic liquid compound to produce a conversion coated magnesium-containing surface having a substantially improved corrosion resistance relative to the magnesium-containing surface before said conversion coating. Also described are the resulting conversion-coated magnesium-containing surface, as well as mechanical components and devices containing the conversion-coated magnesium-containing surface.
Kongskov, Rasmus Dalgas; Jørgensen, Jakob Sauer; Poulsen, Henning Friis; Hansen, Per Christian
2016-04-01
Classical reconstruction methods for phase-contrast tomography consist of two stages: phase retrieval and tomographic reconstruction. A novel algebraic method combining the two was suggested by Kostenko et al. [Opt. Express 21, 12185 (2013), doi:10.1364/OE.21.012185], and preliminary results demonstrated improved reconstruction compared with a given two-stage method. Using simulated free-space propagation experiments with a single sample-detector distance, we thoroughly compare the novel method with the two-stage method to address limitations of the preliminary results. We demonstrate that the novel method is substantially more robust toward noise; our simulations point to a possible reduction in counting times by an order of magnitude.
Damping in high-temperature superconducting levitation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hull, John R.
2009-12-15
Methods and apparatuses for improved damping in high-temperature superconducting levitation systems are disclosed. A superconducting element (e.g., a stator) generating a magnetic field and a magnet (e.g. a rotor) supported by the magnetic field are provided such that the superconducting element is supported relative to a ground state with damped motion substantially perpendicular to the support of the magnetic field on the magnet. Applying this, a cryostat housing the superconducting bearing may be coupled to the ground state with high damping but low radial stiffness, such that its resonant frequency is less than that of the superconducting bearing. The damping of the cryostat may be substantially transferred to the levitated magnetic rotor, thus, providing damping without affecting the rotational loss, as can be derived applying coupled harmonic oscillator theory in rotor dynamics. Thus, damping can be provided to a levitated object, without substantially affecting the rotational loss.
Monitoring socioeconomic inequity in maternal health indicators in Egypt: 1995-2005
2009-01-01
Background Egypt's longstanding commitment to safe motherhood and maternal health has paid off in substantial declines in maternal mortality ratio and significant improvement in the levels of many maternal health indicators. The current study aims to monitor trends of maternal health indicators and their socioeconomic inequities among Egyptian women over a ten-year period (1995-2005). It poses the question "to what extent have the recent maternal health improvements been shared among the various socioeconomic categories of women?" Methods The current paper uses data on maternal health available in three consecutive Demographic and Health Surveys (1995-2000-2005). The concentration index is used to assess the levels of health inequity over the ten-year period. Results Although previous efforts in maternal health have contributed to substantial improvements in the general levels of maternal health indicators, these improvements were not enjoyed equally by women in various social groups. Indicators that have long been the focus of health policy such as fertility and contraceptive use showed some declines in disparities but they are far behind from achieving equity. Other indicators, which relate to unmet need, prenatal care, delivery and postnatal care, are still loaded with high levels of inequity and call for more comprehensive policy interventions. PMID:19895706
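The concentration index referred to above is commonly computed as twice the covariance between the health indicator and the fractional socioeconomic rank, divided by the indicator's mean; a minimal unweighted sketch (survey weights omitted):

```python
def concentration_index(health, ses):
    """Concentration index C = 2*cov(h, r)/mean(h), where r is the fractional
    rank of each person ordered from poorest to richest (lower ses = poorer).
    C < 0: indicator concentrated among the poor; C > 0: among the rich."""
    # order individuals by socioeconomic status, poorest first
    h = [hi for _, hi in sorted(zip(ses, health))]
    n = len(h)
    mu = sum(h) / n
    ranks = [(i + 0.5) / n for i in range(n)]            # fractional ranks
    cov = sum((hi - mu) * (ri - 0.5) for hi, ri in zip(h, ranks)) / n
    return 2.0 * cov / mu
```

An indicator held only by the richest individuals yields a strongly positive index; a perfectly equal indicator yields zero.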
Ceramic honeycomb structures and the method thereof
NASA Technical Reports Server (NTRS)
Riccitiello, Salvatore R. (Inventor); Cagliostro, Domenick E. (Inventor)
1987-01-01
The subject invention pertains to a method of producing an improved composite-composite honeycomb structure for aircraft or aerospace use. Specifically, the subject invention relates to a method for the production of a lightweight ceramic-ceramic composite honeycomb structure, which method comprises: (1) pyrolyzing a loosely woven fabric/binder having a honeycomb shape and having a high char yield and geometric integrity after pyrolysis at between about 700 and 1,100 C; (2) substantially evenly depositing at least one layer of ceramic material on the pyrolyzed fabric/binder of step (1); (3) recovering the coated ceramic honeycomb structure; (4) removing the pyrolyzed fabric/binder of the structure of step (3) by slow pyrolysis at between 700 and 1000 C in between about a 2 to 5% by volume oxygen atmosphere for between about 0.5 and 5 hr.; and (5) substantially evenly depositing on and within the rigid hollow honeycomb structure at least one additional layer of the same or a different ceramic material by chemical vapor deposition and chemical vapor infiltration. The honeycomb shaped ceramic articles have enhanced physical properties and are useful in aircraft and aerospace uses.
Review of recent advances in analytical techniques for the determination of neurotransmitters
Perry, Maura; Li, Qiang; Kennedy, Robert T.
2009-01-01
Methods and advances for monitoring neurotransmitters in vivo or for tissue analysis of neurotransmitters over the last five years are reviewed. The review is organized primarily by neurotransmitter type. Transmitter and related compounds may be monitored by either in vivo sampling coupled to analytical methods or implanted sensors. Sampling is primarily performed using microdialysis, but low-flow push-pull perfusion may offer advantages of spatial resolution while minimizing the tissue disruption associated with higher flow rates. Analytical techniques coupled to these sampling methods include liquid chromatography, capillary electrophoresis, enzyme assays, sensors, and mass spectrometry. Methods for the detection of amino acid, monoamine, neuropeptide, acetylcholine, nucleoside, and soluble gas neurotransmitters have been developed and improved upon. Advances in the speed and sensitivity of these methods have enabled improvements in temporal resolution and increased the number of compounds detectable. Similar advances have enabled improved detection in tissue samples, with a substantial emphasis on single cell and other small samples. Sensors provide excellent temporal and spatial resolution for in vivo monitoring. Advances in application to catecholamines, indoleamines, and amino acids have been prominent. Improvements in stability, sensitivity, and selectivity of the sensors have been of paramount interest. PMID:19800472
Ratcliffe, B; El-Dien, O G; Klápště, J; Porth, I; Chen, C; Jaquish, B; El-Kassaby, Y A
2015-01-01
Genomic selection (GS) potentially offers an unparalleled advantage over traditional pedigree-based selection (TS) methods by reducing the time commitment required to carry out a single cycle of tree improvement. This quality is particularly appealing to tree breeders, where lengthy improvement cycles are the norm. We explored the prospect of implementing GS for interior spruce (Picea engelmannii × glauca) utilizing a genotyped population of 769 trees belonging to 25 open-pollinated families. A series of repeated tree height measurements through ages 3–40 years permitted the testing of GS methods temporally. The genotyping-by-sequencing (GBS) platform was used for single nucleotide polymorphism (SNP) discovery in conjunction with three unordered imputation methods applied to a data set with 60% missing information. Further, three diverse GS models were evaluated based on predictive accuracy (PA), and their marker effects. Moderate levels of PA (0.31–0.55) were observed and were of sufficient capacity to deliver improved selection response over TS. Additionally, PA varied substantially through time in accordance with spatial competition among trees. As expected, temporal PA was well correlated with age-age genetic correlation (r=0.99), and decreased substantially with increasing difference in age between the training and validation populations (0.04–0.47). Moreover, our imputation comparisons indicate that k-nearest neighbor and singular value decomposition yielded a greater number of SNPs and gave higher predictive accuracies than imputing with the mean. Furthermore, the ridge regression (rrBLUP) and BayesCπ (BCπ) models yielded equal PA, and both outperformed the generalized ridge regression heteroscedastic effect model for the traits evaluated. PMID:26126540
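The rrBLUP model evaluated here shrinks all marker effects equally via ridge-penalized normal equations; a minimal dense sketch with a fixed ridge parameter lam supplied directly (a full mixed-model fit would instead estimate it from variance components):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rrblup(X, y, lam):
    """Ridge-regression BLUP marker effects: solve (X'X + lam*I) beta = X'y,
    shrinking every marker effect toward zero by the same amount. Predicted
    breeding values are then X @ beta."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) + (lam if i == j else 0.0)
            for j in range(p)] for i in range(p)]
    Xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    return solve(XtX, Xty)
```

Predictive accuracy as reported in the abstract would then be the correlation between these predicted breeding values and observed heights in a held-out validation set.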
Ratcliffe, B; El-Dien, O G; Klápště, J; Porth, I; Chen, C; Jaquish, B; El-Kassaby, Y A
2015-12-01
Genomic selection (GS) potentially offers an unparalleled advantage over traditional pedigree-based selection (TS) methods by reducing the time commitment required to carry out a single cycle of tree improvement. This quality is particularly appealing to tree breeders, where lengthy improvement cycles are the norm. We explored the prospect of implementing GS for interior spruce (Picea engelmannii × glauca) utilizing a genotyped population of 769 trees belonging to 25 open-pollinated families. A series of repeated tree height measurements through ages 3-40 years permitted the testing of GS methods temporally. The genotyping-by-sequencing (GBS) platform was used for single nucleotide polymorphism (SNP) discovery in conjunction with three unordered imputation methods applied to a data set with 60% missing information. Further, three diverse GS models were evaluated based on predictive accuracy (PA), and their marker effects. Moderate levels of PA (0.31-0.55) were observed and were of sufficient capacity to deliver improved selection response over TS. Additionally, PA varied substantially through time in accordance with spatial competition among trees. As expected, temporal PA was well correlated with age-age genetic correlation (r=0.99), and decreased substantially with increasing difference in age between the training and validation populations (0.04-0.47). Moreover, our imputation comparisons indicate that k-nearest neighbor and singular value decomposition yielded a greater number of SNPs and gave higher predictive accuracies than imputing with the mean. Furthermore, the ridge regression (rrBLUP) and BayesCπ (BCπ) models yielded equal PA, and both outperformed the generalized ridge regression heteroscedastic effect model for the traits evaluated.
Improving IQ measurement in intellectual disabilities using true deviation from population norms.
Sansone, Stephanie M; Schneider, Andrea; Bickel, Erika; Berry-Kravis, Elizabeth; Prescott, Christina; Hessl, David
2014-01-01
Intellectual disability (ID) is characterized by global cognitive deficits, yet the very IQ tests used to assess ID have limited range and precision in this population, especially for more impaired individuals. We describe the development and validation of a method of raw z-score transformation (based on general population norms) that ameliorates floor effects and improves the precision of IQ measurement in ID using the Stanford Binet 5 (SB5) in fragile X syndrome (FXS; n = 106), the leading inherited cause of ID, and in individuals with idiopathic autism spectrum disorder (ASD; n = 205). We compared the distributional characteristics and Q-Q plots from the standardized scores with the deviation z-scores. Additionally, we examined the relationship between both scoring methods and multiple criterion measures. We found evidence that substantial and meaningful variation in cognitive ability on standardized IQ tests among individuals with ID is lost when converting raw scores to standardized scaled, index and IQ scores. Use of the deviation z-score method rectifies this problem, and accounts for significant additional variance in criterion validation measures, above and beyond the usual IQ scores. Additionally, individual and group-level cognitive strengths and weaknesses are recovered using deviation scores. Traditional methods for generating IQ scores in lower functioning individuals with ID are inaccurate and inadequate, leading to erroneously flat profiles. However, assessment of cognitive abilities is substantially improved by measuring true deviation in performance from standardization sample norms. This work has important implications for standardized test development, clinical assessment, and research for which IQ is an important measure of interest in individuals with neurodevelopmental disorders and other forms of cognitive impairment.
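The deviation z-score method reduces to scoring each raw score against the raw-score mean and SD of the general-population standardization sample at the examinee's age, rather than against floor-limited standard-score tables; a minimal sketch (the norm values used in the example are hypothetical, not actual SB5 norms):

```python
def deviation_z(raw_score, age, raw_norms):
    """Deviation z-score: distance of a raw score from the standardization
    sample's raw-score mean at this age, in population SD units.
    raw_norms maps age -> (mean, sd) of raw scores (hypothetical values)."""
    mean, sd = raw_norms[age]
    return (raw_score - mean) / sd

def deviation_iq(z):
    """Express a deviation z-score on the familiar IQ metric (mean 100, SD 15)."""
    return 100.0 + 15.0 * z
```

A raw score two SDs below the age mean thus maps to a deviation IQ of 70 even when the published table would bottom out at its floor, preserving variation among lower-functioning examinees.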
Resistive hydrogen sensing element
Lauf, Robert J.
2000-01-01
Systems and methods are described for providing a hydrogen sensing element with a more robust exposed metallization by application of a discontinuous or porous overlay to hold the metallization firmly on the substrate. An apparatus includes: a substantially inert, electrically-insulating substrate; a first Pd containing metallization deposited upon the substrate and completely covered by a substantially hydrogen-impermeable layer so as to form a reference resistor on the substrate; a second Pd containing metallization deposited upon the substrate and at least partially accessible to a gas to be tested, so as to form a hydrogen-sensing resistor; a protective structure disposed upon at least a portion of the second Pd containing metallization and at least a portion of the substrate to improve the attachment of the second Pd containing metallization to the substrate while allowing the gas to contact the second Pd containing metallization; and a resistance bridge circuit coupled to both the first and second Pd containing metallizations. The circuit determines the difference in electrical resistance between the first and second Pd containing metallizations, from which the hydrogen concentration in the gas may be determined. The systems and methods provide advantages because adhesion is improved without adversely affecting measurement speed or sensitivity.
Integrated multi-ISE arrays with improved sensitivity, accuracy and precision
NASA Astrophysics Data System (ADS)
Wang, Chunling; Yuan, Hongyan; Duan, Zhijuan; Xiao, Dan
2017-03-01
Increasing use of ion-selective electrodes (ISEs) in the biological and environmental fields has generated demand for high-sensitivity ISEs. However, improving the sensitivities of ISEs remains a challenge because of the limit of the Nernstian slope (59.2/n mV). Here, we present a universal ion detection method using an electronic integrated multi-electrode system (EIMES) that bypasses the Nernstian slope limit of 59.2/n mV, thereby enabling substantial enhancement of the sensitivity of ISEs. The results reveal that the response slope is greatly increased from 57.2 to 1711.3 mV, 57.3 to 564.7 mV and 57.7 to 576.2 mV by electronic integrated 30 Cl- electrodes, 10 F- electrodes and 10 glass pH electrodes, respectively. Thus, a tiny change in the ion concentration can be monitored, and correspondingly, the accuracy and precision are substantially improved. The EIMES is suited for all types of potentiometric sensors and may pave the way for monitoring of various ions with high accuracy and precision because of its high sensitivity.
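The slope multiplication described above can be illustrated with the Nernst equation. The sketch below assumes the integrated system effectively sums the responses of N identical electrodes in series; the actual EIMES circuit details are not given in the abstract and may differ.

```python
import math

def nernst_potential(E0, n, activity, T=298.15):
    """Single-ISE Nernstian response: E = E0 + (RT/nF) * ln(a), in volts."""
    R, F = 8.314, 96485.0
    return E0 + (R * T / (n * F)) * math.log(activity)

def series_response(num_electrodes, E0, n, activity):
    """Assumed series summation: slope scales with the electrode count."""
    return num_electrodes * nernst_potential(E0, n, activity)

# Per-decade slope of one monovalent ISE is ~59.2 mV; ten identical
# electrodes summed in series give roughly ten times that.
one = nernst_potential(0.0, 1, 10.0) - nernst_potential(0.0, 1, 1.0)
ten = series_response(10, 0.0, 1, 10.0) - series_response(10, 0.0, 1, 1.0)
print(round(one * 1000, 1), round(ten * 1000, 1))  # ~59.2 and ~591.6 mV/decade
```

This matches the order of the reported gains (e.g., 57.7 to 576.2 mV for 10 glass pH electrodes): the per-electrode slope is unchanged, but the summed signal makes a small concentration change easier to resolve.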
NASA Technical Reports Server (NTRS)
Reynolds, J. H.; Alexander, E. C., Jr.; Davis, P. K.; Srinivasan, B.
1974-01-01
The lunar breccia 14318 is one of three Apollo-14 breccias containing substantial amounts of parentless xenon from the spontaneous fission of extinct Pu-244. The argon and xenon contained in this breccia were studied by stepwise heating of pristine and neutron-irradiated samples. The isotopic composition of xenon from fission, determined by an improved method, is shown to be from Pu-244. Concentrations of this fissiogenic xenon are in substantial excess (15-fold) of what could be produced by spontaneous fission of U-238. The breccia is found to contain abundant trapped argon with an Ar-40/Ar-36 ratio of roughly 14. Otherwise, the argon is radiogenic and gives a convincing K-Ar age of 3.69 plus or minus 0.09 b.y. by the stepwise Ar-40/Ar-39 method, nearly in agreement with ages for other Apollo-14 breccias.
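The K-Ar age quoted above rests on the standard potassium-40 decay relation; the stepwise Ar-40/Ar-39 technique calibrates the ratio against an irradiation monitor but reduces to the same equation. A hedged sketch with conventional decay constants; the Ar*/K ratio below is chosen purely to illustrate an age near the reported 3.69 b.y., not taken from the breccia data.

```python
import math

# Conventional K-40 decay constants (per year).
LAMBDA_TOTAL = 5.543e-10   # total decay constant
LAMBDA_EC    = 0.581e-10   # electron-capture branch (to Ar-40)

def k_ar_age(ar40_star_per_k40):
    """Age in years from the radiogenic Ar-40 / K-40 ratio:
    t = (1/lambda) * ln(1 + (lambda/lambda_ec) * Ar40*/K40)."""
    return (1.0 / LAMBDA_TOTAL) * math.log(
        1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_star_per_k40)

# An illustrative ratio that yields an age near the reported 3.69 b.y.
age = k_ar_age(0.706)
print(round(age / 1e9, 2))  # ~3.69 Gyr
```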
Method and apparatus for constructing an underground barrier wall structure
Dwyer, Brian P.; Stewart, Willis E.; Dwyer, Stephen F.
2002-01-01
A method and apparatus for constructing an underground barrier wall structure using a jet grout injector subassembly comprising a pair of primary nozzles and a plurality of secondary nozzles, the secondary nozzles having a smaller diameter than the primary nozzles, for injecting grout in directions other than the primary direction, which creates a barrier wall panel having a substantially uniform wall thickness. This invention addresses the problem of the weak "bow-tie" shape that is formed during conventional jet injection when using only a pair of primary nozzles. The improvement is accomplished by using at least four secondary nozzles, of smaller diameter, located on both sides of the primary nozzles. These additional secondary nozzles spray grout or permeable reactive materials in other directions optimized to fill in the thin regions of the bow-tie shape. The result is a panel with increased strength and substantially uniform wall thickness.
Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan
2014-08-20
In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a novel successive relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the time required to process an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be completed within dozens of milliseconds, which makes real-time processing feasible in practical situations.
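The family of iterative SG baseline methods the algorithm above builds on can be sketched as follows. This is a generic iterative-smoothing baseline with an over-relaxed update, not the paper's exact SG-SR scheme (the Gauss-Seidel ordering and the precise relaxation rule are assumptions here); it shows why the relaxation factor accelerates convergence toward the fluorescence background.

```python
import numpy as np
from scipy.signal import savgol_filter

def sg_iterative_baseline(y, window=101, poly=3, relax=1.5, iters=50):
    """Iterative Savitzky-Golay baseline estimation with an over-relaxed
    update (0 < relax < 2). The peak-clipping min() drives the estimate
    toward the smooth lower envelope, i.e. the fluorescence background."""
    baseline = y.astype(float).copy()
    for _ in range(iters):
        smoothed = savgol_filter(baseline, window, poly)
        clipped = np.minimum(baseline, smoothed)   # baseline never exceeds signal
        baseline = baseline + relax * (clipped - baseline)  # relaxed update
    return baseline

# Synthetic Raman-like spectrum: broad fluorescence + one sharp band + noise.
x = np.linspace(0, 1, 1000)
fluor = 5.0 * np.exp(-x)                        # slowly varying background
peak = 2.0 * np.exp(-((x - 0.5) / 0.01) ** 2)   # narrow Raman band, height 2
rng = np.random.default_rng(0)
y = fluor + peak + rng.normal(0, 0.02, x.size)

corrected = y - sg_iterative_baseline(y)
print(round(float(corrected.max()), 1))  # recovered band height, close to 2
```

With relax = 1 this reduces to plain SG iteration; values above 1 take larger steps per iteration, which is the mechanism behind the convergence speed-up claimed above.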
Isotope separation apparatus and method
Feldman, Barry J.
1985-01-01
The invention relates to an improved method and apparatus for laser isotope separation by photodeflection. A molecular beam comprising at least two isotopes to be separated intersects, preferably substantially perpendicular to one broad side of the molecular beam, with a laser beam traveling in a first direction. The laser beam is reflected back through the molecular beam, preferably in a second direction essentially opposite to the first direction. Because the molecules in the beam occupy various degenerate energy levels, if the laser beam comprises chirped pulses comprising selected wavelengths, the laser beam will very efficiently excite substantially all unexcited molecules and will cause stimulated emission of substantially all excited molecules of a selected one of the isotopes in the beam which such pulses encounter. Excitation caused by first direction chirped pulses moves molecules of the isotope excited thereby in the first direction. Stimulated emission of excited molecules of the isotope is brought about by returning chirped pulses traveling in the second direction. Stimulated emission moves emitting molecules in a direction opposite to the photon emitted. Because emitted photons travel in the second direction, emitting molecules move in the first direction. Substantial molecular movement of essentially all the molecules containing the one isotope is accomplished by a large number of chirped pulse-molecule interactions. A beam corer collects the molecules in the resulting enriched divergent portions of the beam.
Relative effectiveness of worker safety and health training methods.
Burke, Michael J; Sarpy, Sue Ann; Smith-Crowe, Kristin; Chan-Serafin, Suzanne; Salvador, Rommel O; Islam, Gazi
2006-02-01
We sought to determine the relative effectiveness of different methods of worker safety and health training aimed at improving safety knowledge and performance and reducing negative outcomes (accidents, illnesses, and injuries). Ninety-five quasi-experimental studies (n=20991) were included in the analysis. Three types of intervention methods were distinguished on the basis of learners' participation in the training process: least engaging (lecture, pamphlets, videos), moderately engaging (programmed instruction, feedback interventions), and most engaging (training in behavioral modeling, hands-on training). As training methods became more engaging (i.e., requiring trainees' active participation), workers demonstrated greater knowledge acquisition, and reductions were seen in accidents, illnesses, and injuries. All methods of training produced meaningful behavioral performance improvements. Training involving behavioral modeling, a substantial amount of practice, and dialogue is generally more effective than other methods of safety and health training. The present findings challenge the current emphasis on more passive computer-based and distance training methods within the public health workforce.
Feng, Shuo; Ji, Jim
2014-04-01
Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high field MRI imaging to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method with Fourier domain gridding and a conjugate gradient method. Simulation results of the proposed method show that the proposed method can design pTx pulses at an efficiency 10 times higher than that of the conventional conjugate-gradient based method, without reducing the accuracy of the desirable excitation patterns.
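At its core, the pulse design described above is a regularized least-squares problem solved with conjugate gradient. A minimal sketch: a dense random matrix stands in for the (gridded) Fourier-domain system matrix, and the CG iteration runs on the regularized normal equations. The problem sizes and regularization weight are illustrative assumptions, not the paper's.

```python
import numpy as np

def cg_normal_equations(A, d, lam=1e-2, iters=50):
    """Conjugate gradient on (A^H A + lam*I) b = A^H d -- the regularized
    least-squares core of pTx pulse design. b is the RF pulse vector,
    d the desired excitation pattern."""
    AhA = A.conj().T @ A + lam * np.eye(A.shape[1])
    rhs = A.conj().T @ d
    b = np.zeros(A.shape[1], dtype=complex)
    r = rhs - AhA @ b
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Ap = AhA @ p
        alpha = rs / np.vdot(p, Ap).real   # AhA is Hermitian positive definite
        b = b + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < 1e-14:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return b

# Toy problem: random complex system matrix and target pattern.
rng = np.random.default_rng(1)
A = rng.normal(size=(120, 40)) + 1j * rng.normal(size=(120, 40))
d = rng.normal(size=120) + 1j * rng.normal(size=120)
b = cg_normal_equations(A, d)
resid = np.linalg.norm(A @ b - d) / np.linalg.norm(d)
print(resid < 1.0)  # True: residual beats the trivial b = 0 solution
```

The gridding contribution of the method above amounts to making the products with A and A^H fast (FFT-based) rather than dense; the CG skeleton itself is unchanged.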
Quality Improvement on the Acute Inpatient Psychiatry Unit Using the Model for Improvement
Singh, Kuldeep; Sanderson, Joshua; Galarneau, David; Keister, Thomas; Hickman, Dean
2013-01-01
Background A need exists for constant evaluation and modification of processes within healthcare systems to achieve quality improvement. One common approach is the Model for Improvement that can be used to clearly define aims, measures, and changes that are then implemented through a plan-do-study-act (PDSA) cycle. This approach is a commonly used method for improving quality in a wide range of fields. The Model for Improvement allows for a systematic process that can be revised at set time intervals to achieve a desired result. Methods We used the Model for Improvement in an acute psychiatry unit (APU) to improve the screening incidence of abnormal involuntary movements in eligible patients—those starting or continuing on standing neuroleptics—with the Abnormal Involuntary Movement Scale (AIMS). Results After 8 weeks of using the Model for Improvement, both of the participating inpatient services in the APU showed substantial overall improvement in screening for abnormal involuntary movements using the AIMS. Conclusion Crucial aspects of a successful quality improvement initiative based on the Model for Improvement are well-defined goals, process measures, and structured PDSA cycles. Success also requires communication, organization, and participation of the entire team. PMID:24052768
Hybrid force-velocity sliding mode control of a prosthetic hand.
Engeberg, Erik D; Meek, Sanford G; Minor, Mark A
2008-05-01
Four different methods of hand prosthesis control are developed and examined experimentally. Open-loop control is shown to offer the least sensitivity when manipulating objects. Force feedback substantially improves upon open-loop control. However, it is shown that the inclusion of velocity and/or position feedback in a hybrid force-velocity control scheme can further improve the functionality of hand prostheses. Experimental results indicate that the sliding mode controller with force, position, and velocity feedback is less prone to unwanted force overshoot when initially grasping objects than the other controllers.
Swofford, John; Whang, Peter G.; Frank, Clay J.; Glaser, John A.; Limoni, Robert P.; Cher, Daniel J.; Wine, Kathryn D.; Sembrano, Jonathan N.
2016-01-01
Background Sacroiliac joint (SIJ) dysfunction is an important and underappreciated cause of chronic low back pain. Objective To prospectively and concurrently compare outcomes after surgical and non-surgical treatment for chronic SIJ dysfunction. Methods One hundred and forty-eight subjects with SIJ dysfunction were randomly assigned to minimally invasive SIJ fusion with triangular titanium implants (SIJF, n = 102) or non-surgical management (NSM, n = 46). SIJ pain (measured with a 100-point visual analog scale, VAS), disability (measured with the Oswestry Disability Index, ODI) and quality of life scores were collected at baseline and at scheduled visits to 24 months. Crossover from non-surgical to surgical care was allowed after the 6-month study visit was complete. Improvements in continuous measures were compared using repeated measures analysis of variance. The proportions of subjects with clinical improvement (SIJ pain improvement ≥20 points, ODI ≥15 points) and substantial clinical benefit (SIJ pain improvement ≥25 points or SIJ pain rating ≤35, ODI ≥18.8 points) were compared. Results In the SIJF group, mean SIJ pain improved rapidly and the improvement was sustained (mean improvement of 55.4 points) at month 24. The 6-month mean change in the NSM group (12.2 points) was substantially smaller than that in the SIJF group (by 38.3 points, p<.0001 for superiority). By month 24, 83.1% and 82.0% had achieved either clinical improvement or substantial clinical benefit in VAS SIJ pain score. Similarly, 68.2% and 65.9% had achieved clinical improvement or substantial clinical benefit in ODI score at month 24. In the NSM group, these proportions were <10% with non-surgical treatment only. Parallel changes were seen for EQ-5D and SF-36, with larger changes in the surgery group at 6 months compared to NSM. The rate of adverse events related to SIJF was low, and only 3 subjects assigned to SIJF underwent revision surgery within the 24-month follow-up period.
Conclusions In this Level 1 multicenter prospective randomized controlled trial, minimally invasive SIJF with triangular titanium implants provided larger improvements in pain, disability and quality of life compared to NSM. Improvements after SIJF persisted to 24 months. This study was approved by a local or central IRB before any subjects were enrolled. All patients provided study-specific informed consent prior to participation. PMID:27652199
Austin, Peter C.; Tu, Jack V.; Ho, Jennifer E.; Levy, Daniel; Lee, Douglas S.
2014-01-01
Objective Physicians classify patients into those with or without a specific disease. Furthermore, there is often interest in classifying patients according to disease etiology or subtype. Classification trees are frequently used to classify patients according to the presence or absence of a disease. However, classification trees can suffer from limited accuracy. In the data-mining and machine learning literature, alternate classification schemes have been developed. These include bootstrap aggregation (bagging), boosting, random forests, and support vector machines. Study design and Setting We compared the performance of these classification methods with those of conventional classification trees to classify patients with heart failure according to the following sub-types: heart failure with preserved ejection fraction (HFPEF) vs. heart failure with reduced ejection fraction (HFREF). We also compared the ability of these methods to predict the probability of the presence of HFPEF with that of conventional logistic regression. Results We found that modern, flexible tree-based methods from the data mining literature offer substantial improvement in prediction and classification of heart failure sub-type compared to conventional classification and regression trees. However, conventional logistic regression had superior performance for predicting the probability of the presence of HFPEF compared to the methods proposed in the data mining literature. Conclusion The use of tree-based methods offers superior performance over conventional classification and regression trees for predicting and classifying heart failure subtypes in a population-based sample of patients from Ontario. However, these methods do not offer substantial improvements over logistic regression for predicting the presence of HFPEF. PMID:23384592
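The kind of comparison reported above can be sketched with standard library tools. The clinical cohort is not public, so synthetic data stands in; the point is the head-to-head evaluation of a single classification tree, a tree ensemble, and logistic regression on the same held-out set.

```python
# Sketch of the model comparison described above, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "single tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}
# Held-out accuracy for each classifier.
accuracy = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
            for name, m in models.items()}
for name, acc in accuracy.items():
    print(f"{name}: {acc:.3f}")
```

On data like this the ensemble typically beats the single tree, mirroring the study's finding that flexible tree-based methods improve classification; whether logistic regression wins, as it did for predicting HFPEF probability, depends on how linear the underlying signal is.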
Mengo, Doris M.; Mohamud, Abdikher D.; Ochieng, Susan M.; Milgo, Sammy K.; Sexton, Connie J.; Moyo, Sikhulile; Luman, Elizabeth T.
2014-01-01
Background Kenya has implemented the Strengthening Laboratory Management Toward Accreditation (SLMTA) programme to facilitate quality improvement in medical laboratories and to support national accreditation goals. Continuous quality improvement after SLMTA completion is needed to ensure sustainability and continue progress toward accreditation. Methods Audits were conducted by qualified, independent auditors to assess the performance of five enrolled laboratories using the Stepwise Laboratory Quality Improvement Process Towards Accreditation (SLIPTA) checklist. End-of-programme (exit) and one year post-programme (surveillance) audits were compared for overall score, star level (from zero to five, based on scores) and scores for each of the 12 Quality System Essential (QSE) areas that make up the SLIPTA checklist. Results All laboratories improved from exit to surveillance audit (median improvement 38 percentage points, range 5–45 percentage points). Two laboratories improved from zero to one star, two improved from zero to three stars and one laboratory improved from three to four stars. The lowest median QSE scores at exit were: internal audit; corrective action; and occurrence management and process improvement (< 20%). Each of the 12 QSEs improved substantially at surveillance audit, with the greatest improvement in client management and customer service, internal audit and information management (≥ 50 percentage points). The two laboratories with the greatest overall improvement focused heavily on the internal audit and corrective action QSEs. Conclusion Whilst all laboratories improved from exit to surveillance audit, those that focused on the internal audit and corrective action QSEs improved substantially more than those that did not; internal audits and corrective actions may have acted as catalysts, leading to improvements in other QSEs. 
Systematic identification of core areas and best practices to address them is a critical step toward strengthening public medical laboratories. PMID:29043193
Bradley, Kendall E.
2016-01-01
Objectives To pilot test whether Orthopaedic Surgery residents could self-assess their performance using newly created milestones, as defined by the Accreditation Council on Graduate Medical Education. Methods In June 2012, an email was sent to Program Directors and administrative coordinators of the 154 accredited Orthopaedic Surgery Programs, asking them to send their residents a link to an online survey. The survey was adapted from the Orthopaedic Surgery Milestone Project. Completed surveys were aggregated in an anonymous, confidential database. SAS 9.3 was used to perform the analyses. Results Responses from 71 residents were analyzed. First and second year residents indicated through self-assessment that they had substantially achieved Level 1 and Level 2 milestones. Third year residents reported they had substantially achieved 30/41, and fourth year residents, all Level 3 milestones. Fifth year (graduating) residents reported they had substantially achieved 17 Level 4 milestones and were extremely close on another 15. No milestone was rated at Level 5, the maximum possible. Earlier in training, Patient Care and Medical Knowledge milestones were rated lower than the milestones reflecting the other four competencies of Practice Based Learning and Improvement, Systems Based Practice, Professionalism, and Interpersonal Communication. The gap was closed by the fourth year. Conclusions Residents were able to successfully self-assess using the 41 Orthopaedic Surgery milestones. Respondents rated improved proficiency over time. Graduating residents report they have substantially, or close to substantially, achieved all Level 4 milestones. Milestone self-assessment may be a useful tool as one component of a program’s overall performance assessment strategy. PMID:26752012
Martini, Daniela; Biasini, Beatrice; Zavaroni, Ivana; Bedogni, Giorgio; Musci, Marilena; Pruneti, Carlo; Passeri, Giovanni; Ventura, Marco; Galli, Daniela; Mirandola, Prisco; Vitale, Marco; Dei Cas, Alessandra; Bonadonna, Riccardo C; Del Rio, Daniele
2018-04-01
Most requests for authorization to bear health claims under Articles 13(5) and 14 related to blood glucose and insulin concentration/regulation presented to the European Food Safety Authority (EFSA) receive a negative opinion. Reasons for such decisions are mainly ascribable to poor substantiation of the claimed effects. In this scenario, a project was carried out aiming at critically analysing the outcome variables (OVs) and methods of measurement (MMs) to be used to substantiate health claims, with the final purpose to improve the quality of applications provided by stakeholders to EFSA. This manuscript provides a position statement of the experts involved in the project, reporting the results of an investigation aimed to collect, collate and critically analyse the information relevant to claimed effects (CEs), OVs and MMs related to blood glucose and insulin levels and homoeostasis compliant with Regulation 1924/2006. The critical analysis of OVs and MMs was performed with the aid of the pertinent scientific literature and was aimed at defining their appropriateness (alone or in combination with others) to support a specific CE. The results can be used to properly select OVs and MMs in a randomized controlled trial, for an effective substantiation of the claims, using the reference method(s) whenever available. Moreover, results can help EFSA in updating the guidance for the scientific requirements of health claims.
van der Vorm, Lisa N; Hendriks, Jan C M; Laarakkers, Coby M; Klaver, Siem; Armitage, Andrew E; Bamberg, Alison; Geurts-Moespot, Anneke J; Girelli, Domenico; Herkert, Matthias; Itkonen, Outi; Konrad, Robert J; Tomosugi, Naohisa; Westerman, Mark; Bansal, Sukhvinder S; Campostrini, Natascia; Drakesmith, Hal; Fillet, Marianne; Olbina, Gordana; Pasricha, Sant-Rayn; Pitts, Kelly R; Sloan, John H; Tagliaro, Franco; Weykamp, Cas W; Swinkels, Dorine W
2016-07-01
Absolute plasma hepcidin concentrations measured by various procedures differ substantially, complicating interpretation of results and rendering reference intervals method dependent. We investigated the degree of equivalence achievable by harmonization and the identification of a commutable secondary reference material to accomplish this goal. We applied technical procedures to achieve harmonization developed by the Consortium for Harmonization of Clinical Laboratory Results. Eleven plasma hepcidin measurement procedures (5 mass spectrometry based and 6 immunochemical based) quantified native individual plasma samples (n = 32) and native plasma pools (n = 8) to assess analytical performance and current and achievable equivalence. In addition, 8 types of candidate reference materials (3 concentrations each, n = 24) were assessed for their suitability, most notably in terms of commutability, to serve as secondary reference material. Absolute hepcidin values and reproducibility (intrameasurement procedure CVs 2.9%-8.7%) differed substantially between measurement procedures, but all were linear and correlated well. The current equivalence (intermeasurement procedure CV 28.6%) between the methods was mainly attributable to differences in calibration and could thus be improved by harmonization with a common calibrator. Linear regression analysis and standardized residuals showed that a candidate reference material consisting of native lyophilized plasma with cryolyoprotectant was commutable for all measurement procedures. Mathematically simulated harmonization with this calibrator resulted in a maximum achievable equivalence of 7.7%. The secondary reference material identified in this study has the potential to substantially improve equivalence between hepcidin measurement procedures and contributes to the establishment of a traceability chain that will ultimately allow standardization of hepcidin measurement results. © 2016 American Association for Clinical Chemistry.
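The harmonization mechanism described above can be sketched numerically: each measurement procedure is recalibrated by linear regression against the same (commutable) calibrator, which removes between-method calibration differences. All numbers below are synthetic; they only mimic method-dependent slopes and intercepts, not real hepcidin assays.

```python
import numpy as np

rng = np.random.default_rng(42)
true_conc = np.array([2.0, 5.0, 10.0, 20.0, 40.0])  # calibrator levels

# Three "methods" with different calibrations (slope, intercept) plus noise.
methods = [(1.4, 0.5), (0.8, -0.2), (1.1, 1.0)]
measured = [a * true_conc + b + rng.normal(0, 0.1, true_conc.size)
            for a, b in methods]

def recalibrate(y, reference):
    """Map one method's readings onto the reference scale via least squares."""
    slope, intercept = np.polyfit(y, reference, 1)
    return slope * y + intercept

def inter_method_cv(values):
    """Between-method CV (%), averaged over the sample levels."""
    arr = np.stack(values)
    return float(np.mean(arr.std(axis=0, ddof=1) / arr.mean(axis=0)) * 100)

harmonized = [recalibrate(y, true_conc) for y in measured]
print(round(inter_method_cv(measured), 1), round(inter_method_cv(harmonized), 2))
# The inter-method CV collapses after recalibration to the common calibrator.
```

This is the same logic as the study's simulated harmonization: once calibration differences are removed, the residual inequivalence reflects only each method's imprecision.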
High performance P-type thermoelectric materials and methods of preparation
NASA Technical Reports Server (NTRS)
Caillat, Thierry (Inventor); Borshchevsky, Alexander (Inventor); Fleurial, Jean-Pierre (Inventor)
2002-01-01
The present invention is embodied in high performance p-type thermoelectric materials having enhanced thermoelectric properties and the methods of preparing such materials. In one aspect of the invention, p-type semiconductors of formula Zn.sub.4-x A.sub.x Sb.sub.3-y B.sub.y wherein 0.ltoreq.x.ltoreq.4, A is a transition metal, B is a pnicogen, and 0.ltoreq.y.ltoreq.3 are formed for use in manufacturing thermoelectric devices with substantially enhanced operating characteristics and improved efficiency. Two methods of preparing p-type Zn.sub.4 Sb.sub.3 and related alloys of the present invention include a crystal growth method and a powder metallurgy method.
Ozaki, Vitor A.; Ghosh, Sujit K.; Goodwin, Barry K.; Shirota, Ricardo
2009-01-01
This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the various uncertainties involved in predicting crop insurance premium rates as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Paraná (Brazil) for the period of 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited. PMID:19890450
Experimental design and statistical methods for improved hit detection in high-throughput screening.
Malo, Nathalie; Hanley, James A; Carlile, Graeme; Liu, Jing; Pelletier, Jerry; Thomas, David; Nadon, Robert
2010-09-01
Identification of active compounds in high-throughput screening (HTS) contexts can be substantially improved by applying classical experimental design and statistical inference principles to all phases of HTS studies. The authors present both experimental and simulated data to illustrate how true-positive rates can be maximized without increasing false-positive rates by the following analytical process. First, the use of robust data preprocessing methods reduces unwanted variation by removing row, column, and plate biases. Second, replicate measurements allow estimation of the magnitude of the remaining random error and the use of formal statistical models to benchmark putative hits relative to what is expected by chance. Receiver Operating Characteristic (ROC) analyses revealed superior power for data preprocessed by a trimmed-mean polish method combined with the RVM t-test, particularly for small- to moderate-sized biological hits.
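The bias-removal step described above can be sketched with a median polish (the authors use a trimmed-mean polish; the structure is identical, only the location estimator differs): row and column effects are iteratively subtracted from the plate so that a true hit stands out against the residuals.

```python
import numpy as np

def median_polish(plate, iters=10):
    """Remove additive row and column effects from a plate of readings,
    returning the residuals (readings free of row/column biases)."""
    resid = plate.astype(float).copy()
    for _ in range(iters):
        resid -= np.median(resid, axis=1, keepdims=True)  # row effects
        resid -= np.median(resid, axis=0, keepdims=True)  # column effects
    return resid

# Synthetic 8x12 plate: row and column biases, noise, and one spiked "hit".
rng = np.random.default_rng(7)
row_bias = rng.normal(0, 1.0, (8, 1))
col_bias = rng.normal(0, 1.0, (1, 12))
plate = row_bias + col_bias + rng.normal(0, 0.1, (8, 12))
plate[3, 5] += 5.0  # the true hit

resid = median_polish(plate)
hit = np.unravel_index(np.argmax(np.abs(resid)), resid.shape)
print(hit == (3, 5))  # True: the spiked well has the largest residual
```

Medians (or trimmed means) make the polish robust: a single strong hit barely shifts its row and column estimates, so the hit signal survives in the residuals rather than being absorbed into the bias terms.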
Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S
2017-10-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. 
Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.
Electrically-conductive proppant and methods for making and using same
Cannan, Chad; Roper, Todd; Savoy, Steve; Mitchell, Daniel R.
2016-09-06
Electrically-conductive sintered, substantially round and spherical particles and methods for producing such electrically-conductive sintered, substantially round and spherical particles from an alumina-containing raw material. Methods for using such electrically-conductive sintered, substantially round and spherical particles in hydraulic fracturing operations.
A systems approach to improving rural care in Ethiopia.
Bradley, Elizabeth H; Byam, Patrick; Alpern, Rachelle; Thompson, Jennifer W; Zerihun, Abraham; Abebe, Yigeremu; Abeb, Yigeremu; Curry, Leslie A
2012-01-01
Multiple interventions have been launched to improve the quality, access, and utilization of primary health care in rural, low-income settings; however, the success of these interventions varies substantially, even within single studies where the measured impact of interventions differs across sites, centers, and regions. Accordingly, we sought to examine the variation in impact of a health systems strengthening intervention and understand factors that might explain the variation in impact across primary health care units. We conducted a mixed methods positive deviance study of 20 Primary Health Care Units (PHCUs) in rural Ethiopia. Using longitudinal data from the Ethiopia Millennium Rural Initiative (EMRI), we identified PHCUs with consistently higher performance (n = 2), most improved performance (n = 3), or consistently lower performance (n = 2) in the provision of antenatal care, HIV testing in antenatal care, and skilled birth attendance rates. Using data from site visits and in-depth interviews (n = 51), we applied the constant comparative method of qualitative data analysis to identify key themes that distinguished PHCUs with different performance trajectories. Key themes that distinguished PHCUs were 1) managerial problem solving capacity, 2) relationship with the woreda (district) health office, and 3) community engagement. In higher performing PHCUs and those with the greatest improvement after the EMRI intervention, health center and health post staff were more able to solve day-to-day problems, staff had better relationships with the woreda health official, and PHCU communities' leadership, particularly religious leadership, were strongly engaged with the health improvement effort. Distance from the nearest city, quality of roads and transportation, and cultural norms did not differ substantially among PHCUs. 
Effective health strengthening efforts may require intensive development of managerial problem solving skills, strong relationships with government offices that oversee front-line providers, and committed community leadership to succeed.
Nuclear medicine and imaging research: Quantitative studies in radiopharmaceutical science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Copper, M.; Beck, R.N.
1991-06-01
During the past three years the program has undergone a substantial revitalization. There has been no significant change in the scientific direction of this grant, in which emphasis continues to be placed on developing new or improved methods of obtaining quantitative data from radiotracer imaging studies. However, considerable scientific progress has been made in the three areas of interest: Radiochemistry, Quantitative Methodologies, and Experimental Methods and Feasibility Studies, resulting in a sharper focus of perspective and improved integration of the overall scientific effort. Changes in faculty and staff, including development of new collaborations, have contributed to this, as has acquisition of additional and new equipment and renovation and expansion of the core facilities. 121 refs., 30 figs., 2 tabs.
The Evolution of Image-Free Robotic Assistance in Unicompartmental Knee Arthroplasty.
Lonner, Jess H; Moretti, Vincent M
2016-01-01
Semiautonomous robotic technology has been introduced to optimize accuracy of bone preparation, implant positioning, and soft tissue balance in unicompartmental knee arthroplasty (UKA), with the expectation that there will be a resultant improvement in implant durability and survivorship. Currently, roughly one-fifth of UKAs in the US are being performed with robotic assistance, and it is anticipated that there will be substantial growth in market penetration of robotics over the next decade. First-generation robotic technology substantially improved implant position compared to conventional methods; however, high capital costs, uncertainty regarding the value of advanced technologies, and the need for preoperative computed tomography (CT) scans were barriers to broader adoption. Newer image-free semiautonomous robotic technology optimizes both implant position and soft tissue balance, without the need for preoperative CT scans and with pricing and portability that make it suitable for use in an ambulatory surgery center setting, where approximately 40% of these systems are currently being utilized. This article will review the robotic experience for UKA, including rationale, system descriptions, and outcomes.
Variational method of determining effective moduli of polycrystals with tetragonal symmetry
Meister, R.; Peselnick, L.
1966-01-01
Variational principles have been applied to aggregates of randomly oriented pure-phase polycrystals having tetragonal symmetry. The bounds of the effective elastic moduli obtained in this way show a substantial improvement over the bounds obtained by means of the Voigt and Reuss assumptions. The Hill average is found to be a good approximation in most cases when compared to the bounds found from the variational method. The new bounds reduce in their limits to the Voigt and Reuss values. © 1966 The American Institute of Physics.
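The Voigt and Reuss bounds and the Hill average mentioned above can be sketched numerically. The tetragonal case treated in the paper involves six independent stiffness constants; the simpler cubic case below (three constants) illustrates the same bounding idea for the effective shear modulus. The stiffness values used are hypothetical, in GPa.

```python
# Voigt (uniform strain, upper) and Reuss (uniform stress, lower) bounds on
# the shear modulus of a randomly oriented polycrystal with cubic symmetry,
# and the Hill average as their midpoint. Input stiffnesses are illustrative.

def shear_bounds_cubic(c11, c12, c44):
    """Return (G_Reuss, G_Voigt, G_Hill) for a cubic polycrystal, in GPa."""
    g_voigt = (c11 - c12 + 3.0 * c44) / 5.0                # upper bound
    g_reuss = (5.0 * (c11 - c12) * c44
               / (4.0 * c44 + 3.0 * (c11 - c12)))          # lower bound
    g_hill = 0.5 * (g_voigt + g_reuss)                     # Hill average
    return g_reuss, g_voigt, g_hill

g_r, g_v, g_h = shear_bounds_cubic(c11=200.0, c12=120.0, c44=80.0)
assert g_r <= g_h <= g_v   # Hill estimate always lies between the bounds
print(g_r, g_h, g_v)
```

The variational (Hashin-Shtrikman-type) bounds discussed in the abstract tighten this Voigt-Reuss interval; the Hill average is simply a pragmatic point estimate inside it.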
Friedman, L.
1962-01-01
A method is described for operating a mass spectrometer to improve its resolution and to extend substantially its period of use between cleanings. In this method, a small amount of a beta-emitting gas such as hydrogen tritide or carbon-14 methane is added to the sample being supplied to the spectrometer for investigation. The additive establishes leakage paths on the surface of the non-conducting film accumulating within the vacuum chamber of the spectrometer, thereby reducing the effect of an accumulated static charge on the electrostatic and magnetic fields established within the instrument. (AEC)
Effective-field renormalization-group method for Ising systems
NASA Astrophysics Data System (ADS)
Fittipaldi, I. P.; De Albuquerque, D. F.
1992-02-01
A new effective-field renormalization-group (EFRG) scheme for computing critical properties of Ising spin systems is proposed and used to study the phase diagrams of a quenched bond-mixed spin Ising model on square and Kagomé lattices. The present EFRG approach yields results that improve substantially on those obtained from the standard mean-field renormalization-group (MFRG) method. In particular, it is shown that the EFRG scheme correctly distinguishes the geometry of the lattice structure even when working with the smallest possible clusters, namely N'=1 and N=2.
Gombár, Melinda; Józsa, Éva; Braun, Mihály; Ősz, Katalin
2012-10-01
An inexpensive photoreactor using LED light sources and a fibre-optic CCD spectrophotometer as a detector was built by designing a special cell holder for standard 1.000 cm cuvettes. The use of this device was demonstrated by studying the aqueous photochemical reaction of 2,5-dichloro-1,4-benzoquinone. The developed method combines the highly quantitative data collection of CCD spectrophotometers with the possibility of illuminating the sample independently of the detecting light beam, which is a substantial improvement of the method using diode array spectrophotometers as photoreactors.
Wilson, Bethany J; Nicholas, Frank W; James, John W; Wade, Claire M; Thomson, Peter C
2013-01-01
Canine hip dysplasia (CHD) is a serious and common musculoskeletal disease of pedigree dogs and therefore represents both an important welfare concern and an imperative breeding priority. The typical heritability estimates for radiographic CHD traits suggest that the accuracy of breeding dog selection could be substantially improved by the use of estimated breeding values (EBVs) in place of selection based on phenotypes of individuals. The British Veterinary Association/Kennel Club scoring method is a complex measure composed of nine bilateral ordinal traits, intended to evaluate both early and late dysplastic changes. However, the ordinal nature of the traits may represent a technical challenge for calculation of EBVs using linear methods. The purpose of the current study was to calculate EBVs of British Veterinary Association/Kennel Club traits in the Australian population of German Shepherd Dogs, using linear (both as individual traits and a summed phenotype), binary and ordinal methods to determine the optimal method for EBV calculation. Ordinal EBVs correlated well with linear EBVs (r = 0.90-0.99) and somewhat well with EBVs for the sum of the individual traits (r = 0.58-0.92). Correlation of ordinal and binary EBVs varied widely (r = 0.24-0.99) depending on the trait and cut-point considered. The ordinal EBVs have increased accuracy (0.48-0.69) of selection compared with accuracies from individual phenotype-based selection (0.40-0.52). Despite the high correlations between linear and ordinal EBVs, the underlying relationship between EBVs calculated by the two methods was not always linear, leading us to suggest that ordinal models should be used wherever possible. As the population of German Shepherd Dogs which was studied was purportedly under selection for the traits studied, we examined the EBVs for evidence of a genetic trend in these traits and found substantial genetic improvement over time. 
This study suggests the use of ordinal EBVs could increase the rate of genetic improvement in this population.
Improvement attributes in healthcare: implications for integrated care.
Harnett, Patrick John
2018-04-16
Purpose Healthcare quality improvement is a key concern for policy makers, regulators, carers and service users. Despite a contemporary consensus among policy makers that integrated care represents a means to substantially improve service outcomes, progress has been slow. Difficulties achieving sustained improvement at scale imply that methods employed are not sufficient and that healthcare improvement attributes may be different when compared to prior reference domains. The purpose of this paper is to examine and synthesise key improvement attributes relevant to a complex healthcare change process, specifically integrated care. Design/methodology/approach This study is based on an integrative literature review on systemic improvement in healthcare. Findings A central theme emerging from the literature review indicates that implementing systemic change needs to address the relationship between vision, methods and participant social dynamics. Practical implications Accommodating personal and professional network dynamics is required for systemic improvement, especially among high autonomy individuals. This reinforces the need to recognise the change process as taking place in a complex adaptive system where personal/professional purpose/meaning is central to the process. Originality/value Shared personal/professional narratives are insufficiently recognised as a powerful change force, under-represented in linear and rational empirical improvement approaches.
Code of Federal Regulations, 2011 CFR
2011-04-01
... purpose of funding physical and management improvements. Modernization program. A PHA's program for... substantially the same kind does qualify, but reconstruction, substantial improvement in the quality or kind of... resident participation in each of the required program components. PHMAP. The Public Housing Management...
Image-optimized Coronal Magnetic Field Models
NASA Astrophysics Data System (ADS)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.
2017-08-01
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.
Image-Optimized Coronal Magnetic Field Models
NASA Technical Reports Server (NTRS)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.
2017-01-01
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work we presented early tests of the method which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane, and the effect on the outcome of the optimization of errors in localization of constraints. We find that substantial improvement in the model field can be achieved with this type of constraint, even when magnetic features in the images are located outside of the image plane.
de Hoyo, Moises; Gonzalo-Skok, Oliver; Sañudo, Borja; Carrascal, Claudio; Plaza-Armas, Jose R; Camacho-Candil, Fernando; Otero-Esquina, Carlos
2016-02-01
The aim of this study was to analyze the effects of 3 different low/moderate load strength training methods (full-back squat [SQ], resisted sprint with sled towing [RS], and plyometric and specific drills training [PLYO]) on sprinting, jumping, and change of direction (COD) abilities in soccer players. Thirty-two young elite male Spanish soccer players participated in the study. Subjects performed 2 specific strength training sessions per week, in addition to their normal training sessions, for 8 weeks. The full-back squat protocol consisted of 2-3 sets × 4-8 repetitions at 40-60% 1 repetition maximum (∼ 1.28-0.98 m · s(-1)). The resisted sprint training consisted of 6-10 sets × 20-m loaded sprints (12.6% of body mass). The plyometric and specific drills training was based on 1-3 sets × 2-3 repetitions of 8 plyometric and speed/agility exercises. Testing sessions included a countermovement jump (CMJ), a 20-m sprint (10-m split time), a 50-m (30-m split time) sprint, and COD test (i.e., Zig-Zag test). Substantial improvements (likely to almost certainly) in CMJ (effect size [ES]: 0.50-0.57) and 30-50 m (ES: 0.45-0.84) were found in every group in comparison to pretest results. Moreover, players in PLYO and SQ groups also showed substantial enhancements (likely to very likely) in 0-50 m (ES: 0.46-0.60). In addition, 10-20 m was also improved (very likely) in the SQ group (ES: 0.61). Between-group analyses showed that improvements in 10-20 m (ES: 0.57) and 30-50 m (ES: 0.40) were likely greater in the SQ group than in the RS group. Also, 10-20 m (ES: 0.49) was substantially better in the SQ group than in the PLYO group. In conclusion, the strength training methods used in this study seem to be effective for improving jumping and sprinting abilities, but COD might need other stimuli to achieve positive effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mankamo, T.; Kim, I.S.; Yang, Ji Wu
Failures in the auxiliary feedwater (AFW) system of pressurized water reactors (PWRs) are considered to involve substantial risk whether a decision is made to either continue power operation while repair is being done, or to shut down the plant to undertake repairs. Technical specification action requirements usually require immediate plant shutdown in the case of multiple failures in the system (in some cases, immediate repair of one train is required when all AFW trains fail). This paper presents a probabilistic risk assessment-based method to quantitatively evaluate and compare both the risks of continued power operation and of shutting the plant down, given known failures in the system. The method is applied to the AFW system for four different PWRs. Results show that the risk of continued power operation and plant shutdown both are substantial, but the latter is larger than the former over the usual repair time. This was proven for four plants with different designs: two operating Westinghouse plants, one operating Asea-Brown Boveri Combustion Engineering plant, and one of evolutionary design. The method can be used to analyze individual plant design and to improve AFW action requirements using risk-informed evaluations.
Davidov, Ori; Rosen, Sophia
2011-04-01
In medical studies, endpoints are often measured for each patient longitudinally. The mixed-effects model has been a useful tool for the analysis of such data. There are situations in which the parameters of the model are subject to some restrictions or constraints. For example, in hearing loss studies, we expect hearing to deteriorate with time. This means that hearing thresholds which reflect hearing acuity will, on average, increase over time. Therefore, the regression coefficients associated with the mean effect of time on hearing ability will be constrained. Such constraints should be accounted for in the analysis. We propose maximum likelihood estimation procedures, based on the expectation-conditional maximization either algorithm, to estimate the parameters of the model while accounting for the constraints on them. The proposed methods improve, in terms of mean square error, on the unconstrained estimators. In some settings, the improvement may be substantial. Hypotheses testing procedures that incorporate the constraints are developed. Specifically, likelihood ratio, Wald, and score tests are proposed and investigated. Their empirical significance levels and power are studied using simulations. It is shown that incorporating the constraints improves the mean squared error of the estimates and the power of the tests. These improvements may be substantial. The methodology is used to analyze a hearing loss study.
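The benefit of honoring a parameter constraint can be seen in a deliberately tiny example. The sketch below is not the paper's expectation-conditional maximization algorithm for mixed-effects models; it is a one-parameter toy in which, as in the hearing loss setting, the time slope is constrained to be nonnegative, and the constrained estimate is simply the unconstrained least-squares slope projected onto the constraint. The data are made up for illustration.

```python
# Toy constrained estimation: hearing thresholds are expected to increase
# with time, so the slope of threshold on time is constrained to be >= 0.
# In this simple case the constrained estimate is the OLS slope clipped at 0.

def ols_slope(t, y):
    """Unconstrained least-squares slope of y on t (intercept included)."""
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    sxy = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    sxx = sum((ti - tbar) ** 2 for ti in t)
    return sxy / sxx

def constrained_slope(t, y):
    """Slope estimate under the constraint slope >= 0."""
    return max(0.0, ols_slope(t, y))

t = [0, 1, 2, 3, 4]                       # years (hypothetical)
y = [20.0, 21.5, 20.5, 23.0, 24.0]        # thresholds in dB (hypothetical)
print(constrained_slope(t, y))            # positive trend survives clipping
```

When the data happen to show a (noise-driven) negative trend, the constrained estimate is pulled to zero, which is exactly where the mean-squared-error gain over the unconstrained estimator comes from.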
Improved PVDF membrane performance by doping extracellular polymeric substances of activated sludge.
Guan, Yan-Fang; Huang, Bao-Cheng; Qian, Chen; Wang, Long-Fei; Yu, Han-Qing
2017-04-15
Polyvinylidene fluoride (PVDF) membrane has been widely applied in water and wastewater treatment because of its high mechanical strength, thermal stability and chemical resistance. However, the hydrophobic nature of PVDF membrane makes it readily fouled, substantially reducing water flux and overall membrane rejection ability. In this work, an in-situ blending modifier, i.e., extracellular polymeric substances (EPS) from activated sludge, was used to enhance the anti-fouling ability of PVDF membrane. Results indicate that the pure water flux of the membrane and its anti-fouling performance were substantially improved by blending 8% EPS into the membrane. By introducing EPS, the membrane hydrophilicity was increased and the cross section morphology was changed when it interacted with polyvinyl pyrrolidone, resulting in the formation of large cavities below the finger-like pores. In addition, the fraction of pores with a size of 100-500 nm increased, which was also beneficial to improving membrane performance. Surface thermodynamic calculations indicate the EPS-functionalized membrane had a higher cohesion free energy, implying its good pollutant rejection and anti-fouling ability. This work provides a simple, efficient and cost-effective method to improve membrane performance and also extends the applications of EPS. Copyright © 2017 Elsevier Ltd. All rights reserved.
Biological markers from Green River kerogen decomposition
NASA Astrophysics Data System (ADS)
Burnham, A. K.; Clarkson, J. E.; Singleton, M. F.; Wong, C. M.; Crawford, R. W.
1982-07-01
Isoprenoid and other carbon skeletons that are formed in living organisms and preserved essentially intact in ancient sediments are often called biological markers. The purpose of this paper is to develop improved methods of using isoprenoid hydrocarbons to relate petroleum or shale oil to its source rock. It is demonstrated that most, but not all, of the isoprenoid hydrocarbon structures are chemically bonded in kerogen (or to minerals) in Green River oil shale. The rate constant for thermally producing isoprenoid, cyclic, and aromatic hydrocarbons is substantially greater than for the bulk of shale oil. This may be related to the substantial quantity of CO2 which is evolved coincident with the isoprenoid hydrocarbons but prior to substantial oil evolution. Although formation of isoprenoid alkenes is enhanced by rapid heating and high pyrolysis temperatures, the ratio of isoprenoid alkenes plus alkanes to normal alkenes plus alkanes is independent of heating rate. High-temperature laboratory pyrolysis experiments can thus be used to predict the distribution of aliphatic hydrocarbons in low temperature processes such as in situ shale oil production and perhaps petroleum formation. Finally, we demonstrate that significant variation in biological marker ratios occurs as a function of stratigraphy in the Green River formation. This information, combined with methods for measuring process yield from oil composition, enables one to relate time-dependent processing conditions to the corresponding time-dependent oil yield in a vertical modified in situ retort even if there is a substantial and previously undetermined delay in drainage of shale oil from the retort.
Improving Medication Adherence in Cardiometabolic Disease
Ferdinand, Keith C.; Senatore, Fortunato Fred; Clayton-Jeter, Helene; Cryer, Dennis R.; Lewin, John C.; Nasser, Samar A.; Fiuzat, Mona; Califf, Robert M.
2017-01-01
Medication nonadherence, a major problem in cardiovascular disease (CVD), contributes yearly to approximately 125,000 preventable deaths, which is partly attributable to only about one-half of CVD patients consistently taking prescribed life-saving medications. Current interest has focused on how labeling and education influence adherence. This paper summarizes the scope of CVD nonadherence, describes key U.S. Food and Drug Administration initiatives, and identifies potential targets for improvement. We describe key adherence factors, methods, and technological applications for simplifying regimens and enhancing adherence, and 4 areas where additional collaborative research and implementation involving the regulatory system and clinical community could substantially reduce nonadherence: 1) identifying monitoring methods; 2) improving the evidence base to better understand adherence; 3) developing patient/health provider team-based engagement strategies; and 4) alleviating health disparities. Alignment of U.S. Food and Drug Administration approaches to dissemination of information about appropriate use with clinical practice could improve adherence, and thereby reduce CVD death and disability. PMID:28126162
Schmittdiel, Julie A; Desai, Jay; Schroeder, Emily B; Paolino, Andrea R; Nichols, Gregory A; Lawrence, Jean M; O'Connor, Patrick J; Ohnsorg, Kris A; Newton, Katherine M; Steiner, John F
2015-06-01
Engaging stakeholders in the research process has the potential to improve quality of care and the patient care experience. Online patient community surveys can elicit important topic areas for comparative effectiveness research. Stakeholder meetings with substantial patient representation, as well as representation from health care delivery systems and research funding agencies, are a valuable tool for selecting and refining pilot research and quality improvement projects. Giving patient stakeholders a deciding vote in selecting pilot research topics helps ensure their 'voice' is heard. Researchers and health care leaders should continue to develop best practices and strategies for increasing patient involvement in comparative effectiveness and delivery science research.
Application of design sensitivity analysis for greater improvement on machine structural dynamics
NASA Technical Reports Server (NTRS)
Yoshimura, Masataka
1987-01-01
Methodologies are presented for greatly improving machine structural dynamics by using design sensitivity analyses and evaluative parameters. First, design sensitivity coefficients and evaluative parameters of structural dynamics are described. Next, the relations between the design sensitivity coefficients and the evaluative parameters are clarified. Then, design improvement procedures of structural dynamics are proposed for the following three cases: (1) addition of elastic structural members, (2) addition of mass elements, and (3) substantial changes of joint design variables. Cases (1) and (2) correspond to the changes of the initial framework or configuration, and (3) corresponds to the alteration of poor initial design variables. Finally, numerical examples are given for demonstrating the availability of the methods proposed.
Fluorinated diamond particles bonded in a filled fluorocarbon resin matrix
Taylor, G.W.; Roybal, H.E.
1983-11-14
A method of producing fluorinated diamond particles bonded in a filled fluorocarbon resin matrix. Simple hot pressing techniques permit the formation of such matrices from which diamond impregnated grinding tools and other articles of manufacture can be produced. Teflon fluorocarbon resins filled with Al2O3 yield grinding tools with substantially improved work-to-wear ratios over grinding wheels known in the art.
Terahertz wave electro-optic measurements with optical spectral filtering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilyakov, I. E., E-mail: igor-ilyakov@mail.ru; Shishkin, B. V.; Kitaeva, G. Kh.
We propose electro-optic detection techniques based on variations of the laser pulse spectrum induced during pulse co-propagation with terahertz wave radiation in a nonlinear crystal. Quantitative comparison with two other detection methods is made. Substantial improvement of the sensitivity compared to the standard electro-optic detection technique (at high frequencies) and to the previously shown technique based on laser pulse energy changes is demonstrated in experiment.
Analysis of documentary support for environmental restoration programs in Russia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nechaev, A.F.; Projaev, V.V.
1995-12-31
Taking into account the importance of adequate regulations for ensuring radiological safety of the biosphere and for successful implementation of environmental restoration projects, the contents of legislative and methodical documents, as well as their comprehensiveness and substantiation, are subjected to critical analysis. It is shown that there is much scope for further optimization of, and improvement in, the regulatory basis at both the Federal and regional levels.
Improved technique for one-way transformation of information
Cooper, J.A.
1987-05-11
Method and apparatus are provided for one-way transformation of data according to multiplication and/or exponentiation modulo a prime number. An implementation of the invention permits the one-way residue transformation, useful in encryption and similar applications, to be implemented by n-bit computers with substantially no increase in difficulty or complexity over a natural transformation using a modulus which is a power of two. 9 figs.
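The one-way property the patent relies on can be sketched in a few lines: exponentiation modulo a prime is cheap to compute forward, while inverting it (the discrete logarithm) is hard for large moduli. The sketch below uses a toy generator and prime chosen for illustration, not values from the patent, and it does not reproduce the patent's power-of-two implementation trick.

```python
# Minimal sketch of a one-way residue transform by modular exponentiation:
# forward evaluation g^x mod p is fast (built-in three-argument pow), while
# recovering x from the result is the discrete-logarithm problem.
# The generator and prime below are hypothetical toy parameters.

def one_way(x, g=5, p=2147483647):   # p = 2^31 - 1, a Mersenne prime
    """Forward transform: easy to compute even for large x."""
    return pow(g, x, p)

y = one_way(123456)
print(y)   # easy; going back from y to 123456 is hard in general
```

Note the homomorphic property that makes such transforms useful in cryptographic protocols: `one_way(a + b)` equals `one_way(a) * one_way(b) mod p`.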
ERIC Educational Resources Information Center
Welsh, Richard; Hall, Michelle
2018-01-01
Context: Given the growing popularity of the portfolio management model (PMM) as a method of improving education, it is important to examine how these market-based reforms are sustained over time and how the politics of sustaining this model have substantial policy implications. Purpose of Study: The purpose of this article is to examine important…
Fluorinated diamond particles bonded in a filled fluorocarbon resin matrix
Taylor, Gene W.; Roybal, Herman E.
1985-01-01
A method of producing fluorinated diamond particles bonded in a filled fluorocarbon resin matrix. Simple hot pressing techniques permit the formation of such matrices from which diamond impregnated grinding tools and other articles of manufacture can be produced. Teflon fluorocarbon resins filled with Al2O3 yield grinding tools with substantially improved work-to-wear ratios over grinding wheels known in the art.
Numerical and Physical Aspects of Aerodynamic Flows
1992-01-15
accretion was also measured. A detailed description of the IRT can be found in reference 4. This test program also provided a new database for ... lift flows and to develop a validation database with practical geometries/conditions for emerging computational methods. These codes ... be substantially improved by their developers in the absence of a quality database at realistic conditions for a practical airfoil. The work reported
NASA Technical Reports Server (NTRS)
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model, which might include perturbing forces such as the gravitational effect from multiple third bodies and solar radiation pressure, is used, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
Simulated Annealing in the Variable Landscape
NASA Astrophysics Data System (ADS)
Hasegawa, Manabu; Kim, Chang Ju
An experimental analysis is conducted to test whether the appropriate introduction of the smoothness-temperature schedule enhances the optimizing ability of the MASSS method, the combination of the Metropolis algorithm (MA) and the search-space smoothing (SSS) method. The test is performed on two types of random traveling salesman problems. The results show that the optimization performance of the MA is substantially improved by a single smoothing alone and slightly more by a single smoothing with cooling and by a de-smoothing process with heating. The performance is compared to that of the parallel tempering method and a clear advantage of the idea of smoothing is observed depending on the problem.
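The smoothing idea described above can be illustrated with a toy version of the MASSS combination: run Metropolis annealing first on a smoothed copy of a rugged landscape, then on the original. The discrete 1D landscape, the moving-average smoother, and the schedule below are all illustrative stand-ins, not the paper's traveling-salesman setup.

```python
import math
import random

# Toy sketch of search-space smoothing + Metropolis: optimize on a smoothed
# version of a rugged 1D ring landscape first, then de-smooth and refine.
# Landscape, smoother, and temperatures are made-up illustrative choices.

def smooth(energy, passes=3):
    """Moving-average smoothing that flattens shallow local minima."""
    e = list(energy)
    for _ in range(passes):
        e = [(e[(i - 1) % len(e)] + e[i] + e[(i + 1) % len(e)]) / 3.0
             for i in range(len(e))]
    return e

def metropolis(energy, x, temp, steps, rng):
    """Metropolis random walk on a discrete ring landscape."""
    n = len(energy)
    for _ in range(steps):
        x_new = (x + rng.choice((-1, 1))) % n
        d = energy[x_new] - energy[x]
        if d <= 0 or rng.random() < math.exp(-d / temp):
            x = x_new          # accept downhill moves, uphill with prob e^(-d/T)
    return x

rng = random.Random(0)
rugged = [math.sin(0.3 * i) + 0.5 * math.sin(7.1 * i) for i in range(100)]
x = 0
x = metropolis(smooth(rugged), x, temp=0.5, steps=2000, rng=rng)  # smoothed phase
x = metropolis(rugged, x, temp=0.1, steps=2000, rng=rng)          # de-smoothed phase
print(x, rugged[x])
```

Because averaging can never dip below the original minimum, the smoothed phase preserves the coarse basin structure while removing fine-scale traps, which is the mechanism the abstract credits for the improved performance.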
Bradley, Elizabeth H; Brewster, Amanda L; McNatt, Zahirah; Linnander, Erika L; Cherlin, Emily; Fosburgh, Heather; Ting, Henry H; Curry, Leslie A
2018-01-01
Background Quality collaboratives are widely endorsed as a potentially effective method for translating and spreading best practices for acute myocardial infarction (AMI) care. Nevertheless, hospital success in improving performance through participation in collaboratives varies markedly. We sought to understand what distinguished hospitals that succeeded in shifting culture and reducing 30-day risk-standardised mortality rate (RSMR) after AMI through their participation in the Leadership Saves Lives (LSL) collaborative. Procedures We conducted a longitudinal, mixed methods intervention study of 10 hospitals over a 2-year period; data included surveys of 223 individuals (response rates 83%–94% depending on wave) and 393 in-depth interviews with clinical and management staff most engaged with the LSL intervention in the 10 hospitals. We measured change in culture and RSMR, and key aspects of working related to team membership, turnover, level of participation and approaches to conflict management. Main findings The six hospitals that experienced substantial culture change and greater reductions in RSMR demonstrated distinctions in: (1) effective inclusion of staff from different disciplines and levels in the organisational hierarchy in the team guiding improvement efforts (referred to as the ‘guiding coalition’ in each hospital); (2) authentic participation in the work of the guiding coalition; and (3) distinct patterns of managing conflict. Guiding coalition size and turnover were not associated with success (p values>0.05). In the six hospitals that experienced substantial positive culture change, staff indicated that the LSL learnings were already being applied to other improvement efforts. Principal conclusions Hospitals that were most successful in a national quality collaborative to shift hospital culture and reduce RSMR showed distinct patterns in membership diversity, authentic participation and capacity for conflict management. PMID:29101290
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caldwell, W.S.; Conner, J.M.
Studies in our laboratory revealed artifactual formation of N-nitrosamines during trapping of mainstream and sidestream tobacco smoke by the method of Hoffmann and coworkers. Both volatile and tobacco-specific N-nitrosamines were produced. This artifact formation took place on the Cambridge filter, which is part of the collection train used in the previously published procedure. When the filter was treated with ascorbic acid before smoke collection, artifact formation was inhibited. The improved method resulting from these studies was applied to a comparative analysis of N-nitrosamines in smoke from cigarettes that heat, but do not burn, tobacco (the test cigarette) and several reference cigarettes. Concentrations of volatile and tobacco-specific N-nitrosamines in both mainstream and sidestream smoke from the test cigarette were substantially lower than in the reference cigarettes.
Dama, James F; Rotskoff, Grant; Parrinello, Michele; Voth, Gregory A
2014-09-09
Well-tempered metadynamics has proven to be a practical and efficient adaptive enhanced sampling method for the computational study of biomolecular and materials systems. However, choosing its tunable parameter can be challenging and requires balancing a trade-off between fast escape from local metastable states and fast convergence of an overall free energy estimate. In this article, we present a new smoothly convergent variant of metadynamics, transition-tempered metadynamics, that removes that trade-off and is more robust to changes in its own single tunable parameter, resulting in substantial speed and accuracy improvements. The new method is specifically designed to study state-to-state transitions in which the states of greatest interest are known ahead of time, but transition mechanisms are not. The design is guided by a picture of adaptive enhanced sampling as a means to increase dynamical connectivity of a model's state space until percolation between all points of interest is reached, and it uses the degree of dynamical percolation to automatically tune the convergence rate. We apply the new method to Brownian dynamics on 48 random 1D surfaces, blocked alanine dipeptide in vacuo, and aqueous myoglobin, finding that transition-tempered metadynamics substantially and reproducibly improves upon well-tempered metadynamics in terms of first barrier crossing rate, convergence rate, and robustness to the choice of tuning parameter. Moreover, the trade-off between first barrier crossing rate and convergence rate is eliminated: the new method drives escape from an initial metastable state as fast as metadynamics without tempering, regardless of tuning.
Improving data retention in EEG research with children using child-centered eye tracking
Maguire, Mandy J.; Magnon, Grant; Fitzhugh, Anna E.
2014-01-01
Background Event Related Potentials (ERPs) elicited by visual stimuli have increased our understanding of developmental disorders and adult cognitive abilities for decades; however, these studies are very difficult with populations who cannot sustain visual attention such as infants and young children. Current methods for studying such populations include requiring a button response, which may be impossible for some participants, and experimenter monitoring, which is subject to error, highly variable, and spatially imprecise. New Method We developed a child-centered methodology to integrate EEG data acquisition and eye-tracking technologies that uses “attention-getters” in which stimulus display is contingent upon the child’s gaze. The goal was to increase the number of trials retained. Additionally, we used the eye-tracker to categorize and analyze the EEG data based on gaze to specific areas of the visual display, compared to analyzing based on stimulus presentation. Results Compared with Existing Methods The number of trials retained was substantially improved using the child-centered methodology compared to a button-press response in 7–8 year olds. In contrast, analyzing the EEG based on eye gaze to specific points within the visual display as opposed to stimulus presentation provided too few trials for reliable interpretation. Conclusions By using the linked EEG-eye-tracker we significantly increased data retention. With this method, studies can be completed with fewer participants and a wider range of populations. However, caution should be used when epoching based on participants’ eye gaze because, in this case, this technique provided substantially fewer trials. PMID:25251555
Pilic, Denisa; Höfs, Carolin; Weitmann, Sandra; Nöh, Frank; Fröhlich, Thorsten; Skopnik, Heino; Köhler, Henrik; Wenzl, Tobias G; Schmidt-Choudhury, Anjona
2011-09-01
Assessment of intra- and interobserver agreement in multiple intraluminal impedance (MII) measurement between investigators from different institutions. Twenty-four 18- to 24-hour MII tracings were randomly chosen from 4 different institutions (6 per center). Software-aided automatic analysis was performed. Each result was validated by 2 independent investigators from the 4 different centers (4 investigator combinations). For intraobserver agreement, 6 measurements were analyzed twice by the same investigator. Agreement between investigators was calculated using the Cohen kappa coefficient. Interobserver agreement: 13 measurements showed perfect agreement (kappa > 0.8); 9 showed substantial (kappa 0.61-0.8), 1 moderate (kappa 0.41-0.6), and 1 fair agreement (kappa 0.11-0.4). The median kappa value was 0.83. Intraobserver agreement: 5 tracings showed perfect and 1 showed substantial agreement. The median kappa value was 0.88. Most measurements showed substantial to perfect intra- and interobserver agreement. Still, we found a few outliers, presumably caused by poorer signal quality in some tracings rather than being observer dependent. An improvement of analysis results may be achieved by using a standard analysis protocol, a standardized method for judging tracing quality, better training options for method users, and more interaction between investigators from different institutions.
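Cohen's kappa, the agreement statistic used throughout the abstract above, can be computed directly from paired ratings. A minimal sketch with hypothetical two-investigator classifications (the actual MII tracing data are not reproduced here):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement under independence of the two raters
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical episode classifications by two independent investigators
a = ["acid", "acid", "nonacid", "acid", "nonacid", "acid", "acid", "nonacid"]
b = ["acid", "acid", "nonacid", "nonacid", "nonacid", "acid", "acid", "acid"]
kappa = cohens_kappa(a, b)
print(round(kappa, 3))
```

On these made-up ratings the observed agreement is 6/8 but kappa is only about 0.47 ("moderate" on the scale quoted above), illustrating how kappa discounts agreement expected by chance.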
Cheng, Dunlei; Branscum, Adam J; Stamey, James D
2010-07-01
To quantify the impact of ignoring misclassification of a response variable and measurement error in a covariate on statistical power, and to develop software for sample size and power analysis that accounts for these flaws in epidemiologic data. A Monte Carlo simulation-based procedure is developed to illustrate the differences in design requirements and inferences between analytic methods that properly account for misclassification and measurement error to those that do not in regression models for cross-sectional and cohort data. We found that failure to account for these flaws in epidemiologic data can lead to a substantial reduction in statistical power, over 25% in some cases. The proposed method substantially reduced bias by up to a ten-fold margin compared to naive estimates obtained by ignoring misclassification and mismeasurement. We recommend as routine practice that researchers account for errors in measurement of both response and covariate data when determining sample size, performing power calculations, or analyzing data from epidemiological studies. 2010 Elsevier Inc. All rights reserved.
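The Monte Carlo idea can be sketched in a few lines: simulate a two-group study in which the binary outcome is observed through an imperfect classifier, and estimate power with and without misclassification. The event rates, sample size, sensitivity, and specificity below are illustrative assumptions, not the paper's settings:

```python
import math
import random

def simulate_power(n, p0, p1, sens, spec, sims=1000, seed=1):
    """Monte Carlo power of a two-sided two-proportion z-test when the
    binary outcome is observed through an imperfect classifier."""
    rng = random.Random(seed)
    z_crit = 1.959964  # two-sided 5% level
    rejections = 0
    for _ in range(sims):
        def observed_count(p):
            count = 0
            for _ in range(n):
                y = rng.random() < p  # true outcome status
                # Misclassification: sensitivity for cases, specificity for non-cases
                count += (rng.random() < sens) if y else (rng.random() >= spec)
            return count
        x0, x1 = observed_count(p0), observed_count(p1)
        p_pool = (x0 + x1) / (2 * n)
        se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
        if se > 0 and abs(x1 - x0) / n / se > z_crit:
            rejections += 1
    return rejections / sims

# Illustrative settings: 10% vs 20% event rates, 200 subjects per group
ideal = simulate_power(200, 0.10, 0.20, sens=1.0, spec=1.0)
noisy = simulate_power(200, 0.10, 0.20, sens=0.8, spec=0.9)
print(ideal, noisy)
```

With these assumed settings the nominal power of roughly 0.8 drops to roughly 0.4 once the misclassification is taken into account, consistent with the paper's point that ignoring such flaws can substantially overstate the power of a planned study.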
Correcting for deformation in skin-based marker systems.
Alexander, E J; Andriacchi, T P
2001-03-01
A new technique is described that reduces error due to skin movement artifact in the opto-electronic measurement of in vivo skeletal motion. This work builds on a previously described point cluster technique marker set and estimation algorithm by extending the transformation equations to the general deformation case using a set of activity-dependent deformation models. Skin deformation during activities of daily living is modeled as consisting of a functional form defined over the observation interval (the deformation model) plus additive noise (modeling error). The method is described as an interval deformation technique. The method was tested using simulation trials with systematic and random components of deformation error introduced into marker position vectors. The technique was found to substantially outperform methods that require rigid-body assumptions. The method was tested in vivo on a patient fitted with an external fixation device (Ilizarov). Simultaneous measurements from markers placed on the Ilizarov device (fixed to bone) were compared to measurements derived from skin-based markers. The interval deformation technique reduced the errors in limb segment pose estimates by 33% and 25% compared to the classic rigid-body technique for position and orientation, respectively. This newly developed method has demonstrated that by accounting for the changing shape of the limb segment, a substantial improvement in the estimates of in vivo skeletal movement can be achieved.
van 't Klooster, Ronald; de Koning, Patrick J H; Dehnavi, Reza Alizadeh; Tamsma, Jouke T; de Roos, Albert; Reiber, Johan H C; van der Geest, Rob J
2012-01-01
To develop and validate an automated segmentation technique for the detection of the lumen and outer wall boundaries in MR vessel wall studies of the common carotid artery. A new segmentation method was developed using a three-dimensional (3D) deformable vessel model requiring only one single user interaction by combining 3D MR angiography (MRA) and 2D vessel wall images. This vessel model is a 3D cylindrical Non-Uniform Rational B-Spline (NURBS) surface which can be deformed to fit the underlying image data. Image data of 45 subjects was used to validate the method by comparing manual and automatic segmentations. Vessel wall thickness and volume measurements obtained by both methods were compared. Substantial agreement was observed between manual and automatic segmentation; over 85% of the vessel wall contours were segmented successfully. The interclass correlation was 0.690 for the vessel wall thickness and 0.793 for the vessel wall volume. Compared with manual image analysis, the automated method demonstrated improved interobserver agreement and inter-scan reproducibility. Additionally, the proposed automated image analysis approach was substantially faster. This new automated method can reduce analysis time and enhance reproducibility of the quantification of vessel wall dimensions in clinical studies. Copyright © 2011 Wiley Periodicals, Inc.
Cheung, C Y Maurice; Williams, Thomas C R; Poolman, Mark G; Fell, David A; Ratcliffe, R George; Sweetlove, Lee J
2013-09-01
Flux balance models of metabolism generally utilize synthesis of biomass as the main determinant of intracellular fluxes. However, the biomass constraint alone is not sufficient to predict realistic fluxes in central heterotrophic metabolism of plant cells because of the major demand on the energy budget due to transport costs and cell maintenance. This major limitation can be addressed by incorporating transport steps into the metabolic model and by implementing a procedure that uses Pareto optimality analysis to explore the trade-off between ATP and NADPH production for maintenance. This leads to a method for predicting cell maintenance costs on the basis of the measured flux ratio between the oxidative steps of the oxidative pentose phosphate pathway and glycolysis. We show that accounting for transport and maintenance costs substantially improves the accuracy of fluxes predicted from a flux balance model of heterotrophic Arabidopsis cells in culture, irrespective of the objective function used in the analysis. Moreover, when the new method was applied to cells under control, elevated temperature and hyper-osmotic conditions, only elevated temperature led to a substantial increase in cell maintenance costs. It is concluded that the hyper-osmotic conditions tested did not impose a metabolic stress, in as much as the metabolic network is not forced to devote more resources to cell maintenance. © 2013 The Authors The Plant Journal © 2013 John Wiley & Sons Ltd.
Data envelopment analysis in service quality evaluation: an empirical study
NASA Astrophysics Data System (ADS)
Najafi, Seyedvahid; Saati, Saber; Tavana, Madjid
2015-09-01
Service quality is often conceptualized as the comparison between service expectations and the actual performance perceptions. It enhances customer satisfaction, decreases customer defection, and promotes customer loyalty. Substantial literature has examined the concept of service quality, its dimensions, and measurement methods. We introduce the perceived service quality index (PSQI) as a single measure for evaluating the multiple-item service quality construct based on the SERVQUAL model. A slack-based measure (SBM) of efficiency with constant inputs is used to calculate the PSQI. In addition, a non-linear programming model based on the SBM is proposed to delineate an improvement guideline and improve service quality. An empirical study is conducted to assess the applicability of the method proposed in this study. A large number of studies have used DEA as a benchmarking tool to measure service quality. These models do not propose a coherent performance evaluation construct and consequently fail to deliver improvement guidelines for improving service quality. The DEA models proposed in this study are designed to evaluate and improve service quality within a comprehensive framework and without any dependency on external data.
Protein homology model refinement by large-scale energy optimization.
Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David
2018-03-20
Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.
NASA Astrophysics Data System (ADS)
Takagi, Hiroshi; Wu, Wenjie
2016-03-01
Even though the maximum wind radius (R
Optimization of OT-MACH Filter Generation for Target Recognition
NASA Technical Reports Server (NTRS)
Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak-to-side-lobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter more quickly and reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, gamma values. This corresponded to a substantial improvement in detection performance, where the true positive rate increased for the same average false positives per image.
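The OT-MACH filter itself is not reproduced here, but the adaptive-step gradient descent loop can be sketched on a stand-in performance score: the step size grows after an improving move and shrinks after an overshoot. The composite score and its peak location below are hypothetical placeholders for the correlation-peak-height/side-lobe feedback:

```python
def adaptive_gradient_ascent(score, params, step=0.1, iters=200, eps=1e-4):
    """Adaptive-step gradient ascent on a scalar performance score.
    Mimics the automated tuning loop: accelerate while improving,
    back off after a worsening move."""
    params = list(params)
    best = score(params)
    for _ in range(iters):
        # Finite-difference gradient of the composite performance measure
        grad = []
        for i in range(len(params)):
            bumped = params[:]
            bumped[i] += eps
            grad.append((score(bumped) - best) / eps)
        trial = [p + step * g for p, g in zip(params, grad)]
        trial_score = score(trial)
        if trial_score > best:
            params, best = trial, trial_score
            step *= 1.2  # accelerate while improving
        else:
            step *= 0.5  # back off after overshoot
    return params, best

# Hypothetical composite measure peaking at alpha=0.6, beta=0.3, gamma=0.1
target = (0.6, 0.3, 0.1)
score = lambda p: -sum((x - t) ** 2 for x, t in zip(p, target))
p_opt, s_opt = adaptive_gradient_ascent(score, [0.0, 0.0, 0.0])
print([round(x, 2) for x in p_opt])
```

Under these assumptions the loop recovers the three parameters of the toy optimum; for the real filter the score would instead be evaluated by running the generated filter against training imagery.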
Gottschlich, Carsten; Schuhmacher, Dominic
2014-01-01
Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method.
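The Shortlist Method and the revised simplex algorithm are beyond a short sketch, but the special case of one-dimensional histograms illustrates what the Earth Mover's Distance computes: in 1D the transportation problem has a closed form, the L1 distance between the cumulative distributions.

```python
from itertools import accumulate

def emd_1d(p, q, bin_width=1.0):
    """Earth Mover's Distance between two 1D histograms of equal mass.
    In one dimension the optimal transport cost equals the L1 distance
    between the cumulative distributions of the two histograms."""
    assert abs(sum(p) - sum(q)) < 1e-9, "histograms must have equal mass"
    cp = list(accumulate(p))
    cq = list(accumulate(q))
    return bin_width * sum(abs(a - b) for a, b in zip(cp, cq))

# Moving all mass one bin to the right costs exactly one unit of work
print(emd_1d([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # → 1.0
```

For general (non-1D) cost matrices no such closed form exists, which is why simplex-type solvers, and accelerations such as the Shortlist Method, are needed.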
Towards a Probabilistic Preliminary Design Criterion for Buckling Critical Composite Shells
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Hilburger, Mark W.
2003-01-01
A probability-based analysis method for predicting buckling loads of compression-loaded laminated-composite shells is presented, and its potential as a basis for a new shell-stability design criterion is demonstrated and discussed. In particular, a database containing information about specimen geometry, material properties, and measured initial geometric imperfections for a selected group of laminated-composite cylindrical shells is used to calculate new buckling-load "knockdown factors". These knockdown factors are shown to be substantially improved, and hence much less conservative than the corresponding deterministic knockdown factors that are presently used by industry. The probability integral associated with the analysis is evaluated by using two methods; that is, by using the exact Monte Carlo method and by using an approximate First-Order Second- Moment method. A comparison of the results from these two methods indicates that the First-Order Second-Moment method yields results that are conservative for the shells considered. Furthermore, the results show that the improved, reliability-based knockdown factor presented always yields a safe estimate of the buckling load for the shells examined.
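The contrast between the exact Monte Carlo method and the First-Order Second-Moment approximation can be illustrated on a toy knockdown model. The shell database and buckling analysis are not reproduced; the linear load model and imperfection statistics below are assumptions, under which the two estimates should nearly coincide (the paper found FOSM conservative for real, nonlinear shell responses):

```python
import random
import statistics

def buckling_load(imperfection, p_classical=1.0, sensitivity=0.5):
    """Toy knockdown model: load falls linearly with imperfection amplitude."""
    return p_classical * (1.0 - sensitivity * imperfection)

rng = random.Random(42)
# Hypothetical measured imperfection amplitudes (mean 0.4, sd 0.1)
loads = [buckling_load(rng.gauss(0.4, 0.1)) for _ in range(100_000)]

# Exact Monte Carlo: 1st-percentile buckling load (99% reliability)
mc_knockdown = sorted(loads)[1000]

# First-Order Second-Moment: normal approximation from mean and std dev
mu, sd = statistics.fmean(loads), statistics.stdev(loads)
fosm_knockdown = mu - 2.326 * sd

print(round(mc_knockdown, 3), round(fosm_knockdown, 3))
```

Because the toy model is linear in a Gaussian variable, the two knockdowns agree closely here; nonlinear load-imperfection relationships are what drive the two methods apart in practice.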
Deep learning and texture-based semantic label fusion for brain tumor segmentation
NASA Astrophysics Data System (ADS)
Vidyaratne, L.; Alam, M.; Shboul, Z.; Iftekharuddin, K. M.
2018-02-01
Brain tumor segmentation is a fundamental step in surgical treatment and therapy. Many hand-crafted and learning based methods have been proposed for automatic brain tumor segmentation from MRI. Studies have shown that these approaches have their inherent advantages and limitations. This work proposes a semantic label fusion algorithm by combining two representative state-of-the-art segmentation algorithms: texture based hand-crafted, and deep learning based methods to obtain robust tumor segmentation. We evaluate the proposed method using publicly available BRATS 2017 brain tumor segmentation challenge dataset. The results show that the proposed method offers improved segmentation by alleviating inherent weaknesses: extensive false positives in texture based method, and the false tumor tissue classification problem in deep learning method, respectively. Furthermore, we investigate the effect of patient's gender on the segmentation performance using a subset of validation dataset. Note the substantial improvement in brain tumor segmentation performance proposed in this work has recently enabled us to secure the first place by our group in overall patient survival prediction task at the BRATS 2017 challenge.
Interactions between Flight Dynamics and Propulsion Systems of Air-Breathing Hypersonic Vehicles
2013-01-01
Key engine components: the inlet, coupled with the combustor; the combustor, supporting subsonic or supersonic combustion; and the nozzle, which expands the flow for high thrust and may provide lift. A single supersonic solution method is used for both the inlet and nozzle components. The supersonic model SAMURI is a substantial improvement over previous models for purely supersonic inviscid flow; as a result, the model is also appropriate for other applications, including the nozzle.
Austin, Peter C; Lee, Douglas S; Steyerberg, Ewout W; Tu, Jack V
2012-01-01
In biomedical research, the logistic regression model is the most commonly used method for predicting the probability of a binary outcome. While many clinical researchers have expressed an enthusiasm for regression trees, this method may have limited accuracy for predicting health outcomes. We aimed to evaluate the improvement that is achieved by using ensemble-based methods, including bootstrap aggregation (bagging) of regression trees, random forests, and boosted regression trees. We analyzed 30-day mortality in two large cohorts of patients hospitalized with either acute myocardial infarction (N = 16,230) or congestive heart failure (N = 15,848) in two distinct eras (1999–2001 and 2004–2005). We found that both the in-sample and out-of-sample prediction of ensemble methods offered substantial improvement in predicting cardiovascular mortality compared to conventional regression trees. However, conventional logistic regression models that incorporated restricted cubic smoothing splines had even better performance. We conclude that ensemble methods from the data mining and machine learning literature increase the predictive performance of regression trees, but may not lead to clear advantages over conventional logistic regression models for predicting short-term mortality in population-based samples of subjects with cardiovascular disease. PMID:22777999
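Bootstrap aggregation, the first of the ensemble methods evaluated above, is easy to sketch. The toy below bags high-variance 1-nearest-neighbour regressors on synthetic data to show the variance reduction that underlies ensemble methods; it is not the paper's clinical model, and all data here are made up:

```python
import math
import random

def knn1_predict(train, x):
    """1-nearest-neighbour regression: copy the label of the closest point."""
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

def bagged_predict(train, x, rng, n_bags=50):
    """Bootstrap aggregation (bagging): average the predictions of base
    learners fitted to bootstrap resamples of the training set."""
    total = 0.0
    for _ in range(n_bags):
        boot = [rng.choice(train) for _ in train]
        total += knn1_predict(boot, x)
    return total / n_bags

rng = random.Random(0)
f = lambda x: math.sin(3 * x)

# Noisy synthetic training data; the noise is what bagging averages away
train = []
for _ in range(60):
    x = rng.random()
    train.append((x, f(x) + rng.gauss(0, 0.3)))

test_xs = [i / 200 for i in range(200)]
mse_single = sum((knn1_predict(train, x) - f(x)) ** 2 for x in test_xs) / 200
mse_bagged = sum((bagged_predict(train, x, rng) - f(x)) ** 2 for x in test_xs) / 200
print(round(mse_single, 3), round(mse_bagged, 3))
```

The bagged ensemble attains a lower test error than the single base learner, the same mechanism by which bagged regression trees outperform a single tree in the study above.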
Method for detecting trace impurities in gases
Freund, Samuel M.; Maier, II, William B.; Holland, Redus F.; Beattie, Willard H.
1981-01-01
A technique for considerably improving the sensitivity and specificity of infrared spectrometry as applied to quantitative determination of trace impurities in various carrier or solvent gases is presented. A gas to be examined for impurities is liquefied and infrared absorption spectra of the liquid are obtained. Spectral simplification and number densities of impurities in the optical path are substantially higher than are obtainable in similar gas-phase analyses. Carbon dioxide impurity (~2 ppm) present in commercial Xe and ppm levels of Freon 12 and vinyl chloride added to liquefied air are used to illustrate the method.
Kazantzis, Nikolaos; Brownfield, Nicole R; Mosely, Livia; Usatoff, Alexsandra S; Flighty, Andrew J
2017-12-01
Treatment adherence has posed a substantial challenge not only for patients but also for the health profession for many decades. The last 5 years has witnessed significant attention toward adherence with cognitive behavioral therapy (CBT) homework for anxiety and depressive disorders, and adherence assessment methods have diversified. However, there remains a large component of the adherence process not assessed in CBT, with patient effort, engagement, and the known role for treatment appraisals and beliefs necessitating the pursuit of improved adherence assessment methods. Copyright © 2017 Elsevier Inc. All rights reserved.
New Method for Electrical Conductivity Temperature Compensation
McCleskey, R. Blaine
2013-01-01
Electrical conductivity (κ) measurements of natural waters are typically referenced to 25 °C (κ25) using standard temperature compensation factors (α). For acidic waters (pH < 4), this can result in a large κ25 error (δκ25). The more the sample temperature departs from 25 °C, the larger the potential δκ25. For pH < 4, the hydrogen ion transport number becomes substantial and its mode of transport is different from most other ions resulting in a different α. A new method for determining α as a function of pH and temperature is presented. Samples with varying amounts of H2SO4 and NaCl were used to develop the new α, which was then applied to 65 natural water samples including acid mine waters, geothermal waters, seawater, and stream waters. For each sample, the κ and pH were measured at several temperatures from 5 to 90 °C and κ25 was calculated. The δκ25 ranged from −11 to 9% for the new method as compared to −42 to 25% and −53 to 27% for the constant α (0.019) and ISO-7888 methods, respectively. The new method for determining α is a substantial improvement for acidic waters and performs as well as or better than the standard methods for circumneutral waters.
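The paper's pH- and temperature-dependent α is not reproduced here, but the standard constant-α compensation that it improves upon is a one-line formula: κ25 = κT / (1 + α(T − 25)). A minimal sketch with an illustrative reading:

```python
def kappa_25(kappa_t, temp_c, alpha=0.019):
    """Reference a conductivity reading to 25 °C with the standard
    linear temperature-compensation factor alpha (fraction per °C)."""
    return kappa_t / (1.0 + alpha * (temp_c - 25.0))

# A hypothetical reading of 500 uS/cm at 10 °C, referenced to 25 °C
print(round(kappa_25(500.0, 10.0), 1))
```

The paper's contribution is to replace the constant alpha with one that varies with pH and temperature, since for acidic waters the hydrogen ion's distinctive transport makes a single constant a poor fit.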
Cagliani, Alberto; Østerberg, Frederik W; Hansen, Ole; Shiv, Lior; Nielsen, Peter F; Petersen, Dirch H
2017-09-01
We present a breakthrough in micro-four-point probe (M4PP) metrology to substantially improve precision of transmission line (transfer length) type measurements by application of advanced electrode position correction. In particular, we demonstrate this methodology for the M4PP current-in-plane tunneling (CIPT) technique. The CIPT method has been a crucial tool in the development of magnetic tunnel junction (MTJ) stacks suitable for magnetic random-access memories for more than a decade. On two MTJ stacks, the measurement precision of resistance-area product and tunneling magnetoresistance was improved by up to a factor of 3.5 and the measurement reproducibility by up to a factor of 17, thanks to our improved position correction technique.
Schmittdiel, Julie A.; Desai, Jay; Schroeder, Emily B.; Paolino, Andrea R.; Nichols, Gregory A.; Lawrence, Jean M.; O’Connor, Patrick J.; Ohnsorg, Kris A.; Newton, Katherine M.; Steiner, John F.
2016-01-01
ABSTRACT/Implementation Lessons: Engaging stakeholders in the research process has the potential to improve quality of care and the patient care experience. Online patient community surveys can elicit important topic areas for comparative effectiveness research. Stakeholder meetings with substantial patient representation, as well as representation from health care delivery systems and research funding agencies, are a valuable tool for selecting and refining pilot research and quality improvement projects. Giving patient stakeholders a deciding vote in selecting pilot research topics helps ensure their ‘voice’ is heard. Researchers and health care leaders should continue to develop best practices and strategies for increasing patient involvement in comparative effectiveness and delivery science research. PMID:26179728
Efficient alignment-free DNA barcode analytics.
Kuksa, Pavel; Pavlovic, Vladimir
2009-11-10
In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility for accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. New alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
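The spectrum representation is straightforward to sketch: count fixed-length fragments (k-mers) and compare the resulting vectors, for example by cosine similarity. The short sequences below are made up for illustration and are not real barcode loci:

```python
from collections import Counter
from math import sqrt

def spectrum(seq, k=3):
    """Fixed-length, alignment-free representation: k-mer count vector."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_similarity(s1, s2):
    """Cosine similarity between two sparse k-mer count vectors."""
    dot = sum(count * s2[mer] for mer, count in s1.items())
    norm1 = sqrt(sum(v * v for v in s1.values()))
    norm2 = sqrt(sum(v * v for v in s2.values()))
    return dot / (norm1 * norm2)

a = "ATGCGATACGATTACA"
b = "ATGCGATACGATTACG"  # near-identical "barcode" (one substitution)
c = "GGGTTTCCCAAAGGGT"  # unrelated sequence
sa, sb, sc = spectrum(a), spectrum(b), spectrum(c)
print(round(cosine_similarity(sa, sb), 2), round(cosine_similarity(sa, sc), 2))
```

No alignment is ever computed: a single substitution perturbs only the k overlapping k-mers, so related sequences stay close in spectrum space while unrelated ones share few or no fragments.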
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-01
... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-69450; File No. SR-NASDAQ-2013-031] Self... Member Organization To Attest That ``Substantially All'' Orders Submitted to the Retail Price Improvement... ``substantially all,'' rather than all, orders submitted to the Retail Price Improvement Program qualify as...
Efficient ICCG on a shared memory multiprocessor
NASA Technical Reports Server (NTRS)
Hammond, Steven W.; Schreiber, Robert
1989-01-01
Different approaches are discussed for exploiting parallelism in the ICCG (Incomplete Cholesky Conjugate Gradient) method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that ignoring symmetry and storing the whole matrix can reduce solution time substantially.
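The static dependence analysis mentioned above is commonly realized as level scheduling: rows of the triangular factor are grouped into levels such that every row depends only on rows in earlier levels, so all rows within one level can be solved concurrently. A minimal sketch (Python standing in for the Sequent implementation; the row-dependency input format is an assumption):

```python
def level_schedule(rows):
    """rows[i] = column indices j < i where L[i][j] != 0 (off-diagonal deps).
    Each row gets level 1 + max(level of its dependencies); rows sharing a
    level are mutually independent and can be solved in parallel."""
    level = {}
    for i in range(len(rows)):
        level[i] = 1 + max((level[j] for j in rows[i]), default=0)
    levels = {}
    for i, lv in level.items():
        levels.setdefault(lv, []).append(i)
    return [levels[lv] for lv in sorted(levels)]

# toy lower-triangular pattern: row 0 has no deps, rows 1 and 2 depend
# only on row 0, row 3 depends on rows 1 and 2
sched = level_schedule([[], [0], [0], [1, 2]])
```

The number of levels bounds the critical path of the parallel triangular solve; fewer, wider levels mean better parallel efficiency.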
Reducing respiratory motion artifacts in positron emission tomography through retrospective stacking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thorndyke, Brian; Schreibmann, Eduard; Koong, Albert
Respiratory motion artifacts in positron emission tomography (PET) imaging can alter lesion intensity profiles, and result in substantially reduced activity and contrast-to-noise ratios (CNRs). We propose a corrective algorithm, coined 'retrospective stacking' (RS), to restore image quality without requiring additional scan time. Retrospective stacking uses b-spline deformable image registration to combine amplitude-binned PET data along the entire respiratory cycle into a single respiratory end point. We applied the method to a phantom model consisting of a small, hot vial oscillating within a warm background, as well as to {sup 18}FDG-PET images of a pancreatic and a liver patient. Comparisons were made using cross-section visualizations, activity profiles, and CNRs within the region of interest. Retrospective stacking was found to properly restore the lesion location and intensity profile in all cases. In addition, RS provided CNR improvements up to three-fold over gated images, and up to five-fold over ungated data. These phantom and patient studies demonstrate that RS can correct for lesion motion and deformation, while substantially improving tumor visibility and background noise.
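The amplitude-binning step underlying retrospective stacking can be sketched as follows: each sample of the respiratory trace is assigned to one of several equal-width amplitude bins, and the PET data acquired in each bin are later reconstructed and registered to a common respiratory end point. The bin count and the equal-width scheme below are illustrative assumptions.

```python
import numpy as np

def amplitude_bins(trace, n_bins):
    """Assign each respiratory-trace sample to one of n_bins equal-width
    amplitude bins spanning the trace's minimum to maximum."""
    edges = np.linspace(trace.min(), trace.max(), n_bins + 1)
    # digitize returns 1..n_bins+1; shift to 0-based and fold the top edge in
    return np.clip(np.digitize(trace, edges) - 1, 0, n_bins - 1)

# toy respiratory amplitude trace over one breathing cycle
trace = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
bins = amplitude_bins(trace, 2)
```

Amplitude binning (rather than phase binning) groups data by actual displacement, which is what makes the subsequent deformable registration to a single end point well posed.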
Negreanu, L; Preda, C M; Ionescu, D; Ferechide, D
2015-01-01
Background. A substantial advance in digestive endoscopy that has been made during the last decade is represented by digital chromoendoscopy, which was developed as a quicker and sometimes better alternative to the gold standard of dye spraying. Fujifilm developed a virtual coloration technique called Flexible spectral Imaging Color Enhancement (FICE). FICE provides a better detection of lesions of "minimal" esophagitis, of dysplasia in Barrett's esophagus and of squamous cell esophageal cancer. The use of FICE resulted in an improvement in the visualization of the early gastric cancer, being less invasive, and time consuming than the classic dye methods. Current evidence does not support FICE for screening purposes in colon cancer but it definitely improves characterization of colonic lesions. Its use in inflammatory bowel disease is still controversial and in video capsule endoscopy is considered a substantial progress. Conclusions. The use of FICE endoscopy in routine clinical practice can increase the diagnostic yield and can provide a better characterization of lesions. Future studies to validate its use, the good choice of channels, and the "perfect indications" and to provide common definitions and classifications are necessary.
Crosson, Jesse C.; Stroebel, Christine; Scott, John G.; Stello, Brian; Crabtree, Benjamin F.
2005-01-01
PURPOSE Electronic medical record (EMR) systems offer substantial opportunities to organize and manage clinical data in ways that can potentially improve preventive health care, the management of chronic illness, and the financial health of primary care practices. The functionality of EMRs as implemented, however, can vary substantially from that envisaged by their designers and even from those who purchase the programs. The purpose of this study was to explore how unique aspects of a family medicine office culture affect the initial implementation of an EMR. METHODS As part of a larger study, we conducted a qualitative case study of a private family medicine practice that had recently purchased and implemented an EMR. We collected data using participant observation, in-depth interviews, and key informant interviews. After the initial data collection, we shared our observations with practice members and returned 1 year later to collect additional data. RESULTS Dysfunctional communication patterns, the distribution of formal and informal decision-making power, and internal conflicts limited the effective implementation and use of the EMR. The implementation and use of the EMR made tracking and monitoring of preventive health and chronic illness unwieldy and offered little or no improvement when compared with paper charts. CONCLUSIONS Implementing an EMR without an understanding of the systemic effects and communication and the decision-making processes within an office practice and without methods for bringing to the surface and addressing conflicts limits the opportunities for improved care offered by EMRs. Understanding how these common issues manifest within unique practice settings can enhance the effective implementation and use of EMRs. PMID:16046562
Code of Federal Regulations, 2014 CFR
2014-04-01
... from gutting and extensive reconstruction to the cure of substantial accumulation of deferred maintenance. Cosmetic improvements alone do not qualify as substantial rehabilitation under this definition...
Code of Federal Regulations, 2011 CFR
2011-04-01
... from gutting and extensive reconstruction to the cure of substantial accumulation of deferred maintenance. Cosmetic improvements alone do not qualify as substantial rehabilitation under this definition...
Code of Federal Regulations, 2012 CFR
2012-04-01
... from gutting and extensive reconstruction to the cure of substantial accumulation of deferred maintenance. Cosmetic improvements alone do not qualify as substantial rehabilitation under this definition...
Code of Federal Regulations, 2013 CFR
2013-04-01
... from gutting and extensive reconstruction to the cure of substantial accumulation of deferred maintenance. Cosmetic improvements alone do not qualify as substantial rehabilitation under this definition...
Reducing health risk assigned to organic emissions from a chemical weapons incinerator.
Laman, David M; Weiler, B Douglas; Skeen, Rodney S
2013-03-01
Organic emissions from a chemical weapons incinerator have been characterized with an improved set of analytical methods to reduce the human health risk assigned to operations of the facility. A gas chromatography/mass selective detection method with substantially reduced detection limits has been used in conjunction with scanning electron microscopy/energy dispersive X-ray spectrometry and Fourier transform infrared microscopy to improve the speciation of semi-volatile and non-volatile organics emitted from the incinerator. The reduced detection limits have allowed a significant reduction in the assumed polycyclic aromatic hydrocarbon (PAH) and aminobiphenyl (ABP) emission rates used as inputs to the human health risk assessment for the incinerator. A mean 17-fold decrease in assigned human health risk is realized for six common local exposure scenarios as a result of the reduced PAH and ABP detection limits.
Scale-adaptive compressive tracking with feature integration
NASA Astrophysics Data System (ADS)
Liu, Wei; Li, Jicheng; Chen, Xiao; Li, Shuxin
2016-05-01
Numerous tracking-by-detection methods have been proposed for robust visual tracking, among which compressive tracking (CT) has obtained some promising results. A scale-adaptive CT method based on multifeature integration is presented to improve the robustness and accuracy of CT. We introduce a keypoint-based model to achieve accurate scale estimation, which additionally gives a prior location of the target. Furthermore, exploiting the high efficiency of a data-independent random projection matrix, multiple features are integrated into an effective appearance model to construct the naïve Bayes classifier. Finally, an adaptive update scheme is proposed to update the classifier conservatively. Experiments on various challenging sequences demonstrate substantial improvements by our proposed tracker over CT and other state-of-the-art trackers in terms of dealing with scale variation, abrupt motion, deformation, and illumination changes.
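The data-independent random projection at the heart of compressive tracking can be sketched with a very sparse projection matrix (the Achlioptas/Li construction is used here as a stand-in; the dimensions and sparsity parameter are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_projection(n_low, n_high, s=3):
    """Very sparse random projection matrix: entries are +sqrt(s) or -sqrt(s)
    with probability 1/(2s) each, and 0 otherwise, so most entries vanish and
    the projection is cheap to apply."""
    probs = [1 / (2 * s), 1 - 1 / s, 1 / (2 * s)]
    return rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                      size=(n_low, n_high), p=probs)

R = sparse_projection(10, 500)       # fixed once, independent of the data
x = rng.normal(size=500)             # high-dimensional appearance feature
v = R @ x                            # compressed feature fed to the classifier
```

Because R is fixed and mostly zero, the compressed feature can be updated per frame at very low cost, which is what makes the naïve Bayes classifier over v practical for real-time tracking.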
A Support Vector Machine-Based Gender Identification Using Speech Signal
NASA Astrophysics Data System (ADS)
Lee, Kye-Hwan; Kang, Sang-Ick; Kim, Deok-Hwan; Chang, Joon-Hyuk
We propose an effective voice-based gender identification method using a support vector machine (SVM). The SVM is a binary classification algorithm that separates two groups by finding a nonlinear boundary in a feature space and is known to yield high classification performance. In the present work, we compare the identification performance of the SVM with that of a Gaussian mixture model (GMM)-based method using mel frequency cepstral coefficients (MFCCs). A novel approach incorporating a feature fusion scheme based on a combination of the MFCCs and the fundamental frequency is proposed with the aim of improving the performance of gender identification. Experimental results demonstrate that the gender identification performance using the SVM is significantly better than that of the GMM-based scheme. Moreover, the performance is substantially improved when the proposed feature fusion technique is applied.
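The feature fusion step can be sketched as concatenating utterance-level MFCC statistics with fundamental-frequency statistics into a single fixed-length vector for the classifier. The particular statistics chosen (mean/std) and the toy inputs below are illustrative assumptions, not the paper's exact front end.

```python
import numpy as np

def fuse_features(mfcc_frames, f0_track):
    """Fuse frame-level MFCCs with F0 statistics into one fixed-length
    vector (the input to the SVM in the paper's scheme)."""
    mfcc_stats = np.concatenate([mfcc_frames.mean(axis=0),
                                 mfcc_frames.std(axis=0)])
    voiced = f0_track[f0_track > 0]          # ignore unvoiced frames (F0 = 0)
    f0_stats = np.array([voiced.mean(), voiced.std()])
    return np.concatenate([mfcc_stats, f0_stats])

rng = np.random.default_rng(1)
mfcc = rng.normal(size=(40, 13))             # 40 frames x 13 MFCCs (toy values)
f0 = np.abs(rng.normal(180, 20, size=40))    # toy F0 track in Hz
vec = fuse_features(mfcc, f0)                # 13 + 13 + 2 = 28 dimensions
```

Appending F0 is what injects the strongly gender-correlated pitch cue that the MFCCs alone capture only indirectly.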
Web usability testing with a Hispanic medically underserved population.
Moore, Mary; Bias, Randolph G; Prentice, Katherine; Fletcher, Robin; Vaughn, Terry
2009-04-01
Skilled website developers value usability testing to ensure user needs are met. When the target audience differs substantially from the developers, it becomes essential to tailor both design and evaluation methods. In this study, researchers carried out a multifaceted usability evaluation of a website (Healthy Texas) designed for Hispanic audiences with lower computer literacy and lower health literacy. Methods included: (1) heuristic evaluation by a usability engineer; (2) remote end-user testing using WebEx software; and (3) face-to-face testing in a community center where use of the website was likely. Researchers found standard usability testing methods needed to be modified to provide interpreters, increased flexibility for time on task, the presence of a trusted intermediary such as a librarian, and accommodation for family members who accompanied participants. Participants offered recommendations for website redesign, including simplified language, engaging and relevant graphics, culturally relevant examples, and clear navigation. User-centered design is especially important when website developers are not representative of the target audience. Failure to conduct appropriate usability testing with a representative audience can substantially reduce the use and value of a website. This thorough course of usability testing identified improvements that benefit all users but become crucial when trying to reach an underserved audience.
Fox, C J; Taylor, M I; Pereyra, R; Villasana, M I; Rico, C
2005-03-01
Recent substantial declines in northeastern Atlantic cod stocks necessitate improved biological knowledge and the development of techniques to complement standard stock assessment methods (which largely depend on accurate commercial catch data). In 2003, an ichthyoplankton survey was undertaken in the Irish Sea and subsamples of 'cod-like' eggs were analysed using a TaqMan multiplex, PCR (polymerase chain reaction) assay (with specific probes for cod, haddock and whiting). The TaqMan method was readily applied to the large number of samples (n = 2770) generated during the survey and when combined with a manual DNA extraction protocol had a low failure rate of 6%. Of the early stage 'cod-like' eggs (1.2-1.75 mm diameter) positively identified: 34% were cod, 8% haddock and 58% whiting. As previous stock estimates based on egg surveys for Irish Sea cod assumed that the majority of 'cod-like' eggs were from cod, the TaqMan results confirm that there was probably substantial contamination by eggs of whiting and haddock that would have inflated estimates of the stock biomass.
Overview of iodine generation for oxygen-iodine lasers
NASA Astrophysics Data System (ADS)
Jirásek, Vít.
2012-01-01
A review of the methods for generation of iodine for oxygen-iodine lasers (OIL) is presented. The chemical and physical methods for production of both atomic (AI) and molecular (MI) iodine have been searched in order to improve the efficiency and/or technology of OILs. These trials were motivated by the estimations that a substantial part of singlet oxygen (SO) could be saved with these methods and the onset of the laser active medium will be accelerated. Vapour of MI can be generated by the evaporation of solid or pressurized liquid I2, or synthesized in situ by the reaction of Cl2 with either HI or CuI2. The chemical methods of generation of AI are based on the substitution of I atom in a molecule of HI or ICl by another halogen atom produced usually chemically. The discharge methods include the dissociation of various iodine compounds (organic iodides, I2, HI) in the RF, MW, DC-pulsed or DC-vortex stabilized discharge. Combined methods use discharge dissociation of molecules (H2, F2) to gain atoms which subsequently react to replace AI from the iodine compound. The chemical methods were quite successful in producing AI (up to the 100% yield), but the enhancement of the laser performance was not reported. The discharge methods had been subsequently improving and are today able to produce up to 0.4 mmol/s of AI at the RF power of 500 W. A substantial enhancement of the discharge- OIL performance (up to 40%) was reported. In the case of Chemical-OIL, the enhancement was reported only under the conditions of a low I2/O2 ratio, where the "standard" I2 dissociation by SO is slow. The small-signal gain up to 0.3 %/cm was achieved on the supersonic COIL using the HI dissociated in the RF discharge. Due to the complicated kinetics of the RI-I-I2-SO system and a strong coupling with the gas flow and mixing, the theoretical description of the problem is difficult. 
It seems, however, that major improvements in OIL performance can be expected for systems where the SO yield is rather low (DOIL), or for the high-pressure COIL, where quenching processes are important and the short distance available for preparation of the active medium is a critical constraint.
A highly sensitive and versatile virus titration assay in the 96-well microplate format.
Borisevich, V; Nistler, R; Hudman, D; Yamshchikov, G; Seregin, A; Yamshchikov, V
2008-02-01
This report describes a fast, reproducible, inexpensive and convenient assay system for virus titration in the 96-well format. The micromethod substantially increases assay throughput and improves data reproducibility. A highly simplified variant of virus quantification is based on immunohistochemical detection of virus amplification foci obtained without the use of agarose or semisolid overlays. It can be incorporated into several types of routine virological assays, successfully replacing the laborious and time-consuming conventional methods based on plaque formation under semisolid overlays. The method does not depend on the development of CPE and can be accommodated to assay viruses with substantial differences in growth properties. The use of enhanced immunohistochemical detection enabled a five- to six-fold reduction of the total assay time. The micromethod was specifically developed to take advantage of multichannel pipettor use to simplify handling of a large number of samples. The method performs well with an inexpensive low-power binocular, thus offering a routine assay system usable outside of a specialized laboratory setting, such as for testing of clinical or field samples. When used in focus reduction-neutralization tests (FRNT), the method accommodates very small volumes of immune serum, which is often a decisive factor in experiments involving small rodent models.
Reliability of cervical vertebral maturation staging.
Rainey, Billie-Jean; Burnside, Girvan; Harrison, Jayne E
2016-07-01
Growth and its prediction are important for the success of many orthodontic treatments. The aim of this study was to determine the reliability of the cervical vertebral maturation (CVM) method for the assessment of mandibular growth. A group of 20 orthodontic clinicians, inexperienced in CVM staging, was trained to use the improved version of the CVM method for the assessment of mandibular growth with a teaching program. They independently assessed 72 consecutive lateral cephalograms, taken at Liverpool University Dental Hospital, on 2 occasions. The cephalograms were presented in 2 different random orders and interspersed with 11 additional images for standardization. The intraobserver and interobserver agreement values were evaluated using the weighted kappa statistic. The intraobserver and interobserver agreement values were substantial (weighted kappa, 0.6-0.8). The overall intraobserver agreement was 0.70 (SE, 0.01), with average agreement of 89%. The interobserver agreement values were 0.68 (SE, 0.03) for phase 1 and 0.66 (SE, 0.03) for phase 2, with average interobserver agreement of 88%. The intraobserver and interobserver agreement values of classifying the vertebral stages with the CVM method were substantial. These findings demonstrate that this method of CVM classification is reproducible and reliable. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
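The weighted kappa statistic reported above penalizes disagreements in proportion to their distance between ordinal stages. A minimal sketch (linear weights; the toy ratings below are illustrative, not study data):

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat, weights="linear"):
    """Cohen's weighted kappa for two raters' ordinal ratings coded 0..n_cat-1."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    O = np.zeros((n_cat, n_cat))                 # observed joint proportions
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0))   # expected under independence
    i, j = np.indices((n_cat, n_cat))
    d = np.abs(i - j)
    W = d / (n_cat - 1) if weights == "linear" else (d / (n_cat - 1)) ** 2
    return 1 - (W * O).sum() / (W * E).sum()

# toy example: two raters staging 5 cephalograms into 4 ordinal categories
k = weighted_kappa([0, 1, 2, 2, 3], [0, 1, 1, 2, 3], n_cat=4)
```

Values of 0.6-0.8, as reported in the study, are conventionally interpreted as "substantial" agreement on the Landis-Koch scale.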
Image stretching on a curved surface to improve satellite gridding
NASA Technical Reports Server (NTRS)
Ormsby, J. P.
1975-01-01
A method for substantially reducing gridding errors due to satellite roll, pitch and yaw is given. A gimbal-mounted curved screen, scaled to 1:7,500,000, is used to stretch the satellite image whereby visible landmarks coincide with a projected map outline. The resulting rms position errors averaged 10.7 km as compared with 25.6 and 34.9 km for two samples of satellite imagery upon which image stretching was not performed.
2014-09-25
therapy. Previously, losartan has been successfully used to reduce fibrosis and improve both muscle regeneration and function in several models of...efficacy of losartan has not yet been tested in a VML injury model. VML injury involves a substantial loss of muscle tissue that does not regenerate by...fibrosis development after VML injury in the rat tibialis anterior (TA) muscle. METHODS Experimental Design Male Lewis rats with VML were provided access
Aerobic conditioning for team sport athletes.
Stone, Nicholas M; Kilding, Andrew E
2009-01-01
Team sport athletes require a high level of aerobic fitness in order to generate and maintain power output during repeated high-intensity efforts and to recover. Research to date suggests that these components can be increased by regularly performing aerobic conditioning. Traditional aerobic conditioning, with minimal changes of direction and no skill component, has been demonstrated to effectively increase aerobic function within a 4- to 10-week period in team sport players. More importantly, traditional aerobic conditioning methods have been shown to increase team sport performance substantially. Many team sports require the upkeep of both aerobic fitness and sport-specific skills during a lengthy competitive season. Typical team sport training sessions, however, have been shown to produce only marginal changes in aerobic fitness. In recent years, aerobic conditioning methods have been designed to allow adequate intensities to be achieved to induce improvements in aerobic fitness whilst incorporating movement-specific and skill-specific tasks, e.g. small-sided games and dribbling circuits. Such 'sport-specific' conditioning methods have been demonstrated to promote increases in aerobic fitness, though careful consideration of player skill levels, current fitness, player numbers, field dimensions, game rules and availability of player encouragement is required. Whilst different conditioning methods appear equivalent in their ability to improve fitness, whether sport-specific conditioning is superior to other methods at improving actual game performance statistics requires further research.
36 CFR 51.55 - What must a concessioner do after substantial completion of the capital improvement?
Code of Federal Regulations, 2010 CFR
2010-07-01
... together with, if requested by the Director, a written certification from a certified public accountant... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false What must a concessioner do after substantial completion of the capital improvement? 51.55 Section 51.55 Parks, Forests, and Public...
Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics
NASA Technical Reports Server (NTRS)
Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)
2000-01-01
This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data is larger than the texture memory capacity by decreasing the amount of textures required. This coherence can also allow improved speed by appropriately rendering flat-shaded polygons instead of textured polygons, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which more accurately identify coherent regions compared to the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.
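A color-based error metric of the kind described can be sketched as the spread of voxel colors within a TSP-tree node: nodes whose colors are nearly uniform may be rendered as flat-shaded polygons instead of textured ones. The tolerance value and the plain RGB Euclidean distance below are illustrative assumptions, not the paper's exact metric.

```python
import numpy as np

def color_error(colors):
    """Color-based error of a tree node: maximum Euclidean distance of any
    voxel's RGB color from the node's mean color."""
    mean = colors.mean(axis=0)
    return np.linalg.norm(colors - mean, axis=1).max()

def render_mode(colors, tol=0.05):
    """Coherent (low color error) regions may be flat-shaded; others textured."""
    return "flat" if color_error(colors) <= tol else "textured"

uniform = np.full((8, 3), 0.5)                              # constant region
varied = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])       # high-contrast region
```

Measuring error in color space (after the transfer function) rather than in scalar data space is what lets the metric ignore data variation that the transfer function maps to indistinguishable colors.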
Image-optimized Coronal Magnetic Field Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M., E-mail: shaela.i.jones-mecholsky@nasa.gov, E-mail: shaela.i.jonesmecholsky@nasa.gov
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.
Grob, Koni
2005-01-01
The most important initiatives taken in Switzerland to reduce exposure of consumers to acrylamide are the separate sale of potatoes low in reducing sugars for roasting and frying, the optimization of the raw material and preparation of french fries, and campaigns to implement suitable preparation methods in the gastronomy and homes. Industry works on improving a range of other products. Although these measures can reduce high exposures by some 80%, they have little effect on the background exposure resulting from coffee, bread, and numerous other products for which no substantial improvement is in sight. At this stage, improvements should be achieved by supporting voluntary activity rather than legal limits. Committed and consistent risk communication is key, and the support of improvements presupposes innovative approaches.
Improving Incremental Balance in the GSI 3DVAR Analysis System
NASA Technical Reports Server (NTRS)
Errico, Ronald M.; Yang, Runhua; Kleist, Daryl T.; Parrish, David F.; Derber, John C.; Treadon, Russ
2008-01-01
The Gridpoint Statistical Interpolation (GSI) analysis system is a unified global/regional 3DVAR analysis code that has been under development for several years at the National Centers for Environmental Prediction (NCEP)/Environmental Modeling Center. It has recently been implemented into operations at NCEP in both the global and North American data assimilation systems (GDAS and NDAS). An important aspect of this development has been improving the balance of the analysis produced by GSI. The improved balance between variables has been achieved through the inclusion of a Tangent Linear Normal Mode Constraint (TLNMC). The TLNMC method has proven to be very robust and effective. The TLNMC as part of the global GSI system has resulted in substantial improvement in data assimilation both at NCEP and at the NASA Global Modeling and Assimilation Office (GMAO).
Improving Medication Adherence in Cardiometabolic Disease: Practical and Regulatory Implications.
Ferdinand, Keith C; Senatore, Fortunato Fred; Clayton-Jeter, Helene; Cryer, Dennis R; Lewin, John C; Nasser, Samar A; Fiuzat, Mona; Califf, Robert M
2017-01-31
Medication nonadherence, a major problem in cardiovascular disease (CVD), contributes yearly to approximately 125,000 preventable deaths, which is partly attributable to only about one-half of CVD patients consistently taking prescribed life-saving medications. Current interest has focused on how labeling and education influence adherence. This paper summarizes the scope of CVD nonadherence, describes key U.S. Food and Drug Administration initiatives, and identifies potential targets for improvement. We describe key adherence factors, methods, and technological applications for simplifying regimens and enhancing adherence, and 4 areas where additional collaborative research and implementation involving the regulatory system and clinical community could substantially reduce nonadherence: 1) identifying monitoring methods; 2) improving the evidence base to better understand adherence; 3) developing patient/health provider team-based engagement strategies; and 4) alleviating health disparities. Alignment of U.S. Food and Drug Administration approaches to dissemination of information about appropriate use with clinical practice could improve adherence, and thereby reduce CVD death and disability. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Methods for reverberation suppression utilizing dual frequency band imaging.
Rau, Jochen M; Måsøy, Svein-Erik; Hansen, Rune; Angelsen, Bjørn; Tangen, Thor Andreas
2013-09-01
Reverberations impair the contrast resolution of diagnostic ultrasound images. Tissue harmonic imaging is a common method to reduce these artifacts, but does not remove all reverberations. Dual frequency band imaging (DBI), utilizing a low frequency pulse which manipulates propagation of the high frequency imaging pulse, has been proposed earlier for reverberation suppression. This article adds two different methods for reverberation suppression with DBI: the delay corrected subtraction (DCS) and the first order content weighting (FOCW) method. Both methods utilize the propagation delay of the imaging pulse over two transmissions with alternating manipulation pressure to extract information about its depth of first scattering. FOCW further utilizes this information to estimate the content of first order scattering in the received signal. An initial evaluation is presented in which both methods are applied to simulated and in vivo data. Both methods yield visually apparent and measurable improvements in image contrast. Comparing DCS with FOCW, DCS produces sharper images and retains more details, while FOCW achieves the best suppression levels and, thus, the highest image contrast. The measured improvement in contrast ranges from 8 to 27 dB for DCS and from 4 dB up to the dynamic range for FOCW.
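The align-then-subtract idea behind DCS can be sketched for the idealized case of an integer-sample delay: after shifting one reception by the estimated propagation delay, first-order echoes (which carry the full delay) cancel in the difference, while reverberations (which accumulate a different delay) survive. This is a toy model of the alignment step only, not the authors' full implementation.

```python
import numpy as np

def delay_corrected_subtraction(s_plus, s_minus, delay_samples):
    """Align the reception acquired with opposite low-frequency manipulation
    by the estimated delay (integer samples, circular shift for simplicity),
    then subtract. Components carrying exactly this delay cancel."""
    aligned = np.roll(s_minus, -delay_samples)
    return s_plus - aligned

# toy first-order echo: s_minus is s_plus delayed by 5 samples
s_plus = np.sin(2 * np.pi * np.arange(64) / 16)
s_minus = np.roll(s_plus, 5)
residual = delay_corrected_subtraction(s_plus, s_minus, 5)
```

In the real method the delay varies with depth and must itself be estimated from the data, so the correction is applied depthwise rather than as a single global shift.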
A Reliable, Feasible Method to Observe Neighborhoods at High Spatial Resolution
Kepper, Maura M.; Sothern, Melinda S.; Theall, Katherine P.; Griffiths, Lauren A.; Scribner, Richard; Tseng, Tung-Sung; Schaettle, Paul; Cwik, Jessica M.; Felker-Kantor, Erica; Broyles, Stephanie T.
2016-01-01
Introduction: Systematic social observation (SSO) methods traditionally measure neighborhoods at street level and have been performed reliably using virtual applications to increase feasibility. Research indicates that collection at even higher spatial resolution may better elucidate the health impact of neighborhood factors, but whether virtual applications can reliably capture social determinants of health at the smallest geographic resolution (parcel level) remains uncertain. This paper presents a novel, parcel-level SSO methodology and assesses whether this new method can be collected reliably using Google Street View and is feasible. Methods: Multiple raters (N=5) observed 42 neighborhoods. In 2016, inter-rater reliability (observed agreement and kappa coefficient) was compared for four SSO methods: (1) street-level in person; (2) street-level virtual; (3) parcel-level in person; and (4) parcel-level virtual. Intra-rater reliability (observed agreement and kappa coefficient) was calculated to determine whether parcel-level methods produce results comparable to traditional street-level observation. Results: Substantial levels of inter-rater agreement were documented across all four methods; all methods had >70% of items with at least substantial agreement. Only physical decay showed higher levels of agreement (83% of items with >75% agreement) for direct versus virtual rating source. Intra-rater agreement comparing street- versus parcel-level methods resulted in observed agreement >75% for all but one item (90%). Conclusions: Results support the use of Google Street View as a reliable, feasible tool for performing SSO at the smallest geographic resolution. Validation of a new parcel-level method collected virtually may improve the assessment of social determinants contributing to disparities in health behaviors and outcomes. PMID:27989289
Grinde, Kelsey E.; Arbet, Jaron; Green, Alden; O'Connell, Michael; Valcarcel, Alessandra; Westra, Jason; Tintle, Nathan
2017-01-01
To date, gene-based rare variant testing approaches have focused on aggregating information across sets of variants to maximize statistical power in identifying genes showing significant association with diseases. Beyond identifying genes that are associated with diseases, the identification of causal variant(s) in those genes and estimation of their effect is crucial for planning replication studies and characterizing the genetic architecture of the locus. However, we illustrate that straightforward single-marker association statistics can suffer from substantial bias introduced by conditioning on gene-based test significance, due to the phenomenon often referred to as “winner's curse.” We illustrate the ramifications of this bias on variant effect size estimation and variant prioritization/ranking approaches, outline parameters of genetic architecture that affect this bias, and propose a bootstrap resampling method to correct for this bias. We find that our correction method significantly reduces the bias due to winner's curse (average two-fold decrease in bias, p < 2.2 × 10−6) and, consequently, substantially improves mean squared error and variant prioritization/ranking. The method is particularly helpful in adjustment for winner's curse effects when the initial gene-based test has low power and for relatively more common, non-causal variants. Adjustment for winner's curse is recommended for all post-hoc estimation and ranking of variants after a gene-based test. Further work is necessary to continue seeking ways to reduce bias and improve inference in post-hoc analysis of gene-based tests under a wide variety of genetic architectures. PMID:28959274
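The bootstrap idea can be sketched as classical bias correction: re-estimate the single-marker effect on resampled data and subtract the average inflation from the naive estimate. (The paper's method additionally conditions each resample on gene-based test significance, which this toy omits; the simple slope estimator below is an illustrative assumption.)

```python
import numpy as np

rng = np.random.default_rng(2)

def effect(g, p):
    """Toy single-marker effect: least-squares slope of phenotype on genotype."""
    return np.cov(g, p)[0, 1] / np.var(g, ddof=1)

def bootstrap_corrected_effect(genotypes, phenotypes, n_boot=200):
    """Bootstrap bias correction: corrected = naive - (mean(boot) - naive)."""
    beta_hat = effect(genotypes, phenotypes)
    n = len(genotypes)
    boots = [effect(genotypes[idx], phenotypes[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    bias = np.mean(boots) - beta_hat      # average inflation over resamples
    return beta_hat - bias

g = np.tile([0, 1, 2], 20).astype(float)  # toy genotypes (allele counts)
p = 0.5 * g                               # noiseless phenotype, true slope 0.5
beta = bootstrap_corrected_effect(g, p)
```

With noisy phenotypes and a significance filter, the naive estimate is inflated and the correction shrinks it toward the true effect; in this noiseless toy the bias term is zero and the estimate is unchanged.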
NASA Technical Reports Server (NTRS)
Park, Yeonjoon (Inventor); Choi, Sang Hyouk (Inventor); King, Glen C. (Inventor); Elliott, James R. (Inventor)
2012-01-01
Growth conditions are developed, based on a temperature-dependent alignment model, to enable formation of cubic group IV, group III-V and group II-VI crystals in the [111] orientation on the basal (0001) plane of trigonal crystal substrates, controlled such that the volume percentage of primary twin crystal is reduced from about 40% to about 0.3%, compared to the majority single crystal. The control of stacking faults in this and other embodiments can yield single crystalline semiconductors based on these materials that are substantially without defects, or improved thermoelectric materials with twinned crystals for phonon scattering while maintaining electrical integrity. These methods can selectively yield a cubic-on-trigonal epitaxial semiconductor material in which the cubic layer is substantially either directly aligned with, or rotated 60 degrees from, the underlying trigonal material.
Substantiation of basic scheme of grain cleaning machine for preparation of agricultural crops seeds
NASA Astrophysics Data System (ADS)
Giyevskiy, A. M.; Orobinsky, V. I.; Tarasenko, A. P.; Chernyshov, A. V.; Kurilov, D. O.
2018-03-01
The article presents data substantiating the concept of a high-efficiency seed cleaner with sequential use of the air flow in aspiration and multi-tier placement of the sorting sieves in the sieve mills. As a result of modeling, directions for further improvement of air-screen seed cleaning machines have been identified: increasing the proportion of sorting sieves in the mills to 70-80% and increasing the air flow speed in the pre-cleaning channel to 8.0 m/s. Experiments have confirmed the validity of using mathematical modeling of the airflow in the pneumatic system, with a finite-volume method for solving the hydrodynamic equations, to substantiate the basic parameters of the pneumatic system.
Cylindrical geometry hall thruster
Raitses, Yevgeny; Fisch, Nathaniel J.
2002-01-01
An apparatus and method for thrusting plasma, utilizing a Hall thruster with a cylindrical geometry, wherein ions are accelerated in substantially the axial direction. The apparatus is suitable for operation at low power. It employs small thruster components, including a ceramic channel, with the center pole piece of the conventional annular-design thruster eliminated or greatly reduced. Efficient operation is accomplished through magnetic fields with a substantial radial component. The propellant gas is ionized at an optimal location in the thruster. A further improvement is accomplished by segmented electrodes, which produce localized voltage drops within the thruster at optimally prescribed locations. The apparatus differs from a conventional Hall thruster, whose annular geometry is not well suited to scaling to small size because a small annular design has a great deal of surface area relative to its volume.
Scaling by shrinking: empowering single-cell ‘omics’ with microfluidic devices
Prakadan, Sanjay M.; Shalek, Alex K.; Weitz, David A.
2017-01-01
Recent advances in cellular profiling have demonstrated substantial heterogeneity in the behaviour of cells once deemed ‘identical’, challenging fundamental notions of cell ‘type’ and ‘state’. Not surprisingly, these findings have elicited substantial interest in deeply characterizing the diversity, interrelationships and plasticity among cellular phenotypes. To explore these questions, experimental platforms are needed that can extensively and controllably profile many individual cells. Here, microfluidic structures—whether valve-, droplet- or nanowell-based—have an important role because they can facilitate easy capture and processing of single cells and their components, reducing labour and costs relative to conventional plate-based methods while also improving consistency. In this article, we review the current state-of-the-art methodologies with respect to microfluidics for mammalian single-cell ‘omics’ and discuss challenges and future opportunities. PMID:28392571
Artifact reduction of different metallic implants in flat detector C-arm CT.
Hung, S-C; Wu, C-C; Lin, C-J; Guo, W-Y; Luo, C-B; Chang, F-C; Chang, C-Y
2014-07-01
Flat detector CT has been increasingly used as a follow-up examination after endovascular intervention. Metal artifact reduction has been successfully demonstrated in coil mass cases, but only in a small series. We attempted to objectively and subjectively evaluate the feasibility of metal artifact reduction with various metallic objects and coil lengths. We retrospectively reprocessed the flat detector CT data of 28 patients (15 men, 13 women; mean age, 55.6 years) after they underwent endovascular treatment (20 coiling ± stent placement, 6 liquid embolizers) or shunt drainage (n = 2) between January 2009 and November 2011 by using a metal artifact reduction correction algorithm. We measured CT value ranges and noise by using region-of-interest methods, and 2 experienced neuroradiologists rated the degrees of improved imaging quality and artifact reduction by comparing uncorrected and corrected images. After we applied the metal artifact reduction algorithm, the CT value ranges and the noise were substantially reduced (1815.3 ± 793.7 versus 231.7 ± 95.9 and 319.9 ± 136.6 versus 45.9 ± 14.0; both P < .001) regardless of the types of metallic objects and various sizes of coil masses. The rater study achieved an overall improvement of imaging quality and artifact reduction (85.7% and 78.6% of cases by 2 raters, respectively), with the greatest improvement in the coiling group, moderate improvement in the liquid embolizers, and the smallest improvement in ventricular shunting (overall agreement, 0.857). The metal artifact reduction algorithm substantially reduced artifacts and improved the objective image quality in every studied case. It also allowed improved diagnostic confidence in most cases. © 2014 by American Journal of Neuroradiology.
Empirical improvements for estimating earthquake response spectra with random‐vibration theory
Boore, David; Thompson, Eric M.
2012-01-01
The stochastic method of ground-motion simulation is often used in combination with random-vibration theory to directly compute ground-motion intensity measures, thereby bypassing the more computationally intensive time-domain simulations. Key to the application of random-vibration theory to simulate response spectra is determining the duration (D_rms) used in computing the root-mean-square oscillator response. Boore and Joyner (1984) originally proposed an equation for D_rms, which was improved upon by Liu and Pezeshk (1999). Though these equations are both substantial improvements over using the duration of the ground-motion excitation for D_rms, we document systematic differences between the ground-motion intensity measures derived from the random-vibration and time-domain methods for both of these D_rms equations. These differences are generally less than 10% for most magnitudes, distances, and periods of engineering interest. Given the systematic nature of the differences, however, we feel that improved equations are warranted. We empirically derive new equations from time-domain simulations for eastern and western North America seismological models. The new equations improve the random-vibration simulations over a wide range of magnitudes, distances, and oscillator periods.
Liu, Derek; Sloboda, Ron S
2014-05-01
Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
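The fractional-shift step can be illustrated with a small piecewise third-order Lagrange filter. This is a generic four-tap fractional-delay sketch, not the authors' optimized kernel-sampling code, and the function name is ours:

```python
import numpy as np

def lagrange3_shift(signal, frac):
    """Delay a 1-D signal by a fractional sample (0 <= frac < 1) using a
    4-tap third-order Lagrange interpolation filter.  For frac = 0 the
    filter reduces to the identity.  The first and last two output
    samples are edge-contaminated and should be discarded."""
    d = frac
    # Lagrange basis weights for evaluating at offset d with nodes -1, 0, 1, 2
    taps = np.array([
        -d * (d - 1) * (d - 2) / 6,
        (d + 1) * (d - 1) * (d - 2) / 2,
        -(d + 1) * d * (d - 2) / 2,
        (d + 1) * d * (d - 1) / 6,
    ])
    # "same"-mode convolution aligns the output so that output[i] ~ signal(i - frac)
    return np.convolve(signal, taps, mode="same")
```

For a shift of 0.5 samples the filter reproduces a linear ramp exactly in the interior, consistent with cubic interpolation being exact for low-order polynomials.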
Application of Numerical Integration and Data Fusion in Unit Vector Method
NASA Astrophysics Data System (ADS)
Zhang, J.
2012-01-01
The Unit Vector Method (UVM) is a family of orbit determination methods designed by Purple Mountain Observatory (PMO) that has been applied extensively. It obtains the conditional equations for different kinds of data by projecting the basic equation onto different unit vectors, and it is well suited to weighting different kinds of data, so that high-precision data can play a major role in orbit determination and the accuracy of orbit determination is improved markedly. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement and unified the two dynamically, further improving precision and efficiency. In this thesis, further research has been done based on the UVM. First, improvements in observation methods and techniques have substantially changed the types and precision of the observational data, which in turn demand higher-accuracy orbit determination. Since analytical perturbation theory cannot meet this requirement, numerical integration of the perturbations has been introduced into the UVM: the accuracy of the dynamical model is matched to the accuracy of the real data, and the condition equations of the UVM are modified accordingly, further improving the accuracy of orbit determination. Second, a data fusion method has been introduced into the UVM. The convergence mechanism and the defects of the weighting strategy in the original UVM have been clarified and resolved; the calculation of the approximate state transition matrix is simplified, and the weighting strategy is improved for data of different dimensions and precision. Results of orbit determination with simulated and real data show that this work is effective: (1) after numerical integration is introduced into the UVM, the accuracy of orbit determination improves markedly and suits the high-accuracy data of available observation apparatus; compared with classical differential improvement with numerical integration, the calculation speed is also improved markedly. (2) After the data fusion method is introduced into the UVM, the weighting distribution accords rationally with the accuracy of the different kinds of data, all data are fully used, and the new method also exhibits good numerical stability.
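The numerical-integration ingredient can be illustrated with a minimal two-body propagator. This is a generic classical Runge-Kutta sketch with an unperturbed force model and illustrative units; PMO's UVM code, perturbation models, and conditional equations are far more elaborate:

```python
import numpy as np

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def two_body(t, y):
    """Derivatives for the unperturbed two-body problem; state y = [r, v]."""
    r, v = y[:3], y[3:]
    return np.concatenate([v, -MU * r / np.linalg.norm(r) ** 3])

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def propagate(y0, t_end, h=10.0):
    """Propagate the state from t = 0 to t_end with fixed-step RK4."""
    y, t = np.asarray(y0, dtype=float), 0.0
    while t < t_end:
        step = min(h, t_end - t)
        y = rk4_step(two_body, t, y, step)
        t += step
    return y
```

For a circular 7000 km orbit the propagated radius stays essentially constant, which is the basic accuracy requirement before perturbations are layered on.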
Peeters, Michael J; Vaidya, Varun A
2016-06-25
Objective. To describe an approach for assessing the Accreditation Council for Pharmacy Education's (ACPE) doctor of pharmacy (PharmD) Standard 4.4, which focuses on students' professional development. Methods. This investigation used mixed methods with triangulation of qualitative and quantitative data to assess professional development. Qualitative data came from an electronic developmental portfolio of professionalism and ethics, completed by PharmD students during their didactic studies. Quantitative confirmation came from the Defining Issues Test (DIT)-an assessment of pharmacists' professional development. Results. Qualitatively, students' development reflections described growth through this course series. Quantitatively, the 2015 PharmD class's DIT N2-scores illustrated positive development overall; the lower 50% had a large initial improvement compared to the upper 50%. Subsequently, the 2016 PharmD class confirmed these average initial improvements of students and also showed further substantial development among students thereafter. Conclusion. Applying an assessment for learning approach, triangulation of qualitative and quantitative assessments confirmed that PharmD students developed professionally during this course series.
Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter
2017-06-28
High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.
Zhang, Ruifen; Su, Dongxiao; Hou, Fangli; Liu, Lei; Huang, Fei; Dong, Lihong; Deng, Yuanyuan; Zhang, Yan; Wei, Zhencheng; Zhang, Mingwei
2017-08-01
To establish optimal ultra-high-pressure (UHP)-assisted extraction conditions for procyanidins from lychee pericarp, a response surface analysis method with four factors and three levels was adopted. The optimum conditions were as follows: 295 MPa pressure, 13 min pressure holding time, 16.0 mL/g liquid-to-solid ratio, and 70% ethanol concentration. Compared with conventional ethanol extraction and ultrasonic-assisted extraction methods, the yields of the total procyanidins, flavonoids, and phenolics extracted using the UHP process were significantly increased; consequently, the oxygen radical absorbance capacity and cellular antioxidant activity of UHP-assisted lychee pericarp extracts were substantially enhanced. LC-MS/MS and high-performance liquid chromatography quantification results for individual phenolic compounds revealed that the yield of procyanidin compounds, including epicatechin, procyanidin A2, and procyanidin B2, from lychee pericarp could be significantly improved by the UHP-assisted extraction process. This UHP-assisted extraction process is thus a practical method for the extraction of procyanidins from lychee pericarp.
Won, Sungho; Choi, Hosik; Park, Suyeon; Lee, Juyoung; Park, Changyi; Kwon, Sunghoon
2015-01-01
Owing to recent improvements in genotyping technology, large-scale genetic data can be utilized to identify disease susceptibility loci, and these successes have substantially improved our understanding of complex diseases. In spite of these successes, however, most of the genetic effects for many complex diseases were found to be very small, which has been a major hurdle to building disease prediction models. Recently, many statistical methods based on penalized regressions have been proposed to tackle the so-called "large P and small N" problem. Penalized regressions, including the least absolute shrinkage and selection operator (LASSO) and ridge regression, limit the space of parameters, and this constraint enables the estimation of effects for a very large number of SNPs. Various extensions have been suggested, and, in this report, we compare their accuracy by applying them to several complex diseases. Our results show that penalized regressions are usually robust and provide better accuracy than the existing methods, at least for the diseases under consideration.
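The two base penalties can be sketched with generic textbook implementations on simulated SNP-like data; these are illustrations of LASSO and ridge themselves, not the extensions compared in the report:

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge regression: solve (X'X + lam*I) beta = X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def lasso(X, y, lam, n_iter=50):
    """LASSO by cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                      # running residual y - X @ beta
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]    # remove column j's contribution
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * beta[j]    # restore with the updated coefficient
    return beta
```

On centered genotype data with, say, 200 samples and 400 variants of which 10 are causal, the LASSO drives most coefficients exactly to zero while ridge merely shrinks them toward zero, which is why the two penalties behave so differently for prediction in the "large P, small N" regime.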
Interpretation of fingerprint image quality features extracted by self-organizing maps
NASA Astrophysics Data System (ADS)
Danov, Ivan; Olsen, Martin A.; Busch, Christoph
2014-05-01
Accurate prediction of fingerprint quality is of significant importance to any fingerprint-based biometric system. Ensuring high quality samples for both probe and reference can substantially improve the system's performance by lowering false non-matches, thus allowing finer adjustment of the decision threshold of the biometric system. Furthermore, the increasing usage of biometrics in mobile contexts demands development of lightweight methods for operational environment. A novel two-tier computationally efficient approach was recently proposed based on modelling block-wise fingerprint image data using Self-Organizing Map (SOM) to extract specific ridge pattern features, which are then used as an input to a Random Forests (RF) classifier trained to predict the quality score of a propagated sample. This paper conducts an investigative comparative analysis on a publicly available dataset for the improvement of the two-tier approach by proposing additionally three feature interpretation methods, based respectively on SOM, Generative Topographic Mapping and RF. The analysis shows that two of the proposed methods produce promising results on the given dataset.
ADSORPTION METHOD FOR SEPARATING METAL CATIONS
Tompkins, E.R.; Parker, G.W.
1959-03-10
An improved method is presented for the chromatographic separation of fission products wherein a substantial reduction in liquid volume is obtained. The process consists in contacting a solution containing fission products with a body of ion-exchange adsorbent to effect adsorption of fission product cations. The loaded exchange resin is then contacted with a small volume of a carboxylic acid eluant, thereby recovering the fission products. The fission product carrying eluate is acidified without increasing its volume to the volume of the original solution, and the acidified eluate is then used as a feed solution for a smaller body of ion-exchange resin effecting readsorption of the fission product cations.
[Peroperative peritoneal lavage and intra-abdominal instillation of antibiotics in an experiment].
Batalík, B; Mydlo, J
1991-03-01
The authors present the results of an experiment on dogs in which lethal diffuse peritonitis was induced in the standard way and treated only by peroperative peritoneal lavage during reoperation within 24 hours. In the first group (n = 10), this method alone reduced mortality, despite the adverse prognosis, to 70%; in the second, equally sized group, addition of an effective antibiotic and metronidazole to the last portion of the lavage solution reduced mortality to 10%. The results demonstrate a marked therapeutic benefit of this method which, as part of comprehensive treatment of peritonitis, also substantially improves the final outcome under clinical conditions.
O'Connor, Sydney; Ayres, Alison; Cortellini, Lynelle; Rosand, Jonathan; Rosenthal, Eric; Kimberly, W Taylor
2012-08-01
Reliable and efficient data repositories are essential for the advancement of research in Neurocritical care. Various factors, such as the large volume of patients treated within the neuro ICU, their differing length and complexity of hospital stay, and the substantial amount of desired information can complicate the process of data collection. We adapted the tools of process improvement to the data collection and database design of a research repository for a Neuroscience intensive care unit. By the Shewhart-Deming method, we implemented an iterative approach to improve the process of data collection for each element. After an initial design phase, we re-evaluated all data fields that were challenging or time-consuming to collect. We then applied root-cause analysis to optimize the accuracy and ease of collection, and to determine the most efficient manner of collecting the maximal amount of data. During a 6-month period, we iteratively analyzed the process of data collection for various data elements. For example, the pre-admission medications were found to contain numerous inaccuracies after comparison with a gold standard (sensitivity 71% and specificity 94%). Also, our first method of tracking patient admissions and discharges contained higher than expected errors (sensitivity 94% and specificity 93%). In addition to increasing accuracy, we focused on improving efficiency. Through repeated incremental improvements, we reduced the number of subject records that required daily monitoring from 40 to 6 per day, and decreased daily effort from 4.5 to 1.5 h/day. By applying process improvement methods to the design of a Neuroscience ICU data repository, we achieved a threefold improvement in efficiency and increased accuracy. Although individual barriers to data collection will vary from institution to institution, a focus on process improvement is critical to overcoming these barriers.
Tiedeman, C.R.; Hill, M.C.; D'Agnese, F. A.; Faunt, C.C.
2003-01-01
Calibrated models of groundwater systems can provide substantial information for guiding data collection. This work considers using such models to guide hydrogeologic data collection for improving model predictions by identifying model parameters that are most important to the predictions. Identification of these important parameters can help guide collection of field data about parameter values and associated flow system features and can lead to improved predictions. Methods for identifying parameters important to predictions include prediction scaled sensitivities (PSS), which account for uncertainty on individual parameters as well as prediction sensitivity to parameters, and a new "value of improved information" (VOII) method presented here, which includes the effects of parameter correlation in addition to individual parameter uncertainty and prediction sensitivity. In this work, the PSS and VOII methods are demonstrated and evaluated using a model of the Death Valley regional groundwater flow system. The predictions of interest are advective transport paths originating at sites of past underground nuclear testing. Results show that for two paths evaluated the most important parameters include a subset of five or six of the 23 defined model parameters. Some of the parameters identified as most important are associated with flow system attributes that do not lie in the immediate vicinity of the paths. Results also indicate that the PSS and VOII methods can identify different important parameters. Because the methods emphasize somewhat different criteria for parameter importance, it is suggested that parameters identified by both methods be carefully considered in subsequent data collection efforts aimed at improving model predictions.
Oros Klein, Kathleen; Grinek, Stepan; Bernatsky, Sasha; Bouchard, Luigi; Ciampi, Antonio; Colmegna, Ines; Fortin, Jean-Philippe; Gao, Long; Hivert, Marie-France; Hudson, Marie; Kobor, Michael S; Labbe, Aurelie; MacIsaac, Julia L; Meaney, Michael J; Morin, Alexander M; O'Donnell, Kieran J; Pastinen, Tomi; Van Ijzendoorn, Marinus H; Voisin, Gregory; Greenwood, Celia M T
2016-02-15
DNA methylation patterns are well known to vary substantially across cell types or tissues. Hence, existing normalization methods may not be optimal if they do not take this into account. We therefore present a new R package for normalization of data from the Illumina Infinium Human Methylation450 BeadChip (Illumina 450 K) built on the concepts in the recently published funNorm method, and introducing cell-type or tissue-type flexibility. funtooNorm is relevant for data sets containing samples from two or more cell or tissue types. A visual display of cross-validated errors informs the choice of the optimal number of components in the normalization. Benefits of cell (tissue)-specific normalization are demonstrated in three data sets. Improvement can be substantial; it is strikingly better on chromosome X, where methylation patterns have unique inter-tissue variability. An R package is available at https://github.com/GreenwoodLab/funtooNorm, and has been submitted to Bioconductor at http://bioconductor.org. © The Author 2015. Published by Oxford University Press.
Process for the encapsulation and stabilization of radioactive, hazardous and mixed wastes
Colombo, Peter; Kalb, Paul D.; Heiser, III, John H.
1997-11-14
The present invention provides a method for encapsulating and stabilizing radioactive, hazardous and mixed wastes in a modified sulfur cement composition. The waste may be incinerator fly ash or bottom ash including radioactive contaminants, toxic metal salts and other wastes commonly found in refuse. The process may use glass fibers mixed into the composition to improve the tensile strength and a low concentration of anhydrous sodium sulfide to reduce toxic metal solubility. The present invention preferably includes a method for encapsulating radioactive, hazardous and mixed wastes by combining substantially anhydrous wastes, molten modified sulfur cement, preferably glass fibers, as well as anhydrous sodium sulfide or calcium hydroxide or sodium hydroxide in a heated double-planetary orbital mixer. The modified sulfur cement is preheated to about 135 ± 5 °C, then the remaining substantially dry components are added and mixed to homogeneity. The homogeneous molten mixture is poured or extruded into a suitable mold. The mold is allowed to cool while the mixture hardens, thereby immobilizing and encapsulating the contaminants present in the ash.
Magnetic resonance angiography: current status and future directions
2011-01-01
With recent improvement in hardware and software techniques, magnetic resonance angiography (MRA) has undergone significant changes in technique and approach. The advent of 3.0 T magnets has allowed reduction in exogenous contrast dose without compromising overall image quality. The use of novel intravascular contrast agents substantially increases the image windows and decreases contrast dose. Additionally, the lower risk and cost in non-contrast enhanced (NCE) MRA has sparked renewed interest in these methods. This article discusses the current state of both contrast-enhanced (CE) and NCE-MRA. New CE-MRA methods take advantage of dose reduction at 3.0 T, novel contrast agents, and parallel imaging methods. The risks of gadolinium-based contrast media, and the NCE-MRA methods of time-of-flight, steady-state free precession, and phase contrast are discussed. PMID:21388544
Vail, III, William B.
1993-01-01
Methods of operation of an apparatus having at least two pairs of voltage measurement electrodes vertically disposed in a cased well to measure the resistivity of adjacent geological formations from inside the cased well. During stationary measurements with the apparatus at a fixed vertical depth within the cased well, the invention herein discloses methods of operation which include a measurement step and subsequent first and second compensation steps, respectively resulting in improved accuracy of measurement. First- and second-order errors of measurement are identified, and the measurement step and two compensation steps provide methods to substantially eliminate their influence on the results. A multiple-frequency apparatus adapted to movement within the well is described which simultaneously provides the measurement and two compensation steps.
Serang, Oliver; Noble, William Stafford
2012-01-01
The problem of identifying the proteins in a complex mixture using tandem mass spectrometry can be framed as an inference problem on a graph that connects peptides to proteins. Several existing protein identification methods make use of statistical inference methods for graphical models, including expectation maximization, Markov chain Monte Carlo, and full marginalization coupled with approximation heuristics. We show that, for this problem, the majority of the cost of inference usually comes from a few highly connected subgraphs. Furthermore, we evaluate three different statistical inference methods using a common graphical model, and we demonstrate that junction tree inference substantially improves rates of convergence compared to existing methods. The python code used for this paper is available at http://noble.gs.washington.edu/proj/fido. PMID:22331862
Effect of canard position and wing leading-edge flap deflection on wing buffet at transonic speeds
NASA Technical Reports Server (NTRS)
Gloss, B. B.; Henderson, W. P.; Huffman, J. K.
1974-01-01
A generalized wind-tunnel model, with canard and wing planform typical of highly maneuverable aircraft, was tested. The addition of a canard above the wing chord plane, for the configuration with leading-edge flaps undeflected, produced substantially higher total configuration lift coefficients before buffet onset than the configuration with the canard off and leading-edge flaps undeflected. The wing buffet intensity was substantially lower for the canard-wing configuration than the wing-alone configuration. The low-canard configuration generally displayed the poorest buffet characteristics. Deflecting the wing leading-edge flaps substantially improved the wing buffet characteristics for canard-off configurations. The addition of the high canard did not appear to substantially improve the wing buffet characteristics of the wing with leading-edge flaps deflected.
2013-01-01
One-dimensional anodic titanium oxide (ATO) nanotube arrays hold great potential as photoanodes for photoelectrochemical (PEC) water splitting. In this work, we report a facile and eco-friendly electrochemical hydrogenation method to modify the electronic and PEC properties of ATO nanotube films. The hydrogenated ATO (ATO-H) electrodes present a significantly improved photocurrent of 0.65 mA/cm² in comparison with that of pristine ATO nanotubes (0.29 mA/cm²) recorded under air mass 1.5 global illumination. The incident photon-to-current efficiency measurement suggests that the enhanced photocurrent of ATO-H nanotubes is mainly ascribed to the improved photoactivity in the UV region. We propose that the electrochemical-hydrogenation-induced surface oxygen vacancies contribute to the substantially enhanced electrical conductivity and photoactivity. PMID:24047205
NASA Astrophysics Data System (ADS)
Mason, J. M.; Fahy, F. J.
1988-07-01
Double-leaf partitions are often utilized in situations requiring low weight structures with high transmission loss, an example of current interest being the fuselage walls of propeller-driven aircraft. In this case, acoustic excitation is periodic and, if one of the frequencies of excitation lies in the region of the fundamental mass-air-mass frequency of the partition, insulation performance is considerably less than desired. The potential effectiveness of tuned Helmholtz resonators connected to the partition cavity is investigated as a method of improving transmission loss. This is demonstrated by a simple theoretical model and then experimentally verified. Results show that substantial improvements may be obtained at and around the mass-air-mass frequency for a total resonator volume 15 percent of the cavity volume.
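The two resonances at play follow from standard textbook formulas rather than anything specific to this paper; the sketch below computes them for illustrative values (not the authors' test partition):

```python
import numpy as np

RHO0, C = 1.21, 343.0   # air density (kg/m^3) and speed of sound (m/s) at ~20 °C

def mass_air_mass_frequency(m1, m2, d):
    """Fundamental mass-air-mass resonance (Hz) of a double-leaf partition
    with leaf surface densities m1, m2 (kg/m^2) and cavity depth d (m):
    f0 = (1 / 2*pi) * sqrt(rho0 * c^2 * (m1 + m2) / (d * m1 * m2))."""
    return np.sqrt(RHO0 * C**2 * (m1 + m2) / (d * m1 * m2)) / (2 * np.pi)

def helmholtz_frequency(neck_area, neck_length, volume):
    """Resonance (Hz) of a Helmholtz resonator with neck area (m^2),
    effective neck length (m), and cavity volume (m^3):
    f = (c / 2*pi) * sqrt(A / (V * L_eff))."""
    return C / (2 * np.pi) * np.sqrt(neck_area / (volume * neck_length))
```

Tuning the resonators so that their Helmholtz frequency sits at the partition's mass-air-mass frequency is the condition under which the paper reports the largest insertion-loss improvement.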
NASA Astrophysics Data System (ADS)
Gibergans-Báguena, J.; Llasat, M. C.
2007-12-01
The objective of this paper is to present an improvement in quantitative forecasting of daily rainfall in Catalonia (NE Spain) using an analogues technique that takes into account both synoptic and local data. The method is based on an analogue sorting technique: meteorological situations similar to the current one, in terms of the 700 and 1000 hPa geopotential fields at 00 UTC, are selected from a historical data file and complemented with some thermodynamic parameters. Thermodynamic analysis acts as a highly discriminating feature in situations where the synoptic situation fails to explain either the atmospheric phenomena or the rainfall distribution. This is the case in heavy rainfall situations, where the existence of instability and high water vapor content is essential. To include these vertical thermodynamic features, information provided by the Palma de Mallorca radiosounding (Spain) was used. First, a selection of the most discriminating thermodynamic parameters for daily rainfall was made, and the analogues technique was then applied to them. Finally, three analogue forecasting methods were applied to quantitative daily rainfall forecasting in Catalonia. The first is based on analogies of geopotential fields at the synoptic scale; the second is based exclusively on searching for similarity in local thermodynamic information; and the third combines the other two. The results show that this last method provides a substantial improvement in quantitative rainfall estimation.
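The analogue scheme described above can be sketched in miniature. The sketch below is a hedged illustration, not the authors' code: it uses a plain Euclidean distance over flattened geopotential fields as the analogy criterion and forecasts rainfall as the mean over the k best analogues; all data values are invented.

```python
import numpy as np

def analogue_forecast(current_field, historical_fields, historical_rainfall, k=3):
    """Forecast rainfall as the mean over the k most similar past situations.

    Similarity is plain Euclidean distance between flattened geopotential
    fields (a stand-in for the paper's synoptic-scale analogy criterion).
    """
    dists = np.linalg.norm(historical_fields - current_field, axis=1)
    nearest = np.argsort(dists)[:k]           # indices of the k best analogues
    return historical_rainfall[nearest].mean()

# toy archive: 5 past situations, 4 grid points each (values are illustrative)
archive = np.array([[0., 0, 0, 0],
                    [1., 1, 1, 1],
                    [1., 1, 1, 0],
                    [5., 5, 5, 5],
                    [5., 5, 4, 5]])
rain = np.array([0.0, 10.0, 12.0, 40.0, 38.0])
print(analogue_forecast(np.array([1., 1, 1, 1]), archive, rain, k=2))  # → 11.0
```

A combined method in the spirit of the paper's third variant would simply append the selected thermodynamic parameters to each field vector before computing distances.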
Method for nanoencapsulation of aerogels and nanoencapsulated aerogels produced by such method
NASA Technical Reports Server (NTRS)
Sullivan, Thomas A. (Inventor)
2007-01-01
A method for increasing the compressive modulus of aerogels comprising: providing aerogel substrate comprising a bubble matrix in a chamber; providing monomer to the chamber, the monomer comprising vapor phase monomer which polymerizes substantially free of polymerization byproducts; depositing monomer from the vapor phase onto the surface of the aerogel substrate under deposition conditions effective to produce a vapor pressure sufficient to cause the vapor phase monomer to penetrate into the bubble matrix and deposit onto the surface of the aerogel substrate, producing a substantially uniform monomer film; and, polymerizing the substantially uniform monomer film under polymerization conditions effective to produce polymer coated aerogel comprising a substantially uniform polymer coating substantially free of polymerization byproducts. Polymer coated aerogel comprising aerogel substrate comprising a substantially uniform polymer coating, said polymer coated aerogel comprising porosity and having a compressive modulus greater than the compressive modulus of the aerogel substrate, as measured by a 100 lb. load cell at 1 mm/minute in the linear range of 20% to 40% compression.
Side-welded fast response sheathed thermocouple
Carr, K.R.
A method of fabricating the measuring junction of a grounded-junction sheathed thermocouple to obtain fast time response and good thermal cycling performance is provided. Slots are tooled or machined into the sheath wall at the measuring junction, and the thermocouple wires are laser-welded into the slots. A thin metal closure cap is then laser-welded over the end of the sheath. Compared to a conventional grounded-junction thermocouple, the response time is 4 to 5 times faster and the thermal shock and cycling capabilities are substantially improved.
Internal dosimetry monitoring equipment: Present and future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Selby, J.; Carbaugh, E.H.; Lynch, T.P.
1993-09-01
We have attempted to characterize the current and future status of in vivo and in vitro measurement programs, coupled with the associated radioanalytical methods and workplace monitoring. Developments in these areas must be carefully integrated by internal dosimetrists, radiochemists, and field health physicists. Their goal should be uniform improvement, rather than a focus on one specific area (e.g., dose modeling) to the neglect of other areas where the measurement capabilities are substantially less sophisticated and, therefore, the potential source of error is greatest.
[E-learning and the continuing professional development in medicine].
De Fiore, Luca
2010-06-01
E-learning is widely used in continuing medical education, but three main problems still face health decision makers: the substantial heterogeneity among the characteristics of web-based educational projects; concerns about e-learning effectiveness; and the variety of outcomes used to evaluate that effectiveness. Systematic reviews suggest e-learning has effectiveness similar to traditional educational methods. Attention should now be given to how and when e-learning can be used to improve health workers' performance and deliver better healthcare.
Method of improving BeO as a thermoluminescent detector
Gammage, Richard B.; Thorngate, John H.; Christian, Danny J.
1980-01-01
Measurements of radiation exposure below 1 mR are possible with a BeO ceramic thermoluminescent detector (TLD) by treating the TL signal in a manner that discriminates against an interfering pyroelectric incandescence (PI). This is accomplished by differentiating the signals electronically to cause the composite signal to cross the baseline. A zero-crossing detector then senses and clips the negative-going portion of the signal. The resultant signal is integrated, producing a result wherein the true TL signal is substantially greater than the PI signal.
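A digital analogue of the signal chain described in this abstract (differentiate, clip the negative-going lobe at the zero crossing, integrate) can be sketched as follows; this is an illustrative reconstruction, not the original electronics, and the pulse values are invented.

```python
import numpy as np

def discriminated_readout(signal, dt=1.0):
    """Differentiate the composite signal, let the zero-crossing detector
    clip the negative-going portion, then integrate the result."""
    deriv = np.gradient(signal, dt)       # electronic differentiation
    clipped = np.clip(deriv, 0.0, None)   # clip the negative-going lobe
    return clipped.sum() * dt             # rectangle-rule integration

# toy composite pulse (rise then fall); values are illustrative only
pulse = np.array([0.0, 1.0, 3.0, 4.0, 3.0, 1.0, 0.0])
print(discriminated_readout(pulse))  # → 4.0
```

Because only the positive-going (rising) part of the signal survives, a slowly varying PI background contributes far less to the integral than the sharper TL peak.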
Process for preparing metal-carbide-containing microspheres from metal-loaded resin beads
Beatty, Ronald L.
1976-01-01
An improved method for treating metal-loaded resin microspheres is described, which comprises heating a metal-loaded resin charge in an inert atmosphere at a pre-carbide-forming temperature under such conditions as to produce a microsphere composition having sufficient carbon to create a substantially continuous carbon matrix and a metal-carbide or an oxide-carbide mixture as a dispersed phase(s) during carbide-forming conditions, and then heating the thus-treated charge to a carbide-forming temperature.
1983-05-01
occur. 4) It is also true that during a given time period, at a given base, not all of the people in the sample will actually be available for testing...taken sample sizes into consideration, we currently estimate that with few exceptions, we will have adequate samples to perform the analysis of simple ...Balanced Half Sample Replications (BHSA). His analyses of simple cases have shown that this method is substantially more efficient than the
Side-welded fast response sheathed thermocouple
Carr, Kenneth R.
1981-01-01
A method of fabricating the measuring junction of a grounded-junction sheathed thermocouple to obtain fast time response and good thermal cycling performance is provided. Slots are tooled or machined into the sheath wall at the measuring junction, and the thermocouple wires are laser-welded into the slots. A thin metal closure cap is then laser-welded over the end of the sheath. Compared to a conventional grounded-junction thermocouple, the response time is 4-5 times faster and the thermal shock and cycling capabilities are substantially improved.
Components, Assembly and Electrochemical Properties of Three-Dimensional Battery Architectures
2016-03-01
batteries is directed at our project on 3-D lithium-ion batteries where improvements in materials and fabrication methods are expected to facilitate...reporting period, we focused on new materials and electrode array fabrication processes for 3-D lithium-ion batteries and made substantial progress. In...to facilitate the assembly of a full 3-D lithium-ion battery system. [Process schematic: (a) pattern silicon dioxide etch mask; (b) DRIE etch silicon posts; (c) ...]
Polynuclear aromatic hydrocarbons for fullerene synthesis in flames
Alford, J. Michael; Diener, Michael D.
2006-12-19
This invention provides improved methods for combustion synthesis of carbon nanomaterials, including fullerenes, employing multiple-ring aromatic hydrocarbon fuels selected for high carbon conversion to extractable fullerenes. The multiple-ring aromatic hydrocarbon fuels include those that contain polynuclear aromatic hydrocarbons. More specifically, multiple-ring aromatic hydrocarbon fuels contain a substantial amount of indene, methylnaphthalenes, or mixtures thereof. Coal tar and petroleum distillate fractions provide low-cost hydrocarbon fuels containing polynuclear aromatic hydrocarbons, including without limitation, indene, methylnaphthalenes, or mixtures thereof.
Global Monitoring of Water Supply and Sanitation: History, Methods and Future Challenges
Bartram, Jamie; Brocklehurst, Clarissa; Fisher, Michael B.; Luyendijk, Rolf; Hossain, Rifat; Wardlaw, Tessa; Gordon, Bruce
2014-01-01
International monitoring of drinking water and sanitation shapes awareness of countries’ needs and informs policy, implementation and research efforts to extend and improve services. The Millennium Development Goals established global targets for drinking water and sanitation access; progress towards these targets, facilitated by international monitoring, has contributed to reducing the global disease burden and increasing quality of life. The experiences of the MDG period generated important lessons about the strengths and limitations of current approaches to defining and monitoring access to drinking water and sanitation. The methods by which the Joint Monitoring Programme (JMP) of WHO and UNICEF tracks access and progress are based on analysis of data from household surveys and linear regression modelling of these results over time. These methods provide nationally-representative and internationally-comparable insights into the drinking water and sanitation facilities used by populations worldwide, but also have substantial limitations: current methods do not address water quality, equity of access, or extra-household services. Improved statistical methods are needed to better model temporal trends. This article describes and critically reviews JMP methods in detail for the first time. It also explores the impact of, and future directions for, international monitoring of drinking water and sanitation. PMID:25116635
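The JMP trend estimation described above (linear regression of household-survey coverage points over time) can be illustrated with a minimal sketch; the survey values below are invented, and the clamping to [0, 100] is an assumption for plausibility, not a documented JMP rule.

```python
import numpy as np

def jmp_style_estimate(years, coverage, target_year):
    """Fit a least-squares line through household-survey coverage points
    and read off the estimate for the target year (clamped to [0, 100])."""
    slope, intercept = np.polyfit(years, coverage, 1)
    return float(np.clip(slope * target_year + intercept, 0.0, 100.0))

# illustrative survey points: percent of population using an improved source
years = np.array([2000, 2005, 2010])
cov = np.array([70.0, 75.0, 80.0])
estimate_2012 = jmp_style_estimate(years, cov, 2012)
```

The article's point about needing better statistical methods is visible even here: a straight line extrapolates past 100% coverage unless artificially clamped, which is one reason improved temporal models are sought.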
Reinforcement learning algorithms for robotic navigation in dynamic environments.
Yen, Gary G; Hickey, Travis W
2004-04-01
The purpose of this study was to examine improvements to reinforcement learning (RL) algorithms in order to successfully interact within dynamic environments. The scope of the research was that of RL algorithms as applied to robotic navigation. Proposed improvements include: addition of a forgetting mechanism, use of feature based state inputs, and hierarchical structuring of an RL agent. Simulations were performed to evaluate the individual merits and flaws of each proposal, to compare proposed methods to prior established methods, and to compare proposed methods to theoretically optimal solutions. Incorporation of a forgetting mechanism did considerably improve the learning times of RL agents in a dynamic environment. However, direct implementation of a feature-based RL agent did not result in any performance enhancements, as pure feature-based navigation results in a lack of positional awareness, and the inability of the agent to determine the location of the goal state. Inclusion of a hierarchical structure in an RL agent resulted in significantly improved performance, specifically when one layer of the hierarchy included a feature-based agent for obstacle avoidance, and a standard RL agent for global navigation. In summary, the inclusion of a forgetting mechanism, and the use of a hierarchically structured RL agent offer substantially increased performance when compared to traditional RL agents navigating in a dynamic environment.
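As a hedged illustration of the forgetting-mechanism idea (our interpretation, not the authors' implementation), the sketch below pairs a standard tabular Q-learning update with a step that decays all Q-values toward a neutral prior, so estimates learned before an environment change gradually fade; the decay rate and prior are invented parameters.

```python
def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """Standard tabular Q-learning update."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

def forget(Q, rate=0.1, prior=0.0):
    """Forgetting step: decay every value slightly toward the prior,
    so stale estimates lose influence in a dynamic environment."""
    for s in Q:
        for a in Q[s]:
            Q[s][a] += rate * (prior - Q[s][a])

# toy table with two states and one action each (values are illustrative)
Q = {"s": {"a": 0.0}, "t": {"a": 4.0}}
q_update(Q, "s", "a", 1.0, "t")   # Q["s"]["a"] becomes 0.5 * (1 + 0.9 * 4) = 2.3
forget(Q)                          # all values shrink 10% toward 0
```

Without the forgetting step, an agent in a changed environment must unlearn each stale value through explicit negative experience, which is what slows traditional RL agents down in the dynamic settings the study examines.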
Hidden Markov induced Dynamic Bayesian Network for recovering time evolving gene regulatory networks
NASA Astrophysics Data System (ADS)
Zhu, Shijia; Wang, Yadong
2015-12-01
Dynamic Bayesian Networks (DBN) have been widely used to recover gene regulatory relationships from time-series data in computational systems biology. Their standard assumption is 'stationarity', and therefore, several research efforts have been recently proposed to relax this restriction. However, those methods suffer from three challenges: long running time, low accuracy and reliance on parameter settings. To address these problems, we propose a novel non-stationary DBN model by extending each hidden node of a Hidden Markov Model into a DBN (called HMDBN), which properly handles the underlying time-evolving networks. Correspondingly, an improved structural EM algorithm is proposed to learn the HMDBN. It dramatically reduces the search space, thereby substantially improving computational efficiency. Additionally, we derived a novel generalized Bayesian Information Criterion under the non-stationary assumption (called BWBIC), which can help significantly improve the reconstruction accuracy and largely reduce over-fitting. Moreover, the re-estimation formulas for all parameters of our model are derived, enabling us to avoid reliance on parameter settings. Compared to the state-of-the-art methods, the experimental evaluation of our proposed method on both synthetic and real biological data demonstrates more stably high prediction accuracy and significantly improved computation efficiency, even with no prior knowledge and parameter settings.
Measuring the scale dependence of intrinsic alignments using multiple shear estimates
NASA Astrophysics Data System (ADS)
Leonard, C. Danielle; Mandelbaum, Rachel
2018-06-01
We present a new method for measuring the scale dependence of the intrinsic alignment (IA) contamination to the galaxy-galaxy lensing signal, which takes advantage of multiple shear estimation methods applied to the same source galaxy sample. By exploiting the resulting correlation of both shape noise and cosmic variance, our method can provide an increase in the signal-to-noise of the measured IA signal as compared to methods which rely on the difference of the lensing signal from multiple photometric redshift bins. For a galaxy-galaxy lensing measurement which uses LSST sources and DESI lenses, the signal-to-noise on the IA signal from our method is predicted to improve by a factor of ˜2 relative to the method of Blazek et al. (2012), for pairs of shear estimates which yield substantially different measured IA amplitudes and highly correlated shape noise terms. We show that statistical error necessarily dominates the measurement of intrinsic alignments using our method. We also consider a physically motivated extension of the Blazek et al. (2012) method which assumes that all nearby galaxy pairs, rather than only excess pairs, are subject to IA. In this case, the signal-to-noise of the method of Blazek et al. (2012) is improved.
Parametric State Space Structuring
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Tilgner, Marco
1997-01-01
Structured approaches based on Kronecker operators for the description and solution of the infinitesimal generator of continuous-time Markov chains are receiving increasing interest. However, their main advantage, a substantial reduction in the memory requirements during the numerical solution, comes at a price. Methods based on the "potential state space" allocate a probability vector that might be much larger than actually needed. Methods based on the "actual state space", instead, have an additional logarithmic overhead. We present an approach that realizes the advantages of both methods with none of their disadvantages, by partitioning the local state spaces of each submodel. We apply our results to a model of software rendezvous, and show how they reduce memory requirements while, at the same time, improving the efficiency of the computation.
Collender, Philip A.; Kirby, Amy E.; Addiss, David G.; Freeman, Matthew C.; Remais, Justin V.
2015-01-01
Limiting the environmental transmission of soil-transmitted helminths (STH), which infect 1.5 billion people worldwide, will require sensitive, reliable, and cost effective methods to detect and quantify STH in the environment. We review the state of the art of STH quantification in soil, biosolids, water, produce, and vegetation with respect to four major methodological issues: environmental sampling; recovery of STH from environmental matrices; quantification of recovered STH; and viability assessment of STH ova. We conclude that methods for sampling and recovering STH require substantial advances to provide reliable measurements for STH control. Recent innovations in the use of automated image identification and developments in molecular genetic assays offer considerable promise for improving quantification and viability assessment. PMID:26440788
Efficient alignment-free DNA barcode analytics
Kuksa, Pavel; Pavlovic, Vladimir
2009-01-01
Background In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. Results New alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Conclusion Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding. PMID:19900305
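A minimal sketch of the spectrum (fixed-length k-mer count) representation the abstract refers to, with a cosine similarity between spectra; the sequences are toy examples, and the authors' actual feature kernels may differ.

```python
from collections import Counter
import math

def spectrum(seq, k=3):
    """Fixed-length k-mer count representation of a barcode sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_similarity(a, b):
    """Alignment-free similarity between two k-mer spectra."""
    dot = sum(a[f] * b[f] for f in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

s1 = spectrum("ACGTACGT")
s2 = spectrum("ACGTACGA")   # one substitution at the 3' end
print(round(cosine_similarity(s1, s1), 3))  # identical sequences → 1.0
```

Because no alignment is computed, comparing two sequences costs time linear in their lengths, which is the source of the speed improvements the abstract reports.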
Recent advances in the microbiological diagnosis of bloodstream infections.
Florio, Walter; Morici, Paola; Ghelardi, Emilia; Barnini, Simona; Lupetti, Antonella
2018-05-01
Rapid identification (ID) and antimicrobial susceptibility testing (AST) of the causative agent(s) of bloodstream infections (BSIs) are essential for the prompt administration of an effective antimicrobial therapy, which can result in clinical and financial benefits. Immediately after blood sampling, empirical antimicrobial therapy, chosen on clinical and epidemiological data, is administered. When ID and AST results are available, the clinician decides whether to continue or streamline the antimicrobial therapy, based on the results of the in vitro antimicrobial susceptibility profile of the pathogen. The aim of the present study is to review and discuss the experimental data, advantages, and drawbacks of recently developed technological advances of culture-based and molecular methods for the diagnosis of BSI (including mass spectrometry, magnetic resonance, PCR-based methods, direct inoculation methods, and peptide nucleic acid fluorescence in situ hybridization), the understanding of which could provide new perspectives to improve and speed up the diagnosis and treatment of septic patients. Although blood culture remains the gold standard to diagnose BSIs, newly developed methods can significantly shorten the turnaround time of reliable microbial ID and AST, thus substantially improving the diagnostic yield.
Coupled forward-backward trajectory approach for nonequilibrium electron-ion dynamics
NASA Astrophysics Data System (ADS)
Sato, Shunsuke A.; Kelly, Aaron; Rubio, Angel
2018-04-01
We introduce a simple ansatz for the wave function of a many-body system based on coupled forward and backward propagating semiclassical trajectories. This method is primarily aimed at, but not limited to, treating nonequilibrium dynamics in electron-phonon systems. The time evolution of the system is obtained from the Euler-Lagrange variational principle, and we show that this ansatz yields Ehrenfest mean-field theory in the limit that the forward and backward trajectories are orthogonal, and in the limit that they coalesce. We investigate accuracy and performance of this method by simulating electronic relaxation in the spin-boson model and the Holstein model. Although this method involves only pairs of semiclassical trajectories, it shows a substantial improvement over mean-field theory, capturing quantum coherence of nuclear dynamics as well as electron-nuclear correlations. This improvement is particularly evident in nonadiabatic systems, where the accuracy of this coupled trajectory method extends well beyond the perturbative electron-phonon coupling regime. This approach thus provides an attractive route forward to the ab initio description of relaxation processes, such as thermalization, in condensed phase systems.
Gowda, Charitha; Dong, Shiming; Potter, Rachel C; Dombkowski, Kevin J; Stokley, Shannon; Dempsey, Amanda F
2013-01-01
Immunization information systems (IISs) are valuable surveillance tools; however, population relocation may introduce bias when determining immunization coverage. We explored alternative methods for estimating the vaccine-eligible population when calculating adolescent immunization levels using a statewide IIS. We performed a retrospective analysis of the Michigan State Care Improvement Registry (MCIR) for all adolescents aged 11-18 years registered in the MCIR as of October 2010. We explored four methods for determining denominators: (1) including all adolescents with MCIR records, (2) excluding adolescents with out-of-state residence, (3) further excluding those without MCIR activity ≥ 10 years prior to the evaluation date, and (4) using a denominator based on U.S. Census data. We estimated state- and county-specific coverage levels for four adolescent vaccines. We found a 20% difference in estimated vaccination coverage between the most inclusive and restrictive denominator populations. Although there was some variability among the four methods in vaccination at the state level (2%-11%), greater variation occurred at the county level (up to 21%). This variation was substantial enough to potentially impact public health assessments of immunization programs. Generally, vaccines with higher coverage levels had greater absolute variation, as did counties with smaller populations. At the county level, using the four denominator calculation methods resulted in substantial differences in estimated adolescent immunization rates that were less apparent when aggregated at the state level. Further research is needed to ascertain the most appropriate method for estimating vaccine coverage levels using IIS data.
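The denominator sensitivity the study reports is easy to see in a toy calculation; all counts below are invented, and the four denominators only mimic the paper's four estimation methods.

```python
def coverage(vaccinated, denominator):
    """Estimated immunization coverage (%) for a given denominator."""
    return 100.0 * vaccinated / denominator

# illustrative counts for one hypothetical county
vaccinated      = 6_000
all_records     = 10_000  # method 1: every adolescent with an IIS record
in_state        = 9_000   # method 2: out-of-state residents excluded
recently_active = 8_000   # method 3: also requires recent IIS activity
census_estimate = 8_500   # method 4: U.S. Census-based denominator

for d in (all_records, in_state, recently_active, census_estimate):
    print(round(coverage(vaccinated, d), 1))   # 60.0, 66.7, 75.0, 70.6
```

With the numerator fixed, the choice of denominator alone moves the estimate by 15 percentage points in this toy county, mirroring the up-to-21% county-level variation the study found.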
Research and development for improved lead-salt diode lasers
NASA Technical Reports Server (NTRS)
Butler, J. F.
1976-01-01
A substantial increase in output power levels for lead-salt diode lasers, through the development of improved fabrication methods, was demonstrated. The goal of 1 mW of CW, single-mode, single-ended power output was achieved, with exceptional devices exhibiting values greater than 8 mW. It was found that the current tuning rate could be controlled by adjusting the p-n junction depth, allowing the tuning rate to be optimized for particular applications. An unexpected phenomenon was encountered when crystal composition was observed to be significantly altered by annealing at temperatures as low as 600 C; the composition was changed by transport of material through the vapor phase. This effect caused problems in obtaining diode lasers with the desired operating characteristics. It was discovered that the present packaging method introduces gross damaging effects in the laser crystal through pressure applied by the C-bend.
Van Oudenhove, Laurence; Devreese, Bart
2013-06-01
Proteomics has evolved substantially since its early days, some 20 years ago. In this mini-review, we aim to provide an overview of general methodologies and more recent developments in mass spectrometric approaches used for relative and absolute quantitation of proteins. Enhancement of sensitivity of the mass spectrometers as well as improved sample preparation and protein fractionation methods are resulting in a more comprehensive analysis of proteomes. We also document some upcoming trends for quantitative proteomics such as the use of label-free quantification methods. Hopefully, microbiologists will continue to explore proteomics as a tool in their research to understand the adaptation of microorganisms to their ever changing environment. We encourage them to incorporate some of the described new developments in mass spectrometry to facilitate their analyses and improve the general knowledge of the fascinating world of microorganisms.
NASA Technical Reports Server (NTRS)
Zong, Jin-Ho; Szekely, Julian; Schwartz, Elliot
1992-01-01
An improved computational technique for calculating the electromagnetic force field, the power absorption and the deformation of an electromagnetically levitated metal sample is described. The technique is based on the volume integral method, but represents a substantial refinement; the coordinate transformation employed allows the efficient treatment of a broad class of rotationally symmetrical bodies. Computed results are presented to represent the behavior of levitation-melted metal samples in a multi-coil, multi-frequency levitation unit to be used in microgravity experiments. The theoretical predictions are compared with both analytical solutions and with the results of previous computational efforts for spherical samples, and the agreement has been very good. The treatment of problems involving deformed surfaces, and the prediction of the deformed shape of the specimens, breaks new ground and should be the principal usefulness of the proposed method.
Application of hands-on simulation games to improve classroom experience
NASA Astrophysics Data System (ADS)
Hamzeh, Farook; Theokaris, Christina; Rouhana, Carel; Abbas, Yara
2017-09-01
While many construction companies claim substantial productivity and profit gains when applying lean construction principles, it remains a challenge to teach these principles in a classroom. Lean construction emphasises collaborative processes and integrated delivery practices. Consequently, new teaching methods that nurture such values should form the basis of lean construction education. One of the proposed methods is 'hands-on team simulation games', which can be employed to replicate various real-life processes, projects, or systems for the purpose of teaching, analysing, and understanding. This study aims at assessing these simulation games and understanding their impact on students' learning and satisfaction. Surveys and tests are administered to assess changes in students' perception of their learning styles and their understanding of key lean construction concepts. Results show a positive student reaction to hands-on simulation games, provide pedagogical insights, and highlight suggestions for improvement.
Accelerated simulation of stochastic particle removal processes in particle-resolved aerosol models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, J.H.; Michelotti, M.D.; Riemer, N.
2016-10-01
Stochastic particle-resolved methods have proven useful for simulating multi-dimensional systems such as composition-resolved aerosol size distributions. While particle-resolved methods have substantial benefits for highly detailed simulations, these techniques suffer from high computational cost, motivating efforts to improve their algorithmic efficiency. Here we formulate an algorithm for accelerating particle removal processes by aggregating particles of similar size into bins. We present the Binned Algorithm for particle removal processes and analyze its performance with application to the atmospherically relevant process of aerosol dry deposition. We show that the Binned Algorithm can dramatically improve the efficiency of particle removals, particularly for low removal rates, and that computational cost is reduced without introducing additional error. In simulations of aerosol particle removal by dry deposition in atmospherically relevant conditions, we demonstrate a roughly 50-fold increase in algorithm efficiency.
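A hedged sketch of the binning idea (our reading of a binned removal step, not the published code): particles are grouped into size bins, each bin uses its maximum removal rate as a majorant, and candidate removals are thinned by the true per-particle rate, so the binning changes cost, not statistics.

```python
import random

def binned_removal(particle_sizes, rate_fn, dt, n_bins=4, rng=random):
    """One stochastic removal step over binned particles.

    rate_fn(d) gives the removal rate for a particle of size d; within each
    bin, candidates drawn under the bin's maximum rate are accepted or
    rejected by their true rate (rejection/thinning), so no bias is added.
    """
    lo, hi = min(particle_sizes), max(particle_sizes)
    width = (hi - lo) / n_bins or 1.0
    bins = [[] for _ in range(n_bins)]
    for d in particle_sizes:
        i = min(int((d - lo) / width), n_bins - 1)
        bins[i].append(d)
    survivors = []
    for b in bins:
        if not b:
            continue
        r_max = max(rate_fn(d) for d in b)   # bin majorant rate
        for d in b:
            # net removal probability is rate_fn(d) * dt, thinned via r_max
            if rng.random() < r_max * dt and rng.random() < rate_fn(d) / r_max:
                continue                      # particle removed
            survivors.append(d)
    return survivors
```

The efficiency gain comes from working bin by bin: when removal rates are low, most bins can be skipped or sampled cheaply instead of testing every particle against its individual rate.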
Hartman, Joshua D; Balaji, Ashwin; Beran, Gregory J O
2017-12-12
Fragment-based methods predict nuclear magnetic resonance (NMR) chemical shielding tensors in molecular crystals with high accuracy and computational efficiency. Such methods typically employ electrostatic embedding to mimic the crystalline environment, and the quality of the results can be sensitive to the embedding treatment. To improve the quality of this embedding environment for fragment-based molecular crystal property calculations, we borrow ideas from the embedded ion method to incorporate self-consistently polarized Madelung field effects. The self-consistent reproduction of the Madelung potential (SCRMP) model developed here constructs an array of point charges that incorporates self-consistent lattice polarization and which reproduces the Madelung potential at all atomic sites involved in the quantum mechanical region of the system. The performance of fragment- and cluster-based 1H, 13C, 14N, and 17O chemical shift predictions using SCRMP and density functionals like PBE and PBE0 is assessed. The improved embedding model results in substantial improvements in the predicted 17O chemical shifts and modest improvements in the 15N ones. Finally, the performance of the model is demonstrated by examining the assignment of the two oxygen chemical shifts in the challenging γ-polymorph of glycine. Overall, the SCRMP-embedded NMR chemical shift predictions are on par with or more accurate than those obtained with the widely used gauge-including projector augmented wave (GIPAW) model.
Shields, T P; Mollova, E; Ste Marie, L; Hansen, M R; Pardi, A
1999-01-01
An improved method is presented for the preparation of milligram quantities of homogenous-length RNAs suitable for nuclear magnetic resonance or X-ray crystallographic structural studies. Heterogeneous-length RNA transcripts are processed with a hammerhead ribozyme to yield homogenous-length products that are then readily purified by anion exchange high-performance liquid chromatography. This procedure eliminates the need for denaturing polyacrylamide gel electrophoresis, which is the most laborious step in the standard procedure for large-scale production of RNA by in vitro transcription. The hammerhead processing of the heterogeneous-length RNA transcripts also substantially improves the overall yield and purity of the desired RNA product. PMID:10496226
Improving KPCA Online Extraction by Orthonormalization in the Feature Space.
Souza Filho, Joao B O; Diniz, Paulo S R
2018-04-01
Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use in large data sets, defining kernel components using concise dictionaries automatically extracted from data. This brief proposes two new online KPCA extraction algorithms, exploiting orthogonalized versions of the GHA rule. In both cases, the orthogonalization of kernel components is achieved by the inclusion of some low-complexity additional steps in the kernel Hebbian algorithm, thus not substantially affecting the computational cost of the algorithm. Results show improved convergence speed and accuracy of components extracted by the proposed methods, as compared with the state-of-the-art online KPCA extraction algorithms.
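A hedged, linear-space sketch of the generalized Hebbian algorithm (Sanger's rule) with an extra orthonormalization step; the kernel versions discussed in the abstract apply the same rule to feature-space expansions over a dictionary, which this toy omits, and the data distribution below is invented.

```python
import numpy as np

def gha_step(W, x, lr=0.01):
    """One generalized Hebbian (Sanger's rule) update for linear PCA:
    rows of W converge toward the leading principal directions."""
    y = W @ x
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

def orthonormalize(W):
    """The low-cost extra step: re-orthonormalize the component rows."""
    q, _ = np.linalg.qr(W.T)
    return q.T[: W.shape[0]]

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 5)) * 0.1
for _ in range(200):
    # synthetic stream with variance concentrated in the first coordinates
    x = rng.standard_normal(5) * np.array([3.0, 1.0, 0.3, 0.3, 0.3])
    gha_step(W, x)
W = orthonormalize(W)  # rows are now exactly orthonormal
```

Running orthonormalization alongside the Hebbian updates keeps the extracted components from drifting toward one another, which is the convergence benefit the brief reports.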
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2016-01-01
A review of two papers on improving the International Space Station (ISS) Oxygen Generation Assembly (OGA) shows that it would not save substantial mass on a Mars transit. The ISS OGA requires redesign for satisfactory operation, even for the ISS. The planned improvements of the OGA for ISS would not be sufficient to make it suitable for Mars, because Mars transit life support has significantly different requirements than ISS. The OGA for Mars should have lower mass, better reliability and maintainability, greater safety, radiation hardening, and capability for quiescent operation. NASA's methodical, disciplined systems engineering process should be used to develop the appropriate system.
Composite material reinforced with atomized quasicrystalline particles and method of making same
Biner, Suleyman B.; Sordelet, Daniel J.; Lograsso, Barbara K.; Anderson, Iver E.
1998-12-22
A composite material comprises an aluminum or aluminum alloy matrix having generally spherical, atomized quasicrystalline aluminum-transition metal alloy reinforcement particles disposed in the matrix to improve mechanical properties. A composite article can be made by consolidating generally spherical, atomized quasicrystalline aluminum-transition metal alloy particles and aluminum or aluminum alloy particles to form a body that is cold and/or hot reduced to form composite products, such as composite plate or sheet, with interfacial bonding between the quasicrystalline particles and the aluminum or aluminum alloy matrix without damage (e.g. cracking or shape change) of the reinforcement particles. The cold and/or hot worked composite exhibits substantially improved yield strength, tensile strength, and Young's modulus (stiffness).
Bradley, Elizabeth H; Brewster, Amanda L; McNatt, Zahirah; Linnander, Erika L; Cherlin, Emily; Fosburgh, Heather; Ting, Henry H; Curry, Leslie A
2018-03-01
Quality collaboratives are widely endorsed as a potentially effective method for translating and spreading best practices for acute myocardial infarction (AMI) care. Nevertheless, hospital success in improving performance through participation in collaboratives varies markedly. We sought to understand what distinguished hospitals that succeeded in shifting culture and reducing 30-day risk-standardised mortality rate (RSMR) after AMI through their participation in the Leadership Saves Lives (LSL) collaborative. We conducted a longitudinal, mixed methods intervention study of 10 hospitals over a 2-year period; data included surveys of 223 individuals (response rates 83%-94% depending on wave) and 393 in-depth interviews with clinical and management staff most engaged with the LSL intervention in the 10 hospitals. We measured change in culture and RSMR, and key aspects of working related to team membership, turnover, level of participation and approaches to conflict management. The six hospitals that experienced substantial culture change and greater reductions in RSMR demonstrated distinctions in: (1) effective inclusion of staff from different disciplines and levels in the organisational hierarchy in the team guiding improvement efforts (referred to as the 'guiding coalition' in each hospital); (2) authentic participation in the work of the guiding coalition; and (3) distinct patterns of managing conflict. Guiding coalition size and turnover were not associated with success (p values > 0.05). In the six hospitals that experienced substantial positive culture change, staff indicated that the LSL learnings were already being applied to other improvement efforts. Hospitals that were most successful in a national quality collaborative to shift hospital culture and reduce RSMR showed distinct patterns in membership diversity, authentic participation and capacity for conflict management.
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Daniels, Noah M; Hosur, Raghavendra; Berger, Bonnie; Cowen, Lenore J
2012-05-01
One of the most successful methods to date for recognizing protein sequences that are evolutionarily related has been profile hidden Markov models (HMMs). However, these models do not capture pairwise statistical preferences of residues that are hydrogen bonded in beta sheets. These dependencies have been partially captured in the HMM setting by simulated evolution in the training phase and can be fully captured by Markov random fields (MRFs). However, the MRFs can be computationally prohibitive when beta strands are interleaved in complex topologies. We introduce SMURFLite, a method that combines both simplified MRFs and simulated evolution to substantially improve remote homology detection for beta structures. Unlike previous MRF-based methods, SMURFLite is computationally feasible on any beta-structural motif. We test SMURFLite on all propeller and barrel folds in the mainly-beta class of the SCOP hierarchy in stringent cross-validation experiments. We show a mean 26% (median 16%) improvement in area under curve (AUC) for beta-structural motif recognition as compared with HMMER (a well-known HMM method), a mean 33% (median 19%) improvement as compared with RAPTOR (a well-known threading method), and even a mean 18% (median 10%) improvement in AUC over HHpred (a profile-profile HMM method), despite HHpred's use of extensive additional training data. We demonstrate SMURFLite's ability to scale to whole genomes by running a SMURFLite library of 207 beta-structural SCOP superfamilies against the entire genome of Thermotoga maritima, making over 100 new fold predictions. Availability and implementation: A webserver that runs SMURFLite is available at: http://smurf.cs.tufts.edu/smurflite/
Apparatuses and methods for generating electric fields
Scott, Jill R; McJunkin, Timothy R; Tremblay, Paul L
2013-08-06
Apparatuses and methods relating to generating an electric field are disclosed. An electric field generator may include a semiconductive material configured in a physical shape substantially different from a shape of an electric field to be generated thereby. The electric field is generated when a voltage drop exists across the semiconductive material. A method for generating an electric field may include applying a voltage to a shaped semiconductive material to generate a complex, substantially nonlinear electric field. The shape of the complex, substantially nonlinear electric field may be configured for directing charged particles to a desired location. Other apparatuses and methods are disclosed.
NASTRAN internal improvements for 1992 release
NASA Technical Reports Server (NTRS)
Chan, Gordon C.
1992-01-01
The 1992 NASTRAN release incorporates a number of improvements transparent to users. The NASTRAN executable was made 70 percent smaller on RISC-based UNIX machines by linking NASTRAN into a single program, freeing some 33 megabytes of system disk space that NASTRAN can use for solving larger problems. Some basic matrix operations, such as forward-backward substitution (FBS), multiply-add (MPYAD), matrix transpose, and the fast eigensolution extraction routine (FEER), have been made more efficient by including new methods, new logic, new I/O techniques, and, in some cases, new subroutines. Some of the improvements lay the groundwork for system vectorization. These basic operations are used repeatedly in a finite element program such as NASTRAN, so any improvement to them translates into substantial cost and CPU time savings. NASTRAN performance on various computer platforms is also discussed.
An Improved Rank Correlation Effect Size Statistic for Single-Case Designs: Baseline Corrected Tau.
Tarlow, Kevin R
2017-07-01
Measuring treatment effects when an individual's pretreatment performance is improving poses a challenge for single-case experimental designs. It may be difficult to determine whether improvement is due to the treatment or to the preexisting baseline trend. Tau-U is a popular single-case effect size statistic that purports to control for baseline trend. However, despite its strengths, Tau-U has substantial limitations: its values are inflated and not bound between -1 and +1, it cannot be visually graphed, and its relatively weak method of trend control leads to unacceptable levels of Type I error wherein ineffective treatments appear effective. An improved effect size statistic based on rank correlation and robust regression, Baseline Corrected Tau, is proposed and field-tested with both published and simulated single-case time series. A web-based calculator for Baseline Corrected Tau is also introduced for use by single-case investigators.
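The statistic described above can be sketched in plain Python: fit a robust Theil-Sen trend to the baseline phase only, subtract that fitted trend from every session, then take Kendall's rank correlation between a dummy-coded phase variable and the corrected scores. This is a hedged reconstruction of the general approach, not the author's published code; the helper names and the tau-b tie handling are choices of this sketch.

```python
import math

def _median(v):
    s = sorted(v)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def theil_sen(t, y):
    """Robust baseline trend: median pairwise slope, median intercept."""
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i in range(len(t)) for j in range(i + 1, len(t))]
    m = _median(slopes)
    b = _median([yi - m * ti for ti, yi in zip(t, y)])
    return m, b

def kendall_tau_b(x, y):
    """Kendall's tau-b (handles the heavy ties a 0/1 phase code creates)."""
    n = len(x)
    conc = disc = ties_x = ties_y = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0:
                ties_x += 1
            if dy == 0:
                ties_y += 1
            if dx != 0 and dy != 0:
                if dx * dy > 0:
                    conc += 1
                else:
                    disc += 1
    n0 = n * (n - 1) / 2
    return (conc - disc) / math.sqrt((n0 - ties_x) * (n0 - ties_y))

def baseline_corrected_tau(t_base, y_base, t_trt, y_trt):
    """Detrend ALL sessions by the baseline-phase Theil-Sen fit, then
    correlate phase (0 = baseline, 1 = treatment) with corrected scores."""
    m, b = theil_sen(t_base, y_base)
    t_all = list(t_base) + list(t_trt)
    y_all = list(y_base) + list(y_trt)
    corrected = [yi - (m * ti + b) for ti, yi in zip(t_all, y_all)]
    phase = [0] * len(t_base) + [1] * len(t_trt)
    return kendall_tau_b(phase, corrected)
```

On a series where the treatment phase merely continues the baseline trend, the corrected scores carry no phase signal, while a genuine level shift beyond that trend drives the statistic toward ±1.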
Basic research for the geodynamics program
NASA Technical Reports Server (NTRS)
Mueller, Ivan I.
1988-01-01
Additional results are presented concerning a study that considers improvements over present Earth Rotation Parameter (ERP) determination methods by directly combining observations from various space geodetic systems in one adjustment. Earlier results are extended, showing that in addition to slight improvements in accuracy, substantial (a factor of three or more) improvements in precision and significant reductions in correlations between various parameters can be obtained (by combining Lunar Laser Ranging - LLR, Satellite Laser Ranging - SLR to Lageos, and Very Long Baseline Interferometry - VLBI data in one adjustment) as compared to results from individual systems. Smaller improvements are also seen over the weighted means of the individual system results. Although data transmission would not be significantly reduced, negligible additional computer time would be required if (standardized) normal equations were available from individual solutions. Suggestions for future work and implications for the International Earth Rotation Service (IERS) are also presented.
Enhanced Wireless Power Transmission Using Strong Paramagnetic Response.
Ahn, Dukju; Kiani, Mehdi; Ghovanloo, Maysam
2014-03-01
A method of quasi-static magnetic resonant coupling has been presented for improving the power transmission efficiency (PTE) in near-field wireless power transmission, which improves upon the state of the art. The traditional source resonator on the transmitter side is equipped with an additional resonator whose resonance frequency is tuned substantially higher than the magnetic field excitation frequency. This additional resonator enhances the magnetic dipole moment and the effective permeability of the power transmitter, owing to a phenomenon known as the strong paramagnetic response. Both theoretical calculations and experimental results show increased PTE due to amplification of the effective permeability. In measurements, the PTE was improved from 57.8% to 64.2% at the nominal distance of 15 cm when the effective permeability was 2.6. The power delivered to the load was also improved significantly, from 0.38 to 5.26 W with the same 10 V excitation voltage.
Upgrades for the CMS simulation
Lange, D. J.; Hildreth, M.; Ivantchenko, V. N.; ...
2015-05-22
Over the past several years, the CMS experiment has made significant changes to its detector simulation application. The geometry has been generalized to include modifications being made to the CMS detector for 2015 operations, as well as model improvements to the simulation geometry of the current CMS detector and the implementation of a number of approved and possible future detector configurations. These include both completely new tracker and calorimetry systems. We have completed the transition to Geant4 version 10, and we have made significant progress in reducing the CPU resources required to run our Geant4 simulation. These gains have been achieved through both technical improvements and numerical techniques. Substantial speed improvements have been achieved without changing the physics validation benchmarks that the experiment uses to validate the simulation application for use in production. Finally, we discuss the methods that we implemented and the corresponding demonstrated performance improvements deployed for our 2015 simulation application.
Hill, Mary C.
1985-01-01
The purpose of this study was to develop a methodology to be used to investigate the aquifer characteristics and water supply potential of an aquifer system. In particular, the geohydrology of northern Long Valley, New Jersey, was investigated. Geohydrologic data were collected and analyzed to characterize the site. Analysis was accomplished by interpreting the available data and by using a numerical simulation of the water-table aquifer. Special attention was given to the estimation of hydraulic conductivity values and hydraulic conductivity structure, which together define the hydraulic conductivity of the modeled aquifer. Hydraulic conductivity and all other aspects of the system were first estimated using the trial-and-error method of calibration. The estimation of hydraulic conductivity was improved using a least squares method to estimate hydraulic conductivity values and by improvements in the parameter structure. These efforts improved the calibration of the model far more than a preceding period of similar effort using the trial-and-error method of calibration. In addition, the proposed method provides statistical information on the reliability of estimated hydraulic conductivity values, calculated heads, and calculated flows. The methodology developed and applied in this work proved to be of substantial value in the evaluation of the aquifer considered.
Needleman, Jack; Pearson, Marjorie L; Upenieks, Valda V; Yee, Tracy; Wolstein, Joelle; Parkerton, Melissa
2016-02-01
Process improvement stresses the importance of engaging frontline staff in implementing new processes and methods. Yet questions remain on how to incorporate these activities into the workday of hospital staff or how to create and maintain its commitment. In a 15-month American Organization of Nurse Executives collaborative involving frontline medical/surgical staff from 67 hospitals, Transforming Care at the Bedside (TCAB) was evaluated to assess whether participating units successfully implemented recommended change processes, engaged staff, implemented innovations, and generated support from hospital leadership and staff. In a mixed-methods analysis, multiple data sources, including leader surveys, unit staff surveys, administrative data, time study data, and collaborative documents were used. All units reported establishing unit-based teams, of which >90% succeeded in conducting tests of change, with unit staff selecting topics and making decisions on adoption. Fifty-five percent of unit staff reported participating in unit meetings, and 64%, in tests of change. Unit managers reported substantial increase in staff support for the initiative. An average 36 tests of change were conducted per unit, with 46% of tested innovations sustained, and 20% spread to other units. Some 95% of managers and 97% of chief nursing officers believed that the program had made unit staff more likely to initiate change. Among staff, 83% would encourage adoption of the initiative. Given the strong positive assessment of TCAB, evidence of substantial engagement of staff in the work, and the high volume of innovations tested, implemented, and sustained, TCAB appears to be a productive model for organizing and implementing a program of frontline-led improvement.
Practice-Tailored Facilitation to Improve Pediatric Preventive Care Delivery: A Randomized Trial
Schiltz, Nicholas K.; Sattar, Abdus; Stange, Kurt C.; Nevar, Ann H.; Davey, Christina; Ferretti, Gerald A.; Howell, Diana E.; Strosaker, Robyn; Vavrek, Pamela; Bader, Samantha; Ruhe, Mary C.; Cuttler, Leona
2014-01-01
OBJECTIVE: Evolving primary care models require methods to help practices achieve quality standards. This study assessed the effectiveness of a Practice-Tailored Facilitation Intervention for improving delivery of 3 pediatric preventive services. METHODS: In this cluster-randomized trial, a practice facilitator implemented practice-tailored rapid-cycle feedback/change strategies for improving obesity screening/counseling, lead screening, and dental fluoride varnish application. Thirty practices were randomized to Early or Late Intervention, and outcomes assessed for 16 419 well-child visits. A multidisciplinary team characterized facilitation processes by using comparative case study methods. RESULTS: Baseline performance was as follows: for Obesity: 3.5% successful performance in Early and 6.3% in Late practices, P = .74; Lead: 62.2% and 77.8% success, respectively, P = .11; and Fluoride: <0.1% success for all practices. Four months after randomization, performance rose in Early practices, to 82.8% for Obesity, 86.3% for Lead, and 89.1% for Fluoride, all P < .001 for improvement compared with Late practices’ control time. During the full 6-month intervention, care improved versus baseline in all practices, for Obesity for Early practices to 86.5%, and for Late practices 88.9%; for Lead for Early practices to 87.5% and Late practices 94.5%; and for Fluoride, for Early practices to 78.9% and Late practices 81.9%, all P < .001 compared with baseline. Improvements were sustained 2 months after intervention. Successful facilitation involved multidisciplinary support, rapid-cycle problem solving feedback, and ongoing relationship-building, allowing individualizing facilitation approach and intensity based on 3 levels of practice need. CONCLUSIONS: Practice-tailored Facilitation Intervention can lead to substantial, simultaneous, and sustained improvements in 3 domains, and holds promise as a broad-based method to advance pediatric preventive care. PMID:24799539
Strickler, Jeffery C; Lopiano, Kenneth K
2016-11-01
This study profiles an innovative approach to capture patient satisfaction data from emergency department (ED) patients by implementing an electronic survey method. This study compares responders to nonresponders. Our hypothesis is that the cohort of survey respondents will be similar to nonresponders in terms of the key characteristics of age, gender, race, ethnicity, ED disposition, and payor status. This study is a cross-sectional design using secondary data from the database and provides an opportunity for univariate analysis of the key characteristics for each group. The data elements will be abstracted from the database and compared with the same key characteristics from a similar sample from the database on nonresponders to the ED satisfaction survey. Age showed a statistically significant difference between responders and nonresponders. Comparison by disposition status showed no substantial difference between responders and nonresponders. Gender distribution showed a greater number of female than male responders. Race distribution showed a greater number of responses by white and Asian patients as compared with African Americans. A review of ethnicity showed that fewer Hispanic patients responded. An evaluation by payor classification showed a greater number and response rate among those with a commercial or Workers Comp payor source. The response rate by Medicare recipients was stronger than expected; however, the response rate by Medicaid recipients and self-pay patients could be a concern for underrepresentation by lower socioeconomic groups. Finally, the evaluation of the method of notification showed that notification by both e-mail and text substantially improved response rates. The evaluation of key characteristics showed no difference related to disposition, but differences related to age, gender, race, ethnicity, and payor classification. These results point to a potential concern for underrepresentation by lower socioeconomic groups.
Alloy substantially free of dendrites and method of forming the same
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Figueredo, Anacleto M.; Apelian, Diran; Findon, Matt M.
2009-04-07
Described herein are alloys substantially free of dendrites. A method includes forming an alloy substantially free of dendrites. A superheated alloy is cooled to form a nucleated alloy. The temperature of the nucleated alloy is controlled to prevent the nuclei from melting. The nucleated alloy is mixed to distribute the nuclei throughout the alloy. The nucleated alloy is cooled with nuclei distributed throughout.
NASA Technical Reports Server (NTRS)
Ribaya, Bryan P. (Inventor); Nguyen, Cattien V. (Inventor)
2013-01-01
An electron gun, an electron source for an electron gun, an extractor for an electron gun, and a respective method for producing the electron gun, the electron source and the extractor are disclosed. Embodiments provide an electron source utilizing a carbon nanotube (CNT) bonded to a substrate for increased stability, reliability, and durability. An extractor with an aperture in a conductive material is used to extract electrons from the electron source, where the aperture may substantially align with the CNT of the electron source when the extractor and electron source are mated to form the electron gun. The electron source and extractor may have alignment features for aligning the electron source and the extractor, thereby bringing the aperture and CNT into substantial alignment when assembled. The alignment features may provide and maintain this alignment during operation to improve the field emission characteristics and overall system stability of the electron gun.
NASA Technical Reports Server (NTRS)
Nguyen, Cattien V. (Inventor); Ribaya, Bryan P. (Inventor)
2010-01-01
An electron gun, an electron source for an electron gun, an extractor for an electron gun, and a respective method for producing the electron gun, the electron source and the extractor are disclosed. Embodiments provide an electron source utilizing a carbon nanotube (CNT) bonded to a substrate for increased stability, reliability, and durability. An extractor with an aperture in a conductive material is used to extract electrons from the electron source, where the aperture may substantially align with the CNT of the electron source when the extractor and electron source are mated to form the electron gun. The electron source and extractor may have alignment features for aligning the electron source and the extractor, thereby bringing the aperture and CNT into substantial alignment when assembled. The alignment features may provide and maintain this alignment during operation to improve the field emission characteristics and overall system stability of the electron gun.
Robust Combining of Disparate Classifiers Through Order Statistics
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2001-01-01
Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum, and in general, the ith order statistic, are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real world data and standard public domain data sets corroborate these findings.
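To make the idea concrete, here is a minimal sketch of order-statistic output combining, written as a generic illustration rather than the authors' implementation; the function name and interface are this sketch's own. The median combiner shrugs off a single wildly wrong classifier that would dominate a linear average.

```python
import numpy as np

def os_combine(scores, stat="med"):
    """Fuse classifier outputs by an order statistic.
    scores: (n_classifiers, n_classes) array of posterior estimates.
    stat: "med", "max", "min", or an integer i selecting the i-th
    order statistic (0 = smallest) per class.
    Returns the fused per-class scores and the winning class index."""
    if stat == "med":
        fused = np.median(scores, axis=0)
    elif stat == "max":
        fused = np.max(scores, axis=0)
    elif stat == "min":
        fused = np.min(scores, axis=0)
    else:
        fused = np.sort(scores, axis=0)[stat]
    return fused, int(np.argmax(fused))
```

With outputs [[0.8, 0.2], [0.7, 0.3], [0.1, 0.9]], where the third classifier is badly off, the median combiner still selects class 0, whereas the max combiner is pulled to class 1 by the outlier.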
Roach, Jay A [Idaho Falls, ID; Richardson, John G [Idaho Falls, ID; Raivo, Brian D [Idaho Falls, ID; Soelberg, Nicholas R [Idaho Falls, ID
2008-06-17
Apparatus and methods of operation are provided for a cold-crucible-induction melter for vitrifying waste wherein a single induction power supply may be used to effect a selected thermal distribution by independently energizing at least two inductors. Also, a bottom drain assembly may be heated by an inductor and may include an electrically resistive heater. The bottom drain assembly may be cooled to solidify molten material passing therethrough to prevent discharge of molten material therefrom. Configurations are provided wherein the induction flux skin depth substantially corresponds with the central longitudinal axis of the crucible. Further, the drain tube may be positioned within the induction flux skin depth in relation to material within the crucible or may be substantially aligned with a direction of flow of molten material within the crucible. An improved head design including four shells forming thermal radiation shields and at least two gas-cooled plenums is also disclosed.
Operating an induction melter apparatus
Roach, Jay A.; Richardson, John G.; Raivo, Brian D.; Soelberg, Nicholas R.
2006-01-31
Apparatus and methods of operation are provided for a cold-crucible-induction melter for vitrifying waste wherein a single induction power supply may be used to effect a selected thermal distribution by independently energizing at least two inductors. Also, a bottom drain assembly may be heated by an inductor and may include an electrically resistive heater. The bottom drain assembly may be cooled to solidify molten material passing therethrough to prevent discharge of molten material therefrom. Configurations are provided wherein the induction flux skin depth substantially corresponds with the central longitudinal axis of the crucible. Further, the drain tube may be positioned within the induction flux skin depth in relation to material within the crucible or may be substantially aligned with a direction of flow of molten material within the crucible. An improved head design including four shells forming thermal radiation shields and at least two gas-cooled plenums is also disclosed.
Stakeholders' perception of the nutrition and health claim regulation.
de Boer, Alie; Bast, Aalt
2015-05-01
In 2007, the Nutrition and Health Claim Regulation (NHCR) entered into force, requiring scientific substantiation of health claims. In the field of antioxidants, most proposed claims were negatively assessed by the European Food Safety Authority (EFSA). This study reviews the perception of the NHCR of 14 Dutch stakeholders to unravel the grounds for disproving the putative health claims. Most claims are shown to be refused based on the quality of scientific substantiation, due to usage of scientific methods on which no consensus has been reached and the differences in expectations and requirements. Three themes exemplify the need for improvement in applying the NHCR: (i) enforcement; (ii) methodology; and (iii) perceived impact of the NHCR. With highly diverging perceptions of stakeholders, the current effectiveness of the NHCR can be questioned. The views of different stakeholders on these themes help to focus the discussion on the NHCR in capturing health effects.
Improved Method of Purifying Carbon Nanotubes
NASA Technical Reports Server (NTRS)
Delzeit, Lance D.
2004-01-01
An improved method of removing the residues of fabrication from carbon nanotubes has been invented. These residues comprise amorphous carbon and metal particles that are produced during the growth process. Prior methods of removing the residues include a variety of processes that involved the use of halogens, oxygen, or air in both thermal and plasma processes. Each of the prior methods entails one or more disadvantages, including non-selectivity (removal or damage of nanotubes in addition to removal of the residues), the need to dispose of toxic wastes, and/or processing times as long as 24 hours or more. In contrast, the process described here uses no toxic chemicals, generates no toxic wastes, causes little or no damage to the carbon nanotubes, and involves processing times of less than 1 hour. In the improved method, purification is accomplished by flowing water vapor through the reaction chamber at elevated temperatures and ambient pressures. The impurities are converted to gaseous waste products by selective hydrogenation and hydroxylation by the water in the reaction chamber. This process could be performed either immediately after growth or in a post-growth purification process. The water used needs to be substantially free of oxygen and can be obtained by a repeated freeze-pump-thaw process. The presence of oxygen would non-selectively attack the carbon nanotubes in addition to the amorphous carbon.
Advances in ethanol production using immobilized cell systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margaritis, A.; Merchant, F.J.A.
The application of immobilized cell systems for the production of ethanol has resulted in substantial improvements in the efficiency of the process when compared to the traditional free cell system. In this review, the various methods of cell immobilization employed in ethanol production systems have been described in detail. Their salient features, performance characteristics, advantages and limitations have been critically assessed. More recently, these immobilized cell systems have also been employed for the production of ethanol from non-conventional feedstocks such as Jerusalem artichoke extracts, cheese whey, cellulose, cellobiose and xylose. Ethanol production by immobilized yeast and bacterial cells has been attempted in various bioreactor types. Although most of these studies have been carried out using laboratory scale prototype bioreactors, it appears that only fluidized bed, horizontally packed bed bioreactors and tower fermenters may find application on scale-up. Several studies have indicated that upon immobilization, yeast cells performing ethanol fermentation exhibit more favourable physiological and metabolic properties. This, in addition to substantial improvements in ethanol productivities by immobilized cell systems, is indicative of the fact that future developments in the production of ethanol and alcoholic beverages will be directed towards the use of immobilized cell systems. 291 references.
Pope, C Arden; Ezzati, Majid; Dockery, Douglas W
2015-10-01
During the period of 1980-2000, the US obtained substantial reductions in air pollution and improvements in life expectancy (LE). Multiple factors contributed to improved health. This report explores and illustrates trade-offs between income, air pollution, and LE. Both improved air quality and income growth contributed to LE gains - without evidence of substantial negative tradeoffs between air pollution and income. Cleaner air may be considered an "economic good" with contributions to health, wellbeing, and human capital. Copyright © 2015 Elsevier Inc. All rights reserved.
Dunn, Charlton; Subbaraman, Maria R.
1989-01-01
An improvement in an inducer for a pump wherein the inducer includes a hub, a plurality of radially extending substantially helical blades and a wall member extending about and encompassing an outer periphery of the blades. The improvement comprises forming adjacent pairs of blades and the hub to provide a substantially rectangular cross-sectional flow area which decreases from the inlet end of the inducer to a discharge end of the inducer, resulting in increased inducer efficiency, improved suction performance, reduced susceptibility to cavitation, reduced susceptibility to hub separation, and reduced fabrication costs.
An Alternate Method for Estimating Dynamic Height from XBT Profiles Using Empirical Vertical Modes
NASA Technical Reports Server (NTRS)
Lagerloef, Gary S. E.
1994-01-01
A technique is presented that applies modal decomposition to estimate dynamic height (0-450 db) from Expendable BathyThermograph (XBT) temperature profiles. Salinity-Temperature-Depth (STD) data are used to establish empirical relationships between vertically integrated temperature profiles and empirical dynamic height modes. These are then applied to XBT data to estimate dynamic height. A standard error of 0.028 dynamic meters is obtained for the waters of the Gulf of Alaska, an ocean region subject to substantial freshwater buoyancy forcing and with a T-S relationship that has considerable scatter. The residual error is a substantial improvement relative to the conventional T-S correlation technique when applied to this region. Systematic errors between estimated and true dynamic height were evaluated. The 20-year-long time series at Ocean Station P (50 deg N, 145 deg W) indicated weak variations in the error interannually, but not seasonally. There were no evident systematic alongshore variations in the error in the ocean boundary current regime near the perimeter of the Alaska gyre. The results prove satisfactory for the purpose of this work, which is to generate dynamic height from XBT data for co-analysis with satellite altimeter data, given that the altimeter height precision is likewise on the order of 2-3 cm. While the technique has not been applied to other ocean regions where the T-S relation has less scatter, it is suggested that it could provide some improvement over previously applied methods as well.
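The empirical regression idea described above can be sketched with synthetic data. This is an illustrative reconstruction only: the profile statistics, mode count, and noise levels below are invented, and real dynamic height would come from integrating specific volume anomaly rather than the toy linear functional used here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "training" set standing in for STD casts: N profiles of
# temperature at M depth levels (all numbers here are invented).
N, M = 200, 45
T = 10 + 0.1 * rng.normal(0, 1, (N, M)).cumsum(axis=1)

# Pretend dynamic height D is an (unknown) linear functional of T plus
# salinity-driven scatter -- the part the empirical fit must average over.
w_true = rng.normal(0, 0.01, M)
D = T @ w_true + rng.normal(0, 0.02, N)

# Build empirical vertical modes (EOFs) of temperature and regress D on
# the leading modal amplitudes, as a stand-in for the paper's fit.
Tm = T - T.mean(axis=0)
_, _, Vt = np.linalg.svd(Tm, full_matrices=False)
k = 4                                   # number of empirical modes retained
A = Tm @ Vt[:k].T                       # modal amplitudes of each profile
X = np.c_[np.ones(N), A]
coef, *_ = np.linalg.lstsq(X, D, rcond=None)

# "Application" step: temperature-only (XBT-like) profiles -> estimated D.
D_hat = X @ coef
rmse = np.sqrt(np.mean((D - D_hat) ** 2))
```

The residual `rmse` plays the role of the 0.028 dynamic meter standard error quoted above: it measures how much of the height signal the temperature modes cannot explain, which in the real ocean is dominated by T-S scatter.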
Holve, Erin; Lopez, Marianne Hamilton; Scott, Lisa; Segal, Courtney
2012-09-01
BACKGROUND & SIGNIFICANCE: The AcademyHealth Electronic Data Methods Forum aims to advance the national dialogue on the use of electronic clinical data (ECD) for comparative effectiveness research (CER), patient-centered outcomes research, and quality improvement by facilitating exchange and collaboration among eleven research projects and external stakeholders. AcademyHealth conducted a mixed-method needs assessment with the Electronic Data Methods Forum's key stakeholders to assess: stakeholder views on developing new infrastructure for CER using ECD; current gaps in knowledge with respect to CER; and expectations for a learning health system. AcademyHealth conducted 50 stakeholder interviews between August 2011 and November 2011 with participants from the following seven stakeholder groups: government, business/payer, industry, healthcare delivery, patient/consumer, nonprofit/policy and research. With input from key collaborators, AcademyHealth designed a semi-structured interview guide and a short survey. Reviewers used the qualitative data analysis software NVivo to code the transcripts and to identify and manage complex concepts. Quantitative data from the questionnaire have been integrated into the final analysis where relevant.
The analysis of recurring concepts in the interviews focuses on five central themes: stakeholders have substantial expectations for CER using ECD, both with respect to addressing the limitations of traditional research studies, and generating meaningful evidence for decision-making and improving patient outcomes; stakeholders are aware of many challenges related to implementing CER with ECD, including the need to develop appropriate governance, assess and manage data quality, and develop methods to address confounding in observational data; stakeholders continue to struggle to define 'patient-centeredness' in CER using ECD, adding complexity to attaining this goal; stakeholders express that improving translation and dissemination of CER, and how research can be 'useful' at the point of care, can help mitigate negative perceptions of the CER 'brand'; and stakeholders perceive a need for a substantial 'culture shift' to facilitate collaborative science and new ways of conducting biomedical and outcomes research. Many stakeholders proposed approaches or solutions they felt might address the challenges identified.
McBride, P E; Massoth, K M; Underbakke, G; Solberg, L I; Beasley, J W; Plane, M B
1996-10-01
Recruitment of community primary care practices for studies to improve health service delivery is important to many health care organizations. Prior studies have focused on individual physician recruitment or academic settings. This descriptive study evaluated the efficiency and utility of three different recruitment methods to encourage community practice participation in a preventive services research trial. Primary care practices in four midwestern states were recruited using different sources for initial mailings (physician lists, practice lists, and a managed care organization's primary care network) and different recruiting methods. Outcome measures included response rates, participation rates, and comparative costs of each method. Of the 86 eligible practices contacted, 52 (60%) consented to participate. Mailing to individual physicians was the most cumbersome and expensive method and had the lowest response rate. Initial contacts with practice medical directors increased the participation rate substantially, and practice recruitment meetings improved both study participation and practice-project communication. Experience with these three methods suggests that the most efficient way to recruit practices for participation in a preventive services research trial involves targeted mailings and phone calls to medical directors, followed by on-site practice meetings.
NASA Astrophysics Data System (ADS)
Wei, Zhongbao; Tseng, King Jet; Wai, Nyunt; Lim, Tuti Mariana; Skyllas-Kazacos, Maria
2016-11-01
Reliable state estimation depends largely on an accurate battery model. However, the parameters of a battery model vary over time with operating conditions and battery aging. Existing co-estimation methods address this model uncertainty by integrating online model identification with state estimation and have shown improved accuracy. However, cross interference can arise from the integrated framework and compromise numerical stability and accuracy. This paper therefore proposes decoupling model identification from state estimation to eliminate the possibility of cross interference. The model parameters are adapted online with the recursive least squares (RLS) method, on the basis of which a novel joint estimator based on the extended Kalman filter (EKF) is formulated to estimate the state of charge (SOC) and capacity concurrently. The proposed joint estimator effectively reduces the filter order, which leads to substantial improvement in computational efficiency and numerical stability. A lab-scale experiment on a vanadium redox flow battery shows that the proposed method is highly accurate, with good robustness to varying operating conditions and battery aging. The proposed method is further compared with several existing methods and shown to be superior in terms of accuracy, convergence speed, and computational cost.
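The decoupled structure described in the abstract, online RLS identification feeding a Kalman-type SOC estimator, can be sketched as follows. This is a minimal single-state illustration with an invented linear OCV curve and invented parameter values; the paper's full joint SOC/capacity estimator and its flow-battery specifics are not reproduced. A varying load current is used because SOC and resistance are not separately identifiable under constant current.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cell (all values invented): linear OCV curve and a series
# resistance R to be identified online.
R_TRUE, CAP_AH = 0.05, 2.0              # ohms, amp-hours
ocv = lambda soc: 3.0 + 1.2 * soc       # open-circuit voltage vs. SOC
DOCV = 1.2                              # d(OCV)/d(SOC): the EKF Jacobian
dt = 1.0                                # 1 s sampling

# RLS state for the single parameter R (with forgetting factor lam).
R_hat, P_rls, lam = 0.01, 1.0, 0.995

# Kalman state: SOC estimate and its variance; Q/Rm are noise covariances.
soc_hat, P, Q, Rm = 0.9, 0.01, 1e-7, 1e-4

soc_true = 1.0
for k in range(1200):
    i = 1.0 + 0.8 * np.sin(0.05 * k)    # varying load for identifiability
    soc_true -= i * dt / (CAP_AH * 3600)
    v_meas = ocv(soc_true) - R_TRUE * i + rng.normal(0, 0.002)

    # --- Predict SOC by coulomb counting ---
    soc_hat -= i * dt / (CAP_AH * 3600)
    P += Q

    # --- RLS: identify R from y = OCV(soc_hat) - v ~= R * i ---
    y = ocv(soc_hat) - v_meas
    g = P_rls * i / (lam + i * P_rls * i)
    R_hat += g * (y - i * R_hat)
    P_rls = (P_rls - g * i * P_rls) / lam

    # --- Kalman correction of SOC using the identified R ---
    v_pred = ocv(soc_hat) - R_hat * i
    K = P * DOCV / (DOCV * P * DOCV + Rm)
    soc_hat += K * (v_meas - v_pred)
    P = (1 - K * DOCV) * P
```

Despite deliberately wrong initial guesses (`R_hat = 0.01`, `soc_hat = 0.9`), both estimates are pulled toward the true values because the voltage residual feeds the RLS step while coulomb counting anchors the SOC dynamics.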
Nanoscopy for nanoscience: how super-resolution microscopy extends imaging for nanotechnology.
Johnson, Sam A
2015-01-01
Imaging methods have presented scientists with powerful means of investigation for centuries. The ability to resolve structures using light microscopes is, however, limited to around 200 nm. Fluorescence-based super-resolution light microscopy techniques, based on several different principles, have emerged in recent years and offer great potential to extend the capabilities of microscopy. This resolution improvement is especially promising for nanoscience, where the imaging of nanoscale structures is inherently restricted by the resolution limit of standard forms of light microscopy. Resolution can be improved by several distinct approaches including structured illumination microscopy, stimulated emission depletion, and single-molecule positioning methods such as photoactivated localization microscopy and stochastic optical reconstruction microscopy, and several derivative variations of each of these. These methods involve substantial differences in the resolutions achievable in the different axes, speed of acquisition, compatibility with different labels, ease of use, hardware complexity, and compatibility with live biological samples. The field of super-resolution imaging and its application to nanotechnology is relatively new and still rapidly developing. An overview of how these methods may be used with nanomaterials is presented with some examples of pioneering uses of these approaches. © 2014 Wiley Periodicals, Inc.
Long-time dynamics through parallel trajectory splicing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez, Danny; Cubuk, Ekin D.; Waterland, Amos
2015-11-24
Simulating the atomistic evolution of materials over long time scales is a longstanding challenge, especially for complex systems where the distribution of barrier heights is very heterogeneous. Such systems are difficult to investigate using conventional long-time scale techniques, and the fact that they tend to remain trapped in small regions of configuration space for extended periods of time strongly limits the physical insights gained from short simulations. We introduce a novel simulation technique, Parallel Trajectory Splicing (ParSplice), that aims at addressing this problem through the timewise parallelization of long trajectories. The computational efficiency of ParSplice stems from a speculation strategy whereby predictions of the future evolution of the system are leveraged to increase the amount of work that can be concurrently performed at any one time, hence improving the scalability of the method. ParSplice is also able to accurately account for, and potentially reuse, a substantial fraction of the computational work invested in the simulation. We validate the method on a simple Ag surface system and demonstrate substantial increases in efficiency compared to previous methods. We then demonstrate the power of ParSplice through the study of topology changes in Ag42Cu13 core-shell nanoparticles.
Gowda, Charitha; Dong, Shiming; Potter, Rachel C.; Dombkowski, Kevin J.; Stokley, Shannon
2013-01-01
Objective Immunization information systems (IISs) are valuable surveillance tools; however, population relocation may introduce bias when determining immunization coverage. We explored alternative methods for estimating the vaccine-eligible population when calculating adolescent immunization levels using a statewide IIS. Methods We performed a retrospective analysis of the Michigan State Care Improvement Registry (MCIR) for all adolescents aged 11–18 years registered in the MCIR as of October 2010. We explored four methods for determining denominators: (1) including all adolescents with MCIR records, (2) excluding adolescents with out-of-state residence, (3) further excluding those without MCIR activity ≥10 years prior to the evaluation date, and (4) using a denominator based on U.S. Census data. We estimated state- and county-specific coverage levels for four adolescent vaccines. Results We found a 20% difference in estimated vaccination coverage between the most inclusive and restrictive denominator populations. Although there was some variability among the four methods in vaccination at the state level (2%–11%), greater variation occurred at the county level (up to 21%). This variation was substantial enough to potentially impact public health assessments of immunization programs. Generally, vaccines with higher coverage levels had greater absolute variation, as did counties with smaller populations. Conclusion At the county level, using the four denominator calculation methods resulted in substantial differences in estimated adolescent immunization rates that were less apparent when aggregated at the state level. Further research is needed to ascertain the most appropriate method for estimating vaccine coverage levels using IIS data. PMID:24179260
Method of sealing an ultracapacitor substantially free of water
Chapman-Irwin, Patricia; Feist, Thomas Paul
2002-04-02
A method of sealing an ultracapacitor substantially free of water is disclosed. The method includes providing a multilayer cell comprising two solid, nonporous current collectors separated by two porous electrodes with a separator between the two electrodes, and sealing the cell with a reclosable hermetic closure. Water inside the closure is dissociated by a voltage applied to the cell and escapes in the form of hydrogen and oxygen while the closure is unmated; the closure is then mated to hermetically seal the cell, which is substantially free of water.
Mickan, Sharon; Willcox, Merlin; Roberts, Nia; Bergström, Anna; Mant, David
2016-01-01
Background Africa bears 24% of the global burden of disease but has only 3% of the world's health workers. Substantial variation in health worker performance adds to the negative impact of this significant shortfall. We therefore sought to identify interventions implemented in sub-Saharan Africa aiming to improve health worker performance and the contextual factors likely to influence local effectiveness. Methods and Findings A systematic search for randomised controlled trials of interventions to improve health worker performance undertaken in sub-Saharan Africa identified 41 eligible trials. Data were extracted to define the interventions' components, calculate the absolute improvement in performance achieved, and document the likelihood of bias. Within-study variability in effect was extracted where reported. Statements about contextual factors likely to have modified effect were subjected to thematic analysis. Interventions to improve health worker performance can be very effective. Two of the three trials assessing mortality impact showed significant reductions in death rates (age<5 case fatality 5% versus 10%, p<0.01; maternal in-hospital mortality 6.8/1000 versus 10.3/1000; p<0.05). Eight of twelve trials focusing on prescribing had a statistically significant positive effect, achieving an absolute improvement varying from 9% to 48%. However, the reported range of improvement between centres within trials varied substantially, in many cases exceeding the mean effect. Nine contextual themes were identified as modifiers of intervention effect across studies; most frequently cited were supply-line failures, inadequate supervision or management, and failure to follow up training interventions with ongoing support, in addition to staff turnover. Conclusions Interventions to improve the performance of existing staff and service quality have the potential to improve patient care in underserved settings.
But in order to implement interventions effectively, policy makers need to understand and address the contextual factors which can contribute to differences in local effect. Researchers therefore must recognise the importance of reporting how context may modify effect size. PMID:26731097
Navigation Algorithms for the SeaWiFS Mission
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Patt, Frederick S.; McClain, Charles R. (Technical Monitor)
2002-01-01
The navigation algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) were designed to meet the requirement of 1-pixel accuracy, a standard deviation (sigma) of 2. The objective has been to extract the best possible accuracy from the spacecraft telemetry and avoid the need for costly manual renavigation or geometric rectification. The requirement is addressed by postprocessing of both the Global Positioning System (GPS) receiver and Attitude Control System (ACS) data in the spacecraft telemetry stream. The navigation algorithms described are separated into four areas: orbit processing, attitude sensor processing, attitude determination, and final navigation processing. There has been substantial modification during the mission of the attitude determination and attitude sensor processing algorithms. For the former, the basic approach was completely changed during the first year of the mission, from a single-frame deterministic method to a Kalman smoother. This was done for several reasons: a) to improve the overall accuracy of the attitude determination, particularly near the sub-solar point; b) to reduce discontinuities; c) to support the single-ACS-string spacecraft operation that was started after the first mission year, which causes gaps in attitude sensor coverage; and d) to handle data quality problems (which became evident after launch) in the direct-broadcast data. The changes to the attitude sensor processing algorithms primarily involved the development of a model for the Earth horizon height, also needed for single-string operation; the incorporation of improved sensor calibration data; and improved data quality checking and smoothing to handle the data quality issues. The attitude sensor alignments have also been revised multiple times, generally in conjunction with the other changes. The orbit and final navigation processing algorithms have remained largely unchanged during the mission, aside from refinements to data quality checking.
Although further improvements are certainly possible, future evolution of the algorithms is expected to be limited to refinements of the methods presented here, and no substantial changes are anticipated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdullin, I.Sh.; Bragin, V.E.; Bykanov, A.N.
Gas discharge plasma modification of polymer materials and metals is one of the known physical approaches for improving the biocompatibility of materials in ophthalmology and surgery. Surface treatment in RF discharges can be carried out effectively both in the discharge afterglow and in the discharge region itself. This modification method is more convenient and produces more uniform surfaces than other discharge types. The experiments carried out, and the results published to date, show that the interaction of UV radiation and fluxes of ions, electrons, and metastable particles with a material's surface changes its chemical composition and surface structure. The action of these agents on the sample surface produces the following effects. Processes of physical and plasma-chemical surface etching effectively clean the surface of different types of contamination. These may be hydrocarbon contaminations arising from prior surface contact with biological or physical bodies, or contaminations inherent to the chemical processing technology. There is a surface layer, with a thickness from a few angstroms up to a few hundred angstroms, whose chemical content and structure differ from those of the bulk polymer. The presence of such "technological" contaminations produces a layer of material substantially different from the base polymer; the basic physical and chemical properties of this layer, for example the gas permeation rate, may differ substantially from those of the base polymer. Attempts to clean the surface of these contaminations by chemical methods (solutions) have not been successful and have contaminated deeper polymer layers. Plasma cleaning is therefore the most effective method of polymer treatment for removing surface contaminations. Wettability also improves during this stage of treatment.
Automated image alignment for 2D gel electrophoresis in a high-throughput proteomics pipeline.
Dowsey, Andrew W; Dunn, Michael J; Yang, Guang-Zhong
2008-04-01
The quest for high-throughput proteomics has revealed a number of challenges in recent years. Whilst substantial improvements in automated protein separation with liquid chromatography and mass spectrometry (LC/MS), aka 'shotgun' proteomics, have been achieved, large-scale open initiatives such as the Human Proteome Organization (HUPO) Brain Proteome Project have shown that maximal proteome coverage is only possible when LC/MS is complemented by 2D gel electrophoresis (2-DE) studies. Moreover, both separation methods require automated alignment and differential analysis to relieve the bioinformatics bottleneck and so make high-throughput protein biomarker discovery a reality. The purpose of this article is to describe a fully automatic image alignment framework for the integration of 2-DE into a high-throughput differential expression proteomics pipeline. The proposed method is based on robust automated image normalization (RAIN) to circumvent the drawbacks of traditional approaches. These use symbolic representation at the very early stages of the analysis, which introduces persistent errors due to inaccuracies in modelling and alignment. In RAIN, a third-order volume-invariant B-spline model is incorporated into a multi-resolution schema to correct for geometric and expression inhomogeneity at multiple scales. The normalized images can then be compared directly in the image domain for quantitative differential analysis. Through evaluation against an existing state-of-the-art method on real and synthetically warped 2D gels, the proposed analysis framework demonstrates substantial improvements in matching accuracy and differential sensitivity. High-throughput analysis is established through an accelerated GPGPU (general purpose computation on graphics cards) implementation. Supplementary material, software and images used in the validation are available at http://www.proteomegrid.org/rain/.
Protocol vulnerability detection based on network traffic analysis and binary reverse engineering.
Wen, Shameng; Meng, Qingkun; Feng, Chao; Tang, Chaojing
2017-01-01
Network protocol vulnerability detection plays an important role in many domains, including protocol security analysis, application security, and network intrusion detection. In this study, by analyzing the general fuzzing method of network protocols, we propose a novel approach that combines network traffic analysis with the binary reverse engineering method. For network traffic analysis, the block-based protocol description language is introduced to construct test scripts, while the binary reverse engineering method employs the genetic algorithm with a fitness function designed to focus on code coverage. This combination leads to a substantial improvement in fuzz testing for network protocols. We build a prototype system and use it to test several real-world network protocol implementations. The experimental results show that the proposed approach detects vulnerabilities more efficiently and effectively than general fuzzing methods such as SPIKE.
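The coverage-guided genetic search described above can be illustrated with a toy. The "parser" below is simulated: its branches, scores, and byte patterns are invented stand-ins for real coverage instrumentation, and no actual protocol, binary reverse engineering step, or fuzzing framework (such as SPIKE) is modeled.

```python
import random

random.seed(3)

# Simulated "code coverage" of a hypothetical parser: each branch is hit
# when the input has some property (branches and scores are invented).
def coverage(data: bytes) -> int:
    score = 0
    if len(data) > 8:
        score += 1                      # length-check branch
    if b"\x00" in data:
        score += 1                      # delimiter branch
    if data and data[-1] == 0xFF:
        score += 1                      # trailer branch
    if data[:1] == b"\x13":
        score += 1                      # type-field branch
    return score

def mutate(data: bytes) -> bytes:
    """Randomly replace, insert, or delete one byte of the input."""
    b = bytearray(data or b"\x00")
    op = random.randrange(3)
    if op == 0:
        b[random.randrange(len(b))] = random.randrange(256)
    elif op == 1:
        b.insert(random.randrange(len(b) + 1), random.randrange(256))
    elif len(b) > 1:
        del b[random.randrange(len(b))]
    return bytes(b)

# Genetic loop with coverage as the fitness function: keep the
# highest-coverage inputs, mutate them to refill the population, repeat.
pop = [bytes(random.randrange(256) for _ in range(12)) for _ in range(20)]
for _ in range(500):
    pop.sort(key=coverage, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

best = max(pop, key=coverage)
```

Inputs that exercise more branches survive selection, so the population gradually accumulates the byte patterns the parser's checks expect, which is the intuition behind using coverage as the GA fitness in fuzz testing.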
CASE STUDY RESEARCH: THE VIEW FROM COMPLEXITY SCIENCE
Anderson, Ruth; Crabtree, Benjamin F.; Steele, David J.; McDaniel, Reuben R.
2005-01-01
Many wonder why there has been so little change in care quality, despite substantial quality improvement efforts. Questioning why current approaches are not making true changes draws attention to the organization as a source of answers. We bring together the case study method and complexity science to suggest new ways to study health care organizations. The case study provides a method for studying systems. Complexity theory suggests that keys to understanding the system are contained in patterns of relationships and interactions among the system’s agents. We propose some of the “objects” of study that are implicated by complexity theory and discuss how studying these using case methods may provide useful maps of the system. We offer complexity theory, partnered with case study method, as a place to begin the daunting task of studying a system as an integrated whole. PMID:15802542
Improved parallel data partitioning by nested dissection with applications to information retrieval.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, Michael M.; Chevalier, Cedric; Boman, Erik Gunnar
The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show partitioning time can be substantially reduced by using the SCOTCH software, and quality improves in some cases, too.
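The quantity being minimized, total communication volume for parallel sparse matrix-vector multiplication, can be computed directly for a toy example. The matrix, the 2-process row partition, and the conformal vector layout below are invented for illustration; the nested dissection algorithm itself is not shown.

```python
import numpy as np

# Toy sparse matrix in coordinate form: (rows[k], cols[k]) is a nonzero.
rows = np.array([0, 0, 1, 2, 2, 3, 3])
cols = np.array([0, 3, 1, 0, 2, 2, 3])

# Hypothetical 1D row partition onto 2 processes: rows {0,1} on process 0
# and rows {2,3} on process 1; vector entry j lives with row j.
part_of_row = np.array([0, 0, 1, 1])

# Communication volume for y = A*x: entry x[j] must be sent to every
# *other* process that owns a nonzero in column j.
volume = 0
for j in range(4):
    owners = set(part_of_row[rows[cols == j]])   # processes needing x[j]
    owners.discard(part_of_row[j])               # home process sends nothing
    volume += len(owners)
```

Here the volume is 2: x[0] must be shipped to process 1 and x[3] to process 0. Partitioning methods like the one in the abstract search over row/column (or 2D) distributions to shrink exactly this count while keeping the nonzeros balanced.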
Preconditioned augmented Lagrangian formulation for nearly incompressible cardiac mechanics.
Campos, Joventino Oliveira; Dos Santos, Rodrigo Weber; Sundnes, Joakim; Rocha, Bernardo Martins
2018-04-01
Computational modeling of the heart is a subject of substantial medical and scientific interest, which may help increase the understanding of several phenomena associated with cardiac physiological and pathological states. Modeling the mechanics of the heart has led to considerable insights, but it still represents a complex and demanding computational problem, especially in a strongly coupled electromechanical setting. Passive cardiac tissue is commonly modeled as hyperelastic and is characterized by quasi-incompressible, orthotropic, and nonlinear material behavior. These factors are known to be very challenging for the numerical solution of the model. The near-incompressibility is known to cause numerical issues such as the well-known locking phenomenon and ill-conditioning of the stiffness matrix. In this work, the augmented Lagrangian method is used to handle the nearly incompressible condition. This approach can potentially improve computational performance by reducing the condition number of the stiffness matrix and thereby improving the convergence of iterative solvers. We also improve the performance of iterative solvers by the use of an algebraic multigrid preconditioner. Numerical results of the augmented Lagrangian method combined with a preconditioned iterative solver for a cardiac mechanics benchmark suite are presented to show its improved performance. Copyright © 2017 John Wiley & Sons, Ltd.
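In generic form (a standard construction, not necessarily the exact formulation used in the paper), the augmented Lagrangian treatment of the incompressibility constraint J = det F = 1 adds both a multiplier term and a moderate penalty term to the strain energy:

```latex
% Psi_iso : isochoric (deviatoric) part of the hyperelastic energy
% p       : Lagrange multiplier (pressure-like variable)
% kappa   : penalty parameter, kept moderate to limit ill-conditioning
\Psi_{\mathrm{AL}}(\mathbf{F}, p)
  = \Psi_{\mathrm{iso}}(\mathbf{F})
  + p\,(J - 1)
  + \frac{\kappa}{2}\,(J - 1)^{2},
\qquad J = \det\mathbf{F}
```

Between outer iterations the multiplier is typically updated by an Uzawa step, p <- p + kappa (J - 1), so the constraint is enforced progressively without driving kappa, and hence the stiffness-matrix condition number, as high as a pure penalty formulation would require.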
Quality improvement on the acute inpatient psychiatry unit using the model for improvement.
Singh, Kuldeep; Sanderson, Joshua; Galarneau, David; Keister, Thomas; Hickman, Dean
2013-01-01
A need exists for constant evaluation and modification of processes within healthcare systems to achieve quality improvement. One common approach is the Model for Improvement that can be used to clearly define aims, measures, and changes that are then implemented through a plan-do-study-act (PDSA) cycle. This approach is a commonly used method for improving quality in a wide range of fields. The Model for Improvement allows for a systematic process that can be revised at set time intervals to achieve a desired result. We used the Model for Improvement in an acute psychiatry unit (APU) to improve the screening incidence of abnormal involuntary movements in eligible patients (those starting or continuing on standing neuroleptics) with the Abnormal Involuntary Movement Scale (AIMS). After 8 weeks of using the Model for Improvement, both of the participating inpatient services in the APU showed substantial overall improvement in screening for abnormal involuntary movements using the AIMS. Crucial aspects of a successful quality improvement initiative based on the Model for Improvement are well-defined goals, process measures, and structured PDSA cycles. Success also requires communication, organization, and participation of the entire team.
Efficient Fluid Dynamic Design Optimization Using Cartesian Grids
NASA Technical Reports Server (NTRS)
Dadone, A.; Grossman, B.; Sellers, Bill (Technical Monitor)
2004-01-01
This report is subdivided into three parts. The first reviews a new approach to the computation of inviscid flows using Cartesian grid methods. The crux of the method is the curvature-corrected symmetry technique (CCST) developed by the present authors for body-fitted grids. The method introduces ghost cells near the boundaries whose values are developed from an assumed flow-field model in vicinity of the wall consisting of a vortex flow, which satisfies the normal momentum equation and the non-penetration condition. The CCST boundary condition was shown to be substantially more accurate than traditional boundary condition approaches. This improved boundary condition is adapted to a Cartesian mesh formulation, which we call the Ghost Body-Cell Method (GBCM). In this approach, all cell centers exterior to the body are computed with fluxes at the four surrounding cell edges. There is no need for special treatment corresponding to cut cells which complicate other Cartesian mesh methods.
Advances in explosives analysis—part I. animal, chemical, ion, and mechanical methods
Brown, Kathryn E.; Greenfield, Margo T.; McGrane, Shawn D.; ...
2015-10-13
The number and capability of explosives detection and analysis methods have increased substantially since the publication of the Analytical and Bioanalytical Chemistry special issue devoted to Explosives Analysis (Moore and Goodpaster, Anal Bioanal Chem 395(2):245–246, 2009). We review and critically evaluate the latest (the past five years) important advances in explosives detection, with details of the improvements over previous methods, and suggest possible avenues towards further advances in, e.g., stand-off distance, detection limit, selectivity, and penetration through camouflage or packaging. The review consists of two parts. Part I reviews methods based on animals, chemicals (including colorimetry, molecularly imprinted polymers, electrochemistry, and immunochemistry), ions (both ion-mobility spectrometry and mass spectrometry), and mechanical devices. Part II will review methods based on photons, from very energetic photons including X-rays and gamma rays down to the terahertz range, and neutrons.
NASA Technical Reports Server (NTRS)
Motiwalla, S. K.
1973-01-01
Using the first and the second derivative of flutter velocity with respect to the parameters, the velocity hypersurface is made quadratic. This greatly simplifies the numerical procedure developed for determining the values of the design parameters such that a specified flutter velocity constraint is satisfied and the total structural mass is near a relative minimum. A search procedure is presented utilizing two gradient search methods and a gradient projection method. The procedure is applied to the design of a box beam, using finite-element representation. The results indicate that the procedure developed yields substantial design improvement while satisfying the specified constraint, and converges to near a local optimum.
Potential benefits of magnetic suspension and balance systems
NASA Technical Reports Server (NTRS)
Lawing, Pierce L.; Dress, David A.; Kilgore, Robert A.
1987-01-01
The potential of Magnetic Suspension and Balance Systems (MSBS) to improve conventional wind tunnel testing techniques is discussed. Topics include: elimination of model geometry distortion and support interference to improve the measurement accuracy of aerodynamic coefficients; removal of testing restrictions due to supports; improved dynamic stability data; and stores separation testing. Substantial increases in wind tunnel productivity are anticipated due to the coalescence of these improvements. Specific improvements in testing methods for missiles, helicopters, fighter aircraft, twin fuselage transports and bombers, store separation, water tunnels, and automobiles are also forecast. In a more speculative vein, new wind tunnel test techniques are envisioned as a result of applying MSBS, including free-flight computer trajectories in the test section, pilot-in-the-loop and designer-in-the-loop testing, shipboard missile launch simulation, and optimization of hybrid hypersonic configurations. Also addressed are potential applications of MSBS to such diverse technologies as medical research and practice, industrial robotics, space weaponry, and ore processing in space.
NASA Astrophysics Data System (ADS)
Boyer, T.; Locarnini, R. A.; Mishonov, A. V.; Reagan, J. R.; Seidov, D.; Zweng, M.; Levitus, S.
2017-12-01
Ocean heat uptake is the major factor in sequestering the Earth's Energy Imbalance (EEI). Since 2000, the National Centers for Environmental Information (NCEI) have been estimating historical ocean heat content (OHC) changes back to the 1950s, as well as monitoring recent OHC. Over these years, through worldwide community efforts, methods of calculating OHC have substantially improved. Similarly, estimation of the uncertainty of ocean heat content calculations provides new insight into how well EEI estimates can be constrained using in situ measurements and models. The changing ocean observing system, especially with the near-global year-round coverage afforded by Argo, has also allowed more confidence in regional and global OHC estimates and provided a benchmark for better understanding of historical OHC changes. NCEI is incorporating knowledge gained through these global efforts into the basic methods, instrument bias corrections, uncertainty measurements, and temporal and spatial resolution capabilities of historic OHC change estimation and recent monitoring. The nature of these improvements and their consequences for estimation of OHC in relation to the EEI will be discussed.
Quantitative evaluation of pairs and RS steganalysis
NASA Astrophysics Data System (ADS)
Ker, Andrew D.
2004-06-01
We give initial results from a new project which performs statistically accurate evaluation of the reliability of image steganalysis algorithms. The focus here is on the Pairs and RS methods, for detection of simple LSB steganography in grayscale bitmaps, due to Fridrich et al. Using libraries totalling around 30,000 images we have measured the performance of these methods and suggest changes which lead to significant improvements. Particular results from the project presented here include notes on the distribution of the RS statistic, the relative merits of different "masks" used in the RS algorithm, the effect on reliability when previously compressed cover images are used, and the effect of repeating steganalysis on the transposed image. We also discuss improvements to the Pairs algorithm, restricting it to spatially close pairs of pixels, which leads to a substantial performance improvement, even to the extent of surpassing the RS statistic which was previously thought superior for grayscale images. We also describe some of the questions for a general methodology of evaluation of steganalysis, and potential pitfalls caused by the differences between uncompressed, compressed, and resampled cover images.
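The structural weakness that Pairs- and RS-style detectors exploit can be illustrated with a much simpler chi-square-style statistic over the value pairs (2i, 2i+1): LSB replacement only moves counts within a pair, so heavy embedding drives the two counts toward equality. The sketch below is not the authors' Pairs or RS algorithm; it is a minimal illustration of the underlying principle, using a contrived all-even synthetic cover so the effect is unambiguous, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def embed_lsb(pixels, rate):
    """Overwrite the least significant bit of a random fraction `rate`
    of pixels with random message bits (simple LSB replacement)."""
    out = pixels.copy()
    mask = rng.random(out.shape) < rate
    out[mask] = (out[mask] & 254) | rng.integers(0, 2, mask.sum(), dtype=np.uint8)
    return out

def pair_statistic(pixels):
    """Chi-square-style statistic over the value pairs (2i, 2i+1).

    LSB replacement only moves counts within each pair, so full-rate
    embedding drives the two counts toward equality and the statistic
    toward a small value; unmodified covers typically score higher."""
    hist = np.bincount(pixels.ravel(), minlength=256).astype(float)
    even, odd = hist[0::2], hist[1::2]
    expected = (even + odd) / 2.0
    valid = expected > 0
    return np.sum((even[valid] - expected[valid]) ** 2 / expected[valid])

# Contrived cover with a histogram far from pairwise-equal: every value even.
cover = (2 * rng.integers(0, 128, size=20000)).astype(np.uint8)
stego = embed_lsb(cover, rate=1.0)
```

On this contrived cover the statistic collapses after full-rate embedding; real detectors such as Pairs and RS use far subtler spatial statistics to estimate even small embedding rates.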
Analysis and optimization of population annealing
NASA Astrophysics Data System (ADS)
Amey, Christopher; Machta, Jonathan
2018-03-01
Population annealing is an easily parallelizable sequential Monte Carlo algorithm that is well suited for simulating the equilibrium properties of systems with rough free-energy landscapes. In this work we seek to understand and improve the performance of population annealing. We derive several useful relations between quantities that describe the performance of population annealing and use these relations to suggest methods to optimize the algorithm. These optimization methods were tested by performing large-scale simulations of the three-dimensional (3D) Edwards-Anderson (Ising) spin glass and measuring several observables. The optimization methods were found to substantially decrease the amount of computational work necessary as compared to previously used, unoptimized versions of population annealing. We also obtain more accurate values of several important observables for the 3D Edwards-Anderson model.
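Population annealing as described above alternates a resampling (reweighting) step with Metropolis equilibration across a temperature schedule. The sketch below is a toy illustration on a 1D Ising ring with random couplings, a stand-in for the 3D Edwards-Anderson model the paper actually simulates; the schedule, population size, and function names are illustrative assumptions, not the authors' optimized implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(spins, J):
    # Ising energy of each replica on a 1D ring with quenched couplings J
    # (a toy stand-in for the 3D Edwards-Anderson lattice).
    return -np.sum(J * spins * np.roll(spins, -1, axis=1), axis=1)

def population_annealing(n_spins=32, pop_size=200, n_temps=40):
    J = rng.choice([-1.0, 1.0], size=n_spins)            # random +/-1 couplings
    pop = rng.choice([-1, 1], size=(pop_size, n_spins))  # population at high T
    betas = np.linspace(0.1, 2.0, n_temps)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Reweighting step: resample replicas with weights exp(-dBeta * E).
        E = energy(pop, J)
        w = np.exp(-(b - b_prev) * (E - E.min()))
        w /= w.sum()
        pop = pop[rng.choice(pop_size, size=pop_size, p=w)].copy()
        # Equilibration step: one single-spin Metropolis update per replica.
        sites = rng.integers(0, n_spins, size=pop_size)
        trial = pop.copy()
        trial[np.arange(pop_size), sites] *= -1
        dE = energy(trial, J) - energy(pop, J)
        accept = rng.random(pop_size) < np.exp(-b * np.clip(dE, 0.0, None))
        pop[accept] = trial[accept]
    return pop, J

final_pop, couplings = population_annealing()
```

The optimization questions the paper addresses live in exactly these choices: how to space the temperature steps, how large a population to carry, and how many equilibration sweeps to perform per step.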
Collender, Philip A; Kirby, Amy E; Addiss, David G; Freeman, Matthew C; Remais, Justin V
2015-12-01
Limiting the environmental transmission of soil-transmitted helminths (STHs), which infect 1.5 billion people worldwide, will require sensitive, reliable, and cost-effective methods to detect and quantify STHs in the environment. We review the state-of-the-art of STH quantification in soil, biosolids, water, produce, and vegetation with regard to four major methodological issues: environmental sampling; recovery of STHs from environmental matrices; quantification of recovered STHs; and viability assessment of STH ova. We conclude that methods for sampling and recovering STHs require substantial advances to provide reliable measurements for STH control. Recent innovations in the use of automated image identification and developments in molecular genetic assays offer considerable promise for improving quantification and viability assessment.
Bain, Rob ES; Wright, Jim A; Yang, Hong; Pedley, Steve; Bartram, Jamie K
2012-01-01
Objective To determine how data on water source quality affect assessments of progress towards the 2015 Millennium Development Goal (MDG) target on access to safe drinking-water. Methods Data from five countries on whether drinking-water sources complied with World Health Organization water quality guidelines on contamination with thermotolerant coliform bacteria, arsenic, fluoride and nitrates in 2004 and 2005 were obtained from the Rapid Assessment of Drinking-Water Quality project. These data were used to adjust estimates of the proportion of the population with access to safe drinking-water at the MDG baseline in 1990 and in 2008 made by the Joint Monitoring Programme for Water Supply and Sanitation, which classified all improved sources as safe. Findings Taking account of data on water source quality resulted in substantially lower estimates of the percentage of the population with access to safe drinking-water in 2008 in four of the five study countries: the absolute reduction was 11% in Ethiopia, 16% in Nicaragua, 15% in Nigeria and 7% in Tajikistan. There was only a slight reduction in Jordan. Microbial contamination was more common than chemical contamination. Conclusion The criterion used by the MDG indicator to determine whether a water source is safe can lead to substantial overestimates of the population with access to safe drinking-water and, consequently, also overestimates the progress made towards the 2015 MDG target. Monitoring drinking-water supplies by recording both access to water sources and their safety would be a substantial improvement. PMID:22461718
GRAPHIC REANALYSIS OF THE TWO NINDS-TPA TRIALS CONFIRMS SUBSTANTIAL TREATMENT BENEFIT
Saver, Jeffrey L.; Gornbein, Jeffrey; Starkman, Sidney
2010-01-01
Background of Comment/Review Multiple statistical analyses of the two NINDS-TPA Trials have confirmed study findings of benefit of fibrinolytic therapy. A recent graphic analysis departed from best practices in the visual display of quantitative information by failing to take into account the skewed functional importance of NIH Stroke Scale raw scores and by scaling change axes at up to twenty times the range achievable by individual patients. Methods Using the publicly available datasets of the 2 NINDS-TPA Trials, we generated a variety of figures appropriate to the characteristics of acute stroke trial data. Results A diverse array of figures all visually delineated substantial benefits of fibrinolytic therapy, including: bar charts of normalized gain and loss; stacked bar, bar, and matrix plots of clinically relevant ordinal ranks; a time series stacked line plot of continuous scale disability weights; and line plot, bubble chart, and person icon array graphs of joint outcome table analysis. The achievable change figure showed substantially greater improvement among TPA than placebo patients, median 66.7% (IQR 0–92.0) vs 50.0% (IQR −7.1 to 80.0), p=0.003. Conclusions On average, patients treated with TPA within 3 hours recovered two-thirds of the way toward fully normal, while placebo patients improved only half of the way. Graphical analyses of the two NINDS-TPA trials, when performed according to best practices, are a useful means of conveying details about patient response to therapy not fully delineated by summary statistics, and confirm a valuable treatment benefit of fibrinolytic therapy given within 3 hours in acute stroke. PMID:20829518
Remote patient monitoring in chronic heart failure.
Palaniswamy, Chandrasekar; Mishkin, Aaron; Aronow, Wilbert S; Kalra, Ankur; Frishman, William H
2013-01-01
Heart failure (HF) poses a significant economic burden on our health-care resources with very high readmission rates. Remote monitoring has a substantial potential to improve the management and outcome of patients with HF. Readmission for decompensated HF is often preceded by a stage of subclinical hemodynamic decompensation, where therapeutic interventions would prevent subsequent clinical decompensation and hospitalization. Various methods of remote patient monitoring include structured telephone support, advanced telemonitoring technologies, remote monitoring of patients with implanted cardiac devices such as pacemakers and defibrillators, and implantable hemodynamic monitors. Current data examining the efficacy of remote monitoring technologies in improving outcomes have shown inconsistent results. Various medicolegal and financial issues need to be addressed before widespread implementation of this exciting technology can take place.
Eating Better for Less: A National Discount Program for Healthy Food Purchases in South Africa
An, Ruopeng; Patel, Deepak; Segal, Darren; Sturm, Roland
2012-01-01
Background Improving diet quality is a key health promotion strategy. The HealthyFood program provides up to a 25% discount on selected food items to about 260,000 households across South Africa. Objectives Examine whether reducing prices for healthy food purchases leads to changes in self-reported measures of food consumption and weight status. Methods Repeated surveys of about 350,000 HealthyFood participants and nonparticipants. Results Program participation is associated with more consumption of fruits/vegetables and wholegrain foods, and less consumption of high sugar/salt foods, fried foods, processed meats, and fast-food. There is no strong evidence that participation reduces obesity. Conclusions A substantial price intervention might be effective in improving diets. PMID:22943101
Image fidelity improvement in digital holographic microscopy using optical phase conjugation
NASA Astrophysics Data System (ADS)
Chan, Huang-Tian; Chew, Yang-Kun; Shiu, Min-Tzung; Chang, Chi-Ching
2018-01-01
With respect to digital holography, techniques for suppressing noise arising from the reference arm are well developed; however, corresponding techniques for the object arm remain immature. Optical phase conjugation is a promising method for this purpose. A 0°-cut BaTiO3 photorefractive crystal was used in a self-pumped phase conjugation scheme and applied to in-line digital holographic microscopy in both transmission-type and reflection-type configurations. On a purely physical compensation basis, the results revealed that image fidelity was improved substantially by suppressing the scattering noise prior to the recording stage, with a 2.9096-fold decrease in noise level and a 3.5486-fold increase, on average, in the ability to discriminate noise.
Oshchepkov, Sergey; Bril, Andrey; Yokota, Tatsuya; Yoshida, Yukio; Blumenstock, Thomas; Deutscher, Nicholas M; Dohe, Susanne; Macatangay, Ronald; Morino, Isamu; Notholt, Justus; Rettinger, Markus; Petri, Christof; Schneider, Matthias; Sussman, Ralf; Uchino, Osamu; Velazco, Voltaire; Wunch, Debra; Belikov, Dmitry
2013-02-20
This paper presents an improved photon path length probability density function method that permits simultaneous retrievals of column-average greenhouse gas mole fractions and light path modifications through the atmosphere when processing high-resolution radiance spectra acquired from space. We primarily describe the methodology and retrieval setup and then apply them to the processing of spectra measured by the Greenhouse gases Observing SATellite (GOSAT). We have demonstrated substantial improvements of the data processing with simultaneous carbon dioxide and light path retrievals and reasonable agreement of the satellite-based retrievals against ground-based Fourier transform spectrometer measurements provided by the Total Carbon Column Observing Network (TCCON).
METHOD AND APPARATUS FOR IMPROVING PERFORMANCE OF A FAST REACTOR
Koch, L.J.
1959-01-20
A specific arrangement of the fertile material and fissionable material in the active portion of a fast reactor to achieve improvement in performance and to effectively lower the operating temperatures in the center of the reactor is described. According to this invention a group of fuel elements containing fissionable material are assembled to form a hollow fuel core. Elements containing a fertile material, such as depleted uranium, are inserted into the interior of the fuel core to form a central blanket. Additional elements of fertile material are arranged about the fuel core to form outer blankets which in turn are surrounded by a reflector. This arrangement of fuel core and blankets results in substantial flattening of the flux pattern.
Composite material reinforced with atomized quasicrystalline particles and method of making same
Biner, S.B.; Sordelet, D.J.; Lograsso, B.K.; Anderson, I.E.
1998-12-22
A composite material comprises an aluminum or aluminum alloy matrix having generally spherical, atomized quasicrystalline aluminum-transition metal alloy reinforcement particles disposed in the matrix to improve mechanical properties. A composite article can be made by consolidating generally spherical, atomized quasicrystalline aluminum-transition metal alloy particles and aluminum or aluminum alloy particles to form a body that is cold and/or hot reduced to form composite products, such as composite plate or sheet, with interfacial bonding between the quasicrystalline particles and the aluminum or aluminum alloy matrix without damage (e.g. cracking or shape change) of the reinforcement particles. The cold and/or hot worked composite exhibits substantially improved yield strength, tensile strength, and Young's modulus (stiffness). 3 figs.
Thermal control system and method for a passive solar storage wall
Ortega, Joseph K. E.
1984-01-01
The invention provides a system and method for controlling the storing and release of thermal energy from a thermal storage wall wherein said wall is capable of storing thermal energy from insolation of solar radiation. The system and method includes a device such as a plurality of louvers spaced a predetermined distance from the thermal wall for regulating the release of thermal energy from the thermal wall. This regulating device is made from a material which is substantially transparent to the incoming solar radiation so that when it is in any operative position, the thermal storage wall receives substantially all of the impacting solar radiation. The material in the regulating device is further capable of being substantially opaque to thermal energy so that when the device is substantially closed, thermal release of energy from the storage wall is substantially minimized. An adjustment device is interconnected with the regulating mechanism for selectively opening and closing it in order to regulate the release of thermal energy from the wall.
Ballistocardiogram Artifact Removal with a Reference Layer and Standard EEG Cap
Luo, Qingfei; Huang, Xiaoshan; Glover, Gary H.
2014-01-01
Background In simultaneous EEG-fMRI, the EEG recordings are severely contaminated by ballistocardiogram (BCG) artifacts, which are caused by cardiac pulsations. To reconstruct and remove the BCG artifacts, one promising method is to measure the artifacts in the absence of EEG signal by placing a group of electrodes (BCG electrodes) on a conductive layer (reference layer) insulated from the scalp. However, current BCG reference layer (BRL) methods either use a customized EEG cap composed of electrode pairs, or need to construct the custom reference layer through additional model-building experiments for each EEG-fMRI experiment. These requirements have limited the versatility and efficiency of BRL. The aim of this study is to propose a more practical and efficient BRL method and compare its performance with the most popular BCG removal method, the optimal basis sets (OBS) algorithm. New Method By designing the reference layer as a permanent and reusable cap, the new BRL method is able to be used with a standard EEG cap, and no extra experiments and preparations are needed to use the BRL in an EEG-fMRI experiment. Results The BRL method effectively removed the BCG artifacts from both oscillatory and evoked potential scalp recordings and recovered the EEG signal. Comparison with Existing Method Compared to the OBS, this new BRL method improved the contrast-to-noise ratios of the alpha-wave, visual, and auditory evoked potential signals by 101%, 76%, and 75% respectively, employing 160 BCG electrodes. Using only 20 BCG electrodes, the BRL improved the EEG signal by 74%/26%/41% respectively. Conclusion The proposed method can substantially improve the EEG signal quality compared with traditional methods. PMID:24960423
Three-grid accelerator system for an ion propulsion engine
NASA Technical Reports Server (NTRS)
Brophy, John R. (Inventor)
1994-01-01
An apparatus is presented for an ion engine comprising a three-grid accelerator system with the decelerator grid biased negative of the beam plasma. This arrangement substantially reduces the charge-exchange ion current reaching the accelerator grid at high tank pressures, which minimizes erosion of the accelerator grid due to charge exchange ion sputtering, known to be the major accelerator grid wear mechanism. An improved method for life testing ion engines is also provided using the disclosed apparatus. In addition, the invention can also be applied in materials processing.
Conditioning laboratory cats to handling and transport.
Gruen, Margaret E; Thomson, Andrea E; Clary, Gillian P; Hamilton, Alexandra K; Hudson, Lola C; Meeker, Rick B; Sherman, Barbara L
2013-10-01
As research subjects, cats have contributed substantially to our understanding of biological systems, from the development of mammalian visual pathways to the pathophysiology of feline immunodeficiency virus as a model for human immunodeficiency virus. Few studies have evaluated humane methods for managing cats in laboratory animal facilities, however, in order to reduce fear responses and improve their welfare. The authors describe a behavioral protocol used in their laboratory to condition cats to handling and transport. Such behavioral conditioning benefits the welfare of the cats, the safety of animal technicians and the quality of feline research data.
Sanders, David M.; Decker, Derek E.
1999-01-01
Optical patterns and lithographic techniques are used as part of a process to embed parallel and evenly spaced conductors in the non-planar surfaces of an insulator to produce high gradient insulators. The approach extends the size at which high gradient insulating structures can be fabricated and improves the performance of those insulators by reducing the scale of the alternating parallel lines of insulator and conductor along the surface. This fabrication approach also substantially decreases the cost required to produce high gradient insulators.
Wang, Jin; Lin, Yaochen; Pinault, Mathieu; Filoramo, Arianna; Fabert, Marc; Ratier, Bernard; Bouclé, Johann; Herlin-Boime, Nathalie
2015-01-14
This paper presents the continuous-flow and single-step synthesis of a TiO2/MWCNT (multiwall carbon nanotube) nanohybrid material. The synthesis method achieves high coverage and an intimate interface between the TiO2 particles and MWCNTs, together with a highly homogeneous distribution of nanotubes within the oxide. Used as the active layer in the porous photoelectrode of solid-state dye-sensitized solar cells, such materials lead to a substantial performance improvement (20%) as compared to reference devices.
Adaptive Swarm Balancing Algorithms for rare-event prediction in imbalanced healthcare data
Wong, Raymond K.; Mohammed, Sabah; Fiaidhi, Jinan; Sung, Yunsick
2017-01-01
Clinical data analysis and forecasting have made substantial contributions to disease control, prevention and detection. However, such data usually suffer from highly imbalanced samples in class distributions. In this paper, we aim to formulate effective methods to rebalance binary imbalanced datasets in which the positive samples are the minority. We investigate two different meta-heuristic algorithms, particle swarm optimization and bat algorithm, and apply them to empower the effects of synthetic minority over-sampling technique (SMOTE) for pre-processing the datasets. One approach is to process the full dataset as a whole. The other is to split up the dataset and adaptively process it one segment at a time. The experimental results reported in this paper reveal that the performance improvements obtained by the former approach do not scale to larger datasets. The latter methods, which we call Adaptive Swarm Balancing Algorithms, lead to significant efficiency and effectiveness improvements on large datasets where the whole-dataset approach fails, and we find them more consistent with the characteristics of typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE. The proposed methods lead to more credible classifier performance and shorter run times compared to the brute-force method. PMID:28753613
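The SMOTE step the paper builds on generates synthetic minority samples by interpolating between a minority point and one of its k nearest minority neighbours. A minimal sketch of that core interpolation follows; the swarm-based tuning of SMOTE's parameters described in the paper is omitted, and the brute-force nearest-neighbour search, toy data, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def smote(minority, n_new, k=5):
    """Create n_new synthetic minority samples: pick a random minority point,
    pick one of its k nearest minority neighbours, and interpolate between them."""
    n = len(minority)
    out = np.empty((n_new, minority.shape[1]))
    for m in range(n_new):
        i = rng.integers(n)
        d = np.linalg.norm(minority - minority[i], axis=1)  # brute-force k-NN
        neighbours = np.argsort(d)[1:k + 1]                 # skip the point itself
        j = rng.choice(neighbours)
        gap = rng.random()                                  # position on the segment
        out[m] = minority[i] + gap * (minority[j] - minority[i])
    return out

# Toy imbalance: 10 minority points in 2-D; rebalance by adding 90 synthetic ones.
minority = rng.normal(loc=2.0, size=(10, 2))
synthetic = smote(minority, n_new=90)
```

The two SMOTE parameters the paper tunes with swarm algorithms correspond to the amount of over-sampling (`n_new`) and the neighbourhood size (`k`) in this sketch.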
Pitsiladis, Yannis P; Durussel, Jérôme; Rabin, Olivier
2014-05-01
Administration of recombinant human erythropoietin (rHumanEPO) improves sporting performance and hence is frequently abused by athletes, even though rHumanEPO is prohibited by WADA. Approaches to detect rHumanEPO doping have improved significantly in recent years but remain imperfect. A new transcriptomic-based longitudinal screening approach is being developed that has the potential to improve the analytical performance of current detection methods. In particular, studies are being funded by WADA to identify a 'molecular signature' of rHumanEPO doping and preliminary results are promising. In the first systematic study to be conducted, the expression of hundreds of genes was found to be altered by rHumanEPO, with numerous gene transcripts being differentially expressed after the first injection and further transcripts profoundly upregulated during and subsequently downregulated up to 4 weeks postadministration of the drug; the same transcriptomic pattern was observed in all participants. The identification of a blood 'molecular signature' of rHumanEPO administration is the strongest evidence to date that gene biomarkers have the potential to substantially improve the analytical performance of current antidoping methods such as the Athlete Biological Passport for rHumanEPO detection. Given the early promise of transcriptomics, research using an 'omics'-based approach involving genomics, transcriptomics, proteomics and metabolomics should be intensified in order to achieve improved detection of rHumanEPO and other doping substances and methods that are difficult to detect, such as recombinant human growth hormone and blood transfusions.
Dunn, C.; Subbaraman, M.R.
1989-06-13
An improvement in an inducer for a pump is disclosed wherein the inducer includes a hub, a plurality of radially extending substantially helical blades and a wall member extending about and encompassing an outer periphery of the blades. The improvement comprises forming adjacent pairs of blades and the hub to provide a substantially rectangular cross-sectional flow area, which cross-sectional flow area decreases from the inlet end of the inducer to a discharge end of the inducer, resulting in increased inducer efficiency, improved suction performance, reduced susceptibility to cavitation, reduced susceptibility to hub separation and reduced fabrication costs. 11 figs.
Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F.
2011-01-01
Purpose To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self correction was developed. The information also allows simultaneous support for parallel imaging for multiple coil acquisitions. Materials and Methods Without a separate field map acquisition, a phase estimate from each echo in a multiple echo train was generated. When using a multiple channel coil, magnitude and phase estimates from each echo provide in-vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to simultaneously remove phase inconsistencies between echoes, and in the case of multiple coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single channel, multiple channel, and undersampled data. Results Substantial image quality improvements were demonstrated. Signal dropouts were completely removed and undersampling artifacts were well suppressed. Conclusion The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve the image quality of multiecho radial imaging, an important technique for fast 3D MRI data acquisition. PMID:21448967
Day, Ryan; Qu, Xiaotao; Swanson, Rosemarie; Bohannan, Zach; Bliss, Robert
2011-01-01
Most current template-based structure prediction methods concentrate on finding the correct backbone conformation and then packing sidechains within that backbone. Our packing-based method derives distance constraints from conserved relative packing groups (RPGs). In our refinement approach, the RPGs provide a level of resolution that restrains global topology while allowing conformational sampling. In this study, we test our template-based structure prediction method using 51 prediction units from CASP7 experiments. RPG-based constraints are able to substantially improve approximately two-thirds of starting templates. Upon deeper investigation, we find that true positive spatial constraints, especially those non-local in sequence, derived from the RPGs were important to building nearer native models. Surprisingly, the fraction of incorrect or false positive constraints does not strongly influence the quality of the final candidate. This result indicates that our RPG-based true positive constraints sample the self-consistent, cooperative interactions of the native structure. The lack of such reinforcing cooperativity explains the weaker effect of false positive constraints. Generally, these findings are encouraging indications that RPGs will improve template-based structure prediction. PMID:21210729
Chen, Minyong; Shi, Xiaofeng; Duke, Rebecca M.; Ruse, Cristian I.; Dai, Nan; Taron, Christopher H.; Samuelson, James C.
2017-01-01
A method for selective and comprehensive enrichment of N-linked glycopeptides was developed to facilitate detection of micro-heterogeneity of N-glycosylation. The method takes advantage of the inherent properties of Fbs1, which functions within the ubiquitin-mediated degradation system to recognize the common core pentasaccharide motif (Man3GlcNAc2) of N-linked glycoproteins. We show that Fbs1 is able to bind diverse types of N-linked glycomolecules; however, wild-type Fbs1 preferentially binds high-mannose-containing glycans. We identified Fbs1 variants through mutagenesis and plasmid display selection, which possess higher affinity and improved recovery of complex N-glycomolecules. In particular, we demonstrate that the Fbs1 GYR variant may be employed for substantially unbiased enrichment of N-linked glycopeptides from human serum. Most importantly, this highly efficient N-glycopeptide enrichment method enables the simultaneous determination of N-glycan composition and N-glycosites with a deeper coverage (compared to lectin enrichment) and improves large-scale N-glycoproteomics studies due to greatly reduced sample complexity. PMID:28534482
Boosting antibody developability through rational sequence optimization.
Seeliger, Daniel; Schulz, Patrick; Litzenburger, Tobias; Spitz, Julia; Hoerer, Stefan; Blech, Michaela; Enenkel, Barbara; Studts, Joey M; Garidel, Patrick; Karow, Anne R
2015-01-01
The application of monoclonal antibodies as commercial therapeutics poses substantial demands on the stability and properties of an antibody. Therapeutic molecules that exhibit favorable properties increase the success rate in development. However, it is not yet fully understood how the protein sequence of an antibody translates into favorable in vitro molecule properties. In this work, computational design strategies based on heuristic sequence analysis were used to systematically modify an antibody that exhibited a tendency to precipitation in vitro. The resulting series of closely related antibodies showed improved stability as assessed by biophysical methods and long-term stability experiments. As a notable observation, expression levels also improved in comparison with the wild-type candidate. The methods employed to optimize the protein sequences, as well as the biophysical data used to determine the effect on stability under conditions commonly used in the formulation of therapeutic proteins, are described. Together, the experimental and computational data led to consistent conclusions regarding the effect of the introduced mutations. Our approach exemplifies how computational methods can be used to guide antibody optimization for increased stability.
Method of managing interference during delay recovery on a train system
Gordon, Susanna P.; Evans, John A.
2005-12-27
The present invention provides methods for preventing low train voltages and managing interference, thereby improving the efficiency, reliability, and passenger comfort associated with commuter trains. An algorithm implementing neural network technology is used to predict low voltages before they occur. Once low voltages are predicted, multiple trains can be controlled to prevent low voltage events. Further, algorithms for managing interference are presented in the present invention. Different types of interference problems are addressed in the present invention such as "Interference During Acceleration", "Interference Near Station Stops", and "Interference During Delay Recovery." Managing such interference avoids unnecessary brake/acceleration cycles during acceleration, immediately before station stops, and after substantial delays. Algorithms are demonstrated to avoid oscillatory brake/acceleration cycles due to interference and to smooth the trajectories of closely following trains. This is achieved by maintaining sufficient following distances to avoid unnecessary braking/accelerating. These methods generate smooth train trajectories, making for a more comfortable ride, and improve train motor reliability by avoiding unnecessary mode-changes between propulsion and braking. These algorithms can also have a favorable impact on traction power system requirements and energy consumption.
Galaxy And Mass Assembly (GAMA): AUTOZ spectral redshift measurements, confidence and errors
NASA Astrophysics Data System (ADS)
Baldry, I. K.; Alpaslan, M.; Bauer, A. E.; Bland-Hawthorn, J.; Brough, S.; Cluver, M. E.; Croom, S. M.; Davies, L. J. M.; Driver, S. P.; Gunawardhana, M. L. P.; Holwerda, B. W.; Hopkins, A. M.; Kelvin, L. S.; Liske, J.; López-Sánchez, Á. R.; Loveday, J.; Norberg, P.; Peacock, J.; Robotham, A. S. G.; Taylor, E. N.
2014-07-01
The Galaxy And Mass Assembly (GAMA) survey has obtained spectra of over 230 000 targets using the Anglo-Australian Telescope. To homogenize the redshift measurements and improve the reliability, a fully automatic redshift code was developed (AUTOZ). The measurements were made using a cross-correlation method for both the absorption- and the emission-line spectra. Large deviations in the high-pass-filtered spectra are partially clipped in order to be robust against uncorrected artefacts and to reduce the weight given to single-line matches. A single figure of merit (FOM) was developed that puts all template matches on to a similar confidence scale. The redshift confidence as a function of the FOM was fitted with a tanh function using a maximum likelihood method applied to repeat observations of targets. The method could be adapted to provide robust automatic redshifts for other large galaxy redshift surveys. For the GAMA survey, there was a substantial improvement in the reliability of assigned redshifts and in the lowering of redshift uncertainties with a median velocity uncertainty of 33 km s-1.
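The core of such a cross-correlation redshift measurement can be sketched as a template shift search on a uniform log-wavelength grid, where a shift of s pixels corresponds to z = 10^(s * step) - 1. The sketch below is not the AUTOZ implementation (which high-pass filters and clips the spectra and combines many template matches into a single figure of merit); it is a minimal direct-sum illustration on a synthetic one-line spectrum, and all names are assumptions.

```python
import numpy as np

def xcorr_redshift(spec, template, loglam_step=1e-4, z_max=0.5):
    """Slide a rest-frame template along an observed spectrum, both sampled on
    a uniform log10-wavelength grid, and return (best z, peak normalized
    cross-correlation). A shift of s pixels maps to z = 10**(s*step) - 1."""
    spec = spec - spec.mean()
    template = template - template.mean()
    n = len(spec)
    best_s, best_c = 0, -np.inf
    max_shift = int(np.log10(1.0 + z_max) / loglam_step)
    for s in range(max_shift):
        a, b = spec[s:], template[:n - s]
        norm = np.linalg.norm(a) * np.linalg.norm(b)
        if norm > 0:
            c = float(np.dot(a, b)) / norm
            if c > best_c:
                best_c, best_s = c, s
    return 10.0 ** (best_s * loglam_step) - 1.0, best_c

# Synthetic check: a single absorption line redshifted by 200 pixels (z ~ 0.047).
pix = np.arange(3000)

def spectrum_with_line(center):
    return 1.0 - 0.5 * np.exp(-0.5 * ((pix - center) / 5.0) ** 2)

z_est, peak = xcorr_redshift(spectrum_with_line(1200), spectrum_with_line(1000))
```

Production codes replace the direct sum with FFT-based correlation and, as in AUTOZ, calibrate the peak value against repeat observations to turn it into a redshift confidence.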
Zolper, John C.; Sherwin, Marc E.; Baca, Albert G.
2000-01-01
A method for making compound semiconductor devices including the use of a p-type dopant is disclosed wherein the dopant is co-implanted with an n-type donor species at the time the n-channel is formed, and a single anneal at moderate temperature is then performed. Also disclosed are devices manufactured using the method. In the preferred embodiment, n-MESFETs and other similar field effect transistor devices are manufactured using C ions co-implanted with Si atoms in GaAs to form an n-channel. C is unique in the context of the invention in that it exhibits a low activation efficiency (typically 50% or less) as a p-type dopant; consequently, it acts to sharpen the Si n-channel by compensating Si donors in the region of the Si-channel tail but does not contribute substantially to the acceptor concentration in the buried p region. As a result, the invention provides for improved field effect semiconductor and related devices with enhancement of both DC and high-frequency performance.
Use of Monoclonal Antibodies for the Diagnosis of T-cell Malignancies: Applications and Limitations.
Hastrup, N; Pallesen, G; Ralfkiaer, E
1990-01-01
Biopsy samples from 136 peripheral T-cell lymphomas have been examined and compared with benign inflammatory T-cell infiltrates in an attempt to establish whether immunohistological methods may help to improve the distinction between these conditions. The results confirm and extend previous reports and indicate that aberrant T-cell phenotypes constitute the single most reliable criterion for the distinction between benign and malignant T-cell infiltrates. These phenotypes are expressed frequently in T-cell malignancies in lymphoid organs and are also seen in a substantial number of biopsy samples from advanced cutaneous T-cell lymphomas (CTCL). In contrast, early CTCL do not express aberrant T-cell phenotypes and are indistinguishable from benign cutaneous conditions in terms of their immunophenotypic properties. It is concluded that immunophenotypic techniques form a valuable supplement to routine histological methods for the diagnosis of T-cell lymphomas in lymphoid organs. The methods may also help to improve the diagnosis of advanced CTCL, but are of no or only limited help for the recognition of the early stages.
Parallel transmission RF pulse design for eddy current correction at ultra high field.
Zheng, Hai; Zhao, Tiejun; Qian, Yongxian; Ibrahim, Tamer; Boada, Fernando
2012-08-01
Multidimensional spatially selective RF pulses have been used in MRI applications such as mitigation of B₁ and B₀ inhomogeneities. However, their long pulse duration has limited their practical application. Recently, theoretical and experimental studies have shown that parallel transmission can effectively shorten pulse duration without sacrificing the quality of the excitation pattern. Nonetheless, parallel transmission with accelerated pulses can be severely impeded by hardware and/or system imperfections. One such imperfection is the eddy current field. In this paper, we first show the effects of the eddy current field on the excitation pattern and then report an RF pulse design method to correct eddy current fields caused by the RF coil and the gradient system. Experimental results on a 7 T human eight-channel parallel transmit system show substantial improvements in excitation patterns with the use of eddy current correction. Moreover, the proposed model-based correction method not only demonstrates excitation patterns comparable to those of the trajectory measurement method, but also significantly improves time efficiency.
Oluwasola, Abideen O; Malaka, David; Khramtsov, Andrey Ilyich; Ikpatt, Offiong Francis; Odetunde, Abayomi; Adeyanju, Oyinlolu Olorunsogo; Sveen, Walmy Elisabeth; Falusi, Adeyinka Gloria; Huo, Dezheng; Olopade, Olufunmilayo Ibironke
2013-12-01
The importance of hormone receptor status in assigning treatment and the potential use of human epidermal growth factor receptor 2 (HER2)-targeted therapy have made it beneficial for laboratories to improve detection techniques. Because interlaboratory variability in immunohistochemistry (IHC) tests may also affect studies of breast cancer subtypes in different countries, we undertook a Web-based quality improvement training and a comparative study of accuracy of immunohistochemical tests of breast cancer biomarkers between a well-established laboratory in the United States (University of Chicago) and a field laboratory in Ibadan, Nigeria. Two hundred and thirty-two breast tumor blocks were evaluated for estrogen receptors (ERs), progesterone receptors (PRs), and HER2 status at both laboratories using tissue microarray technique. Initially, concordance analysis revealed κ scores of 0.42 (moderate agreement) for ER, 0.41 (moderate agreement) for PR, and 0.39 (fair agreement) for HER2 between the 2 laboratories. Antigen retrieval techniques and scoring methods were identified as important reasons for discrepancy. Web-based conferences using Web conferencing tools such as Skype and WebEx were then held periodically to discuss IHC staining protocols and standard scoring systems and to resolve discrepant cases. After quality assurance and training, the agreement improved to 0.64 (substantial agreement) for ER, 0.60 (moderate agreement) for PR, and 0.75 (substantial agreement) for HER2. We found Web-based conferences and digital microscopy useful and cost-effective tools for quality assurance of IHC, consultation, and collaboration between distant laboratories. Quality improvement exercises in testing of tumor biomarkers will reduce misclassification in epidemiologic studies of breast cancer subtypes and provide much needed capacity building in resource-poor countries.
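The agreement figures quoted above are Cohen's kappa scores. As a reference point, a sketch of the kappa computation on hypothetical two-category ER calls (the counts below are invented purely for illustration):

```python
def cohens_kappa(pairs):
    """Cohen's kappa for two raters' categorical calls: observed
    agreement corrected for the agreement expected by chance from
    each rater's marginal category frequencies."""
    n = len(pairs)
    cats = sorted({c for pair in pairs for c in pair})
    po = sum(1 for a, b in pairs if a == b) / n            # observed
    pe = sum((sum(1 for a, _ in pairs if a == c) / n)       # expected
             * (sum(1 for _, b in pairs if b == c) / n) for c in cats)
    return (po - pe) / (1 - pe)

# Hypothetical ER calls ('pos'/'neg') from two labs on 100 blocks:
# 80 agreements, 20 disagreements, balanced marginals.
calls = ([('pos', 'pos')] * 40 + [('neg', 'neg')] * 40
         + [('pos', 'neg')] * 10 + [('neg', 'pos')] * 10)
k = cohens_kappa(calls)  # 0.80 observed vs 0.50 chance -> kappa 0.6
```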
New method of fixation of in-bone implanted prosthesis
Pitkin, Mark; Cassidy, Charles; Muppavarapu, Raghuveer; Raymond, James; Shevtsov, Maxim; Galibin, Oleg; Rousselle, Serge D.
2013-01-01
This article presents results on the effectiveness of a new version of the titanium porous composite skin and bone integrated pylon (SBIP). The SBIP is designed for direct skeletal attachment of limb prostheses and was evaluated in a pre-clinical study with three rabbits. In accordance with the study protocol, a new version of the pylon (SBIP-3) was implanted into the hind leg residuum of three rabbits. The SBIP-3 has side fins that are designed to improve the bond between the bone and pylon. The fins are positioned inside two slots precut in the bone walls; their length can be adjusted to match the thickness of the bone walls. After 13 (animal 1) or 26 wk (animals 2 and 3), the animals were sacrificed and samples collected for histopathological analysis. The space between the fins and the bone into which they were fit was filled with fibro-vascular tissue and woven bone. No substantial inflammation was found. We suggest that if further studies substantiate the present results, the proposed method can become an alternative to the established technique of implanting prostheses into the medullar canal of the hosting bone. PMID:24013918
Deformable known component model-based reconstruction for coronary CT angiography
NASA Astrophysics Data System (ADS)
Zhang, X.; Tilley, S.; Xu, S.; Mathews, A.; McVeigh, E. R.; Stayman, J. W.
2017-03-01
Purpose: Atherosclerosis detection remains challenging in coronary CT angiography for patients with cardiac implants. Pacing electrodes of a pacemaker or lead components of a defibrillator can create substantial blooming and streak artifacts in the heart region, severely hindering the visualization of a plaque of interest. We present a novel reconstruction method that incorporates a deformable model for metal leads to eliminate metal artifacts and improve anatomy visualization even near the boundary of the component. Methods: The proposed reconstruction method, referred to as STF-dKCR, includes a novel parameterization of the component that integrates deformation, a 3D-2D preregistration process that estimates component shape and position, and a polyenergetic forward model for x-ray propagation through the component where the spectral properties are jointly estimated. The methodology was tested on physical data of a cardiac phantom acquired on a CBCT testbench. The phantom included a simulated vessel, a metal wire emulating a pacing lead, and a small Teflon sphere attached to the vessel wall, mimicking a calcified plaque. The proposed method was also compared to the traditional FBP reconstruction and an interpolation-based metal correction method (FBP-MAR). Results: Metal artifacts present in standard FBP reconstruction were significantly reduced in both FBP-MAR and STF-dKCR, yet only the STF-dKCR approach significantly improved the visibility of the small Teflon target (within 2 mm of the metal wire). The attenuation of the Teflon bead improved to 0.0481 mm-1 with STF-dKCR from 0.0166 mm-1 with FBP and from 0.0301 mm-1 with FBP-MAR, much closer to the expected 0.0414 mm-1. Conclusion: The proposed method has the potential to improve plaque visualization in coronary CT angiography in the presence of wire-shaped metal components.
Critical Assessment of Small Molecule Identification 2016: automated methods.
Schymanski, Emma L; Ruttkies, Christoph; Krauss, Martin; Brouard, Céline; Kind, Tobias; Dührkop, Kai; Allen, Felicity; Vaniya, Arpana; Verdegem, Dries; Böcker, Sebastian; Rousu, Juho; Shen, Huibin; Tsugawa, Hiroshi; Sajed, Tanvir; Fiehn, Oliver; Ghesquière, Bart; Neumann, Steffen
2017-03-27
The fourth round of the Critical Assessment of Small Molecule Identification (CASMI) Contest ( www.casmi-contest.org ) was held in 2016, with two new categories for automated methods. This article covers the 208 challenges in Categories 2 and 3, without and with metadata, from organization, participation, results and post-contest evaluation of CASMI 2016 through to perspectives for future contests and small molecule annotation/identification. The Input Output Kernel Regression (CSI:IOKR) machine learning approach performed best in "Category 2: Best Automatic Structural Identification-In Silico Fragmentation Only", won by Team Brouard with 41% challenge wins. The winner of "Category 3: Best Automatic Structural Identification-Full Information" was Team Kind (MS-FINDER), with 76% challenge wins. The best methods were able to achieve over 30% Top 1 ranks in Category 2, with all methods ranking the correct candidate in the Top 10 in around 50% of challenges. This success rate rose to 70% Top 1 ranks in Category 3, with candidates in the Top 10 in over 80% of the challenges. The machine learning and chemistry-based approaches are shown to perform in complementary ways. The improvement in (semi-)automated fragmentation methods for small molecule identification has been substantial. The achieved high rates of correct candidates in the Top 1 and Top 10, despite large candidate numbers, open up great possibilities for high-throughput annotation of untargeted analysis for "known unknowns". As more high quality training data becomes available, the improvements in machine learning methods will likely continue, but the alternative approaches still provide valuable complementary information. Improved integration of experimental context will also improve identification success further for "real life" annotations. The true "unknown unknowns" remain to be evaluated in future CASMI contests.
Real-Time Detection Method And System For Identifying Individual Aerosol Particles
Gard, Eric Evan; Fergenson, David Philip
2005-10-25
A method and system of identifying individual aerosol particles in real time. Sample aerosol particles are compared against and identified with substantially matching known particle types by producing positive and negative test spectra of an individual aerosol particle using a bipolar single particle mass spectrometer. Each test spectrum is compared to spectra of the same respective polarity in a database of predetermined positive and negative spectra for known particle types and a set of substantially matching spectra is obtained. Finally the identity of the individual aerosol particle is determined from the set of substantially matching spectra by determining a best matching one of the known particle types having both a substantially matching positive spectrum and a substantially matching negative spectrum associated with the best matching known particle type.
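A schematic of the dual-polarity matching logic described above: both the positive and the negative test spectrum must match spectra of the same library entry, and the best match is scored by the weaker of the two. The cosine score, threshold, and library entries are illustrative assumptions, not the system's actual scoring:

```python
import math

def cosine(a, b):
    """Cosine similarity between two spectra given as intensity lists."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def identify_particle(pos, neg, library, threshold=0.7):
    """Return the best-matching known particle type, requiring BOTH
    polarities to match the same entry (score = weaker polarity)."""
    best_name, best_score = None, threshold
    for name, (lib_pos, lib_neg) in library.items():
        score = min(cosine(pos, lib_pos), cosine(neg, lib_neg))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy two-entry library of (positive, negative) reference spectra.
library = {
    'soot': ([1, 0, 1, 0], [0, 1, 0, 1]),
    'salt': ([0, 1, 0, 1], [1, 0, 1, 0]),
}
```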
Investigating Star-Gas Correlation and Evolution in the 100pc Cygnus X Complex
NASA Astrophysics Data System (ADS)
Gutermuth, Robert
We request support to pursue a substantial refinement of the ongoing characterizations of star and gas surface density in nearby star-forming regions by engaging in a focused study of the Cygnus X star-forming complex. The substantial physical size of the region and the high spatial dynamic range of its surveys enable us to achieve the following science goals: - Characterize the distributions of gas and stellar column densities in a large, nearby star-forming complex and integrate those values over successively larger physical scales in order to gauge the effect of varying physical resolution on the measured star-gas correlation. - Validate integrated 24 μm luminosity as a method of estimating star formation rate surface density using a region in which the substantial number of known forming members should ensure that the IMF is statistically well-sampled. - Validate 12CO luminosity as a method of estimating molecular gas column density against 13CO column density, integrated 24 micron luminosity, and radio continuum luminosity. To achieve these goals, we will perform substantial improvement and expansion of the Cygnus X Spitzer (and 2MASS) Legacy Survey point source catalog using UKIRT Infrared Deep Sky Survey (UKIDSS) near-IR data and WISE mid-IR data. From this catalog, we will produce a comprehensive census of young stellar objects (YSOs) with IR-excess emission over the numerical bulk of the stellar mass function (0.2 to 2 M⊙). This YSO catalog is expected to be considerably larger than the entire YSO census of the nearest kiloparsec. Both the point source and YSO catalogs will be contributed to the Infrared Science Archive (IRSA) to facilitate community access to these improved data products. In addition, we will provide a star formation surface density map derived from the MIPS 24 micron map of Cygnus X from the Spitzer Legacy Survey and gas column density maps derived from 12CO and 13CO data from the Exeter-Five College Radio Astronomy Observatory Cygnus Survey.
The proposed program will bring to maturity a major new scientific result from the combination of data from several NASA program investments (Spitzer Legacy, WISE, & 2MASS) and some external archives (UKIDSS GPS, Exeter-FCRAO XGRS) that we have shown above add considerable value to the scientific interpretation of the data from the NASA archive. The improvement in effective sensitivity to low mass YSOs from the Cygnus X Legacy Survey source catalog and our targeted science investigation to examine the star-gas correlation (and any deviation that may correlate with local YSO evolutionary age) are relevant to the NASA Astrophysics Theme, Cosmic Origins, which aspires to unveil how the universe developed to the current day configuration of galaxies, stars and planets and the conditions necessary for life.
Substantially oxygen-free contact tube
NASA Technical Reports Server (NTRS)
Pike, James F. (Inventor)
1993-01-01
A device for arc welding is provided in which a continuously-fed electrode wire is in electrical contact with a contact tube. The contact tube is improved by using a substantially oxygen-free conductive alloy in order to reduce the amount of electrical erosion.
Substantially Oxygen-Free Contact Tube
NASA Technical Reports Server (NTRS)
Pike, James F. (Inventor)
1991-01-01
A device for arc welding is provided in which a continuously-fed electrode wire is in electrical contact with a contact tube. The contact tube is improved by using a substantially oxygen-free conductive alloy in order to reduce the amount of electrical erosion.
NASA Astrophysics Data System (ADS)
Tomasi, G.; Kimberley, S.; Rosso, L.; Aboagye, E.; Turkheimer, F.
2012-04-01
In positron emission tomography (PET) studies involving organs different from the brain, ignoring the metabolite contribution to the tissue time-activity curves (TAC), as in the standard single-input (SI) models, may compromise the accuracy of the estimated parameters. We employed here double-input (DI) compartmental modeling (CM), previously used for [11C]thymidine, and a novel DI spectral analysis (SA) approach on the tracers 5-[18F]fluorouracil (5-[18F]FU) and [18F]fluorothymidine ([18F]FLT). CM and SA were performed initially with a SI approach using the parent plasma TAC as an input function. These methods were then employed using a DI approach with the metabolite plasma TAC as an additional input function. Regions of interest (ROIs) corresponding to healthy liver, kidneys and liver metastases for 5-[18F]FU and to tumor, vertebra and liver for [18F]FLT were analyzed. For 5-[18F]FU, the improvement of the fit quality with the DI approaches was remarkable; in CM, the Akaike information criterion (AIC) always selected the DI over the SI model. Volume of distribution estimates obtained with DI CM and DI SA were in excellent agreement, for both parent 5-[18F]FU (R2 = 0.91) and metabolite [18F]FBAL (R2 = 0.99). For [18F]FLT, the DI methods provided notable improvements but less substantial than for 5-[18F]FU due to the lower rate of metabolism of [18F]FLT. On the basis of the AIC values, agreement between [18F]FLT Ki estimated with the SI and DI models was good (R2 = 0.75) for the ROIs where the metabolite contribution was negligible, indicating that the additional input did not bias the parent tracer only-related estimates. When the AIC suggested a substantial contribution of the metabolite [18F]FLT-glucuronide, on the other hand, the change in the parent tracer only-related parameters was significant (R2 = 0.33 for Ki). 
Our results indicated that improvements of DI over SI approaches can range from moderate to substantial and are more significant for tracers with a high rate of metabolism. Furthermore, they showed that SA is suitable for DI modeling and can be used effectively in the analysis of PET data.
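The model selection in the abstract above relies on the Akaike information criterion. For least-squares fits a common form is AIC = n ln(RSS/n) + 2k, where the lower value wins; the residuals and parameter counts below are hypothetical, showing how a double-input model with extra parameters can still be selected:

```python
import math

def aic_least_squares(rss, n_points, n_params):
    """AIC for a least-squares fit: extra parameters must 'pay' for
    themselves through a sufficiently lower residual sum of squares."""
    return n_points * math.log(rss / n_points) + 2 * n_params

# Hypothetical time-activity-curve fits: the DI model halves the
# residual at the cost of two extra parameters and still wins.
n = 30
aic_si = aic_least_squares(rss=4.0, n_points=n, n_params=4)  # single-input
aic_di = aic_least_squares(rss=2.0, n_points=n, n_params=6)  # double-input
```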
Pruning Rogue Taxa Improves Phylogenetic Accuracy: An Efficient Algorithm and Webservice
Aberer, Andre J.; Krompass, Denis; Stamatakis, Alexandros
2013-01-01
The presence of rogue taxa (rogues) in a set of trees can frequently have a negative impact on the results of a bootstrap analysis (e.g., the overall support in consensus trees). We introduce an efficient graph-based algorithm for rogue taxon identification as well as an interactive webservice implementing this algorithm. Compared with our previous method, the new algorithm is up to 4 orders of magnitude faster, while returning qualitatively identical results. Because of this significant improvement in scalability, the new algorithm can now identify substantially more complex and compute-intensive rogue taxon constellations. On a large and diverse collection of real-world data sets, we show that our method yields better supported reduced/pruned consensus trees than any competing rogue taxon identification method. Using the parallel version of our open-source code, we successfully identified rogue taxa in a set of 100 trees with 116 334 taxa each. For simulated data sets, we show that when removing/pruning rogue taxa with our method from a tree set, we consistently obtain bootstrap consensus trees as well as maximum-likelihood trees that are topologically closer to the respective true trees. PMID:22962004
Sastry, Madhavi; Lowrie, Jeffrey F; Dixon, Steven L; Sherman, Woody
2010-05-24
A systematic virtual screening study on 11 pharmaceutically relevant targets has been conducted to investigate the interrelation between 8 two-dimensional (2D) fingerprinting methods, 13 atom-typing schemes, 13 bit scaling rules, and 12 similarity metrics using the new cheminformatics package Canvas. In total, 157 872 virtual screens were performed to assess the ability of each combination of parameters to identify actives in a database screen. In general, fingerprint methods, such as MOLPRINT2D, Radial, and Dendritic that encode information about local environment beyond simple linear paths outperformed other fingerprint methods. Atom-typing schemes with more specific information, such as Daylight, Mol2, and Carhart were generally superior to more generic atom-typing schemes. Enrichment factors across all targets were improved considerably with the best settings, although no single set of parameters performed optimally on all targets. The size of the addressable bit space for the fingerprints was also explored, and it was found to have a substantial impact on enrichments. Small bit spaces, such as 1024, resulted in many collisions and in a significant degradation in enrichments compared to larger bit spaces that avoid collisions.
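One widely used similarity metric in 2D fingerprint screening is the Tanimoto (Jaccard) coefficient. The abstract does not name its 12 metrics, so this is purely illustrative of how such a metric scores two fingerprints represented as sets of on-bits:

```python
def tanimoto(a, b):
    """Tanimoto similarity between two fingerprints given as sets of
    on-bit indices: |intersection| / |union| (1.0 for two empty sets)."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Two fingerprints sharing 2 of 4 distinct on-bits score 0.5; note
# how a small addressable bit space would fold distinct features onto
# the same indices (collisions) and inflate this score.
```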
Chen, Baojiang; Qin, Jing
2014-05-10
In statistical analysis, a regression model is needed if one is interested in the relationship between a response variable and covariates. The response may depend on a covariate through some unknown function of that covariate. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, people often employ the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work, as the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study.
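For reference, a compact implementation of the pool-adjacent-violators algorithm (PAVA) mentioned above, fitting a non-decreasing sequence to data by least squares:

```python
def pava(y, weights=None):
    """Pool-adjacent-violators algorithm: weighted least-squares fit
    of a non-decreasing sequence to y."""
    if weights is None:
        weights = [1.0] * len(y)
    # Each block holds [weighted mean, total weight, point count].
    blocks = []
    for value, w in zip(y, weights):
        blocks.append([value, w, 1])
        # Pool backwards while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            w12 = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / w12, w12, c1 + c2])
    fit = []
    for mean, _, count in blocks:
        fit.extend([mean] * count)
    return fit

# The violation (3 before 1) is pooled to the pair's mean:
print(pava([1, 3, 1, 5]))  # -> [1, 2.0, 2.0, 5]
```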
A Robust Bayesian Random Effects Model for Nonlinear Calibration Problems
Fong, Y.; Wakefield, J.; De Rosa, S.; Frahm, N.
2013-01-01
In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in these samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this paper, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a re-parameterization of the five parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance measured in terms of mean squared error of estimation and a measure of the average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods. PMID:22551415
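The curve in question is the five-parameter logistic (5PL). A sketch of its standard form and its inverse, which is the calibration step of predicting a concentration from an observed response (the parameter values in the example are made up, and this is the standard 5PL, not the paper's re-parameterization):

```python
def five_pl(x, a, b, c, d, g):
    """Standard 5PL response curve:
    f(x) = d + (a - d) / (1 + (x / c) ** b) ** g,
    with asymptotes a (at x = 0) and d (as x -> infinity)."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

def inverse_five_pl(y, a, b, c, d, g):
    """Invert the 5PL: recover the concentration x giving response y
    (defined for y strictly between the asymptotes)."""
    return c * (((a - d) / (y - d)) ** (1.0 / g) - 1.0) ** (1.0 / b)

# Round trip: response at x = 2, then back to the concentration.
y = five_pl(2.0, a=1.0, b=2.0, c=1.0, d=10.0, g=1.0)
x = inverse_five_pl(y, a=1.0, b=2.0, c=1.0, d=10.0, g=1.0)
```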
Wavelet median denoising of ultrasound images
NASA Astrophysics Data System (ADS)
Macey, Katherine E.; Page, Wyatt H.
2002-05-01
Ultrasound images are contaminated with both additive and multiplicative noise, which is modeled by Gaussian and speckle noise respectively. Distinguishing small features such as fallopian tubes in the female genital tract in the noisy environment is problematic. A new method for noise reduction, Wavelet Median Denoising, is presented. Wavelet Median Denoising consists of performing a standard noise reduction technique, median filtering, in the wavelet domain. The new method is tested on 126 images, comprised of 9 original images each with 14 levels of Gaussian or speckle noise. Results for both separable and non-separable wavelets are evaluated, relative to soft-thresholding in the wavelet domain, using the signal-to-noise ratio and subjective assessment. The performance of Wavelet Median Denoising is comparable to that of soft-thresholding. Both methods are more successful in removing Gaussian noise than speckle noise. Wavelet Median Denoising outperforms soft-thresholding for a larger number of cases of speckle noise reduction than of Gaussian noise reduction. Noise reduction is more successful using non-separable wavelets than separable wavelets. When both methods are applied to ultrasound images obtained from a phantom of the female genital tract a small improvement is seen; however, a substantial improvement is required prior to clinical use.
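The core idea above, median filtering applied to wavelet coefficients, can be sketched in one dimension with a single-level Haar transform. The paper works on 2D images with several wavelet families; this is only a minimal illustration:

```python
def haar_forward(x):
    """Single-level Haar transform of an even-length signal:
    pairwise averages (approximation) and half-differences (detail)."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return avg, det

def haar_inverse(avg, det):
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

def median3(v):
    """Length-3 sliding median (edges use the available neighbors)."""
    out = []
    for i in range(len(v)):
        w = sorted(v[max(0, i - 1):i + 2])
        out.append(w[len(w) // 2])
    return out

def wavelet_median_denoise(x):
    """Median-filter the detail band, keep the approximation band,
    then reconstruct: wavelet-domain median denoising in 1D."""
    avg, det = haar_forward(x)
    return haar_inverse(avg, median3(det))
```

An isolated impulse shows up mostly in the detail band, so the median filter suppresses it while a smooth signal passes through unchanged.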
Improved Discrete Approximation of Laplacian of Gaussian
NASA Technical Reports Server (NTRS)
Shuler, Robert L., Jr.
2004-01-01
An improved method of computing a discrete approximation of the Laplacian of a Gaussian convolution of an image has been devised. The primary advantage of the method is that, without substantially degrading the accuracy of the end result, it reduces the amount of information that must be processed and thus reduces the amount of circuitry needed to perform the Laplacian-of-Gaussian (LOG) operation. Some background information is necessary to place the method in context. The method is intended for application to the LOG part of a process of real-time digital filtering of digitized video data that represent brightnesses in pixels in a square array. The particular filtering process of interest is one that converts pixel brightnesses to binary form, thereby reducing the amount of computation that must be performed in subsequent correlation processing (e.g., correlations between images in a stereoscopic pair for determining distances, or correlations between successive frames of the same image for detecting motions). The Laplacian is often included in the filtering process because it emphasizes edges and textures, while the Gaussian is often included because it smooths out noise that might not be consistent between left and right images or between successive frames of the same image.
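A sketch of the conventional LOG-plus-binarization pipeline that the improved method addresses: build a discrete, zero-sum LoG kernel, convolve, and keep only the sign bit. The kernel size and sigma are illustrative choices, and this is the textbook formulation, not the patented approximation:

```python
import math

def log_kernel(sigma, radius):
    """Discrete Laplacian-of-Gaussian kernel, mean-subtracted so it
    sums to zero (a flat region then yields zero response)."""
    size = 2 * radius + 1
    k = []
    for y in range(-radius, radius + 1):
        row = []
        for x in range(-radius, radius + 1):
            r2, s2 = x * x + y * y, sigma * sigma
            row.append(-(1.0 / (math.pi * s2 * s2))
                       * (1.0 - r2 / (2.0 * s2))
                       * math.exp(-r2 / (2.0 * s2)))
        k.append(row)
    mean = sum(map(sum, k)) / (size * size)
    return [[v - mean for v in row] for row in k]

def binarize_log(image, sigma=1.0, radius=3):
    """Convolve with the LoG kernel (clamped borders) and keep only
    the sign bit: the one-bit edge/texture encoding described above."""
    k = log_kernel(sigma, radius)
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            acc = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += k[dy + radius][dx + radius] * image[yy][xx]
            row.append(1 if acc > 0 else 0)
        out.append(row)
    return out

# A vertical step edge: the LoG response changes sign across it,
# so both bit values appear in every row.
image = [[0 if x < 5 else 100 for x in range(10)] for _ in range(10)]
edges = binarize_log(image)
```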
An improved current potential method for fast computation of stellarator coil shapes
NASA Astrophysics Data System (ADS)
Landreman, Matt
2017-04-01
Several fast methods for computing stellarator coil shapes are compared, including the classical NESCOIL procedure (Merkel 1987 Nucl. Fusion 27 867), its generalization using truncated singular value decomposition, and a Tikhonov regularization approach we call REGCOIL in which the squared current density is included in the objective function. Considering W7-X and NCSX geometries, and for any desired level of regularization, we find the REGCOIL approach simultaneously achieves lower surface-averaged and maximum values of both current density (on the coil winding surface) and normal magnetic field (on the desired plasma surface). This approach therefore can simultaneously improve the free-boundary reconstruction of the target plasma shape while substantially increasing the minimum distances between coils, preventing collisions between coils while improving access for ports and maintenance. The REGCOIL method also allows finer control over the level of regularization, it preserves convexity to ensure the local optimum found is the global optimum, and it eliminates two pathologies of NESCOIL: the resulting coil shapes become independent of the arbitrary choice of angles used to parameterize the coil surface, and the resulting coil shapes converge rather than diverge as Fourier resolution is increased. We therefore contend that REGCOIL should be used instead of NESCOIL for applications in which a fast and robust method for coil calculation is needed, such as when targeting coil complexity in fixed-boundary plasma optimization, or for scoping new stellarator geometries.
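The REGCOIL idea of adding the squared current density to the objective is Tikhonov regularization. A generic sketch on a small dense least-squares problem (the real method works with current potentials and surface integrals; the matrices here are toy data):

```python
def tikhonov_solve(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2 via the regularized
    normal equations (A^T A + lam I) x = A^T b, using Gaussian
    elimination with partial pivoting."""
    m, n = len(A), len(A[0])
    M = [[sum(A[k][i] * A[k][j] for k in range(m))
          + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    rhs = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            rhs[r] -= f * rhs[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(M[i][j] * x[j]
                             for j in range(i + 1, n))) / M[i][i]
    return x

# lam = 0 is ordinary least squares; raising lam trades residual
# (field error) against solution norm (current density).
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x0 = tikhonov_solve(A, b, 0.0)    # exact fit: [1, 2]
x1 = tikhonov_solve(A, b, 10.0)   # shrunk toward zero
```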
Classifying four-category visual objects using multiple ERP components in single-trial ERP.
Qin, Yu; Zhan, Yu; Wang, Changming; Zhang, Jiacai; Yao, Li; Guo, Xiaojuan; Wu, Xia; Hu, Bin
2016-08-01
Object categorization using single-trial electroencephalography (EEG) data measured while participants view images has been studied intensively. In previous studies, multiple event-related potential (ERP) components (e.g., P1, N1, P2, and P3) were used to improve the performance of object categorization of visual stimuli. In this study, we introduce a novel method that uses multiple-kernel support vector machine to fuse multiple ERP component features. We investigate whether fusing the potential complementary information of different ERP components (e.g., P1, N1, P2a, and P2b) can improve the performance of four-category visual object classification in single-trial EEGs. We also compare the classification accuracy of different ERP component fusion methods. Our experimental results indicate that the classification accuracy increases through multiple ERP fusion. Additional comparative analyses indicate that the multiple-kernel fusion method can achieve a mean classification accuracy higher than 72 %, which is substantially better than that achieved with any single ERP component feature (55.07 % for the best single ERP component, N1). We compare the classification results with those of other fusion methods and determine that the accuracy of the multiple-kernel fusion method is 5.47, 4.06, and 16.90 % higher than those of feature concatenation, feature extraction, and decision fusion, respectively. Our study shows that our multiple-kernel fusion method outperforms other fusion methods and thus provides a means to improve the classification performance of single-trial ERPs in brain-computer interface research.
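At its simplest, multiple-kernel fusion combines per-component kernel matrices as a weighted sum before training a single SVM. A sketch with fixed weights (the paper's multiple-kernel SVM learns the combination; the matrices and weights here are illustrative):

```python
def combine_kernels(kernels, weights):
    """Fuse per-component kernel matrices (e.g. one per ERP component
    such as P1, N1, P2a, P2b) as a weighted sum. A convex combination
    of valid kernels is itself a valid kernel."""
    n = len(kernels[0])
    return [[sum(w * K[i][j] for w, K in zip(weights, kernels))
             for j in range(n)] for i in range(n)]

# Two toy 2x2 kernels fused with equal weights.
K1 = [[1.0, 0.0], [0.0, 1.0]]
K2 = [[0.0, 1.0], [1.0, 0.0]]
K = combine_kernels([K1, K2], [0.5, 0.5])
```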
Balachandran, Priya; Friberg, Maria; Vanlandingham, V; Kozak, K; Manolis, Amanda; Brevnov, Maxim; Crowley, Erin; Bird, Patrick; Goins, David; Furtado, Manohar R; Petrauskene, Olga V; Tebbs, Robert S; Charbonneau, Duane
2012-02-01
Reducing the risk of Salmonella contamination in pet food is critical for both companion animals and humans, and its importance is reflected by the substantial increase in the demand for pathogen testing. Accurate and rapid detection of foodborne pathogens improves food safety, protects the public health, and benefits food producers by assuring product quality while facilitating product release in a timely manner. Traditional culture-based methods for Salmonella screening are laborious and can take 5 to 7 days to obtain definitive results. In this study, we developed two methods for the detection of low levels of Salmonella in pet food using real-time PCR: (i) detection of Salmonella in 25 g of dried pet food in less than 14 h with an automated magnetic bead-based nucleic acid extraction method and (ii) detection of Salmonella in 375 g of composite dry pet food matrix in less than 24 h with a manual centrifugation-based nucleic acid preparation method. Both methods included a preclarification step using a novel protocol that removes food matrix-associated debris and PCR inhibitors and improves the sensitivity of detection. Validation studies revealed no significant differences between the two real-time PCR methods and the standard U.S. Food and Drug Administration Bacteriological Analytical Manual (chapter 5) culture confirmation method.
Factor models for cancer signatures
NASA Astrophysics Data System (ADS)
Kakushadze, Zura; Yu, Willie
2016-11-01
We present a novel method for extracting cancer signatures by applying statistical risk models (http://ssrn.com/abstract=2732453) from quantitative finance to cancer genome data. Using 1389 whole genome sequenced samples from 14 cancers, we identify an "overall" mode of somatic mutational noise. We give a prescription for factoring out this noise and source code for fixing the number of signatures. We apply nonnegative matrix factorization (NMF) to genome data aggregated by cancer subtype and filtered using our method. The resultant signatures have substantially lower variability than those from unfiltered data. Also, the computational cost of signature extraction is cut by about a factor of 10. We find 3 novel cancer signatures, including a liver cancer dominant signature (96% contribution) and a renal cell carcinoma signature (70% contribution). Our method accelerates finding new cancer signatures and improves their overall stability. Reciprocally, the methods for extracting cancer signatures could have interesting applications in quantitative finance.
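The NMF stage of such a pipeline can be sketched on a synthetic mutation-count matrix (96 trinucleotide contexts per sample is the standard convention; the counts and rank here are made up, not the paper's data):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
# Synthetic mutation counts: 96 trinucleotide contexts x 14 cancer subtypes.
counts = rng.poisson(5.0, size=(96, 14)).astype(float)

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(counts)   # signatures: 96 contexts x 3 components
H = model.components_             # exposures:  3 components x 14 subtypes
```

A real analysis would first apply the paper's noise-filtering step to `counts` and choose the rank from the data rather than fixing it at 3.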
Removal of central obscuration and spider arm effects with beam-shaping coronagraphy
NASA Astrophysics Data System (ADS)
Abe, L.; Murakami, N.; Nishikawa, J.; Tamura, M.
2006-05-01
This paper describes a method for removing the effect of a centrally obscured aperture with additional spider arms in arbitrary geometrical configurations. The proposed method is based on a two-stage process where the light beam is first shaped to remove the central obscuration and spider arms, in order to feed a second, highly efficient coronagraph. The beam-shaping stage is a combination of a diffraction mask in the first focal plane and a complex amplitude filter located in the conjugate pupil. This paper specifically describes the case of using Lyot occulting masks and circular phase-shifting masks as diffracting components. The basic principle of the method is given along with an analytical description and numerical simulations. Substantial improvement in the performance of high-contrast coronagraphs can be obtained with this method, even if the beam-shaping filter is not perfectly manufactured.
Devers, Kelly J
2011-02-01
The 10-year systematic review of published health services and management research by Weiner et al. (2011) chronicles the contributions of qualitative methods, highlights areas of substantial progress, and identifies areas in need of more progress. This article (Devers, 2011) discusses possible reasons for lack of progress in some areas--related to the under-supply of well-trained qualitative researchers and more tangible demand for their research--and mechanisms for future improvement. To ensure a robust health services research toolbox, the field must take additional steps to provide stronger education and training in qualitative methods and more funding and publication opportunities. Given the rapidly changing health care system following the passage of national health reform and the challenging research issues associated with it, the health services research and management field will not meet its future challenges with quantitative methods alone or with a half-empty toolbox.
Huang, Yi-Fei; Gulko, Brad; Siepel, Adam
2017-04-01
Many genetic variants that influence phenotypes of interest are located outside of protein-coding genes, yet existing methods for identifying such variants have poor predictive power. Here we introduce a new computational method, called LINSIGHT, that substantially improves the prediction of noncoding nucleotide sites at which mutations are likely to have deleterious fitness consequences, and which, therefore, are likely to be phenotypically important. LINSIGHT combines a generalized linear model for functional genomic data with a probabilistic model of molecular evolution. The method is fast and highly scalable, enabling it to exploit the 'big data' available in modern genomics. We show that LINSIGHT outperforms the best available methods in identifying human noncoding variants associated with inherited diseases. In addition, we apply LINSIGHT to an atlas of human enhancers and show that the fitness consequences at enhancers depend on cell type, tissue specificity, and constraints at associated promoters.
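A minimal sketch of the generalized-linear-model idea (scoring sites from functional genomic features with logistic regression) on synthetic data. The feature names and labels are hypothetical; LINSIGHT's actual model couples a GLM like this with a probabilistic model of molecular evolution, which is not shown here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_sites = 1000
# Hypothetical per-site features, e.g. conservation, DNase signal, TFBS overlap.
X = rng.normal(size=(n_sites, 3))
# Synthetic "deleterious" labels, correlated with the first feature by design.
p = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - 0.5)))
y = rng.random(n_sites) < p

glm = LogisticRegression().fit(X, y)
scores = glm.predict_proba(X)[:, 1]   # per-site fitness-consequence score
```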
Ford, James H.; Oliver, Karen A.; Giles, Miriam; Cates-Wessel, Kathryn; Krahn, Dean; Levin, Frances R.
2017-01-01
Background and Objectives: In 2000, the American Board of Medical Specialties implemented the Maintenance of Certification (MOC), a structured process to help physicians identify and implement a quality improvement project to improve patient care. This study reports on findings from an MOC Performance in Practice (PIP) module designed and evaluated by addiction psychiatrists who are members of the American Academy of Addiction Psychiatry (AAAP). Method: A 3-phase process was utilized to recruit AAAP members to participate in the study. The current study utilized data from 154 self-selected AAAP members who evaluated the effectiveness of the MOC Tobacco Cessation PIP. Results: Of the physicians participating, 76% (n = 120) completed the Tobacco PIP. A paired t-test analysis revealed that reported changes in clinical measure documentation were significant across all six measures. Targeted improvement efforts focused on a single clinical measure. Results found that simple change projects designed to improve clinical practice led to substantial changes in self-reported chart documentation for the selected measure. Conclusions: The current findings suggest that addiction psychiatrists can leverage the MOC process to improve clinical care. PMID:27973746
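The paired t-test used to compare documentation rates can be sketched as follows; the six per-measure rates are illustrative numbers, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-measure documentation rates before and after the PIP.
before = np.array([0.40, 0.35, 0.50, 0.42, 0.38, 0.45])
after  = np.array([0.72, 0.68, 0.80, 0.75, 0.70, 0.78])

# Paired test: each measure is its own control.
t_stat, p_value = stats.ttest_rel(after, before)
```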
Improving EHR Capabilities to Facilitate Stage 3 Meaningful Use Care Coordination Criteria.
Cross, Dori A; Cohen, Genna R; Nong, Paige; Day, Anya-Victoria; Vibbert, Danielle; Naraharisetti, Ramya; Adler-Milstein, Julia
Primary care practices have been limited in their ability to leverage electronic health records (EHRs) and health information exchange (HIE) to improve care coordination, but will soon be incentivized to do so under proposed Stage 3 meaningful use criteria. We use mixed methods to understand how primary care practices manage, share and reconcile electronic patient information across care settings, and identify innovations in EHR design to support enhanced care coordination. Opportunities identified by practices focused on availability and usability of features that facilitate (1) generation of customized summary of care records, (2) team-based care approaches, and (3) management of the increased volume of electronic information generated and exchanged during care transitions. More broadly, vendors and policymakers need to continue to work together to improve interoperability as the key to effective care coordination. If these EHR innovations were widespread, the value of meeting the proposed Stage 3 care coordination criteria would be substantially enhanced.
Microfluidics for rapid cytokeratin immunohistochemical staining in frozen sections.
Brajkovic, Saska; Dupouy, Diego G; de Leval, Laurence; Gijs, Martin Am
2017-08-01
Frozen sections (FS) of tumor samples represent a cornerstone of pathological intraoperative consultation and have an important role in the microscopic analysis of specimens during surgery. So far, immunohistochemical (IHC) stainings on FS have been demonstrated for a few markers using manual methods. Microfluidic technologies have proven to bring substantial improvement in many fields of diagnostics, though only a few microfluidic devices have been designed to improve the performance of IHC assays. In this work, we show optimization of a complete pan-cytokeratin chromogenic immunostaining protocol on FS using a microfluidic tissue processor into a protocol taking <12 min. Our results showed specificity and low levels of background. The dimensions of the microfluidic prototype device are compatible with the space constraints of an intraoperative pathology laboratory. We therefore anticipate that the adoption of microfluidic technologies in the field of surgical pathology can significantly improve the way FSs influence surgical procedures.
NASA Astrophysics Data System (ADS)
Vidanović, Ivana; Bogojević, Aleksandar; Balaž, Antun; Belić, Aleksandar
2009-12-01
In this paper, building on a previous analysis [I. Vidanović, A. Bogojević, and A. Belić, preceding paper, Phys. Rev. E 80, 066705 (2009)] of exact diagonalization of the space-discretized evolution operator for the study of properties of nonrelativistic quantum systems, we present a substantial improvement to this method. We apply recently introduced effective action approach for obtaining short-time expansion of the propagator up to very high orders to calculate matrix elements of space-discretized evolution operator. This improves by many orders of magnitude previously used approximations for discretized matrix elements and allows us to numerically obtain large numbers of accurate energy eigenvalues and eigenstates using numerical diagonalization. We illustrate this approach on several one- and two-dimensional models. The quality of numerically calculated higher-order eigenstates is assessed by comparison with semiclassical cumulative density of states.
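The core operation, numerical diagonalization of a space-discretized operator, can be illustrated with a plain finite-difference Hamiltonian for the harmonic oscillator. This is the naive discretization that the paper's effective-action approach improves upon, not the high-order scheme itself:

```python
import numpy as np

N, L = 500, 20.0                  # grid points, box size (hbar = m = omega = 1)
dx = L / N
x = -L / 2 + dx * np.arange(N)

# Tridiagonal matrix: -(1/2) d^2/dx^2 (second-order finite difference)
# plus the diagonal harmonic potential x^2 / 2.
H = np.zeros((N, N))
np.fill_diagonal(H, 1.0 / dx**2 + 0.5 * x**2)
idx = np.arange(N - 1)
H[idx, idx + 1] = H[idx + 1, idx] = -0.5 / dx**2

E = np.linalg.eigvalsh(H)[:4]     # should approach 0.5, 1.5, 2.5, 3.5
```

The discretization error here scales as dx^2; the paper's high-order short-time expansion is what pushes the accuracy of the matrix elements many orders of magnitude further.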
Procedural techniques in sacral nerve modulation.
Williams, Elizabeth R; Siegel, Steven W
2010-12-01
Sacral neuromodulation involves a staged process, including a screening trial and delayed formal implantation for those with substantial improvement. The advent of the tined lead has revolutionized the technology, allowing for a minimally invasive outpatient procedure to be performed under intravenous sedation. With the addition of fluoroscopy to the bilateral percutaneous nerve evaluation, there has been marked improvement in the placement of these temporary leads. Thus, the screening evaluation is now a better reflection of possible permanent improvement. Both methods of screening have advantages and disadvantages. Selection of a particular procedure should be tailored to individual patient characteristics. Subsequent implantation of the internal pulse generator (IPG) or explantation of an unsuccessful staged lead is a straightforward outpatient procedure, posing minimal additional risk for the patient. Future refinement to the procedure may involve the introduction of a rechargeable battery, eliminating the need for IPG replacement at the end of the battery life.
Global Tsunami Warning System Development Since 2004
NASA Astrophysics Data System (ADS)
Weinstein, S.; Becker, N. C.; Wang, D.; Fryer, G. J.; McCreery, C.; Hirshorn, B. F.
2014-12-01
The 9.1 Mw Great Sumatra Earthquake of Dec. 26, 2004, generated the most destructive tsunami in history, killing 227,000 people along Indian Ocean coastlines; the tsunami was recorded by sea-level instruments worldwide. This tragedy showed that the Indian Ocean needed a tsunami warning system to prevent another tragedy on this scale. The Great Sumatra Earthquake also highlighted the need for tsunami warning systems in other ocean basins. Instruments for recording earthquakes and sea-level data useful for tsunami monitoring did not exist outside of the Pacific Ocean in 2004. Seismometers were few in number, and even fewer were high-quality long period broadband instruments. Nor was much of their data made available to the US tsunami warning centers (TWCs). In 2004 the US TWCs relied exclusively on instrumentation provided and maintained by IRIS and the USGS for areas outside of the Pacific. Since 2004, the US TWCs and their partners have made substantial improvements to seismic and sea-level monitoring networks with the addition of new and better instruments, densification of existing networks, better communications infrastructure, and improved data sharing among tsunami warning centers. In particular, the number of sea-level stations transmitting data in near real-time and the amount of seismic data available to the tsunami warning centers has more than tripled. The DART network that consisted of a half-dozen Pacific stations in 2004 now totals nearly 60 stations worldwide. Earthquake and tsunami science has progressed as well. It took nearly three weeks to obtain the first reliable estimates of the 2004 Sumatra Earthquake's magnitude. Today, thanks to improved seismic networks and modern computing power, TWCs use the W-phase seismic moment method to determine accurate earthquake magnitudes and focal mechanisms for great earthquakes within 25 minutes.
TWC scientists have also leveraged these modern computers to generate tsunami forecasts in a matter of minutes.Progress towards a global tsunami warning system has been substantial and today fully-functioning TWCs protect most of the world's coastlines. These improvements have also led to a substantial reduction of time required by the TWCs to detect, locate, and assess the tsunami threat from earthquakes occurring worldwide.
2012-01-01
Background High-density genotyping arrays that measure hybridization of genomic DNA fragments to allele-specific oligonucleotide probes are widely used to genotype single nucleotide polymorphisms (SNPs) in genetic studies, including human genome-wide association studies. Hybridization intensities are converted to genotype calls by clustering algorithms that assign each sample to a genotype class at each SNP. Data for SNP probes that do not conform to the expected pattern of clustering are often discarded, contributing to ascertainment bias and resulting in lost information - as much as 50% in a recent genome-wide association study in dogs. Results We identified atypical patterns of hybridization intensities that were highly reproducible and demonstrated that these patterns represent genetic variants that were not accounted for in the design of the array platform. We characterized variable intensity oligonucleotide (VINO) probes that display such patterns and are found in all hybridization-based genotyping platforms, including those developed for human, dog, cattle, and mouse. When recognized and properly interpreted, VINOs recovered a substantial fraction of discarded probes and counteracted SNP ascertainment bias. We developed software (MouseDivGeno) that identifies VINOs and improves the accuracy of genotype calling. MouseDivGeno produced highly concordant genotype calls when compared with other methods but it uniquely identified more than 786,000 VINOs in 351 mouse samples. We used whole-genome sequence from 14 mouse strains to confirm the presence of novel variants explaining 28,000 VINOs in those strains. We also identified VINOs in human HapMap 3 samples, many of which were specific to an African population. Incorporating VINOs in phylogenetic analyses substantially improved the accuracy of a Mus species tree and local haplotype assignment in laboratory mouse strains. 
Conclusion The problems of ascertainment bias and missing information due to genotyping errors are widely recognized as limiting factors in genetic studies. We have conducted the first formal analysis of the effect of novel variants on genotyping arrays, and we have shown that these variants account for a large portion of miscalled and uncalled genotypes. Genetic studies will benefit from substantial improvements in the accuracy of their results by incorporating VINOs in their analyses. PMID:22260749
Recovering Wood and McCarthy's ERP-prototypes by means of ERP-specific procrustes-rotation.
Beauducel, André
2018-02-01
The misallocation of treatment-variance on the wrong component has been discussed in the context of temporal principal component analysis of event-related potentials. There is, until now, no rotation-method that can perfectly recover Wood and McCarthy's prototypes without making use of additional information on treatment-effects. In order to close this gap, two new methods for component rotation were proposed. After Varimax-prerotation, the first method identifies very small slopes of successive loadings. The corresponding loadings are set to zero in a target-matrix for event-related orthogonal partial Procrustes- (EPP-) rotation. The second method generates Gaussian normal distributions around the peaks of the Varimax-loadings and performs orthogonal Procrustes-rotation towards these Gaussian distributions. Oblique versions of this Gaussian event-related Procrustes- (GEP) rotation and of EPP-rotation are based on Promax-rotation. A simulation study revealed that the new orthogonal rotations recover Wood and McCarthy's prototypes and eliminate misallocation of treatment-variance. In an additional simulation study with a more pronounced overlap of the prototypes GEP Promax-rotation reduced the variance misallocation slightly more than EPP Promax-rotation. Comparison with Existing Method(s): Varimax- and conventional Promax-rotations resulted in substantial misallocations of variance in simulation studies when components had temporal overlap. A substantially reduced misallocation of variance occurred with the EPP-, EPP Promax-, GEP-, and GEP Promax-rotations. Misallocation of variance can be minimized by means of the new rotation methods. Making use of information on the temporal order of the loadings may allow for improvements of the rotation of temporal PCA components. Copyright © 2017 Elsevier B.V. All rights reserved.
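The Procrustes step at the heart of both new rotations can be sketched with `scipy.linalg.orthogonal_procrustes`; the loading matrix and target below are noiseless toys, not ERP loadings:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(3)
target = rng.normal(size=(40, 3))            # target loadings (e.g. Gaussian-shaped)
true_R = np.linalg.qr(rng.normal(size=(3, 3)))[0]   # a random orthogonal rotation
loadings = target @ true_R.T                 # "unrotated" loadings to be recovered

# Find the orthogonal R minimizing ||loadings @ R - target||_F.
R, _ = orthogonal_procrustes(loadings, target)
recovered = loadings @ R
print(np.allclose(recovered, target))        # True: exact recovery in this noiseless toy
```

In the EPP/GEP methods the target matrix is not known a priori; it is constructed from the Varimax solution (zeroed small-slope loadings or fitted Gaussians, respectively) before this rotation is applied.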
Aerosol Measurements in the Mid-Atlantic: Trends and Uncertainty
NASA Astrophysics Data System (ADS)
Hains, J. C.; Chen, L. A.; Taubman, B. F.; Dickerson, R. R.
2006-05-01
Elevated levels of PM2.5 are associated with cardiovascular and respiratory problems and even increased mortality rates. In 2002 we ran two commonly used PM2.5 speciation samplers (an IMPROVE sampler and an EPA sampler) in parallel at Fort Meade, Maryland (a suburban site located in the Baltimore-Washington urban corridor). The filters were analyzed at different labs. This experiment allowed us to calculate the 'real world' uncertainties associated with these instruments. The EPA method retrieved a January average PM2.5 mass of 9.3 μg/m3 with a standard deviation of 2.8 μg/m3, while the IMPROVE method retrieved an average mass of 7.3 μg/m3 with a standard deviation of 2.1 μg/m3. The EPA method retrieved a July average PM2.5 mass of 26.4 μg/m3 with a standard deviation of 14.6 μg/m3, while the IMPROVE method retrieved an average mass of 23.3 μg/m3 with a standard deviation of 13.0 μg/m3. We calculated a 5% uncertainty associated with the EPA and IMPROVE methods that accounts for uncertainties in flow control strategies and laboratory analysis. The RMS difference between the two methods in January was 2.1 μg/m3, which is about 25% of the monthly average mass and greater than the uncertainty we calculated. In July the RMS difference between the two methods was 5.2 μg/m3, about 20% of the monthly average mass, and greater than the uncertainty we calculated. The EPA methods retrieve consistently higher concentrations of PM2.5 than the IMPROVE methods on a daily basis in January and July. This suggests a systematic bias possibly resulting from contamination of either of the sampling methods. We reconstructed the mass and found that both samplers have good correlation between reconstructed and gravimetric mass, though the IMPROVE method has slightly better correlation than the EPA method. In January, organic carbon is the largest contributor to PM2.5 mass, and in July both sulfate and organic matter contribute substantially to PM2.5. 
Source apportionment models suggest that regional and local power plants are the major sources of sulfate, while mobile and vegetative burning factors are the major sources of organic carbon.
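The sampler-comparison arithmetic above (monthly mean, RMS difference, and its share of the mean) can be sketched as follows; the daily values are invented, not the Fort Meade measurements:

```python
import numpy as np

# Hypothetical co-located daily PM2.5 masses (ug/m3) from the two samplers.
epa     = np.array([10.2, 7.5, 12.1, 8.8, 6.9, 11.3])
improve = np.array([ 8.1, 6.0, 10.4, 7.2, 5.5,  9.8])

rms_diff = np.sqrt(np.mean((epa - improve) ** 2))   # RMS inter-method difference
rel_diff = rms_diff / epa.mean()                    # fraction of the monthly mean,
                                                    # cf. the ~20-25% reported above
```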
Method of fabricating a uranium-bearing foil
Gooch, Jackie G [Seymour, TN; DeMint, Amy L [Kingston, TN
2012-04-24
Methods of fabricating a uranium-bearing foil are described. The foil may be substantially pure uranium, or may be a uranium alloy such as a uranium-molybdenum alloy. The method typically includes a series of hot rolling operations on a cast plate material to form a thin sheet. These hot rolling operations are typically performed using a process where each pass reduces the thickness of the plate by a substantially constant percentage. The sheet is typically then annealed and then cooled. The process typically concludes with a series of cold rolling passes where each pass reduces the thickness of the plate by a substantially constant thickness amount to form the foil.
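The two rolling schedules reduce to simple arithmetic: hot passes multiply the thickness by a constant factor, cold passes subtract a constant amount. The pass counts, reductions, and starting thickness below are assumed for illustration, not values from the patent:

```python
hot_reduction = 0.20        # 20% thickness reduction per hot pass (assumed)
cold_step_mm = 0.05         # mm removed per cold pass (assumed)

t = 5.0                     # starting cast-plate thickness in mm (assumed)
for _ in range(10):         # hot rolling: constant-percentage reduction per pass
    t *= 1.0 - hot_reduction
for _ in range(5):          # cold rolling: constant-thickness reduction per pass
    t -= cold_step_mm
print(round(t, 3))          # final foil thickness in mm -> 0.287
```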
Test Results for Entry Guidance Methods for Space Vehicles
NASA Technical Reports Server (NTRS)
Hanson, John M.; Jones, Robert E.
2004-01-01
There are a number of approaches to advanced guidance and control that have the potential for achieving the goals of significantly increasing reusable launch vehicle (or any space vehicle that enters an atmosphere) safety and reliability, and reducing the cost. This paper examines some approaches to entry guidance. An effort called Integration and Testing of Advanced Guidance and Control Technologies has recently completed a rigorous testing phase where these algorithms faced high-fidelity vehicle models and were required to perform a variety of representative tests. The algorithm developers spent substantial effort improving the algorithm performance in the testing. This paper lists the test cases used to demonstrate that the desired results are achieved, shows an automated test scoring method that greatly reduces the evaluation effort required, and displays results of the tests. Results show a significant improvement over previous guidance approaches. The two best-scoring algorithm approaches show roughly equivalent results and are ready to be applied to future vehicle concepts.
Test Results for Entry Guidance Methods for Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Hanson, John M.; Jones, Robert E.
2003-01-01
There are a number of approaches to advanced guidance and control (AG&C) that have the potential for achieving the goals of significantly increasing reusable launch vehicle (RLV) safety and reliability, and reducing the cost. This paper examines some approaches to entry guidance. An effort called Integration and Testing of Advanced Guidance and Control Technologies (ITAGCT) has recently completed a rigorous testing phase where these algorithms faced high-fidelity vehicle models and were required to perform a variety of representative tests. The algorithm developers spent substantial effort improving the algorithm performance in the testing. This paper lists the test cases used to demonstrate that the desired results are achieved, shows an automated test scoring method that greatly reduces the evaluation effort required, and displays results of the tests. Results show a significant improvement over previous guidance approaches. The two best-scoring algorithm approaches show roughly equivalent results and are ready to be applied to future reusable vehicle concepts.
Rapid determination of total protein and wet gluten in commercial wheat flour using siSVR-NIR.
Chen, Jia; Zhu, Shipin; Zhao, Guohua
2017-04-15
The determination of total protein and wet gluten is of critical importance when screening commercial flour for desired processing suitability. To this end, a near-infrared spectroscopy (NIR) method with support vector regression was developed in the present study. The effects of spectral preprocessing and the synergy interval on model performance were investigated. The results showed that the models from raw spectra were not acceptable, but they were substantially improved by properly applying spectral preprocessing methods. Meanwhile, the synergy interval was validated with a good ability to improve the performance of models based on the whole spectrum. The coefficient of determination (R²), the root mean square error of prediction (RMSEP) and the standard deviation ratio (SDR) of the best models for total protein (wet gluten) were 0.906 (0.850), 0.425 (1.024) and 3.065 (2.482), respectively. These two best models have similar and lower relative errors (approximately 8.8%), which indicates their feasibility. Copyright © 2016 Elsevier Ltd. All rights reserved.
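A hedged sketch of the calibration approach (spectral preprocessing followed by support vector regression) on synthetic spectra. SNV stands in here for the paper's preprocessing choice, and the synergy-interval selection step is not shown:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

def snv(X):
    """Standard normal variate: center and scale each spectrum (row)."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

rng = np.random.default_rng(4)
# Synthetic "NIR spectra": 60 samples x 200 wavelengths with a baseline slope.
spectra = rng.normal(size=(60, 200)) + np.linspace(0, 1, 200)
# Synthetic protein values tied to one wavelength, plus measurement noise.
protein = spectra[:, 50] * 2.0 + 10.0 + rng.normal(0, 0.1, 60)

model = make_pipeline(FunctionTransformer(snv), SVR(kernel="rbf", C=10.0))
model.fit(spectra, protein)
pred = model.predict(spectra)
rmsep = np.sqrt(np.mean((pred - protein) ** 2))   # RMSE, cf. the paper's RMSEP
```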
An iterative reconstruction of cosmological initial density fields
NASA Astrophysics Data System (ADS)
Hada, Ryuichiro; Eisenstein, Daniel J.
2018-05-01
We present an iterative method to reconstruct the linear-theory initial conditions from the late-time cosmological matter density field, with the intent of improving the recovery of the cosmic distance scale from the baryon acoustic oscillations (BAOs). We present tests using the dark matter density field in both real and redshift space generated from an N-body simulation. In redshift space at z = 0.5, we find that the reconstructed displacement field using our iterative method is more than 80% correlated with the true displacement field of the dark matter particles on scales k < 0.10h Mpc-1. Furthermore, we show that the two-point correlation function of our reconstructed density field matches that of the initial density field substantially better, especially on small scales (<40h-1 Mpc). Our redshift-space results are improved if we use an anisotropic smoothing so as to account for the reduced small-scale information along the line of sight in redshift space.
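The first, non-iterative step of such reconstructions (smoothing the density field and solving for the Zel'dovich displacement in Fourier space, with div(psi) = -delta so psi(k) = i k delta(k) / k^2) can be sketched as follows. The grid size and smoothing scale are toy choices, and the paper's iterative refinement is not shown:

```python
import numpy as np

n, box = 64, 500.0                       # grid cells per side, box size (toy units)
rng = np.random.default_rng(5)
delta = rng.normal(size=(n, n, n))       # stand-in for the matter overdensity field

k = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                        # avoid division by zero at the k = 0 mode

# Gaussian smoothing (scale 10 length units), then psi_x(k) = i kx delta(k) / k^2.
dk = np.fft.fftn(delta) * np.exp(-0.5 * k2 * 10.0**2)
psi_x = np.fft.ifftn(1j * kx * dk / k2).real
```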
Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images
Frey, Eric C.; Humm, John L.; Ljungberg, Michael
2012-01-01
The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429
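The statistical iterative reconstruction methods credited above can be illustrated with a bare MLEM (maximum-likelihood expectation-maximization) update; the small system matrix and counts are toy values, not a scanner model:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.random((32, 16))          # system matrix: 32 detector bins x 16 voxels
x_true = rng.random(16) * 10.0    # "true" activity per voxel
y = rng.poisson(A @ x_true)       # Poisson-distributed measured counts

x = np.ones(16)                   # uniform initial estimate
sens = A.sum(axis=0)              # per-voxel sensitivity (column sums)
for _ in range(50):
    # Multiplicative MLEM update: back-project the measured/expected ratio.
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
```

The multiplicative form keeps the estimate nonnegative at every iteration, which is one reason such methods quantify activity more reliably than filtered back-projection.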
Practice-tailored facilitation to improve pediatric preventive care delivery: a randomized trial.
Meropol, Sharon B; Schiltz, Nicholas K; Sattar, Abdus; Stange, Kurt C; Nevar, Ann H; Davey, Christina; Ferretti, Gerald A; Howell, Diana E; Strosaker, Robyn; Vavrek, Pamela; Bader, Samantha; Ruhe, Mary C; Cuttler, Leona
2014-06-01
Evolving primary care models require methods to help practices achieve quality standards. This study assessed the effectiveness of a Practice-Tailored Facilitation Intervention for improving delivery of 3 pediatric preventive services. In this cluster-randomized trial, a practice facilitator implemented practice-tailored rapid-cycle feedback/change strategies for improving obesity screening/counseling, lead screening, and dental fluoride varnish application. Thirty practices were randomized to Early or Late Intervention, and outcomes assessed for 16 419 well-child visits. A multidisciplinary team characterized facilitation processes by using comparative case study methods. Baseline performance was as follows: for Obesity: 3.5% successful performance in Early and 6.3% in Late practices, P = .74; Lead: 62.2% and 77.8% success, respectively, P = .11; and Fluoride: <0.1% success for all practices. Four months after randomization, performance rose in Early practices, to 82.8% for Obesity, 86.3% for Lead, and 89.1% for Fluoride, all P < .001 for improvement compared with Late practices' control time. During the full 6-month intervention, care improved versus baseline in all practices, for Obesity for Early practices to 86.5%, and for Late practices 88.9%; for Lead for Early practices to 87.5% and Late practices 94.5%; and for Fluoride, for Early practices to 78.9% and Late practices 81.9%, all P < .001 compared with baseline. Improvements were sustained 2 months after intervention. Successful facilitation involved multidisciplinary support, rapid-cycle problem solving feedback, and ongoing relationship-building, allowing the facilitation approach and intensity to be individualized based on 3 levels of practice need. Practice-tailored Facilitation Intervention can lead to substantial, simultaneous, and sustained improvements in 3 domains, and holds promise as a broad-based method to advance pediatric preventive care. Copyright © 2014 by the American Academy of Pediatrics.
The value of gynecologic cancer follow-up: evidence-based ignorance?
Lajer, Henrik; Jensen, Mette B; Kilsmark, Jannie; Albæk, Jens; Svane, Danny; Mirza, Mansoor R; Geertsen, Poul F; Reerman, Diana; Hansen, Kåre; Milter, Maya C; Mogensen, Ole
2010-11-01
To explore the extent of evidence-based data and cost-utility of follow-up after primary treatment of endometrial and ovarian cancer, addressing perspectives of technology, organization, economics, and patients. Systematic literature searches according to the recommendations of the Cochrane Handbook for Systematic Reviews of Interventions were conducted separately for each of the 4 perspectives. In addition, the organizational analysis included a nationwide questionnaire survey among all relevant hospital departments, and the operating costs were calculated. None of the identified studies supported a survival benefit from hospital-based follow-up after completion of primary treatment of endometrial or ovarian cancer. The methods for follow-up were of low technology (gynecologic examination with or without ultrasound examination). Other technologies had poor sensitivity and specificity in detecting recurrence. Small changes in applied technologies and organization led to substantial changes in costs. Substantial differences, especially in frequency and applied methods, were found between departments. The literature review did not find evidence that follow-up affects the women's quality of life. The main purpose of follow-up after treatment of cancer is improved survival. Our review of the literature showed no evidence of a positive effect on survival in women followed up after primary treatment of endometrial or ovarian cancer. The conception of follow-up among physicians, patients, and their relatives therefore needs revision. Follow-up after treatment should have a clearly defined and evidence-based purpose. Based on the existing literature, this purpose should presently focus on other end points rather than early detection of relapse and improved survival. These end points could be quality of life, treatment toxicity, and economy.
State of the art in hair analysis for detection of drug and alcohol abuse.
Pragst, Fritz; Balikova, Marie A
2006-08-01
Hair differs from other materials used for toxicological analysis because of its unique ability to serve as a long-term storage of foreign substances with respect to the temporal appearance in blood. Over the last 20 years, hair testing has gained increasing attention and recognition for the retrospective investigation of chronic drug abuse as well as intentional or unintentional poisoning. In this paper, we review the physiological basics of hair growth, mechanisms of substance incorporation, analytical methods, result interpretation and practical applications of hair analysis for drugs and other organic substances. Improved chromatographic-mass spectrometric techniques with increased selectivity and sensitivity and new methods of sample preparation have improved detection limits from the ng/mg range to below pg/mg. These technical advances have substantially enhanced the ability to detect numerous drugs and other poisons in hair. For example, it was possible to detect previous administration of a single very low dose in drug-facilitated crimes. In addition to its potential application in large scale workplace drug testing and driving ability examination, hair analysis is also used for detection of gestational drug exposure, cases of criminal liability of drug addicts, diagnosis of chronic intoxication and in postmortem toxicology. Hair has only limited relevance in therapy compliance control. Fatty acid ethyl esters and ethyl glucuronide in hair have proven to be suitable markers for alcohol abuse. Hair analysis for drugs is, however, not a simple routine procedure and needs substantial guidelines throughout the testing process, i.e., from sample collection to results interpretation.
The burden of influenza in East and South-East Asia: a review of the English language literature.
Simmerman, James M; Uyeki, Timothy M
2008-05-01
While human infections with avian influenza A (H5N1) viruses in Asia have prompted concerns about an influenza pandemic, the burden of human influenza in East and Southeast Asia has received far less attention. We conducted a review of English language articles on influenza in 18 countries in East and Southeast Asia published from 1980 to 2006 that were indexed on PubMed. Articles that described human influenza-associated illnesses among outpatients or hospitalized patients, influenza-associated deaths, or influenza-associated socioeconomic costs were reviewed. We found 35 articles from 9 countries that met criteria for inclusion in the review. The quality of articles varied substantially. Significant heterogeneity was noted in case definitions, sampling schemes and laboratory methods. Early studies relied on cell culture, had difficulties with specimen collection and handling, and reported a low burden of disease. The recent addition of PCR testing has greatly improved the proportion of respiratory illnesses diagnosed with influenza. These more recent studies reported that 11-26% of outpatient febrile illness and 6-14% of hospitalized pneumonia cases had laboratory-confirmed influenza infection. The influenza disease burden literature from East and Southeast Asia is limited but expanding. Recent studies using improved laboratory testing methods and indirect statistical approaches report a substantial burden of disease, similar to that of Europe and North America. Current increased international focus on influenza, coupled with unprecedented funding for surveillance and research, provides a unique opportunity to more comprehensively describe the burden of human influenza in the region.
Increasing the Cryogenic Toughness of Steels
NASA Technical Reports Server (NTRS)
Rush, H. F.
1986-01-01
Grain-refining heat treatments increase toughness without substantial strength loss. Five alloys selected for study, all at or near technological limit. Results clearly showed that grain sizes of these alloys are refined by such heat treatments and that grain refinement yields a large improvement in toughness without substantial loss in strength. Best improvements seen in HP-9-4-20 Steel, at low-strength end of technological limit, and in Maraging 200, at high-strength end. These alloys, in grain-refined condition, considered for model applications in high-Reynolds-number cryogenic wind tunnels.
26 CFR 1.1237-1 - Real property subdivided for sale.
Code of Federal Regulations, 2013 CFR
2013-04-01
... improvements. Other changes in the market price of the lot, not arising from improvements made by the taxpayer... roads, including gravel roads where required by the climate, are not substantial improvements. (5...
26 CFR 1.1237-1 - Real property subdivided for sale.
Code of Federal Regulations, 2014 CFR
2014-04-01
... improvements. Other changes in the market price of the lot, not arising from improvements made by the taxpayer... roads, including gravel roads where required by the climate, are not substantial improvements. (5...
26 CFR 1.1237-1 - Real property subdivided for sale.
Code of Federal Regulations, 2011 CFR
2011-04-01
... improvements. Other changes in the market price of the lot, not arising from improvements made by the taxpayer... roads, including gravel roads where required by the climate, are not substantial improvements. (5...
26 CFR 1.1237-1 - Real property subdivided for sale.
Code of Federal Regulations, 2012 CFR
2012-04-01
... improvements. Other changes in the market price of the lot, not arising from improvements made by the taxpayer... roads, including gravel roads where required by the climate, are not substantial improvements. (5...
26 CFR 1.1237-1 - Real property subdivided for sale.
Code of Federal Regulations, 2010 CFR
2010-04-01
... improvements. Other changes in the market price of the lot, not arising from improvements made by the taxpayer... roads, including gravel roads where required by the climate, are not substantial improvements. (5...
An efficient and sensitive method for preparing cDNA libraries from scarce biological samples
Sterling, Catherine H.; Veksler-Lublinsky, Isana; Ambros, Victor
2015-01-01
The preparation and high-throughput sequencing of cDNA libraries from samples of small RNA is a powerful tool to quantify known small RNAs (such as microRNAs) and to discover novel RNA species. Interest in identifying the small RNA repertoire present in tissues and in biofluids has grown substantially with the findings that small RNAs can serve as indicators of biological conditions and disease states. Here we describe a novel and straightforward method to clone cDNA libraries from small quantities of input RNA. This method permits the generation of cDNA libraries from sub-picogram quantities of RNA robustly, efficiently and reproducibly. We demonstrate that the method provides a significant improvement in sensitivity compared to previous cloning methods while maintaining reproducible identification of diverse small RNA species. This method should have widespread applications in a variety of contexts, including biomarker discovery from scarce samples of human tissue or body fluids. PMID:25056322
Quantitative properties of clustering within modern microscopic nuclear models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volya, A.; Tchuvil’sky, Yu. M., E-mail: tchuvl@nucl-th.sinp.msu.ru
2016-09-15
A method for studying cluster spectroscopic properties of nuclear fragmentation, such as spectroscopic amplitudes, cluster form factors, and spectroscopic factors, is developed on the basis of modern precision nuclear models that take into account the mixing of large-scale shell-model configurations. Alpha-cluster channels are considered as an example. A mathematical proof of the need for taking into account the channel-wave-function renormalization generated by exchange terms of the antisymmetrization operator (Fliessbach effect) is given. Examples where this effect is confirmed by a high quality of the description of experimental data are presented. By and large, the method in question extends substantially the possibilities for studying clustering phenomena in nuclei and for improving the quality of their description.
Silicon ribbon growth by a capillary action shaping technique
NASA Technical Reports Server (NTRS)
Schwuttke, G. H.; Schwuttke, G. H.; Ciszek, T. F.; Kran, A.
1977-01-01
Substantial improvements in ribbon surface quality are achieved with a higher melt meniscus than that attainable with the film-fed (EFG) growth technique. A capillary action shaping method is described in which meniscus shaping for the desired ribbon geometry occurs at the vertex of a wettable die. As ribbon growth depletes the melt meniscus, capillary action supplies replacement material. Topics discussed cover experimental apparatus and growth procedures; die materials investigations, fabrication and evaluation; process development for 25 mm, 38 mm, 50 mm and 100 mm silicon ribbons; and long grain direct solidification of silicon. Methods for the structural and electrical characterization of cast silicon ribbons are assessed as well as silicon ribbon technology for the 1978 to 1986 period.
75 FR 27504 - Substantial Product Hazard List: Hand-Held Hair Dryers
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-17
...The Consumer Product Safety Improvement Act of 2008 ("CPSIA") authorizes the United States Consumer Product Safety Commission ("Commission") to specify, by rule, for any consumer product or class of consumer products, characteristics whose existence or absence shall be deemed a substantial product hazard under certain circumstances. In this document, the Commission is proposing a rule to determine that any hand-held hair dryer without integral immersion protection presents a substantial product hazard.
Boonyasit, Yuwadee; Laiwattanapaisal, Wanida
2015-01-01
A method for acquiring albumin-corrected fructosamine values from whole blood using a microfluidic paper-based analytical system that offers substantial improvement over previous methods is proposed. The time required to quantify both serum albumin and fructosamine is shortened to 10 min with detection limits of 0.50 g dl(-1) and 0.58 mM, respectively (S/N = 3). The proposed system also exhibited good within-run and run-to-run reproducibility. The results of the interference study revealed that the acceptable recoveries ranged from 95.1 to 106.2%. The system was compared with currently used large-scale methods (n = 15), and the results demonstrated good agreement among the techniques. The microfluidic paper-based system has the potential to continuously monitor glycemic levels in low resource settings.
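The detection limits quoted above follow the common S/N = 3 convention (limit of detection = 3 × blank standard deviation / calibration slope). A minimal sketch of that calculation; the blank standard deviation and slope values are hypothetical illustrations, not the authors' data:

```python
def limit_of_detection(sd_blank, slope, k=3.0):
    """Estimate a limit of detection as k * sd_blank / slope.

    k = 3 corresponds to the S/N = 3 convention cited in the abstract.
    sd_blank and slope here are hypothetical calibration values.
    """
    return k * sd_blank / slope

# hypothetical calibration: blank standard deviation 0.02 a.u.,
# slope 0.10 a.u. per mM
lod = limit_of_detection(0.02, 0.10)  # -> 0.6 mM
```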
The use of periodization in exercise prescriptions for inactive adults: A systematic review
Strohacker, Kelley; Fazzino, Daniel; Breslin, Whitney L.; Xu, Xiaomeng
2015-01-01
Background. Periodization of exercise is a method typically used in sports training, but the impact of periodized exercise on health outcomes in untrained adults is unclear. Purpose. This review aims to summarize existing research wherein aerobic or resistance exercise was prescribed to inactive adults using a recognized periodization method. Methods. A search of relevant databases, conducted between January and February of 2014, yielded 21 studies published between 2000 and 2013 that assessed the impact of periodized exercise on health outcomes in untrained participants. Results. Substantial heterogeneity existed between studies, even under the same periodization method. Compared to baseline values or non-training control groups, prescribing periodized resistance or aerobic exercise yielded significant improvements in health outcomes related to traditional and emerging risk factors for cardiovascular disease, low-back and neck/shoulder pain, disease severity, and quality of life, with mixed results for increasing bone mineral density. Conclusions. Although it is premature to conclude that periodized exercise is superior to non-periodized exercise for improving health outcomes, periodization appears to be a feasible means of prescribing exercise to inactive adults within an intervention setting. Further research is necessary to understand the effectiveness of periodizing aerobic exercise, the psychological effects of periodization, and the feasibility of implementing flexible non-linear methods. PMID:26844095
Webster, Joshua D; Michalowski, Aleksandra M; Dwyer, Jennifer E; Corps, Kara N; Wei, Bih-Rong; Juopperi, Tarja; Hoover, Shelley B; Simpson, R Mark
2012-01-01
The extent to which histopathology pattern recognition image analysis (PRIA) agrees with microscopic assessment has not been established. Thus, a commercial PRIA platform was evaluated in two applications using whole-slide images. Substantial agreement, lacking significant constant or proportional errors, between PRIA and manual morphometric image segmentation was obtained for pulmonary metastatic cancer areas (Passing/Bablok regression). Bland-Altman analysis indicated heteroscedastic measurements and tendency toward increasing variance with increasing tumor burden, but no significant trend in mean bias. The average between-methods percent tumor content difference was -0.64. Analysis of between-methods measurement differences relative to the percent tumor magnitude revealed that method disagreement had an impact primarily in the smallest measurements (tumor burden <3%). Regression-based 95% limits of agreement indicated substantial agreement for method interchangeability. Repeated measures revealed concordance correlation of >0.988, indicating high reproducibility for both methods, yet PRIA reproducibility was superior (C.V.: PRIA = 7.4, manual = 17.1). Evaluation of PRIA on morphologically complex teratomas led to diagnostic agreement with pathologist assessments of pluripotency on subsets of teratomas. Accommodation of the diversity of teratoma histologic features frequently resulted in detrimental trade-offs, increasing PRIA error elsewhere in images. PRIA error was nonrandom and influenced by variations in histomorphology. File-size limitations encountered while training algorithms and consequences of spectral image processing dominance contributed to diagnostic inaccuracies experienced for some teratomas. PRIA appeared better suited for tissues with limited phenotypic diversity. Technical improvements may enhance diagnostic agreement, and consistent pathologist input will benefit further development and application of PRIA.
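The Bland-Altman bias analysis mentioned above can be sketched in a few lines. This is the classical fixed-limits variant; note the study itself used regression-based 95% limits of agreement to handle the heteroscedastic differences it reports, and `method_a`/`method_b` are hypothetical paired measurements:

```python
import numpy as np

def bland_altman(method_a, method_b):
    # classical Bland-Altman: mean bias and fixed 95% limits of agreement
    # (the study used regression-based limits for heteroscedastic data)
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)           # sample SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```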
Multi-gradient drilling method and system
Maurer, William C.; Medley, Jr., George H.; McDonald, William J.
2003-01-01
A multi-gradient system for drilling a well bore from a surface location into a seabed includes an injector for injecting buoyant substantially incompressible articles into a column of drilling fluid associated with the well bore. Preferably, the substantially incompressible articles comprise hollow substantially spherical bodies.
Code of Federal Regulations, 2014 CFR
2014-07-01
... substantial physical harm to persons, property, or the environment and to which persons or improvements on... substantially the quality of the environment, prevent or damage the beneficial use of land or water resources.... Reclamation activity means the reclamation, abatement, control, or prevention of adverse effects of past...
Code of Federal Regulations, 2012 CFR
2012-07-01
... substantial physical harm to persons, property, or the environment and to which persons or improvements on... substantially the quality of the environment, prevent or damage the beneficial use of land or water resources.... Reclamation activity means the reclamation, abatement, control, or prevention of adverse effects of past...
Context matters: A community-based study of urban minority parents’ views on child health
Bolar, Cassandra L.; Hernandez, Natalie; Akintobi, Tabia Henry; McAllister, Calvin; Ferguson, Aneeqah S.; Rollins, Latrice; Wrenn, Glenda; Okafor, Martha; Collins, David; Clem, Thomas
2016-01-01
Background. Among children, there are substantial ethno-racial minority disparities across a broad range of health-related behaviors, experiences, and outcomes. Addressing these disparities is important, as childhood and adolescence establish health trajectories that extend throughout life. Methods. The current study employed a community-based participatory research approach to gain community insight on child health priorities and to frame an intervention aimed at improving the health of minority children. Eight focus groups were conducted among seventy-five African American parents in a Southeastern city. The current study was guided by an ecological theoretical framework. Results. Although the focus of this investigation was on community identification of child health priorities, participants cited, as root determinants, contextual factors, which included lack of healthy food options, lack of spaces for physical activity, and community violence. These co-occurring factors were related to limited engagement in outdoor activities and physical activity, increased obesity, and poor mental health and coping. Poor parenting was cited as the most substantial barrier to improving child health outcomes, and quality parenting was identified as the most important issue to address for community programs focused on promoting the health and success of children. Establishment of positive social capital and constructive activities were also cited as means of improving health outcomes for children in their neighborhoods. Conclusions. These results reinforce social determinants of health as influences on child health outcomes and describe how community engagement can address potential solutions through interventions that resonate with program participants. PMID:27275021
Walker, Lindsey; Warfa, Abdi-Rizak M
2017-01-01
While the inquiry approach to science teaching has been widely recommended as an epistemic mechanism to promote deep content understanding, there is also increased expectation that process and other transferable skills should be an integral part of science pedagogy. To test the hypothesis that coupling process skills to content teaching impacts academic success measures, we meta-analyzed twenty-one studies (n = 21) involving 7876 students that compared Process Oriented Guided Inquiry Learning (POGIL), a pedagogy that provides opportunities for improving process skills during content learning through guided-inquiry activities, to standard lecture conditions. Based on conventional measures of class performance, POGIL had a small effect on achievement outcomes (effect size = 0.29, [95% CI = 0.15-0.43]) but substantially improved the odds of passing a class (odds ratio = 2.02, [95% CI: 1.45-2.83]). That is, participants in the POGIL pedagogy had higher odds of passing a course and roughly performed 0.3 standard deviations higher on achievement measures than participants in standard lectures. In relative risk terms, POGIL reduced the risk of failing a course by 38%. These findings suggest that providing opportunities to improve process skills during class instruction does not inhibit content learning but enhances conventional success measures. We compare these findings with those of a recent large meta-analysis that examined the effects of global active learning methods on achievement outcomes and course failure rates in science, technology, engineering, and mathematics (STEM) fields.
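The odds-ratio and relative-risk figures in the POGIL abstract are related through the baseline event rate. The sketch below uses the standard Zhang-Yu conversion RR = OR / (1 - p0 + p0 * OR); the 40% baseline failure rate is an illustrative assumption, since the control-group failure rate is not stated in the abstract:

```python
def odds_ratio_to_rr(odds_ratio, baseline_risk):
    # Zhang-Yu conversion from an odds ratio to a relative risk,
    # given the baseline (control-group) event risk p0:
    #   RR = OR / (1 - p0 + p0 * OR)
    return odds_ratio / (1 - baseline_risk + baseline_risk * odds_ratio)

# the reported OR of 2.02 is for *passing*; the odds ratio for *failing*
# under POGIL is its inverse
or_fail = 1 / 2.02

# with a hypothetical 40% baseline failure rate, the relative risk of
# failing comes out near 0.62, consistent with the ~38% risk reduction
rr_fail = odds_ratio_to_rr(or_fail, 0.40)
```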
Hardy, David J; Wolff, Matthew A; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D
2016-03-21
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.
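The kernel splitting at the heart of multilevel summation can be illustrated with a minimal two-level split of the 1/r kernel: a short-range part that vanishes beyond a cutoff, plus a smooth long-range part suitable for coarse-grid interpolation. The even quartic softening polynomial used here is one common choice and is an assumption for illustration, not necessarily the authors' (who interpolate the long-range parts with B-splines):

```python
import numpy as np

def smooth_kernel(r, a):
    # smooth approximation of 1/r inside cutoff a: an even quartic in r/a
    # matching 1/r in value and slope at r = a (one common softening choice)
    r = np.asarray(r, dtype=float)
    rho = r / a
    inside = (15.0 / 8 - 5.0 / 4 * rho**2 + 3.0 / 8 * rho**4) / a
    return np.where(rho < 1.0, inside, 1.0 / r)

def split_kernel(r, a):
    # k(r) = k_short(r) + k_long(r): k_short vanishes for r >= a and is
    # summed directly; k_long is smooth everywhere, so it can be
    # interpolated accurately from a coarse grid
    r = np.asarray(r, dtype=float)
    k_long = smooth_kernel(r, a)
    k_short = 1.0 / r - k_long
    return k_short, k_long
```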
Method of Manufacturing a Light Emitting, Photovoltaic or Other Electronic Apparatus and System
NASA Technical Reports Server (NTRS)
Blanchard, Richard A. (Inventor); Fuller, Kirk A. (Inventor); Ray, William Johnstone (Inventor); Shotton, Neil O. (Inventor); Frazier, Donald Odell (Inventor); Lowenthal, Mark D. (Inventor); Lewandowski, Mark Allan (Inventor)
2013-01-01
The present invention provides a method of manufacturing an electronic apparatus, such as a lighting device having light emitting diodes (LEDs) or a power generating device having photovoltaic diodes. The exemplary method includes forming at least one first conductor coupled to a base; coupling a plurality of substantially spherical substrate particles to the at least one first conductor; converting the substrate particles into a plurality of substantially spherical diodes; forming at least one second conductor coupled to the substantially spherical diodes; and depositing or attaching a plurality of substantially spherical lenses suspended in a first polymer. The lenses and the suspending polymer have different indices of refraction. In some embodiments, the lenses and diodes have a ratio of mean diameters or lengths between about 10:1 and 2:1. In various embodiments, the forming, coupling and converting steps are performed by or through a printing process.
Trends in ADL and IADL Disability in Community-Dwelling Older Adults in Shanghai, China, 1998–2008
2013-01-01
Objectives. We investigated trends in activities of daily living (ADL) and instrumental activities of daily living (IADL) disability from 1998 to 2008 among elder adults in Shanghai, China. Method. Our data came from 4 waves of the Shanghai Longitudinal Survey of Elderly Life and Opinion (1998, 2003, 2005, and 2008). ADL and IADL disabilities were recorded dichotomously (difficulty vs. no difficulty). The major independent variable was survey year. Covariates included demographics, socioeconomic conditions, family and social support, and other health conditions. Nested random-effect models were applied to estimate trends over time, referenced to 1998. Results. In comparison with the baseline year (1998), older adults in 2008 had lower odds of being ADL disabled, though the effect was no longer statistically significant when other health conditions were taken into account. Elders in 2003, 2005, and 2008 were 20%–26%, 17%–38%, and 53%–64% less likely to be IADL disabled than those in 1998, respectively, depending on the set of covariates included in the model. Discussion. Shanghai elders experienced substantial improvements in both ADL and IADL disability prevalence over the past decade. The trend toward improvement in IADL function is more consistent and substantial than that of ADL function. PMID:23525547
Bolobajev, J; Öncü, N Bilgin; Viisimaa, M; Trapido, M; Balcıoğlu, I; Goi, A
2015-01-01
An innovative strategy integrating the use of biosurfactant (BS) and persulphate activated by chelated iron for the decontamination of soil from an emerging pollutant, chlorophene, was studied in laboratory down-flow columns, along with other persulphate activation aids, including the combined application of persulphate and hydrogen peroxide and persulphate activation with sodium hydroxide. Although BS addition improved chlorophene removal by the persulphate treatment, the addition of chelated iron did not have a significant influence. Combined application of persulphate with hydrogen peroxide resulted in a significant (p ≤ 0.05) overall improvement of chlorophene removal compared with treatment with persulphate only. The highest removal rate (71%) of chlorophene was achieved with the base-activated persulphate, but only in the upper part (0.0-3.5 cm depth) of the column. The chemicals at the applied dosages did not substantially influence the Daphnia magna toxicity of the effluent. Dehydrogenase activity (DHA) measurements indicated no substantial changes in the microbial activity during the persulphate treatment. The highest oxygen consumption and a slight increase in DHA were observed with the BS addition. The combined application of persulphate and BS at natural soil pH is a promising method for chlorophene-contaminated soil remediation. Hydroquinone was identified among the by-products of chlorophene degradation.
QUANTIFYING ALTERNATIVE SPLICING FROM PAIRED-END RNA-SEQUENCING DATA.
Rossell, David; Stephan-Otto Attolini, Camille; Kroiss, Manuel; Stöcker, Almond
2014-03-01
RNA-sequencing has revolutionized biomedical research and, in particular, our ability to study gene alternative splicing. The problem has important implications for human health, as alternative splicing may be involved in malfunctions at the cellular level and multiple diseases. However, the high-dimensional nature of the data and the existence of experimental biases pose serious data analysis challenges. We find that the standard data summaries used to study alternative splicing are severely limited, as they ignore a substantial amount of valuable information. Current data analysis methods are based on such summaries and are hence sub-optimal. Further, they have limited flexibility in accounting for technical biases. We propose novel data summaries and a Bayesian modeling framework that overcome these limitations and determine biases in a non-parametric, highly flexible manner. These summaries adapt naturally to the rapid improvements in sequencing technology. We provide efficient point estimates and uncertainty assessments. The approach allows the study of alternative splicing patterns for individual samples and can also serve as the basis for downstream analyses. We found a several-fold improvement in estimation mean squared error compared with popular approaches in simulations, and substantially higher consistency between replicates in experimental data. Our findings indicate the need for adjusting the routine summarization and analysis of alternative splicing RNA-seq studies. We provide a software implementation in the R package casper.
New Methods for Personal Exposure Monitoring for Airborne Particles
Koehler, Kirsten A.; Peters, Thomas
2016-01-01
Airborne particles have been associated with a range of adverse cardiopulmonary outcomes, which has driven its monitoring at stationary, central sites throughout the world. Individual exposures, however, can differ substantially from concentrations measured at central sites due to spatial variability across a region and sources unique to the individual, such as cooking or cleaning in homes, traffic emissions during commutes, and widely varying sources encountered at work. Personal monitoring with small, battery-powered instruments enables the measurement of an individual’s exposure as they go about their daily activities. Personal monitoring can substantially reduce exposure misclassification and improve the power to detect relationships between particulate pollution and adverse health outcomes. By partitioning exposures to known locations and sources, it may be possible to account for variable toxicity of different sources. This review outlines recent advances in the field of personal exposure assessment for particulate pollution. Advances in battery technology have improved the feasibility of 24-hour monitoring, providing the ability to more completely attribute exposures to microenvironment (e.g., work, home, commute). New metrics to evaluate the relationship between particulate matter and health are also being considered, including particle number concentration, particle composition measures, and particle oxidative load. Such metrics provide opportunities to develop more precise associations between airborne particles and health and may provide opportunities for more effective regulations. PMID:26385477
Heim, Stefan; Pape-Neumann, Julia; van Ermingen-Marbach, Muna; Brinkhaus, Moti; Grande, Marion
2015-07-01
Whereas the neurobiological basis of developmental dyslexia has received substantial attention, little is known about the processes in the brain during remediation. This holds particularly in light of recent findings on cognitive subtypes of dyslexia, which suggest interactions between individual profiles, training methods, and also the task in the scanner. Therefore, we trained three groups of German dyslexic primary school children in the domains of phonology, attention, or visual word recognition. We compared neurofunctional changes after 4 weeks of training in these groups to those in untrained normal readers in a reading task and in a task of visual attention. The overall reading improvement in the dyslexic children was comparable across groups. It was accompanied by a substantial increase of the activation level in the visual word form area (VWFA) during a reading task inside the scanner. Moreover, there were activation increases that were unique to each training group in the reading task. In contrast, when children performed the visual attention task, shared training effects were found in the left inferior frontal sulcus and gyrus, which varied in amplitude between the groups. Overall, the data reveal that different remediation programmes matched to individual profiles of dyslexia may improve reading ability and commonly affect the VWFA in dyslexia as a shared part of otherwise distinct networks.
Motion vector field upsampling for improved 4D cone-beam CT motion compensation of the thorax
NASA Astrophysics Data System (ADS)
Sauppe, Sebastian; Rank, Christopher M.; Brehm, Marcus; Paysan, Pascal; Seghers, Dieter; Kachelrieß, Marc
2017-03-01
To improve the accuracy of motion vector fields (MVFs) required for respiratory motion compensated (MoCo) CT image reconstruction without increasing the computational complexity of the MVF estimation approach, we propose an MVF upsampling method that is able to reduce the motion blurring in reconstructed 4D images. While respiratory gating improves the temporal resolution, it leads to sparse view sampling artifacts. MoCo image reconstruction has the potential to remove all motion artifacts while simultaneously making use of 100% of the raw data. However, the MVF accuracy is still below the temporal resolution of the CBCT data acquisition. Increasing the number of motion bins would increase reconstruction time and amplify sparse view artifacts, but would not necessarily improve the accuracy of the MVFs. Therefore we propose a new method to upsample estimated MVFs and use those for MoCo. To estimate the MVFs, a modified version of the Demons algorithm is used. Our proposed method is able to interpolate the original MVFs up to the point where each projection has its own individual MVF. To validate the method we use an artificially deformed clinical CT scan, with the breathing pattern of a real patient, and patient data acquired with a TrueBeam™ 4D CBCT system (Varian Medical Systems). We evaluate our method for different numbers of respiratory bins, each again with different upsampling factors. Employing our upsampling method, motion blurring in the reconstructed 4D images, induced by irregular breathing and the limited temporal resolution of phase-correlated images, is substantially reduced.
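The core idea, assigning intermediate MVFs to projections that fall between two respiratory bins, can be sketched as simple linear temporal interpolation (a minimal stand-in for the paper's upsampling scheme; array shapes and function names here are illustrative assumptions):

```python
import numpy as np

def upsample_mvf(mvf_a, mvf_b, n_steps):
    """Linearly interpolate between two motion vector fields.

    mvf_a, mvf_b: arrays of shape (H, W, 2) holding per-voxel
    displacement vectors at two neighboring respiratory bins.
    Returns n_steps fields that blend from mvf_a to mvf_b, so each
    projection can be assigned its own intermediate MVF.
    """
    fields = []
    for k in range(n_steps):
        t = k / (n_steps - 1) if n_steps > 1 else 0.0
        fields.append((1.0 - t) * mvf_a + t * mvf_b)
    return fields

# Two toy 2x2 vector fields: zero motion vs. unit displacement
a = np.zeros((2, 2, 2))
b = np.ones((2, 2, 2))
mid = upsample_mvf(a, b, 3)[1]  # halfway field
```

Higher-order (e.g. spline) interpolation over more than two bins follows the same pattern; the linear case just shows how per-projection fields arise.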
Improved score statistics for meta-analysis in single-variant and gene-level association studies.
Yang, Jingjing; Chen, Sai; Abecasis, Gonçalo
2018-06-01
Meta-analysis is now an essential tool for genetic association studies, allowing them to combine large studies and greatly accelerating the pace of genetic discovery. Although the standard meta-analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they result in substantial power loss under unbalanced settings with various case-control ratios. Here, we investigate the power loss incurred by the standard meta-analysis methods for unbalanced studies, and further propose novel meta-analysis methods that perform equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta-score-statistics that can accurately approximate the joint-score-statistics with combined individual-level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In simulated gene-level association studies under unbalanced settings, our method recovered up to 85% of the power loss caused by the standard methods. We further showed the power gain of our methods in gene-level tests with 26 unbalanced studies of age-related macular degeneration. In addition, we took the meta-analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta-analyzing multi-ethnic samples. In summary, our improved meta-score-statistics with corrections for population stratification can be used to construct both single-variant and gene-level association studies, providing a useful framework for ensuring well-powered, convenient, cross-study analyses. © 2018 WILEY PERIODICALS, INC.
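The standard score-statistic meta-analysis that this work builds on can be sketched in a few lines (this shows the conventional baseline, not the authors' improved statistics): each study contributes a score statistic U_i with variance V_i, and the combined test sums them and forms U²/V as a 1-d.f. chi-square statistic.

```python
def meta_score_test(scores, variances):
    """Standard score-statistic meta-analysis: sum the per-study
    score statistics U_i and their variances V_i across studies,
    then form the chi-square statistic U^2 / V on 1 d.f.
    Under ideal (balanced) settings this approximates the joint
    analysis of pooled individual-level data.
    """
    U = sum(scores)
    V = sum(variances)
    return U * U / V

# Three toy studies with per-study scores and variances
chi2 = meta_score_test([2.0, 1.5, 0.5], [1.0, 1.5, 0.5])
```

The paper's contribution is to correct these summed statistics so the approximation also holds for unbalanced case-control ratios.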
Making perceptual learning practical to improve visual functions.
Polat, Uri
2009-10-01
Task-specific improvement in performance after training is well established. The finding that learning is stimulus-specific and does not transfer well between different stimuli, between stimulus locations in the visual field, or between the two eyes has been used to support the notion that neurons or assemblies of neurons are modified at the earliest stage of cortical processing. However, the mechanism underlying perceptual learning remains a matter of ongoing debate. Nevertheless, generalization of a trained task to other functions is an important key, both for understanding the neural mechanisms and for the practical value of the training. This manuscript describes a structured perceptual learning method that was previously used for amblyopia and myopia, together with a novel technique and results applied to presbyopia. In general, subjects were trained for contrast detection of Gabor targets under lateral masking conditions. Training improved contrast sensitivity and diminished lateral suppression where it existed (amblyopia). The improvement transferred to unrelated functions such as visual acuity. The new results for presbyopia show substantial improvement of spatial and temporal contrast sensitivity, leading to improved processing speed of target detection as well as reaction time. Consequently, the subjects benefited by being able to eliminate the need for reading glasses. Thus, this transfer of functions shows that the specificity of improvement in the trained task can be generalized by repetitive practice of target detection, covering a sufficient range of spatial frequencies and orientations, leading to an improvement in unrelated visual functions. Perceptual learning can therefore be a practical method to improve visual functions in people with impaired or blurred vision.
Impact of time-of-flight PET on quantification errors in MR imaging-based attenuation correction.
Mehranian, Abolfazl; Zaidi, Habib
2015-04-01
Time-of-flight (TOF) PET/MR imaging is an emerging technology in which TOF information offers great potential to improve image quality and lesion detectability. We assessed, for the first time, the impact of TOF image reconstruction on PET quantification errors induced by MR imaging-based attenuation correction (MRAC) using simulation and clinical PET/CT studies. Standard 4-class attenuation maps were derived by segmentation of CT images of 27 patients undergoing PET/CT examinations into background air, lung, soft-tissue, and fat tissue classes, followed by the assignment of predefined attenuation coefficients to each class. For each patient, 4 PET images were reconstructed: non-TOF and TOF, each corrected for attenuation using both the reference CT-based attenuation correction and the resulting 4-class MRAC maps. The relative errors of the non-TOF and TOF MRAC reconstructions were computed with respect to their corresponding CT-based attenuation correction reconstructions. The bias was evaluated locally, using volumes of interest (VOIs) defined on lesions and normal tissues, and globally, using CT-derived tissue classes containing all voxels in a given tissue. The impact of TOF on reducing the errors induced by metal-susceptibility and respiratory-phase mismatch artifacts was also evaluated using clinical and simulation studies. Our results show that TOF PET can remarkably reduce attenuation correction artifacts and quantification errors in the lungs and bone tissues. Using classwise analysis, it was found that the non-TOF MRAC method results in an error of -3.4% ± 11.5% in the lungs and -21.8% ± 2.9% in bones, whereas its TOF counterpart reduced the errors to -2.9% ± 7.1% and -15.3% ± 2.3%, respectively. The VOI-based analysis revealed that the non-TOF and TOF methods resulted in an average overestimation of 7.5% and 3.9% in or near lung lesions (n = 23) and underestimation of less than 5% for soft tissue and in or near bone lesions (n = 91).
Simulation results showed that as TOF resolution improves, artifacts and quantification errors are substantially reduced. TOF PET substantially reduces artifacts and significantly improves the quantitative accuracy of standard MRAC methods. Therefore, MRAC should be less of a concern on future TOF PET/MR scanners with improved timing resolution. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
NASA Astrophysics Data System (ADS)
Lin, Tengfei; Zhu, Lixin; Chen, Weiwei; Wu, Siwu; Guo, Baochun; Jia, Demin
2013-09-01
The silanization reaction between boehmite (BM) nanoplatelets and bis-[3-(triethoxysilyl)-propyl]-tetrasulfide (TESPT) was characterized in detail. Via this modification process, the grafted sulfide moieties endow the BM with reactivity toward rubber and substantially improved hydrophobicity. Accordingly, TESPT was employed as an in situ modifier for the nitrile rubber (NBR)/BM compounds to improve the mechanical properties of the reinforced vulcanizates. The effects of BM content and in situ modification on the mechanical properties, curing characteristics, and morphology were investigated. BM was found to be effective in improving the mechanical performance of NBR vulcanizates. The NBR/BM composites could be further strengthened by the incorporation of TESPT. The interfacial adhesion of NBR/BM composites was markedly improved by the addition of TESPT. The substantially improved mechanical performance was correlated with the interfacial reaction and the improved dispersion of BM in the rubber matrix.
Lattice Boltzmann Simulation of Electroosmotic Micromixing by Heterogeneous Surface Charge
NASA Astrophysics Data System (ADS)
Tang, G. H.; Wang, F. F.; Tao, W. Q.
Microelectroosmotic flow is usually restricted to the low Reynolds number regime, and mixing in these microfluidic systems becomes problematic due to the negligible inertial effects. To gain an improved understanding of mixing enhancement in microchannels patterned with heterogeneous surface charge, the lattice Boltzmann method has been employed to obtain the electric potential distribution in the electrolyte, the flow field, and the species concentration distribution, respectively. The simulation results show that heterogeneous surfaces can significantly disturb the streamlines, leading to substantial improvements in mixing. However, the introduction of such a feature can reduce the mass flow rate in the channel. The reduction in flow rate effectively prolongs the available mixing time as the flow passes through the channel, so the observed mixing enhancement by heterogeneous surfaces partly results from the longer mixing time.
Metrics for Evaluating the Accuracy of Solar Power Forecasting: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, J.; Hodge, B. M.; Florita, A.
2013-10-01
Forecasting solar energy generation is a challenging task due to the variety of solar power systems and weather regimes encountered. Forecast inaccuracies can result in substantial economic losses and power system reliability issues. This paper presents a suite of generally applicable and value-based metrics for solar forecasting for a comprehensive set of scenarios (i.e., different time horizons, geographic locations, applications, etc.). In addition, a comprehensive framework is developed to analyze the sensitivity of the proposed metrics to three types of solar forecasting improvements using a design of experiments methodology, in conjunction with response surface and sensitivity analysis methods. The results show that the developed metrics can efficiently evaluate the quality of solar forecasts, and assess the economic and reliability impact of improved solar forecasting.
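A few of the conventional accuracy metrics that such a suite starts from can be computed directly; this is a sketch of the standard definitions, not the paper's full value-based metric suite.

```python
import math

def forecast_metrics(forecast, actual):
    """Compute standard solar-forecast accuracy metrics:
    mean bias error (MBE), mean absolute error (MAE), and
    root-mean-square error (RMSE), all in the power units of
    the input series.
    """
    n = len(forecast)
    errors = [f - a for f, a in zip(forecast, actual)]
    mbe = sum(errors) / n                                 # signed bias
    mae = sum(abs(e) for e in errors) / n                 # average magnitude
    rmse = math.sqrt(sum(e * e for e in errors) / n)      # penalizes large misses
    return {"MBE": mbe, "MAE": mae, "RMSE": rmse}

# Toy 3-hour forecast vs. observed generation (MW)
m = forecast_metrics([100.0, 80.0, 60.0], [90.0, 85.0, 60.0])
```

RMSE exceeding MAE signals that errors are concentrated in a few large misses, which is exactly the kind of distinction a metric suite for reliability impact cares about.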
NASA Technical Reports Server (NTRS)
Stella, P. M.
1984-01-01
The availability of data regarding the radiation behavior of GaAs and silicon solar cells is discussed, as well as efforts to provide sufficient information. Other materials are considered too immature for reasonable radiation evaluation. The lack of concern over possible catastrophic radiation degradation in cascade cells is a potentially serious problem. Lithium counterdoping shows potential for removing damage in irradiated P-type material, although initial efficiencies are not comparable to the current state of the art. Refining the lithium doping method to maintain high initial efficiencies, and combining it with radiation tolerant structures such as thin BSF cells or vertical junction cells, could provide a substantial improvement in EOL efficiencies. Laser annealing of junctions, whether formed by ion implantation or by diffusion, may not only improve initial cell performance but might also reduce the radiation degradation rate.
NASA Astrophysics Data System (ADS)
Della Ventura, B.; Funari, R.; Anoop, K. K.; Amoruso, S.; Ausanio, G.; Gesuele, F.; Velotta, R.; Altucci, C.
2015-06-01
We report an application of femtosecond laser ablation to improve the sensitivity of biosensors based on a quartz crystal microbalance device. The nanoparticles produced by irradiating a gold target with 527-nm, 300-fs laser pulses, in high vacuum, are directly deposited on the quartz crystal microbalance electrode. Different gold electrodes are fabricated by varying the deposition time, thus addressing how the nanoparticle surface coverage influences the sensor response. The modified biosensor is tested by weighing immobilized IgG antibody from goat and its analyte (IgG from mouse), and the results are compared with a standard electrode. A substantial increase in biosensor sensitivity is achieved, demonstrating that femtosecond laser ablation and deposition is a viable physical method to improve biosensor sensitivity by means of nanostructured electrodes.
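Quartz crystal microbalance sensing rests on the Sauerbrey relation between resonance frequency shift and adsorbed areal mass. A minimal worked example follows; the sensitivity constant used is the commonly quoted textbook value for a 5 MHz AT-cut crystal, an assumption here rather than a figure from this work.

```python
def sauerbrey_mass(delta_f_hz, c_f=17.7):
    """Sauerbrey relation for a QCM: areal mass added to the
    electrode (ng/cm^2) from the measured frequency shift (Hz).

    c_f is the mass sensitivity constant in ng/(cm^2*Hz);
    17.7 is the standard value for a 5 MHz AT-cut quartz crystal.
    A frequency *decrease* (negative shift) means mass was added.
    """
    return -c_f * delta_f_hz

# A 10 Hz frequency decrease corresponds to ~177 ng/cm^2 adsorbed
mass = sauerbrey_mass(-10.0)
```

Improving sensitivity with nanostructured electrodes, as in the paper, effectively increases the frequency shift obtained per unit of captured analyte.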
Asteroid mass estimation with Markov-chain Monte Carlo
NASA Astrophysics Data System (ADS)
Siltala, Lauri; Granvik, Mikael
2017-10-01
Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem at minimum where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid by fitting their trajectories to their observed positions. The fitting has typically been carried out with linearized methods such as the least-squares method. These methods need to make certain assumptions regarding the shape of the probability distributions of the model parameters. This is problematic as these assumptions have not been validated. We have developed a new Markov-chain Monte Carlo method for mass estimation which does not require an assumption regarding the shape of the parameter distribution. Recently, we have implemented several upgrades to our MCMC method including improved schemes for handling observational errors and outlier data alongside the option to consider multiple perturbers and/or test asteroids simultaneously. These upgrades promise significantly improved results: based on two separate results for (19) Fortuna with different test asteroids we previously hypothesized that simultaneous use of both test asteroids would lead to an improved result similar to the average literature value for (19) Fortuna with substantially reduced uncertainties. Our upgraded algorithm indeed finds a result essentially equal to the literature value for this asteroid, confirming our previous hypothesis. Here we show these new results for (19) Fortuna and other example cases, and compare our results to previous estimates. Finally, we discuss our plans to improve our algorithm further, particularly in connection with Gaia.
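The advantage of MCMC here, requiring no assumption about the shape of the parameter distribution, can be illustrated with a generic random-walk Metropolis sampler on a toy one-parameter "mass" posterior. This is a sketch of the sampler family, not the authors' 13-dimensional implementation.

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=1.0, seed=1):
    """Random-walk Metropolis sampler: unlike linearized
    least-squares fitting, it makes no assumption about the shape
    of the posterior of the perturber mass (or any parameter).
    log_post: function returning the log posterior density.
    """
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)          # propose a nearby value
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy target: a scaled 'mass' parameter with standard-normal posterior
chain = metropolis(lambda m: -0.5 * m * m, 0.0, 5000)
mean = sum(chain) / len(chain)
```

For a genuinely skewed or multimodal posterior the same code applies unchanged, which is precisely why no distributional assumption is needed.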
Stripline split-ring resonator with integrated optogalvanic sample cell
NASA Astrophysics Data System (ADS)
Persson, Anders; Berglund, Martin; Thornell, Greger; Possnert, Göran; Salehpour, Mehran
2014-04-01
Intracavity optogalvanic spectroscopy (ICOGS) has been proposed as a method for unambiguous detection of rare isotopes. Of particular interest is 14C, where detection of extremely low concentrations in the 1:1015 range (14C: 12C), is of interest in, e.g., radiocarbon dating and pharmaceutical sciences. However, recent reports show that ICOGS suffers from substantial problems with reproducibility. To qualify ICOGS as an analytical method, more stable and reliable plasma generation and signal detection are needed. In our proposed setup, critical parameters have been improved. We have utilized a stripline split-ring resonator microwave-induced microplasma source to excite and sustain the plasma. Such a microplasma source offers several advantages over conventional ICOGS plasma sources. For example, the stripline split-ring resonator concept employs separated plasma generation and signal detection, which enables sensitive detection at stable plasma conditions. The concept also permits in situ observation of the discharge conditions, which was found to improve reproducibility. Unique to the stripline split-ring resonator microplasma source in this study, is that the optogalvanic sample cell has been embedded in the device itself. This integration enables improved temperature control and more stable and accurate signal detection. Significant improvements are demonstrated, including reproducibility, signal-to-noise ratio, and precision.
Parallelized reliability estimation of reconfigurable computer networks
NASA Technical Reports Server (NTRS)
Nicol, David M.; Das, Subhendu; Palumbo, Dan
1990-01-01
A parallelized system, ASSURE, for computing the reliability of embedded avionics flight control systems which are able to reconfigure themselves in the event of failure is described. ASSURE accepts a grammar that describes a reliability semi-Markov state-space. From this it creates a parallel program that simultaneously generates and analyzes the state-space, placing upper and lower bounds on the probability of system failure. ASSURE is implemented on a 32-node Intel iPSC/860, and has achieved high processor efficiencies on real problems. Through a combination of improved algorithms, exploitation of parallelism, and use of an advanced microprocessor architecture, ASSURE has reduced the execution time on substantial problems by a factor of one thousand over previous workstation implementations. Furthermore, ASSURE's parallel execution rate on the iPSC/860 is an order of magnitude faster than its serial execution rate on a Cray-2 supercomputer. While dynamic load balancing is necessary for ASSURE's good performance, it is needed only infrequently; the particular method of load balancing used does not substantially affect performance.
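The bounding idea, that any probability mass in states not yet expanded might still lead to failure, can be stated in a few lines. This is a conceptual sketch of how such bounds behave during incremental state-space exploration, not ASSURE's actual implementation.

```python
def failure_bounds(p_failure_found, p_unexplored):
    """Lower/upper bounds on system failure probability during an
    incremental state-space search: failure paths already found
    give the lower bound; conservatively assuming every unexplored
    path also fails gives the upper bound. As exploration proceeds,
    p_unexplored shrinks and the interval tightens.
    """
    return p_failure_found, p_failure_found + p_unexplored

# Early in the search: wide interval around the failure estimate
lo, hi = failure_bounds(1e-9, 4e-9)
```

Generating and analyzing the state-space simultaneously, as ASSURE does in parallel, amounts to driving p_unexplored down as fast as possible.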
Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models
NASA Astrophysics Data System (ADS)
Altuntas, Alper; Baugh, John
2017-07-01
Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.
Physical complications in acute lung injury survivors: a two-year longitudinal prospective study.
Fan, Eddy; Dowdy, David W; Colantuoni, Elizabeth; Mendez-Tellez, Pedro A; Sevransky, Jonathan E; Shanholtz, Carl; Himmelfarb, Cheryl R Dennison; Desai, Sanjay V; Ciesla, Nancy; Herridge, Margaret S; Pronovost, Peter J; Needham, Dale M
2014-04-01
Survivors of severe critical illness frequently develop substantial and persistent physical complications, including muscle weakness, impaired physical function, and decreased health-related quality of life. Our objective was to determine the longitudinal epidemiology of muscle weakness, physical function, and health-related quality of life and their associations with critical illness and ICU exposures. A multisite prospective study with longitudinal follow-up at 3, 6, 12, and 24 months after acute lung injury. Thirteen ICUs from four academic teaching hospitals. Two hundred twenty-two survivors of acute lung injury. None. At each time point, patients underwent standardized clinical evaluations of extremity, hand grip, and respiratory muscle strength; anthropometrics (height, weight, mid-arm circumference, and triceps skin fold thickness); 6-minute walk distance; and the Medical Outcomes Short-Form 36 health-related quality of life survey. During their hospitalization, survivors also had detailed daily evaluation of critical illness and related treatment variables. Over one third of survivors had objective evidence of muscle weakness at hospital discharge, with most improving within 12 months. This weakness was associated with substantial impairments in physical function and health-related quality of life that persisted at 24 months. The duration of bed rest during critical illness was consistently associated with weakness throughout 24-month follow-up. The cumulative dose of systemic corticosteroids and use of neuromuscular blockers in the ICU were not associated with weakness. Muscle weakness is common after acute lung injury, usually recovering within 12 months. This weakness is associated with substantial impairments in physical function and health-related quality of life that continue beyond 24 months. These results provide valuable prognostic information regarding physical recovery after acute lung injury. Evidence-based methods to reduce the duration of bed rest during critical illness may be important for improving these long-term impairments.
Lu, Yao; Treiman, Donald J
2011-06-01
This paper extends previous work on family structure and children's education by conceptualizing migration as a distinct form of family disruption that reduces parental input but brings substantial economic benefits through remittances. It examines the multiple and countervailing effects of migration on schooling in the context of substantial migration and limited educational opportunities for Blacks in South Africa. The receipt of remittances substantially increases Black children's school attendance, but has no such effect for Whites. The effect for Blacks is in part attributable to improved household economic conditions that increase household educational spending and reduce the demand for child labor. We also find a negative effect of parental absence due to migration, but it is largely cushioned by inflows of remittances. Sensitivity analyses using propensity score methods and contextual fixed-effect modeling suggest that the beneficial effect of remittances is relatively robust. We find further that remittances help ameliorate inter-familial socioeconomic inequality in schooling. Finally, we evaluate possible temporal changes and show that the positive and equalizing effects of remittances persisted during and after the apartheid regime. We conclude that labor migration and remittances, as institutionalized family strategies adopted by many Blacks, help reconfigure structural opportunities in the educational stratification process in South Africa.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, T D
1983-01-01
The French Intensive approach to truck gardening has the potential to provide substantially higher yields and lower per acre costs than do conventional farming techniques. It was the intent of this grant to show that there is the potential to accomplish the gains that the French Intensive method has to offer. It is obvious that locally grown food can greatly reduce transportation energy costs, but when higher efficiencies are considered there will also be energy cost reductions due to lower fertilizer and pesticide usage. As with any farming technique, there is a substantial time interval for complete soil recovery after substantial soil modifications have been made. There were major crop improvements even though only a short time had passed since the soil had been greatly disturbed. It was also the intent of this grant to accomplish two other major objectives: first, the garden was managed under organic techniques, which meant that no chemical fertilizers or synthetic pesticides were used. Second, the garden was constructed so that a handicapped person in a wheelchair could manage it and achieve a higher degree of self-sufficiency with the garden. As an overall result, I would say that the garden has taken the first step of success and each year should become better.
Vehicular impact absorption system
NASA Technical Reports Server (NTRS)
Knoell, A. C.; Wilson, A. H. (Inventor)
1978-01-01
An improved vehicular impact absorption system characterized by a plurality of aligned crash cushions of substantially cubic configuration is described. Each cushion consists of a plurality of voided aluminum beverage cans arranged in substantial parallelism within a plurality of superimposed tiers, and a covering envelope formed of metal hardware cloth. A plurality of cables extends through the cushions in substantial parallelism with the cushions' axis of alignment, adapted to be anchored at each of the opposite ends thereof.
Extended charge banking model of dual path shocks for implantable cardioverter defibrillators
Dosdall, Derek J; Sweeney, James D
2008-01-01
Background Single path defibrillation shock methods have been improved through the use of the Charge Banking Model of defibrillation, which predicts the response of the heart to shocks as a simple resistor-capacitor (RC) circuit. While dual path defibrillation configurations have significantly reduced defibrillation thresholds, improvements to dual path defibrillation techniques have been limited to experimental observations, without a practical model to guide further improvement. Methods The Charge Banking Model has been extended into a new Extended Charge Banking Model of defibrillation that represents small sections of the heart as separate RC circuits, uses a weighting factor based on published defibrillation shock field gradient measures, and implements a critical mass criterion to predict the relative efficacy of single and dual path defibrillation shocks. Results The new model reproduced the results of several published experimental protocols that demonstrated the relative efficacy of dual path defibrillation shocks. The model predicts that the time between phases or pulses of dual path defibrillation shock configurations should be minimized to maximize shock efficacy. Discussion Through this approach, the Extended Charge Banking Model predictions may be used to improve dual path and multi-pulse defibrillation techniques, which have been shown experimentally to lower defibrillation thresholds substantially. The new model may be a useful tool for further improving dual path and multiple pulse defibrillation techniques by predicting optimal pulse durations and shock timing parameters. PMID:18673561
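The RC-circuit premise of the Charge Banking Model can be shown with a worked example: the membrane response to a constant shock voltage follows a first-order exponential charging curve. This is a sketch of the underlying circuit model only; the membrane time constant used is a commonly assumed order-of-magnitude value, not a figure from the paper.

```python
import math

def rc_response(v_shock, t, tau=3.0e-3):
    """Charge Banking Model premise: the myocardial membrane is
    treated as an RC circuit, so its response to a constant shock
    voltage rises as V(t) = V_shock * (1 - exp(-t / tau)).

    tau ~ 3 ms is an assumed membrane time constant for
    illustration (not a value taken from this work).
    """
    return v_shock * (1.0 - math.exp(-t / tau))

# After one time constant the response reaches ~63% of the shock voltage
r = rc_response(1.0, 3.0e-3)
```

The Extended model applies this same response to many small myocardial sections in parallel, weighting each by the local shock field gradient, which is why pulse timing between paths matters.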
Botía, Juan A; Vandrovcova, Jana; Forabosco, Paola; Guelfi, Sebastian; D'Sa, Karishma; Hardy, John; Lewis, Cathryn M; Ryten, Mina; Weale, Michael E
2017-04-12
Weighted Gene Co-expression Network Analysis (WGCNA) is a widely used R software package for the generation of gene co-expression networks (GCN). WGCNA generates both a GCN and a derived partitioning of clusters of genes (modules). We propose k-means clustering as an additional processing step to conventional WGCNA, which we have implemented in the R package km2gcn (k-means to gene co-expression network, https://github.com/juanbot/km2gcn ). We assessed our method on networks created from UKBEC data (10 different human brain tissues), on networks created from GTEx data (42 human tissues, including 13 brain tissues), and on simulated networks derived from GTEx data. We observed substantially improved module properties, including: (1) few or zero misplaced genes; (2) increased counts of replicable clusters in alternate tissues (×3.1 on average); (3) improved enrichment of Gene Ontology terms (seen in 48/52 GCNs); (4) improved cell type enrichment signals (seen in 21/23 brain GCNs); and (5) more accurate partitions in simulated data according to a range of similarity indices. The results obtained from our investigations indicate that our k-means method, applied as an adjunct to standard WGCNA, results in better network partitions. These improved partitions enable more fruitful downstream analyses, as gene modules are more biologically meaningful.
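The refinement step can be sketched as ordinary k-means seeded from an existing partition: each module's mean expression profile acts as a centroid, and genes are reassigned to the nearest centroid until assignments stabilize. This is a minimal illustration of the km2gcn idea in Python (the actual package is in R, and it works on WGCNA module eigengenes rather than raw means).

```python
def kmeans_refine(profiles, labels, n_iter=10):
    """Refine an initial module partition (e.g. from WGCNA) by
    k-means: use each module's mean expression profile as its
    centroid, reassign every gene to its nearest centroid, and
    repeat. Assumes no module empties out during iteration.
    """
    def centroid(members):
        n = len(members)
        return [sum(p[d] for p in members) / n for d in range(len(members[0]))]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    labels = list(labels)
    k = max(labels) + 1
    for _ in range(n_iter):
        cents = [centroid([p for p, l in zip(profiles, labels) if l == c])
                 for c in range(k)]
        new = [min(range(k), key=lambda c: dist2(p, cents[c])) for p in profiles]
        if new == labels:           # converged: no gene moved
            break
        labels = new
    return labels

# Two clear expression clusters, with one gene initially misplaced
profs = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]]
fixed = kmeans_refine(profs, [0, 1, 1, 1])
```

The "misplaced genes" the paper counts are exactly the assignments this reassignment step corrects.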
Method of coextruding plastics to form a composite sheet
Tsien, Hsue C.
1985-06-04
This invention pertains to a method of producing a composite sheet of plastic materials by means of coextrusion. Two plastic materials are matched with respect to their melt indices. These matched plastic materials are then coextruded in a side-by-side orientation while hot and soft to form a composite sheet having a substantially uniform demarcation therebetween. The plastic materials are fed at a substantially equal extrusion velocity and generally have substantially equal viscosities. The coextruded plastics can be worked after coextrusion while they are still hot and soft.
Improved transition path sampling methods for simulation of rare events
NASA Astrophysics Data System (ADS)
Chopra, Manan; Malshe, Rohit; Reddy, Allam S.; de Pablo, J. J.
2008-04-01
The free energy surfaces of a wide variety of systems encountered in physics, chemistry, and biology are characterized by the existence of deep minima separated by numerous barriers. One of the central aims of recent research in computational chemistry and physics has been to determine how transitions occur between deep local minima on rugged free energy landscapes, and transition path sampling (TPS) Monte-Carlo methods have emerged as an effective means for numerical investigation of such transitions. Many of the shortcomings of TPS-like approaches generally stem from their high computational demands. Two new algorithms are presented in this work that improve the efficiency of TPS simulations. The first algorithm uses biased shooting moves to render the sampling of reactive trajectories more efficient. The second algorithm is shown to substantially improve the accuracy of the transition state ensemble by introducing a subset of local transition path simulations in the transition state. The system considered in this work consists of a two-dimensional rough energy surface that is representative of numerous systems encountered in applications. When taken together, these algorithms provide gains in efficiency of over two orders of magnitude when compared to traditional TPS simulations.
Better bioinformatics through usability analysis.
Bolchini, Davide; Finkelstein, Anthony; Perrone, Vito; Nagl, Sylvia
2009-02-01
Improving the usability of bioinformatics resources enables researchers to find, interact with, share, compare and manipulate important information more effectively and efficiently. It thus enables researchers to gain improved insights into biological processes with the potential, ultimately, of yielding new scientific results. Usability 'barriers' can pose significant obstacles to a satisfactory user experience and force researchers to spend unnecessary time and effort to complete their tasks. The number of online biological databases available is growing and there is an expanding community of diverse users. In this context there is an increasing need to ensure the highest standards of usability. Using 'state-of-the-art' usability evaluation methods, we have identified and characterized a sample of usability issues potentially relevant to web bioinformatics resources, in general. These specifically concern the design of the navigation and search mechanisms available to the user. The usability issues we have discovered in our substantial case studies are undermining the ability of users to find the information they need in their daily research activities. In addition to characterizing these issues, specific recommendations for improvements are proposed leveraging proven practices from web and usability engineering. The methods and approach we exemplify can be readily adopted by the developers of bioinformatics resources.
SNOMED CT module-driven clinical archetype management.
Allones, J L; Taboada, M; Martinez, D; Lozano, R; Sobrido, M J
2013-06-01
To explore semantic search to improve management and user navigation in clinical archetype repositories. In order to support semantic searches across archetypes, an automated method based on SNOMED CT modularization is implemented to transform clinical archetypes into SNOMED CT extracts. Concurrently, query terms are converted into SNOMED CT concepts using the search engine Lucene. Retrieval is then carried out by matching query concepts with the corresponding SNOMED CT segments. A test collection of 16 clinical archetypes, including over 250 terms, and a subset of 55 clinical terms from two medical dictionaries, MediLexicon and MedlinePlus, were used to test our method. The keyword-based service supported by the OpenEHR repository offered us a benchmark to evaluate the enhancement of performance. In total, our approach reached 97.4% precision and 69.1% recall, providing a substantial improvement of recall (more than 70%) compared to the benchmark. Exploiting medical domain knowledge from ontologies such as SNOMED CT may overcome some limitations of the keyword-based systems and thus improve the search experience of repository users. An automated approach based on ontology segmentation is an efficient and feasible way for supporting modeling, management and user navigation in clinical archetype repositories. Copyright © 2013 Elsevier Inc. All rights reserved.
Alkali-catalyzed low temperature wet crosslinking of plant proteins using carboxylic acids.
Reddy, Narendra; Li, Ying; Yang, Yiqi
2009-01-01
We report the development of a new method of alkali-catalyzed low temperature wet crosslinking of plant proteins to improve their breaking tenacity without using high temperatures or phosphorus-containing catalysts used in conventional poly(carboxylic acid) crosslinking of cellulose and proteins. Carboxylic acids are preferred over aldehyde-containing crosslinkers for crosslinking proteins and cellulose because of their low toxicity and cost and ability to improve the desired properties of the materials. However, current knowledge in carboxylic acid crosslinking of proteins and cellulose requires the use of carboxylic acids with at least three carboxylic groups, toxic phosphorous-containing catalysts and curing at high temperatures (150-185 degrees C). The use of high temperatures and low pH in conventional carboxylic acid crosslinking has been reported to cause substantial strength loss and/or undesired changes in the properties of the crosslinked materials. In this research, gliadin, soy protein, and zein fibers have been crosslinked with malic acid, citric acid, and butanetetracarboxylic acid to improve the tenacity of the fibers without using high temperatures and phosphorus-containing catalysts. The new method of wet crosslinking using carboxylic acids containing two or more carboxylic groups will be useful to crosslink proteins for various industrial applications.
Noise sensitivity of portfolio selection in constant conditional correlation GARCH models
NASA Astrophysics Data System (ADS)
Varga-Haszonits, I.; Kondor, I.
2007-11-01
This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying some filtering method on the conditional correlation matrix (such as Random Matrix Theory based filtering). As an empirical support for the simulation results, the analysis is also carried out for a time series of S&P500 stock prices.
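The minimum variance weights referred to here have the standard closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The sketch below computes them from a covariance matrix, which in the paper's setting would be the conditional covariance implied by a fitted CCC-GARCH model; the GARCH estimation itself is omitted and the numbers are toy values.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio: w = C^{-1}1 / (1'C^{-1}1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)  # solve C w = 1 rather than inverting C
    return w / w.sum()

# Toy 2-asset covariance (e.g. one step of a fitted CCC-GARCH model)
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
w = min_variance_weights(cov)  # → array([0.7272..., 0.2727...])
```

The paper's point is about the input: feeding this formula the noisy unconditional sample covariance instead of the conditional one degrades the selected portfolio.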
Parental leave and child health.
Ruhm, C J
2000-11-01
This study investigates whether rights to parental leave improve pediatric health. Aggregate data are used for 16 European countries over the 1969 through 1994 period. More generous paid leave is found to reduce deaths of infants and young children. The magnitudes of the estimated effects are substantial, especially where a causal effect of leave is most plausible. In particular, there is a much stronger negative relationship between leave durations and post-neonatal or child fatalities than for perinatal mortality, neonatal deaths, or low birth weight. The evidence further suggests that parental leave may be a cost-effective method of bettering child health.
Combination film/splash fill for overcoming film fouling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phelps, P.M.; Minett, T.O.
1995-02-01
In summary, this large cooling tower user has found the Phelps film/splash Stack-Pack fill design to attain a substantial improvement in capability of their existing crossflow cooling towers, without increasing fan power or tower size. The lack of fouling in the film fill component of this fill design is due to the use of film fill with large (1 inch) spacing between sheets, coupled with effective water treatment as provided by Nalco. This combination of factors provides a proven method for significantly increasing crossflow or counterflow cooling tower capability while minimizing chances of serious fill fouling.
Regional Anesthesia in Total Joint Arthroplasty: What Is the Evidence?
Elmofty, Dalia H; Buvanendran, Asokumar
2017-09-01
Total joint arthroplasty is one of the most common surgical procedures performed for end-stage osteoarthritis. The increasing demand for knee and hip arthroplasties along with the improvement in life expectancy has created a substantial medical and economic impact on the society. Effective planning of health care for these individuals is vital. The best method for providing anesthesia and analgesia for total joint arthroplasty has not been defined. Yet, emerging evidence suggests that the type of anesthesia can affect morbidity and mortality of patients undergoing these procedures. Copyright © 2017 Elsevier Inc. All rights reserved.
Systems and methods for using a boehmite bond-coat with polyimide membranes for gas separation
Polishchuk, Kimberly Ann
2013-03-05
The subject matter disclosed herein relates to gas separation membranes and, more specifically, to polyimide gas separation membranes. In an embodiment, a gas separation membrane includes a porous substrate, a substantially continuous polyimide membrane layer, and one or more layers of boehmite nanoparticles disposed between the porous substrate and the polyimide membrane layer to form a bond-coat layer. The bond-coat layer is configured to improve the adhesion of the polyimide membrane layer to the porous substrate, and the polyimide membrane layer has a thickness approximately 100 nm or less.
[Pollution hazard for water bodies at oil production].
Zholdakova, Z I; Beliaeva, N I
2015-01-01
The paper summarizes the concepts of the hazards that oil production poses to water bodies (the most dangerous being reagents used in drilling, drilling waste, oil and petrochemicals, and oil biodestructors) and demonstrates the danger of the spread of oil pollution. New indices characterizing the hazards arising during drilling and oil production are substantiated. Tasks are formulated for improving the standards and methods for controlling water pollution by oil, as well as the documents regulating environmental protection during drilling.
Multi-Detection Events, Probability Density Functions, and Reduced Location Area
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Schrom, Brian T.
2016-03-01
Several efforts have been made in the Comprehensive Nuclear-Test-Ban Treaty (CTBT) community to assess the benefits of combining detections of radionuclides to improve the location estimates available from atmospheric transport modeling (ATM) backtrack calculations. We present a Bayesian estimation approach rather than a simple dilution field of regard approach to allow xenon detections and non-detections to be combined mathematically. This system represents one possible probabilistic approach to radionuclide event formation. Application of this method to a recent interesting radionuclide event shows a substantial reduction in the location uncertainty of that event.
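The Bayesian combination of detections and non-detections can be sketched as a grid posterior: each station contributes a likelihood factor per candidate source cell, detections via a detection probability and non-detections via its complement. The saturating detection-probability curve below is a made-up stand-in for whatever ATM-based model the authors actually use.

```python
import numpy as np

def posterior(prior, predicted_conc, detected, slope=2.0):
    """prior:          array of prior mass over candidate grid cells
    predicted_conc: stations x cells ATM-predicted concentrations
    detected:       bool per station
    Detection probability is a simple saturating function of the
    predicted concentration (illustrative only)."""
    p_det = 1.0 - np.exp(-slope * predicted_conc)          # stations x cells
    like = np.where(np.asarray(detected)[:, None], p_det, 1.0 - p_det)
    post = prior * like.prod(axis=0)                       # combine stations
    return post / post.sum()

prior = np.full(4, 0.25)                   # 4 candidate cells, flat prior
conc = np.array([[0.0, 0.5, 2.0, 0.1],     # station A predictions per cell
                 [1.0, 0.1, 0.0, 0.0]])    # station B predictions per cell
post = posterior(prior, conc, detected=[True, False])
```

A detection at station A combined with a non-detection at station B concentrates the posterior on the cell that predicts high concentration at A and none at B, shrinking the location area relative to either observation alone.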
Pregnancy and alcohol use: evidence and recommendations for prenatal care.
Bailey, Beth A; Sokol, Robert J
2008-06-01
Pregnancy alcohol consumption has been linked to poor birth outcomes and long-term developmental problems. Despite this, a significant number of women drink during pregnancy. Although most prenatal care providers are asking women about alcohol use, validated screening tools are infrequently employed. Research has demonstrated that currently available screening methods and intervention techniques are effective in identifying and reducing pregnancy drinking. Implementing universal screening and appropriate intervention for pregnancy alcohol use should be a priority for prenatal care providers, as these efforts could substantially improve pregnancy, birth, and longer term developmental outcomes for those affected.
Gorin, Everett
1979-01-01
In a process for hydrocracking heavy polynuclear carbonaceous feedstocks to produce lighter hydrocarbon fuels by contacting the heavy feedstocks with hydrogen in the presence of a molten metal halide catalyst, thereafter separating at least a substantial portion of the carbonaceous material associated with the reaction mixture from the spent molten metal halide and thereafter regenerating the metal halide catalyst, an improvement comprising contacting the spent molten metal halide catalyst after removal of a major portion of the carbonaceous material therefrom with an additional quantity of hydrogen is disclosed.
Guidelines for the Reporting of Treatment Trials for Alcohol Use Disorders
Witkiewitz, Katie; Finney, John W.; Harris, Alex H.S; Kivlahan, Daniel R.; Kranzler, Henry R.
2015-01-01
Background The primary goals in conducting clinical trials of treatments for alcohol use disorders (AUDs) are to identify efficacious treatments and to determine which treatments are most efficacious for which patients. Accurate reporting of study design features and results is imperative to enable readers of research reports to evaluate to what extent a study has achieved these goals. Guidance on quality of clinical trial reporting has evolved substantially over the past two decades, primarily through the publication and widespread adoption of the Consolidated Standards of Reporting Trials (CONSORT) statement. However, there is room to improve the adoption of those standards in reporting the design and findings of treatment trials for AUD. Methods Narrative review of guidance on reporting quality in AUD treatment trials. Results Despite improvements in the reporting of results of treatment trials for AUD over the past two decades, many published reports provide insufficient information on design or methods. Conclusions The reporting of alcohol treatment trial design, analysis, and results requires improvement in four primary areas: (1) trial registration, (2) procedures for recruitment and retention, (3) procedures for randomization and intervention design considerations, and (4) statistical methods used to assess treatment efficacy. Improvements in these areas and the adoption of reporting standards by authors, reviewers, and editors are critical to an accurate assessment of the reliability and validity of treatment effects. Continued developments in this area are needed to move AUD treatment research forward via systematic reviews and meta-analyses that maximize the utility of completed studies. PMID:26259958
Weighted analysis of paired microarray experiments.
Kristiansson, Erik; Sjögren, Anders; Rudemo, Mats; Nerman, Olle
2005-01-01
In microarray experiments quality often varies, for example between samples and between arrays. The need for quality control is therefore strong. A statistical model and a corresponding analysis method are suggested for experiments with pairing, including designs with individuals observed before and after treatment and many experiments with two-colour spotted arrays. The model is of mixed type with some parameters estimated by an empirical Bayes method. Differences in quality are modelled by individual variances and correlations between repetitions. The method is applied to three real and several simulated datasets. Two of the real datasets are of Affymetrix type with patients profiled before and after treatment, and the third dataset is of two-colour spotted cDNA type. In all cases, the patients or arrays had different estimated variances, leading to distinctly unequal weights in the analysis. We also suggest plots that illustrate the variances and correlations that affect the weights computed by our analysis method. For simulated data the improvement relative to previously published methods without weighting is shown to be substantial.
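The effect of such weighting can be illustrated without the full empirical-Bayes mixed model: give each array's paired differences an inverse-variance weight, so a noisy array contributes little to the per-gene estimate. This is a deliberately simplified stand-in for the published method, not a reimplementation of it.

```python
import numpy as np

def weighted_mean_difference(diffs):
    """diffs: arrays x genes matrix of paired differences (after - before).
    Each array is weighted by the inverse of its estimated variance,
    so low-quality (noisy) arrays are down-weighted."""
    var = diffs.var(axis=1, ddof=1)   # per-array variance estimate
    w = 1.0 / var
    w = w / w.sum()                   # normalize weights to sum to 1
    return w @ diffs                  # weighted mean difference per gene

rng = np.random.default_rng(1)
good = rng.normal(1.0, 0.1, (3, 100))  # three clean arrays, true effect 1.0
bad = rng.normal(1.0, 2.0, (1, 100))   # one very noisy array
est = weighted_mean_difference(np.vstack([good, bad]))
```

With an unweighted mean, the noisy array would inflate the variance of every per-gene estimate; here its normalized weight is tiny, so the estimates track the clean arrays.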
Wilding, Bruce M; Turner, Terry D
2014-12-02
A method of natural gas liquefaction may include cooling a gaseous NG process stream to form a liquid NG process stream. The method may further include directing the first tail gas stream out of a plant at a first pressure and directing a second tail gas stream out of the plant at a second pressure. An additional method of natural gas liquefaction may include separating CO.sub.2 from a liquid NG process stream and processing the CO.sub.2 to provide a CO.sub.2 product stream. Another method of natural gas liquefaction may include combining a marginal gaseous NG process stream with a secondary substantially pure NG stream to provide an improved gaseous NG process stream. Additionally, a NG liquefaction plant may include a first tail gas outlet, and at least a second tail gas outlet, the at least a second tail gas outlet separate from the first tail gas outlet.
Aiken, Alexander M.; Wanyoro, Anthony K.; Mwangi, Jonah; Juma, Francis; Mugoya, Isaac K.; Scott, J. Anthony G
2013-01-01
Introduction In low-income countries, Surgical Site Infection (SSI) is a common form of hospital-acquired infection. Antibiotic prophylaxis is an effective method of preventing these infections, if given immediately before the start of surgery. Although several studies in Africa have compared pre-operative versus post-operative prophylaxis, there are no studies describing the implementation of policies to improve prescribing of surgical antibiotic prophylaxis in African hospitals. Methods We conducted SSI surveillance at a typical Government hospital in Kenya over a 16 month period between August 2010 and December 2011, using standard definitions of SSI and the extent of contamination of surgical wounds. As an intervention, we developed a hospital policy that advised pre-operative antibiotic prophylaxis and discouraged extended post-operative antibiotics use. We measured process, outcome and balancing effects of this intervention in using an interrupted time series design. Results From a starting point of near-exclusive post-operative antibiotic use, after policy introduction in February 2011 there was rapid adoption of the use of pre-operative antibiotic prophylaxis (60% of operations at 1 week; 98% at 6 weeks) and a substantial decrease in the use of post-operative antibiotics (40% of operations at 1 week; 10% at 6 weeks) in Clean and Clean-Contaminated surgery. There was no immediate step-change in risk of SSI, but overall, there appeared to be a moderate reduction in the risk of superficial SSI across all levels of wound contamination. There were marked reductions in the costs associated with antibiotic use, the number of intravenous injections performed and nursing time spent administering these. Conclusion Implementation of a locally developed policy regarding surgical antibiotic prophylaxis is an achievable quality improvement target for hospitals in low-income countries, and can lead to substantial benefits for individual patients and the institution. 
PMID:24244390
Desensitization of metastable intermolecular composites
Busse, James R [South Fork, CO; Dye, Robert C [Los Alamos, NM; Foley, Timothy J [Los Alamos, NM; Higa, Kelvin T [Ridgecrest, CA; Jorgensen, Betty S [Jemez Springs, NM; Sanders, Victor E [White Rock, NM; Son, Steven F [Los Alamos, NM
2011-04-26
A method to substantially desensitize a metastable intermolecular composite material to electrostatic discharge and friction comprising mixing the composite material with an organic diluent and removing enough organic diluent from the mixture to form a mixture with a substantially putty-like consistency, as well as a concomitant method of recovering the metastable intermolecular composite material.
Desensitization and recovery of metastable intermolecular composites
Busse, James R [South Fork, CO; Dye, Robert C [Los Alamos, NM; Foley, Timothy J [Los Alamos, NM; Higa, Kelvin T [Ridgecrest, CA; Jorgensen, Betty S [Jemez Springs, NM; Sanders, Victor E [White Rock, NM; Son, Steven F [Los Alamos, NM
2010-09-07
A method to substantially desensitize a metastable intermolecular composite material to electrostatic discharge and friction comprising mixing the composite material with an organic diluent and removing enough organic diluent from the mixture to form a mixture with a substantially putty-like consistency, as well as a concomitant method of recovering the metastable intermolecular composite material.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, David J., E-mail: dhardy@illinois.edu; Schulten, Klaus; Wolff, Matthew A.
2016-03-21
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle–mesh Ewald method falls short.
Improved perovskite phototransistor prepared using multi-step annealing method
NASA Astrophysics Data System (ADS)
Cao, Mingxuan; Zhang, Yating; Yu, Yu; Yao, Jianquan
2018-02-01
Organic-inorganic hybrid perovskites with good intrinsic physical properties have received substantial interest for solar cell and optoelectronic applications. However, perovskite films often suffer from low carrier mobility due to structural imperfections, including sharp grain boundaries and pinholes, restricting their device performance and application potential. Here we demonstrate a straightforward strategy based on a multi-step annealing process to improve the performance of perovskite photodetectors. Annealing temperature and duration greatly affect the surface morphology and optoelectrical properties of the perovskites, which determine the device properties of the phototransistor. Perovskite films treated with the multi-step annealing method tend to be highly uniform, well-crystallized, and of high surface coverage, and they exhibit stronger ultraviolet-visible absorption and photoluminescence spectra compared to perovskites prepared by the conventional one-step annealing process. The field-effect mobility of the perovskite photodetector treated by the one-step direct annealing method is 0.121 (0.062) cm2V-1s-1 for holes (electrons), which increases to 1.01 (0.54) cm2V-1s-1 for the device treated with the multi-step slow annealing method. Moreover, the perovskite phototransistors exhibit a fast photoresponse speed of 78 μs. In general, this work focuses on the influence of annealing methods on the perovskite phototransistor rather than on obtaining its best parameters. These findings show that multi-step annealing is a feasible route to high-performance perovskite-based photodetectors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yuxuan; Martin, William; Williams, Mark
In this paper, a correction-based resonance self-shielding method is developed that allows annular subdivision of the fuel rod. The method performs the conventional iteration of the embedded self-shielding method (ESSM) without subdivision of the fuel to capture the interpin shielding effect. The resultant self-shielded cross sections are modified by correction factors incorporating the intrapin effects of radial variation of the shielded cross section, radial temperature distribution, and resonance interference. A quasi–one-dimensional slowing-down equation is developed to calculate such correction factors. The method is implemented in the DeCART code and compared with the conventional ESSM and subgroup method against benchmark MCNP results. The new method yields substantially improved results for both spatially dependent reaction rates and eigenvalues for typical pressurized water reactor pin cell cases with uniform and nonuniform fuel temperature profiles. Finally, the new method is also proved effective in treating assembly heterogeneity and complex material composition such as mixed oxide fuel, where resonance interference is much more intense.
Cima, Robert R; Brown, Michael J; Hebl, James R; Moore, Robin; Rogers, James C; Kollengode, Anantha; Amstutz, Gwendolyn J; Weisbrod, Cheryl A; Narr, Bradly J; Deschamps, Claude
2011-07-01
Operating rooms (ORs) are resource-intense and costly hospital units. Maximizing OR efficiency is essential to maintaining an economically viable institution. OR efficiency projects often focus on a limited number of ORs or cases. Efforts across an entire OR suite have not been reported. Lean and Six Sigma methodologies were developed in the manufacturing industry to increase efficiency by eliminating non-value-added steps. We applied Lean and Six Sigma methodologies across an entire surgical suite to improve efficiency. A multidisciplinary surgical process improvement team constructed a value stream map of the entire surgical process from the decision for surgery to discharge. Each process step was analyzed in 3 domains, i.e., personnel, information processed, and time. Multidisciplinary teams addressed 5 work streams to increase value at each step: minimizing volume variation; streamlining the preoperative process; reducing nonoperative time; eliminating redundant information; and promoting employee engagement. Process improvements were implemented sequentially in surgical specialties. Key performance metrics were collected before and after implementation. Across 3 surgical specialties, process redesign resulted in substantial improvements in on-time starts and reduction in number of cases past 5 pm. Substantial gains were achieved in nonoperative time, staff overtime, and ORs saved. These changes resulted in substantial increases in margin/OR/day. Use of Lean and Six Sigma methodologies increased OR efficiency and financial performance across an entire operating suite. Process mapping, leadership support, staff engagement, and sharing performance metrics are keys to enhancing OR efficiency. The performance gains were substantial, sustainable, positive financially, and transferable to other specialties. Copyright © 2011 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Sandroff, Brian M; Bollaert, Rachel E; Pilutti, Lara A; Peterson, Melissa L; Baynard, Tracy; Fernhall, Bo; McAuley, Edward; Motl, Robert W
2017-10-01
Mobility disability is a common, debilitating feature of multiple sclerosis (MS). Exercise training has been identified as an approach to improve MS-related mobility disability. However, exercise randomized controlled trials (RCTs) on mobility in MS have generally not selectively targeted those with the onset of irreversible mobility disability. The current multi-site RCT compared the efficacy of 6-months of supervised, multimodal exercise training with an active control condition for improving mobility, gait, physical fitness, and cognitive outcomes in persons with substantial MS-related mobility disability. 83 participants with substantial MS-related mobility disability underwent initial mobility, gait, fitness, and cognitive processing speed assessments and were randomly assigned to 6-months of supervised multimodal (progressive aerobic, resistance, and balance) exercise training (intervention condition) or stretching-and-toning activities (control condition). Participants completed the same outcome assessments halfway through and immediately following the 6-month study period. There were statistically significant improvements in six-minute walk performance (F(2,158)=3.12, p=0.05, ηp²=0.04), peak power output (F(2,150)=8.16, p<0.01, ηp²=0.10), and Paced Auditory Serial Addition Test performance (F(2,162)=4.67, p=0.01, ηp²=0.05), but not gait outcomes, for those who underwent the intervention compared with those who underwent the control condition. This RCT provides novel, preliminary evidence that multimodal exercise training may improve endurance walking performance and cognitive processing speed, perhaps based on improvements in cardiorespiratory capacity, in persons with MS with substantial mobility disability. This is critical for informing the development of multi-site exercise rehabilitation programs in larger samples of persons with MS-related mobility disability. Copyright © 2017 Elsevier Inc. All rights reserved.
Aveling, Emma-Louise; Martin, Graham; Jiménez García, Senai; Martin, Lisa; Herbert, Georgia; Armstrong, Natalie; Dixon-Woods, Mary; Woolhouse, Ian
2012-12-01
Peer review offers a promising way of promoting improvement in health systems, but the optimal model is not yet clear. We aimed to describe a specific peer review model, reciprocal peer-to-peer review (RP2PR), and to identify the features that appeared to support optimal functioning. We conducted an ethnographic study involving observations, interviews and documentary analysis of the Improving Lung Cancer Outcomes Project, which involved 30 paired multidisciplinary lung cancer teams participating in facilitated reciprocal site visits. Analysis was based on the constant comparative method. Fundamental features of the model include multidisciplinary participation; a focus on discussion and observation of teams in action, rather than paperwork; facilitated reflection and discussion on data and observations; and support to develop focused improvement plans. Five key features were identified as important in optimising this model: peers and pairing methods; minimising logistic burden; structure of visits; independent facilitation; and credibility of the process. Facilitated RP2PR was generally a positive experience for participants, but implementing improvement plans was challenging and required substantial support. RP2PR appears to be optimised when it is well organised; a safe environment for learning is created; credibility is maximised; and implementation and impact are supported. RP2PR is seen as credible and legitimate by lung cancer teams and can act as a powerful stimulus to produce focused quality improvement plans and to support implementation. Our findings have identified how RP2PR functioned and may be optimised to provide a constructive, open space for identifying opportunities for improvement and solutions.
Enhancement of perfluoropolyether boundary lubrication performance: I. Preliminary results
NASA Technical Reports Server (NTRS)
Jones, W. R., Jr.; Ajayi, O. O.; Goodell, A. J.; Wedeven, L. D.; Devine, E.; Premore, R. E.
1995-01-01
A ball bearing simulator operating under starved conditions was used to evaluate the boundary lubrication performance of a perfluoropolyether (PFPE) Krytox 143 AB. Several approaches to enhance boundary lubrication were studied. These included: (1) soluble boundary additives, (2) bearing surface modifications, (3) 'run-in' surface films, and (4) ceramic bearing components. In addition, results were compared with two non-perfluorinated liquid lubricant formulations. Based on these preliminary tests, the following tentative conclusions can be made: (1) substantial improvements in boundary lubrication performance were observed with a beta-diketone boundary additive and a tricresyl phosphate (TCP) liquid surface pretreatment; (2) the use of rough Si3N4 balls (Ra = 40 micro-in) also provided substantial improvement but with concomitant abrasive wear; (3) marginal improvements were seen with two boundary additives (a phosphine and a phosphatriazine), a neat (100%) fluid (a carboxylic acid terminated PFPE), surface pretreatments with a synthetic hydrocarbon, a PTFE coating, TiC-coated 440C balls, and smooth Si3N4 balls (Ra < 1 micro-in); and (4) two non-PFPE lubricant formulations (a PAO and a synthetic hydrocarbon) yielded substantial improvements.
NASA Astrophysics Data System (ADS)
McHugh, Luisa
Contemporary research has suggested that in order for students to compete globally in the 21st century workplace, pedagogy must shift to include the integration of science and mathematics, where teachers effectively incorporate the two disciplines seamlessly. Mathematics facilitates a deeper understanding of science concepts and has been linked to improved student perception of the integration of science and mathematics. Although there is adequate literature to substantiate students' positive responses to integration in terms of attitudes, there has been little empirical data to support significant academic improvement when both disciplines are taught in an integrated method. This research study, conducted at several school districts on Long Island and New York City, New York, examined teachers' attitudes toward integration and students' attitudes about, and achievement on assessments in, an integrated 8th grade science classroom compared to students in a non-integrated classroom. An examination of these parameters was conducted to analyze the impact of the sizeable investment of time and resources needed to teach an integrated curriculum effectively. These resources included substantial teacher training, planning time, collaboration with colleagues, and administration of student assessments. The findings suggest that students had positive outcomes associated with experiencing an integrated science and mathematics curriculum, though these were only weakly correlated with teacher confidence in implementing the integrated model successfully. The positive outcomes included the ability of students to understand scientific concepts within a concrete mathematical framework, improved confidence in applying mathematics to scientific ideas, and increased agreement with the usefulness of mathematics in interpreting science concepts. 
Implications of these research findings may be of benefit to educators and policymakers looking to adapt integrated curricula in order to improve the preparation of students to learn and achieve in a global society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Peter C.; Schreibmann, Eduard; Roper, Justin
2015-03-15
Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
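The core HU-replacement idea can be sketched as follows, under simplifying assumptions: on an artifact-free slice, pair MRI intensities with CT Hounsfield units, fit a mapping, then predict HU inside the corrupted region from its MRI values. The linear fit and the synthetic data are illustrative only; the actual method uses 3D/2D deformable registration and a more comprehensive intensity analysis.

```python
import numpy as np

def fit_mri_to_hu(mri_clean, hu_clean, degree=1):
    """Fit a polynomial mapping MRI intensity -> HU on an artifact-free slice.
    A degree-1 (linear) fit is an illustrative simplification."""
    return np.polyfit(np.ravel(mri_clean), np.ravel(hu_clean), degree)

def correct_slice(mri_corrupt, hu_corrupt, artifact_mask, coeffs):
    """Replace HU inside the artifact mask with values predicted
    from the coregistered MRI via the fitted mapping."""
    corrected = hu_corrupt.copy()
    corrected[artifact_mask] = np.polyval(coeffs, mri_corrupt[artifact_mask])
    return corrected
```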
Tadd, Andrew R; Schwank, Johannes
2013-05-14
A catalytic reforming method is disclosed herein. The method includes sequentially supplying a plurality of feedstocks of variable compositions to a reformer. The method further includes adding a respective predetermined co-reactant to each of the plurality of feedstocks to obtain a substantially constant output from the reformer for the plurality of feedstocks. The respective predetermined co-reactant is based on a C/H/O atomic composition for a respective one of the plurality of feedstocks and a predetermined C/H/O atomic composition for the substantially constant output.
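The co-reactant idea can be illustrated with a simple atomic balance: given a feedstock's C/H/O composition and a target O/C ratio at the reformer inlet, compute how much oxygen-bearing co-reactant to add. Using water as the co-reactant and an O/C target of 1.0 are hypothetical choices for illustration, not values from the patent.

```python
def water_coreactant_moles(c: float, h: float, o: float,
                           target_o_to_c: float) -> float:
    """Moles of H2O to add per mole of feedstock so that total O / total C
    reaches target_o_to_c (each H2O molecule contributes one O atom)."""
    needed = target_o_to_c * c - o
    return max(needed, 0.0)

# e.g. ethanol (C2H6O) vs. iso-octane (C8H18), both brought to O/C = 1.0:
print(water_coreactant_moles(2, 6, 1, 1.0))   # 1.0 mol H2O per mol ethanol
print(water_coreactant_moles(8, 18, 0, 1.0))  # 8.0 mol H2O per mol iso-octane
```

The oxygen-poor feedstock needs far more co-reactant, which is the sense in which the co-reactant dose depends on the feedstock's C/H/O composition.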
Sajn, Luka; Kukar, Matjaž
2011-12-01
The paper presents results of our long-term study on using image processing and data mining methods in medical imaging. Since evaluation of modern medical images is becoming increasingly complex, advanced analytical and decision support tools are involved in integration of partial diagnostic results. Such partial results, frequently obtained from tests with substantial imperfections, are integrated into an ultimate diagnostic conclusion about the probability of disease for a given patient. We study various topics such as improving the predictive power of clinical tests by utilizing pre-test and post-test probabilities, texture representation, multi-resolution feature extraction, feature construction and data mining algorithms that significantly outperform medical practice. Our long-term study reveals three significant milestones. The first improvement was achieved by significantly increasing post-test diagnostic probabilities with respect to expert physicians. The second, even more significant improvement utilizes multi-resolution image parametrization. Machine learning methods in conjunction with feature subset selection on these parameters significantly improve diagnostic performance. However, further feature construction with principal component analysis on these features elevates results to an even higher accuracy level that represents the third milestone. With the proposed approach, clinical results are significantly improved throughout the study. The most significant result of our study is improvement in the diagnostic power of the whole diagnostic process. Our compound approach aids, but does not replace, the physician's judgment and may assist in decisions on cost effectiveness of tests. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
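The pre-test/post-test probability update mentioned above is a standard application of Bayes' rule via likelihood ratios. The sensitivity and specificity numbers in the example are illustrative, not values from the study:

```python
def post_test_probability(pre_test: float, sensitivity: float,
                          specificity: float, positive: bool) -> float:
    """Update a pre-test disease probability given a test result,
    using the positive or negative likelihood ratio."""
    if positive:
        lr = sensitivity / (1.0 - specificity)   # positive likelihood ratio
    else:
        lr = (1.0 - sensitivity) / specificity   # negative likelihood ratio
    pre_odds = pre_test / (1.0 - pre_test)       # probability -> odds
    post_odds = pre_odds * lr                    # Bayes' rule in odds form
    return post_odds / (1.0 + post_odds)         # odds -> probability

# hypothetical example: 30% pre-test probability, positive result on a
# test with 85% sensitivity and 90% specificity
print(round(post_test_probability(0.30, 0.85, 0.90, True), 2))  # 0.78
```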
Jeffrey, N.; Abdalla, F. B.; Lahav, O.; ...
2018-05-15
Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three different mass map reconstruction methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion method, taking no account of survey masks or noise. The Wiener filter is well motivated for Gaussian density fields in a Bayesian framework. The GLIMPSE method uses sparsity, with the aim of reconstructing non-linearities in the density field. We compare these methods with a series of tests on the public Dark Energy Survey (DES) Science Verification (SV) data and on realistic DES simulations. The Wiener filter and GLIMPSE methods offer substantial improvement on the standard smoothed KS with a range of metrics. For both the Wiener filter and GLIMPSE convergence reconstructions we present a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations. This is a standard data vector when inferring cosmology from peak statistics. The maximum signal-to-noise value of these peak statistic data vectors was increased by a factor of 3.5 for the Wiener filter and by a factor of 9 using GLIMPSE. With simulations we measure the reconstruction of the harmonic phases, showing that the concentration of the phase residuals is improved 17% by GLIMPSE and 18% by the Wiener filter. We show that the correlation between the reconstructions from data and the foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE. [Abridged]
Implementation of a novel efficient low cost method in structural health monitoring
NASA Astrophysics Data System (ADS)
Asadi, S.; Sepehry, N.; Shamshirsaz, M.; Vaghasloo, Y. A.
2017-05-01
In active structural health monitoring (SHM) methods, it is necessary to excite the structure with a preselected signal. Most studies in active SHM focus on higher frequency ranges, since higher excitation frequencies make it possible to detect smaller damage. Also, to increase the spatial domain of measurements and enhance the signal-to-noise ratio (SNR), the amplitude of the excitation signal is usually amplified. These issues become substantial where piezoelectric transducers with relatively high capacitance are used, and consequently high-power amplifiers become necessary. In this paper, a novel method named the Step Excitation Method (SEM) is proposed and implemented for Lamb wave and transfer impedance-based SHM for damage detection in structures. Three different types of structure are studied: beam, plate, and pipe. The related hardware is designed and fabricated; it eliminates high-power analog amplifiers and significantly decreases driver complexity. The Spectral Finite Element Method (SFEM) is applied to examine the performance of the proposed SEM. In the proposed method, once the impulse response of the system is determined, the response to any input can be obtained, in both finite element simulations and experiments, without the need for multiple measurements. The experimental results using SEM are compared with those obtained by the conventional direct excitation method for healthy and damaged structures. The results show an improvement of amplitude resolution in damage detection compared to the conventional method, owing to an SNR improvement of up to 50%.
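The "one impulse response, any input" idea above rests on linear-time-invariant system theory: once the impulse response h is measured, the response to any excitation follows from convolution, so repeated measurements are unnecessary. The decaying-sinusoid impulse response below is a made-up stand-in for a real structure, not data from the paper:

```python
import numpy as np

def response_to_input(h: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Output of an LTI system with impulse response h driven by input x,
    truncated to the length of the input."""
    return np.convolve(x, h)[: len(x)]

t = np.linspace(0, 1, 500)
h = np.exp(-5 * t) * np.sin(2 * np.pi * 40 * t)  # hypothetical impulse response
step = np.ones_like(t)                           # step excitation
tone = np.sin(2 * np.pi * 10 * t)                # any other excitation
y_step = response_to_input(h, step)              # one measurement of h
y_tone = response_to_input(h, tone)              # ...covers every input
```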
Kosinski, Karen Claire; Kulinkina, Alexandra V; Abrah, Akua Frimpomaa Atakora; Adjei, Michael N; Breen, Kara Marie; Chaudhry, Hafsa Myedah; Nevin, Paul E; Warner, Suzanne H; Tendulkar, Shalini Ahuja
2016-04-14
Surface water contaminated with human waste may transmit urogenital schistosomiasis (UGS). Water-related activities that allow skin exposure place people at risk, but public health practitioners know little about why some communities with access to improved water infrastructure have substantial surface water contact with infectious water bodies. Community-based mixed-methods research can provide critical information about water use and water infrastructure improvements. Our mixed-methods study assessed the context of water use in a rural community endemic for schistosomiasis. Eighty-seven (35.2 %) households reported using river water but not borehole water; 26 (10.5 %) reported using borehole water but not river water; and 133 (53.8 %) households reported using both water sources. All households are within 1 km of borehole wells, but tested water quality was poor in most wells. Schistosomiasis is perceived by study households (89.3 %) to be a widespread problem in the community, but perceived schistosomiasis risk fails to deter households from river water usage. Hematuria prevalence among schoolchildren does not differ by household water use preference. Focus group data provides context for water preferences. Demand for improvements to water infrastructure was a persistent theme; however, roles and responsibilities with respect to addressing community water and health concerns are ill-defined. Collectively, our study illustrates how complex attitudes towards water resources can affect which methods will be appropriate to address schistosomiasis.
Mezlini, Aziz M; Goldenberg, Anna
2017-10-01
Discovering genetic mechanisms driving complex diseases is a hard problem. Existing methods often lack power to identify the set of responsible genes. Protein-protein interaction networks have been shown to boost power when detecting gene-disease associations. We introduce a Bayesian framework, Conflux, to find disease associated genes from exome sequencing data using networks as a prior. There are two main advantages to using networks within a probabilistic graphical model. First, networks are noisy and incomplete, a substantial impediment to gene discovery. Incorporating networks into the structure of a probabilistic model for gene inference has less impact on the solution than relying on the noisy network structure directly. Second, using a Bayesian framework we can keep track of the uncertainty of each gene being associated with the phenotype rather than returning a fixed list of genes. We first show that using networks clearly improves gene detection compared to individual gene testing. We then show consistently improved performance of Conflux compared to the state-of-the-art diffusion network-based method Hotnet2 and a variety of other network and variant aggregation methods, using randomly generated and literature-reported gene sets. We test Hotnet2 and Conflux on several network configurations to reveal biases and patterns of false positives and false negatives in each case. Our experiments show that our novel Bayesian framework Conflux incorporates many of the advantages of the current state-of-the-art methods, while offering more flexibility and improved power in many gene-disease association scenarios.
Graham, Hamish R; Ayede, Adejumoke I; Bakare, Ayobami A; Oyewole, Oladapo B; Peel, David; Gray, Amy; McPake, Barbara; Neal, Eleanor; Qazi, Shamim; Izadnegahdar, Rasa; Falade, Adegoke G; Duke, Trevor
2017-10-27
Oxygen is a life-saving, essential medicine that is important for the treatment of many common childhood conditions. Improved oxygen systems can reduce childhood pneumonia mortality substantially. However, providing oxygen to children is challenging, especially in small hospitals with weak infrastructure and low human resource capacity. This trial will evaluate the implementation of improved oxygen systems at secondary-level hospitals in southwest Nigeria. The improved oxygen system includes: a standardised equipment package; training of clinical and technical staff; infrastructure support (including improved power supply); and quality improvement activities such as supportive supervision. Phase 1 will involve the introduction of pulse oximetry alone; phase 2 will involve the introduction of the full, improved oxygen system package. We have based the intervention design on a theory-based analysis of previous oxygen projects, and used quality improvement principles, evidence-based teaching methods, and behaviour-change strategies. We are using a stepped-wedge cluster randomised design with participating hospitals randomised to receive an improved oxygen system at 4-month steps (three hospitals per step). Our mixed-methods evaluation will evaluate effectiveness, impact, sustainability, process and fidelity. Our primary outcome measures are childhood pneumonia case fatality rate and inpatient neonatal mortality rate. Secondary outcome measures include a range of clinical, quality of care, technical, and health systems outcomes. The planned study duration is from 2015 to 2018. Our study will provide quality evidence on the effectiveness of improved oxygen systems, and how to better implement and scale-up oxygen systems in resource-limited settings. Our results should have important implications for policy-makers, hospital administrators, and child health organisations in Africa and globally. Australian New Zealand Clinical Trials Registry: ACTRN12617000341325 . 
Retrospectively registered on 6 March 2017.
Setting the equation: establishing value in spine care.
Resnick, Daniel K; Tosteson, Anna N A; Groman, Rachel F; Ghogawala, Zoher
2014-10-15
Topic review. Describe value measurement in spine care and discuss the motivation for, methods for, and limitations of such measurement. Spinal disorders are common and are an important cause of pain and disability. Numerous complementary and competing treatment strategies are used to treat spinal disorders, and the costs of these treatments are substantial and continue to rise despite clear evidence of improved health status as a result of these expenditures. The authors present the economic and legislative imperatives forcing the assessment of value in spine care. The definition of value in health care and methods to measure value specifically in spine care are presented. Limitations to the utility of value judgments and caveats to their use are presented. Examples of value calculations in spine care are presented and critiqued. Methods to improve and broaden the measurement of value across spine care are suggested, and the role of prospective registries in measuring value is discussed. Value can be measured in spine care through the use of appropriate economic measures and patient-reported outcomes measures. Value must be interpreted in light of the perspective of the assessor, the duration of the assessment period, the degree of appropriate risk stratification, and the relative value of treatment alternatives.