Sample records for efficient test set

  1. Does user-centred design affect the efficiency, usability and safety of CPOE order sets?

    PubMed Central

    Chan, Julie; Shojania, Kaveh G; Easty, Anthony C

    2011-01-01

    Background: Application of user-centred design principles to computerized provider order entry (CPOE) systems may improve task efficiency, usability or safety, but there is limited evaluative research of its impact on CPOE systems. Objective: We evaluated the task efficiency, usability, and safety of three order set formats: our hospital's planned CPOE order sets (CPOE Test), computer order sets based on user-centred design principles (User Centred Design), and existing pre-printed paper order sets (Paper). Participants: 27 staff physicians, residents and medical students. Setting: Sunnybrook Health Sciences Centre, an academic hospital in Toronto, Canada. Methods: Participants completed four simulated order set tasks with three order set formats (two CPOE Test tasks, one User Centred Design, and one Paper). Order of presentation of order set formats and tasks was randomized. Users received individual training for the CPOE Test format only. Main Measures: Completion time (efficiency), requests for assistance (usability), and errors in the submitted orders (safety). Results: 27 study participants completed 108 order sets. Mean task times were: User Centred Design format 273 s, Paper format 293 s (p=0.73 compared to UCD format), and CPOE Test format 637 s (p<0.0001 compared to UCD format). Users requested assistance in 31% of the CPOE Test format tasks, whereas no assistance was needed for the other formats (p<0.01). There were no significant differences in number of errors between formats. Conclusions: The User Centred Design format was more efficient and usable than the CPOE Test format even though training was provided for the latter. We conclude that application of user-centred design principles can enhance task efficiency and usability, increasing the likelihood of successful implementation. PMID:21486886

  2. Does user-centred design affect the efficiency, usability and safety of CPOE order sets?

    PubMed

    Chan, Julie; Shojania, Kaveh G; Easty, Anthony C; Etchells, Edward E

    2011-05-01

    Application of user-centred design principles to computerized provider order entry (CPOE) systems may improve task efficiency, usability or safety, but there is limited evaluative research of its impact on CPOE systems. We evaluated the task efficiency, usability, and safety of three order set formats: our hospital's planned CPOE order sets (CPOE Test), computer order sets based on user-centred design principles (User Centred Design), and existing pre-printed paper order sets (Paper). 27 staff physicians, residents and medical students. Sunnybrook Health Sciences Centre, an academic hospital in Toronto, Canada. Participants completed four simulated order set tasks with three order set formats (two CPOE Test tasks, one User Centred Design, and one Paper). Order of presentation of order set formats and tasks was randomized. Users received individual training for the CPOE Test format only. Completion time (efficiency), requests for assistance (usability), and errors in the submitted orders (safety). 27 study participants completed 108 order sets. Mean task times were: User Centred Design format 273 s, Paper format 293 s (p=0.73 compared to UCD format), and CPOE Test format 637 s (p<0.0001 compared to UCD format). Users requested assistance in 31% of the CPOE Test format tasks, whereas no assistance was needed for the other formats (p<0.01). There were no significant differences in number of errors between formats. The User Centred Design format was more efficient and usable than the CPOE Test format even though training was provided for the latter. We conclude that application of user-centred design principles can enhance task efficiency and usability, increasing the likelihood of successful implementation.

  3. Performance gains by using heated natural-gas fuel in an annular turbojet combustor

    NASA Technical Reports Server (NTRS)

    Marchionna, N. R.

    1973-01-01

    A full-scale annular turbojet combustor was tested with natural gas fuel heated from ambient temperature to 800 K (980 F). In all tests, heating the fuel improved combustion efficiency. Two sets of gaseous fuel nozzles were tested. Combustion instabilities occurred with one set of nozzles at two conditions: one where the efficiency approached 100 percent with the heated fuel; the other where the efficiency was very poor with the unheated fuel. The second set of nozzles exhibited no combustion instability. Altitude relight tests with the second set showed that relight was improved and was achievable at essentially the same condition as blowout when the fuel temperature was 800 K (980 F).

  4. Set of Criteria for Efficiency of the Process Forming the Answers to Multiple-Choice Test Items

    ERIC Educational Resources Information Center

    Rybanov, Alexander Aleksandrovich

    2013-01-01

    A set of criteria is offered for assessing the efficiency of the process of forming answers to multiple-choice test items. To increase the accuracy of computer-assisted testing results, it is suggested to assess the dynamics of the process of forming the final answer using the following factors: loss of time factor and correct choice factor. The model…

  5. Formal methods for test case generation

    NASA Technical Reports Server (NTRS)

    Rushby, John (Inventor); De Moura, Leonardo Mendonga (Inventor); Hamon, Gregoire (Inventor)

    2011-01-01

    The invention relates to the use of model checkers to generate efficient test sets for hardware and software systems. The method provides for extending existing tests to reach new coverage targets; searching *to* some or all of the uncovered targets in parallel; searching in parallel *from* some or all of the states reached in previous tests; and slicing the model relative to the current set of coverage targets. The invention provides efficient test case generation and test set formation. Deep regions of the state space can be reached within allotted time and memory. The approach has been applied to use of the model checkers of SRI's SAL system and to model-based designs developed in Stateflow. Stateflow models achieving complete state and transition coverage in a single test case are reported.

  6. Efficient Blockwise Permutation Tests Preserving Exchangeability

    PubMed Central

    Zhou, Chunxiao; Zwilling, Chris E.; Calhoun, Vince D.; Wang, Michelle Y.

    2014-01-01

    In this paper, we present a new blockwise permutation test approach based on the moments of the test statistic. The method is of importance to neuroimaging studies. In order to preserve the exchangeability condition required in permutation tests, we divide the entire set of data into certain exchangeability blocks. In addition, computationally efficient moments-based permutation tests are performed by approximating the permutation distribution of the test statistic with the Pearson distribution series. This involves the calculation of the first four moments of the permutation distribution within each block and then over the entire set of data. The accuracy and efficiency of the proposed method are demonstrated through a simulated experiment on magnetic resonance imaging (MRI) brain data, specifically a multi-site voxel-based morphometry analysis of structural MRI (sMRI). PMID:25289113
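
    The idea above (permuting only within exchangeability blocks so the permutation null remains valid) can be illustrated with a minimal sketch. The sketch below uses a plain resampling p-value rather than the moments-based Pearson approximation the authors describe, and the data, block labels and group labels are invented for illustration.

```python
import numpy as np

def blockwise_permutation_test(y, group, block, n_perm=10000, seed=0):
    """Two-sample mean-difference test in which labels are shuffled only
    within exchangeability blocks (e.g. acquisition site), preserving the
    exchangeability condition.  Empirical p-value version; the cited paper
    instead approximates the null from its first four moments."""
    rng = np.random.default_rng(seed)
    y, group, block = map(np.asarray, (y, group, block))
    observed = y[group == 1].mean() - y[group == 0].mean()
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = group.copy()
        for b in np.unique(block):
            idx = np.where(block == b)[0]
            perm[idx] = rng.permutation(perm[idx])   # shuffle within this block only
        null[i] = y[perm == 1].mean() - y[perm == 0].mean()
    return (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)

# Toy example: 3 sites (blocks) of 20 subjects each, two groups, a small true effect.
rng = np.random.default_rng(1)
block = np.repeat([0, 1, 2], 20)
group = np.tile([0, 1], 30)
y = 0.4 * group + rng.normal(size=60)
print(blockwise_permutation_test(y, group, block))
```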

  7. 78 FR 63823 - Energy Conservation Program: Test Procedures for Television Sets

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-25

    ... Conservation Program: Test Procedures for Television Sets AGENCY: Office of Energy Efficiency and Renewable... Energy (DOE) issued a notice of proposed rulemaking (NOPR) to establish a new test procedure for... additional testing and proposed amendments to the TV test procedure in its March 12, 2013 supplemental notice...

  8. A Comparison of the One-and Three-Parameter Logistic Models on Measures of Test Efficiency.

    ERIC Educational Resources Information Center

    Benson, Jeri

    Two methods of item selection were used to select sets of 40 items from a 50-item verbal analogies test, and the resulting item sets were compared for relative efficiency. The BICAL program was used to select the 40 items having the best mean square fit to the one parameter logistic (Rasch) model. The LOGIST program was used to select the 40 items…

  9. Costs, equity, efficiency and feasibility of identifying the poor in Ghana's National Health Insurance Scheme: empirical analysis of various strategies.

    PubMed

    Aryeetey, Genevieve Cecilia; Jehu-Appiah, Caroline; Spaan, Ernst; Agyepong, Irene; Baltussen, Rob

    2012-01-01

    To analyse the costs and evaluate the equity, efficiency and feasibility of four strategies to identify poor households for premium exemptions in Ghana's National Health Insurance Scheme (NHIS): means testing (MT), proxy means testing (PMT), participatory wealth ranking (PWR) and geographic targeting (GT) in urban, rural and semi-urban settings in Ghana. We conducted the study in 145-147 households per setting with MT as our gold standard strategy. We estimated total costs that included costs of household surveys and cost of premiums paid to the poor, efficiency (cost per poor person identified), equity (number of true poor excluded) and the administrative feasibility of implementation. The cost of exempting one poor individual ranged from US$15.87 to US$95.44; exclusion of the poor ranged between 0% and 73%. MT was most efficient and equitable in rural and urban settings with low-poverty incidence; GT was efficient and equitable in the semi-urban setting with high-poverty incidence. PMT and PWR were less equitable and inefficient although feasible in some settings. We recommend MT as optimal strategy in low-poverty urban and rural settings and GT as optimal strategy in high-poverty semi-urban setting. The study is relevant to other social and developmental programmes that require identification and exemptions of the poor in low-income countries. © 2011 Blackwell Publishing Ltd.

  10. Pollutant Emissions and Energy Efficiency under Controlled Conditions for Household Biomass Cookstoves and Implications for Metrics Useful in Setting International Test Standards

    EPA Science Inventory

    Realistic metrics and methods for testing household biomass cookstoves are required to develop standards needed by international policy makers, donors, and investors. Application of consistent test practices allows emissions and energy efficiency performance to be benchmarked and...

  11. 78 FR 79637 - Energy Conservation Program: Test Procedure for Set-Top Boxes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-31

    ... Energy Conservation Program: Test Procedure for Set-Top Boxes AGENCY: Office of Energy Efficiency and... Energy (DOE) withdraws a proposed rule published January 23, 2013 to establish a test procedure to... additional types of consumer products as covered products. (42 U.S.C. 6292(a)(20)) DOE may prescribe test...

  12. The time-efficiency principle: time as the key diagnostic strategy in primary care.

    PubMed

    Irving, Greg; Holden, John

    2013-08-01

    The test and retest opportunity afforded by reviewing a patient over time substantially increases the total gain in certainty when making a diagnosis in low-prevalence settings (the time-efficiency principle). This approach safely and efficiently reduces the number of patients who need to be formally tested in order to make a correct diagnosis for a person. Time, in terms of observed disease trajectory, provides a vital mechanism for achieving this task. It remains the best strategy for delivering near-optimal diagnoses in low-prevalence settings and should be used to its full advantage.
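
    A small worked example makes the gain in certainty from retesting concrete. The prevalence, sensitivity and specificity below are illustrative values rather than figures from the paper, and the two test occasions are assumed independent.

```python
def posterior(prior, sens, spec, positive=True):
    """Bayes update of the disease probability after one test result."""
    if positive:
        return sens * prior / (sens * prior + (1 - spec) * (1 - prior))
    return (1 - sens) * prior / ((1 - sens) * prior + spec * (1 - prior))

prior = 0.02             # low-prevalence primary-care setting (assumed value)
sens, spec = 0.80, 0.90  # illustrative test characteristics (assumed values)

p1 = posterior(prior, sens, spec, positive=True)
p2 = posterior(p1, sens, spec, positive=True)   # re-test later in the disease course
print(f"after one positive test: {p1:.1%}; after a second positive test: {p2:.1%}")
```

    With these numbers a single positive test raises the probability of disease from 2% to only about 14%, while a second positive result on review raises it to roughly 57%, which is the "test and retest" gain the authors describe.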

  13. Does stone entrapment with "Uro-Net" improve Ho:YAG laser lithotripsy efficiency in percutaneous nephrolithotomy and cystolithopaxy?: an in vitro study.

    PubMed

    Marchini, Giovanni Scala; Rai, Aayushi; De, Shubha; Sarkissian, Carl; Monga, Manoj

    2013-01-01

    To test the effect of stone entrapment on laser lithotripsy efficiency. Spherical stone phantoms were created using the BegoStone® plaster. Lithotripsy of one stone (1.0 g) per test jar was performed with Ho:YAG laser (365 µm fiber; 1 minute/trial). Four laser settings were tested: I-0.8 J,8 Hz; II-0.2 J,50 Hz; III-0.5 J,50 Hz; IV-1.5 J,40 Hz. Uro-Net (US Endoscopy) deployment was used in 3/9 trials. Post-treatment, stone fragments were strained through a 1 mm sieve; after a 7-day drying period, fragments and unfragmented stone were weighed. Uro-Net nylon mesh and wire frame resistance were tested (laser fired for 30 s). All nets used were evaluated for functionality and strength (compared to 10 new nets). Student's t test was used to compare the studied parameters; significance was set at p < 0.05. Laser settings I and II caused less damage to the net overall; the mesh and wire frame sustained the worst damage with setting IV; setting III had an intermediate outcome; 42% of nets were rendered unusable and excluded from strength analysis. There was no difference in mean strength between used functional nets and non-used devices (8.05 vs. 7.45 lbs, respectively; p = 0.14). Setting IV was the most efficient for lithotripsy (1.9 ± 0.6 mg/s; p < 0.001) with or without net stabilization; setting III was superior to I and II only if a net was not used. Laser lithotripsy is not optimized by stone entrapment with a net retrieval device, which may be damaged by high-energy laser settings.

  14. Highly Efficient Design-of-Experiments Methods for Combining CFD Analysis and Experimental Data

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Haller, Harold S.

    2009-01-01

    It is the purpose of this study to examine the impact of "highly efficient" Design-of-Experiments (DOE) methods for combining sets of CFD-generated analysis data with smaller sets of experimental test data in order to accurately predict performance results where experimental test data were not obtained. The study examines the impact of micro-ramp flow control on the shock wave boundary layer (SWBL) interaction, where a complete paired set of data exists from both CFD analysis and experimental measurements. By combining the complete set of CFD analysis data composed of fifteen (15) cases with a smaller subset of experimental test data containing four/five (4/5) cases, compound data sets (CFD/EXP) were generated which allow the prediction of the complete set of experimental results. No statistical differences were found to exist between the combined (CFD/EXP) generated data sets and the complete experimental data set composed of fifteen (15) cases. The same optimal micro-ramp configuration was obtained using the (CFD/EXP) generated data as obtained with the complete set of experimental data, and the DOE response surfaces generated by the two data sets were also not statistically different.
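
    The pooling step can be sketched in a few lines: fit one quadratic response surface to the combined CFD and experimental cases, then read predictions for untested experimental conditions off the fitted surface. The two factors, their ranges and the response values below are invented, and simple pooling is only a stand-in for whatever weighting or correction the study actually applied.

```python
import numpy as np

def quadratic_design_matrix(x):
    """Full quadratic model in two factors: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones(len(x)), x1, x2, x1**2, x2**2, x1 * x2])

rng = np.random.default_rng(0)
# 15 CFD cases over a two-factor design (factor values invented)
x_cfd = rng.uniform(-1, 1, size=(15, 2))
y_cfd = 1.0 - 0.3 * x_cfd[:, 0]**2 - 0.2 * x_cfd[:, 1]**2 + 0.01 * rng.normal(size=15)
# 5 experimental cases, slightly offset from the CFD trend (bias + noise)
x_exp = rng.uniform(-1, 1, size=(5, 2))
y_exp = 0.97 - 0.3 * x_exp[:, 0]**2 - 0.2 * x_exp[:, 1]**2 + 0.02 * rng.normal(size=5)

# Compound (CFD/EXP) data set: pool both sources and fit a single response surface
X = quadratic_design_matrix(np.vstack([x_cfd, x_exp]))
y = np.concatenate([y_cfd, y_exp])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict the response at an experimental condition that was never run
x_new = np.array([[0.5, -0.25]])
print("predicted response:", quadratic_design_matrix(x_new) @ coef)
```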

  15. An algorithm for testing the efficient market hypothesis.

    PubMed

    Boboc, Ioana-Andreea; Dinică, Mihai-Cristian

    2013-01-01

    The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH).

  16. An Algorithm for Testing the Efficient Market Hypothesis

    PubMed Central

    Boboc, Ioana-Andreea; Dinică, Mihai-Cristian

    2013-01-01

    The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH). PMID:24205148
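
    The three indicators named in the two records above are standard and easy to compute; the sketch below shows them on a synthetic price series together with a naive long/flat rule. The genetic-algorithm search over parameters and thresholds, which is the core of the cited system, is omitted, and all values are invented.

```python
import numpy as np
import pandas as pd

def ema(series, span):
    return series.ewm(span=span, adjust=False).mean()

def macd(close, fast=12, slow=26, signal=9):
    line = ema(close, fast) - ema(close, slow)
    return line, ema(line, signal)

def rsi(close, period=14):
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(period).mean()
    loss = (-delta.clip(upper=0)).rolling(period).mean()
    return 100 - 100 / (1 + gain / loss)

# Synthetic EUR/USD-like series; a real run would load historical quotes instead.
close = pd.Series(1.10 + np.cumsum(np.random.default_rng(0).normal(0, 1e-3, 500)))
macd_line, signal_line = macd(close)
strength = rsi(close)

# Naive long/flat rule; a GA would tune the spans, periods and thresholds.
long_signal = (macd_line > signal_line) & (strength < 70)
print(long_signal.tail())
```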

  17. Consistent Structural Integrity and Efficient Certification with Analysis. Volume 2: Detailed Report on Innovative Research Developed, Applied, and Commercially Available

    DTIC Science & Technology

    2005-05-01

    Excerpt from the report's front matter (table of contents and list of figures): results; Phase I test data; wrinkling failure test article; test results for a) Set A "0° core" and b) Set B "90° core"; test results for c) Set E "Isotropic (Foam) Core"; P = T because test data not yet reanalyzed with established CFs; P and T are now different.

  18. The application of midbond basis sets in efficient and accurate ab initio calculations on electron-deficient systems

    NASA Astrophysics Data System (ADS)

    Choi, Chu Hwan

    2002-09-01

    Ab initio chemistry has shown great promise in reproducing experimental results and in its predictive power. The many complicated computational models and methods seem impenetrable to an inexperienced scientist, and the reliability of the results is not easily interpreted. The application of midbond orbitals is used to determine a general method for use in calculating weak intermolecular interactions, especially those involving electron-deficient systems. Using the criteria of consistency, flexibility, accuracy and efficiency we propose a supermolecular method of calculation using the full counterpoise (CP) method of Boys and Bernardi, coupled with Moller-Plesset (MP) perturbation theory as an efficient electron-correlative method. We also advocate the use of the highly efficient and reliable correlation-consistent polarized valence basis sets of Dunning. To these basis sets, we add a general set of midbond orbitals and demonstrate greatly enhanced efficiency in the calculation. The H2-H2 dimer is taken as a benchmark test case for our method, and details of the computation are elaborated. Our method reproduces with great accuracy the dissociation energies of other previous theoretical studies. The added efficiency of extending the basis sets with conventional means is compared with the performance of our midbond-extended basis sets. The improvement found with midbond functions is notably superior in every case tested. Finally, a novel application of midbond functions to the BH5 complex is presented. The system is an unusual van der Waals complex. The interaction potential curves are presented for several standard basis sets and midbond-enhanced basis sets, as well as for two popular, alternative correlation methods. We report that MP theory appears to be superior to coupled-cluster (CC) in speed, while it is more stable than B3LYP, a widely-used density functional theory (DFT). Application of our general method yields excellent results for the midbond basis sets. Again they prove superior to conventional extended basis sets. Based on these results, we recommend our general approach as a highly efficient, accurate method for calculating weakly interacting systems.

  19. Design of an efficient music-speech discriminator.

    PubMed

    Tardón, Lorenzo J; Sammartino, Simone; Barbancho, Isabel

    2010-01-01

    In this paper, the problem of the design of a simple and efficient music-speech discriminator for large audio data sets, in which advanced music playing techniques are taught and voice and music are intrinsically interleaved, is addressed. In the process, a number of features used in speech-music discrimination are defined and evaluated over the available data set. Specifically, the data set contains pieces of classical music played with different and unspecified instruments (or even lyrics) and the voice of a teacher (a top music performer) or even the overlapped voice of the translator and other persons. After an initial test of the performance of the features implemented, a selection process is started, which takes into account the type of classifier selected beforehand, to achieve good discrimination performance and computational efficiency, as shown in the experiments. The discrimination application has been defined and tested on a large data set supplied by Fundacion Albeniz, containing a large variety of classical music pieces played with different instruments, which includes comments and speeches by famous performers.

  20. A Comparison of Three Types of Test Development Procedures Using Classical and Latent Trait Methods.

    ERIC Educational Resources Information Center

    Benson, Jeri; Wilson, Michael

    Three methods of item selection were used to select sets of 38 items from a 50-item verbal analogies test and the resulting item sets were compared for internal consistency, standard errors of measurement, item difficulty, biserial item-test correlations, and relative efficiency. Three groups of 1,500 cases each were used for item selection. First…

  1. Quantum efficiency test set up performances for NIR detector characterization at ESTEC

    NASA Astrophysics Data System (ADS)

    Crouzet, P.-E.; Duvet, L.; De Wit, F.; Beaufort, T.; Blommaert, S.; Butler, B.; Van Duinkerken, G.; ter Haar, J.; Heijnen, J.; van der Luijt, K.; Smit, H.; Viale, T.

    2014-07-01

    The Payload Technology Validation Section (Future Mission Preparation Office) at ESTEC is in charge of specific mission-oriented validation activities, for science and robotic exploration missions, aiming at reducing development risks in the implementation phase. These activities take place during the early mission phases or during the implementation itself. In this framework, a test set-up to characterize the quantum efficiency of near-infrared detectors has been developed. The first detector to be tested will be a HAWAII-2RG detector with a 2.5 μm cut-off; it will be used as a commissioning device in preparation for the tests of prototype European detectors developed under ESA funding. The capability to compare detectors from different manufacturers on the same setup will be a unique asset for the Future Mission Preparation Office. This publication presents the performances of the quantum efficiency test bench in preparation for measurements on the HAWAII-2RG detector. A SOFRADIR Saturn detector has been used as a preliminary test vehicle for the bench. A test set-up with a lamp, chopper, monochromator, pinhole and off-axis mirrors allows a spot of 1 mm diameter to be created between 700 nm and 2.5 μm. The shape of the beam has been measured to match the rms voltage read by the Merlin lock-in amplifier and the amplitude of the incoming signal. The reference detectors have been inter-calibrated with an uncertainty of up to 3%. For the measurement with the HAWAII-2RG detector, the existing cryostat [1] has been modified to accommodate cold black baffling, a cold filter wheel and a sapphire window. A statistical uncertainty of ±2.6% on the quantum efficiency measurement of the detector under test is expected.

  2. Comparison of Full-Scale Propellers Having R.A.F.-6 and Clark Y Airfoil Sections

    NASA Technical Reports Server (NTRS)

    Freeman, Hugh B

    1932-01-01

    In this report the efficiencies of two series of propellers having two types of blade sections are compared. Six full-scale propellers were used, three having R.A.F.-6 and three Clark Y airfoil sections with thickness/chord ratios of 0.06, 0.08, and 0.10. The propellers were tested at five pitch settings, which covered the range ordinarily used in practice. The propellers having the Clark Y sections gave the highest peak efficiency at the low pitch settings. At the high pitch settings, the propellers with R.A.F.-6 sections gave about the same maximum efficiency as the Clark Y propellers and were more efficient for the conditions of climb and take-off.

  3. Appliance Standard Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Kathleen; Tiemann, Gregg

    2016-08-03

    The U.S. Department of Energy’s Appliance Standards and Equipment Program tests, sets and helps enforce efficiency standards on more than 60 U.S. products. A majority of that testing is performed at the Intertek laboratory in Cortland, NY.

  4. NEAT: an efficient network enrichment analysis test.

    PubMed

    Signorelli, Mirko; Vinciotti, Veronica; Wit, Ernst C

    2016-09-05

    Network enrichment analysis is a powerful method, which makes it possible to integrate gene enrichment analysis with the information on relationships between genes that is provided by gene networks. Existing tests for network enrichment analysis deal only with undirected networks; they can be computationally slow and are based on normality assumptions. We propose NEAT, a test for network enrichment analysis. The test is based on the hypergeometric distribution, which naturally arises as the null distribution in this context. NEAT can be applied not only to undirected, but to directed and partially directed networks as well. Our simulations indicate that NEAT is considerably faster than alternative resampling-based methods, and that its capacity to detect enrichments is at least as good as that of alternative tests. We discuss applications of NEAT to network analyses in yeast by testing for enrichment of the Environmental Stress Response target gene set with GO Slim and KEGG functional gene sets, and also by inspecting associations between functional sets themselves. NEAT is a flexible and efficient test for network enrichment analysis that aims to overcome some limitations of existing resampling-based tests. The method is implemented in the R package neat, which can be freely downloaded from CRAN ( https://cran.r-project.org/package=neat ).
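
    The hypergeometric idea is easy to illustrate on an adjacency matrix. The sketch below counts directed edges from one node set into another and compares that count with a simple urn model over all possible edge slots; it is only a crude stand-in for the neat R package, which conditions on node degrees and also handles undirected and partially directed networks.

```python
import numpy as np
from scipy.stats import hypergeom

def link_enrichment_pvalue(adj, set_a, set_b):
    """Upper-tail hypergeometric p-value for the number of directed edges
    from set_a into set_b.  Urn model: of all possible edge slots leaving
    set_a (no self-loops), the observed edges are the draws and the slots
    that end in set_b are the successes."""
    adj = np.asarray(adj, dtype=bool)
    n = adj.shape[0]
    a, b = np.asarray(set_a), np.asarray(set_b)
    slots_total = len(a) * (n - 1)
    slots_to_b = len(a) * len(b) - np.intersect1d(a, b).size
    edges_from_a = adj[a, :].sum()
    edges_a_to_b = adj[np.ix_(a, b)].sum()
    return hypergeom.sf(edges_a_to_b - 1, slots_total, slots_to_b, edges_from_a)

# Toy directed network on 50 nodes with extra links from nodes 0-9 to nodes 10-19.
rng = np.random.default_rng(0)
adj = rng.random((50, 50)) < 0.05
np.fill_diagonal(adj, False)
adj[:10, 10:20] |= rng.random((10, 10)) < 0.4
print(link_enrichment_pvalue(adj, np.arange(10), np.arange(10, 20)))
```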

  5. Plant disease severity assessment - How rater bias, assessment method and experimental design affect hypothesis testing and resource use efficiency

    USDA-ARS?s Scientific Manuscript database

    The impact of rater bias and assessment method on hypothesis testing was studied for different experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed ‘balanced’, and those ...

  6. [Testing method research for key performance indicator of imaging acousto-optic tunable filter (AOTF)].

    PubMed

    Hu, Shan-Zhou; Chen, Fen-Fei; Zeng, Li-Bo; Wu, Qiong-Shui

    2013-01-01

    Imaging AOTF is an important optical filter component for new spectral imaging instruments developed in recent years. The principle of the imaging AOTF component was demonstrated, and a set of testing methods for key performance indicators was studied, such as diffraction efficiency, wavelength shift with temperature, spatial homogeneity of the diffraction efficiency, and image shift.

  7. Financial performance monitoring of the technical efficiency of critical access hospitals: a data envelopment analysis and logistic regression modeling approach.

    PubMed

    Wilson, Asa B; Kerr, Bernard J; Bastian, Nathaniel D; Fulton, Lawrence V

    2012-01-01

    From 1980 to 1999, rural designated hospitals closed at a disproportionally high rate. In response to this emergent threat to healthcare access in rural settings, the Balanced Budget Act of 1997 made provisions for the creation of a new rural hospital--the critical access hospital (CAH). The conversion to CAH and the associated cost-based reimbursement scheme significantly slowed the closure rate of rural hospitals. This work investigates which methods can ensure the long-term viability of small hospitals. This article uses a two-step design to focus on a hypothesized relationship between the technical efficiency of CAHs and a recently developed set of financial monitors for these entities. The goal is to identify the financial performance measures associated with efficiency. The first step uses data envelopment analysis (DEA) to differentiate efficient from inefficient facilities within a data set of 183 CAHs. Determining DEA efficiency provides an a priori categorization of hospitals in the data set as efficient or inefficient. In the second step, DEA efficiency is the categorical dependent variable (efficient = 0, inefficient = 1) in the subsequent binary logistic regression (LR) model. A set of six financial monitors selected from the array of 20 measures served as the LR independent variables. We use a binary LR to test the null hypothesis that recently developed CAH financial indicators had no predictive value for categorizing a CAH as efficient or inefficient (i.e., there is no relationship between DEA efficiency and fiscal performance).
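
    Only the second step of the two-step design lends itself to a compact sketch: given an already-computed DEA efficiency label, regress it on the financial monitors and inspect the odds ratios. The data, the six column meanings and the rule generating the labels below are all invented placeholders, not the published CAH indicator set or the actual DEA results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# 183 hypothetical hospitals x 6 hypothetical financial monitors (standardized values)
X = rng.normal(size=(183, 6))
# Stand-in for the DEA step: pretend it labelled each hospital 0 = efficient, 1 = inefficient
dea_inefficient = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=183) < 0).astype(int)

model = LogisticRegression().fit(X, dea_inefficient)
print("coefficients:", model.coef_.round(2))
print("odds ratios:  ", np.exp(model.coef_).round(2))  # monitors associated with inefficiency
```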

  8. Appliance Standard Testing

    ScienceCinema

    Hogan, Kathleen; Tiemann, Gregg

    2018-01-16

    The U.S. Department of Energy’s Appliance Standards and Equipment Program tests, sets and helps enforce efficiency standards on more than 60 U.S. products. A majority of that testing is performed at the Intertek laboratory in Cortland, NY.

  9. Heating boilers in Krakow, Poland: Options for improving efficiency and reducing emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cyklis, P.; Kowalski, J.; Kroll, J.

    1995-08-01

    In Krakow, Poland, coal-fired boilers are used to heat single apartment buildings and local heating districts. The population includes 2,930 small, hand-fired boilers and 227 larger traveling grate stoker-fired boilers. These boilers are important contributors to air quality problems in Krakow, and an assessment of their efficiency and emissions characteristics was recently undertaken. For the larger, stoker-fired boilers, efficiency was measured using a stack-loss method. In addition to the normal baseline fuel, the effects of coal cleaning and grading were evaluated. Testing was done at two selected sites. Boiler efficiencies were found to be low--50% to 67%. These boilers operate without combustion controls or instrumentation for flue gas analysis. As a result, excess air levels are very high--up to 400%--leading to poor performance. Emissions were found to be typical for boilers of this type. Using the improved fuels yields reductions in emissions and improvement in efficiency when combined with proper adjustments. In the case of the hand-fired boilers, one set of cast-iron boilers and one set of steel boilers were tested. Efficiency in this case was measured using an input-output method for sets of three boilers taken together as a system. Emissions from these boilers are lowest when low volatile fuels, such as coke or smokeless briquettes, are used.

  10. Design and Testing of Flight Control Laws on the RASCAL Research Helicopter

    NASA Technical Reports Server (NTRS)

    Frost, Chad R.; Hindson, William S.; Moralez. Ernesto, III; Tucker, George E.; Dryfoos, James B.

    2001-01-01

    Two unique sets of flight control laws were designed, tested and flown on the Army/NASA Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) JUH-60A Black Hawk helicopter. The first set of control laws used a simple rate feedback scheme, intended to facilitate the first flight and subsequent flight qualification of the RASCAL research flight control system. The second set of control laws comprised a more sophisticated model-following architecture. Both sets of flight control laws were developed and tested extensively using desktop-to-flight modeling, analysis, and simulation tools. Flight test data matched the model predicted responses well, providing both evidence and confidence that future flight control development for RASCAL will be efficient and accurate.

  11. Two-stage fan. 4: Performance data for stator setting angle optimization

    NASA Technical Reports Server (NTRS)

    Burger, G. D.; Keenan, M. J.

    1975-01-01

    Stator setting angle optimization tests were conducted on a two-stage fan to improve efficiency at overspeed, stall margin at design speed, and both efficiency and stall margin at part speed. The fan has a design pressure ratio of 2.8, a flow rate of 184.2 lb/sec (83.55 kg/sec) and a 1st-stage rotor tip speed of 1450 ft/sec (441.96 m/sec). Performance was obtained at 70, 100, and 105 percent of design speed with different combinations of 1st-stage and 2nd-stage stator settings. One combination of settings, other than design, was common to all three speeds. At design speed, a 2.0 percentage point increase in stall margin was obtained at the expense of a 1.3 percentage point efficiency decrease. At 105 percent speed, efficiency was improved by 1.8 percentage points but stall margin decreased 4.7 percentage points. At 70 percent speed, no change in stall margin or operating line efficiency was obtained with stator resets, although considerable speed-flow regulation occurred.

  12. Optimization of transversal phacoemulsification settings in peristaltic mode using a new transversal ultrasound machine.

    PubMed

    Wright, Dannen D; Wright, Alex J; Boulter, Tyler D; Bernhisel, Ashlie A; Stagg, Brian C; Zaugg, Brian; Pettey, Jeff H; Ha, Larry; Ta, Brian T; Olson, Randall J

    2017-09-01

    To determine the optimum bottle height, vacuum, aspiration rate, and power settings in the peristaltic mode of the Whitestar Signature Pro machine with Ellips FX tip action (transversal). John A. Moran Eye Center Laboratories, University of Utah, Salt Lake City, Utah, USA. Experimental study. Porcine lens nuclei were hardened with formalin and cut into 2.0 mm cubes. Lens cubes were emulsified using transversal ultrasound, and fragment removal time (efficiency) and fragment bounces off the tip (chatter) were measured to determine the optimum aspiration rate, bottle height, vacuum, and power settings in the peristaltic mode. Efficiency increased in a linear fashion with increasing bottle height and vacuum. The most efficient aspiration rate was 50 mL/min, with 60 mL/min statistically similar. Increasing power increased efficiency up to 90%, with increased chatter at 100%. The most efficient values for the settings tested were bottle height at 100 cm, vacuum at 600 mm Hg, aspiration rate of 50 or 60 mL/min, and power at 90%. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  13. 77 FR 2829 - Energy Conservation Program: Test Procedure for Television Sets

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-19

    ... provisions designed to improve energy efficiency. (All references to EPCA refer to the statute as amended... also provides that the test procedure shall be reasonably designed to produce test results which... facility one is denoted with numerical values, while the data from test facility two is denoted with...

  14. A minimum data set of water quality parameters to assess and compare treatment efficiency of stormwater facilities.

    PubMed

    Ingvertsen, Simon Toft; Jensen, Marina Bergen; Magid, Jakob

    2011-01-01

    Urban stormwater runoff is often of poor quality, impacting aquatic ecosystems and limiting the use of stormwater runoff for recreational purposes. Several stormwater treatment facilities (STFs) are in operation or at the pilot testing stage, but their efficiencies are neither well documented nor easily compared due to the complex contaminant profile of stormwater and the highly variable runoff hydrograph. On the basis of a review of available data sets on urban stormwater quality and environmental contaminant behavior, we suggest a few carefully selected contaminant parameters (the minimum data set) to be obligatory when assessing and comparing the efficiency of STFs. Consistent use of the minimum data set in all future monitoring schemes for STFs will ensure broad-spectrum testing at low costs and strengthen comparability among facilities. The proposed minimum data set includes: (i) fine fraction of suspended solids (<63 μm), (ii) total concentrations of zinc and copper, (iii) total concentrations of phenanthrene, fluoranthene, and benzo(b,k)fluoranthene, and (iv) total concentrations of phosphorus and nitrogen. Indicator pathogens and other specific contaminants (i.e., chromium, pesticides, phenols) may be added if recreational or certain catchment-scale objectives are to be met. Issues that need further investigation have been identified during the iterative process of developing the minimum data set. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  15. Energy Efficient Engine integrated core/low spool design and performance report

    NASA Technical Reports Server (NTRS)

    Stearns, E. Marshall

    1985-01-01

    The Energy Efficient Engine (E3) is a NASA program to create fuel saving technology for future transport aircraft engines. The E3 technology advancements were demonstrated to operate reliably and achieve goal performance in tests of the Integrated Core/Low Spool vehicle. The first build of this undeveloped technology research engine set a record for low fuel consumption. Its design and detailed test results are herein presented.

  16. An optimized proportional-derivative controller for the human upper extremity with gravity.

    PubMed

    Jagodnik, Kathleen M; Blana, Dimitra; van den Bogert, Antonie J; Kirsch, Robert F

    2015-10-15

    When Functional Electrical Stimulation (FES) is used to restore movement in subjects with spinal cord injury (SCI), muscle stimulation patterns should be selected to generate accurate and efficient movements. Ideally, the controller for such a neuroprosthesis will have the simplest architecture possible, to facilitate translation into a clinical setting. In this study, we used the simulated annealing algorithm to optimize two proportional-derivative (PD) feedback controller gain sets for a 3-dimensional arm model that includes musculoskeletal dynamics and has 5 degrees of freedom and 22 muscles, performing goal-oriented reaching movements. Controller gains were optimized by minimizing a weighted sum of position errors, orientation errors, and muscle activations. After optimization, gain performance was evaluated on the basis of accuracy and efficiency of reaching movements, along with three other benchmark gain sets not optimized for our system, on a large set of dynamic reaching movements for which the controllers had not been optimized, to test ability to generalize. Robustness in the presence of weakened muscles was also tested. The two optimized gain sets were found to have very similar performance to each other on all metrics, and to exhibit significantly better accuracy, compared with the three standard gain sets. All gain sets investigated used physiologically acceptable amounts of muscular activation. It was concluded that optimization can yield significant improvements in controller performance while still maintaining muscular efficiency, and that optimization should be considered as a strategy for future neuroprosthesis controller design. Published by Elsevier Ltd.
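
    A toy version of the gain-tuning loop can be written with a one-degree-of-freedom "arm" and scipy's dual_annealing optimizer (a generalized simulated annealing). The plant, cost weights and gain bounds below are invented stand-ins for the 5-degree-of-freedom, 22-muscle model and the weighted cost used in the study.

```python
import numpy as np
from scipy.optimize import dual_annealing

def reach_cost(gains, target=1.0, dt=0.005, duration=2.0):
    """Simulate a unit-inertia joint driven by a PD controller and return a
    cost mixing final position error and control effort (weights assumed)."""
    kp, kd = gains
    x, v, effort = 0.0, 0.0, 0.0
    for _ in range(int(duration / dt)):
        u = kp * (target - x) - kd * v        # PD control law
        u = float(np.clip(u, -50.0, 50.0))    # crude actuator limit
        v += u * dt                           # unit mass/inertia, Euler step
        x += v * dt
        effort += u * u * dt
    return (target - x) ** 2 + 1e-4 * effort

result = dual_annealing(reach_cost, bounds=[(0.0, 200.0), (0.0, 50.0)], seed=1)
print("optimised gains (kp, kd):", result.x.round(2), " cost:", round(result.fun, 4))
```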

  17. Optimized auxiliary basis sets for density fitted post-Hartree-Fock calculations of lanthanide containing molecules

    NASA Astrophysics Data System (ADS)

    Chmela, Jiří; Harding, Michael E.

    2018-06-01

    Optimised auxiliary basis sets for lanthanide atoms (Ce to Lu) for four basis sets of the Karlsruhe error-balanced segmented contracted def2 series (SVP, TZVP, TZVPP and QZVPP) are reported. These auxiliary basis sets enable the use of the resolution-of-the-identity (RI) approximation in post-Hartree-Fock methods, for example second-order perturbation theory (MP2) and coupled cluster (CC) theory. The auxiliary basis sets are tested on an enlarged set of about a hundred molecules, where the test criterion is the size of the RI error in MP2 calculations. Our tests also show that the same auxiliary basis sets can be used together with different effective core potentials. With these auxiliary basis sets, calculations of MP2 and CC quality can now be performed efficiently on medium-sized molecules containing lanthanides.

  18. Greater power and computational efficiency for kernel-based association testing of sets of genetic variants.

    PubMed

    Lippert, Christoph; Xiang, Jing; Horta, Danilo; Widmer, Christian; Kadie, Carl; Heckerman, David; Listgarten, Jennifer

    2014-11-15

    Set-based variance component tests have been identified as a way to increase power in association studies by aggregating weak individual effects. However, the choice of test statistic has been largely ignored even though it may play an important role in obtaining optimal power. We compared a standard statistical test-a score test-with a recently developed likelihood ratio (LR) test. Further, when correction for hidden structure is needed, or gene-gene interactions are sought, state-of-the art algorithms for both the score and LR tests can be computationally impractical. Thus we develop new computationally efficient methods. After reviewing theoretical differences in performance between the score and LR tests, we find empirically on real data that the LR test generally has more power. In particular, on 15 of 17 real datasets, the LR test yielded at least as many associations as the score test-up to 23 more associations-whereas the score test yielded at most one more association than the LR test in the two remaining datasets. On synthetic data, we find that the LR test yielded up to 12% more associations, consistent with our results on real data, but also observe a regime of extremely small signal where the score test yielded up to 25% more associations than the LR test, consistent with theory. Finally, our computational speedups now enable (i) efficient LR testing when the background kernel is full rank, and (ii) efficient score testing when the background kernel changes with each test, as for gene-gene interaction tests. The latter yielded a factor of 2000 speedup on a cohort of size 13 500. Software available at http://research.microsoft.com/en-us/um/redmond/projects/MSCompBio/Fastlmm/. heckerma@microsoft.com Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
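
    The flavour of a kernel-based set test can be conveyed with a bare-bones score-type statistic and a permutation p-value. This is not FaST-LMM's score or likelihood-ratio machinery (no mixed model, no hidden-structure correction, no spectral speed-ups); the genotypes and phenotype below are simulated.

```python
import numpy as np

def kernel_set_test(y, G, n_perm=2000, seed=0):
    """SKAT-like statistic Q = r' K r with a linear kernel K = G G', where r
    are mean-centred phenotype residuals and G holds the variant set.
    P-value by permutation rather than the analytic null used in practice."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    r = y - y.mean()
    K = G @ G.T
    q_obs = r @ K @ r
    q_null = np.empty(n_perm)
    for i in range(n_perm):
        rp = rng.permutation(r)
        q_null[i] = rp @ K @ rp
    return (np.sum(q_null >= q_obs) + 1) / (n_perm + 1)

# Toy data: 500 individuals, a set of 20 variants, three of them weakly causal.
rng = np.random.default_rng(1)
G = rng.binomial(2, 0.3, size=(500, 20)).astype(float)
y = G[:, :3] @ np.array([0.3, -0.2, 0.25]) + rng.normal(size=500)
print(kernel_set_test(y, G))
```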

  19. Genotype imputation efficiency in Nelore Cattle

    USDA-ARS?s Scientific Manuscript database

    Genotype imputation efficiency in Nelore cattle was evaluated in different scenarios of lower density (LD) chips, imputation methods and sets of animals to have their genotypes imputed. Twelve commercial and virtual custom LD chips with densities varying from 7K to 75K SNPs were tested. Customized L...

  20. An Efficiency Balanced Information Criterion for Item Selection in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.

    2012-01-01

    Successful administration of computerized adaptive testing (CAT) programs in educational settings requires that test security and item exposure control issues be taken seriously. Developing an item selection algorithm that strikes the right balance between test precision and level of item pool utilization is the key to successful implementation…

  1. In vitro fragmentation efficiency of holmium: yttrium-aluminum-garnet (YAG) laser lithotripsy--a comprehensive study encompassing different frequencies, pulse energies, total power levels and laser fibre diameters.

    PubMed

    Kronenberg, Peter; Traxer, Olivier

    2014-08-01

    To assess the fragmentation (ablation) efficiency of laser lithotripsy along a wide range of pulse energies, frequencies, power settings and different laser fibres, in particular to compare high- with low-frequency lithotripsy using a dynamic and innovative testing procedure free from any human interaction bias. An automated laser fragmentation testing system was developed. The unmoving laser fibres fired at the surface of an artificial stone while the stone was moved past at a constant velocity, thus creating a fissure. The lithotripter settings were 0.2-1.2 J pulse energies, 5-40 Hz frequencies, 4-20 W power levels, and 200 and 550 μm core laser fibres. Fissure width, depth, and volume were analysed and comparisons between laser settings, fibres and ablation rates were made. Low frequency-high pulse energy (LoFr-HiPE) settings were (up to six times) more ablative than high frequency-low pulse energy (HiFr-LoPE) at the same power levels (P < 0.001), as they produced deeper (P < 0.01) and wider (P < 0.001) fissures. There were linear correlations between pulse energy and fragmentation volume, fissure width, and fissure depth (all P < 0.001). Total power did not correlate with fragmentation measurements. Laser fibre diameter did not affect fragmentation volume (P = 0.81), except at very low pulse energies (0.2 J), where the large fibre was less efficient (P = 0.015). At the same total power level, LoFr-HiPE lithotripsy was most efficient. Pulse energy was the key variable that drove fragmentation efficiency. Attention must be paid to prevent the formation of time-consuming bulky debris and adapt the lithotripter settings to one's needs. As fibre diameter did not affect fragmentation efficiency, small fibres are preferable due to better scope irrigation and manoeuvrability. © 2013 The Authors. BJU International © 2013 BJU International.

  2. Greater power and computational efficiency for kernel-based association testing of sets of genetic variants

    PubMed Central

    Lippert, Christoph; Xiang, Jing; Horta, Danilo; Widmer, Christian; Kadie, Carl; Heckerman, David; Listgarten, Jennifer

    2014-01-01

    Motivation: Set-based variance component tests have been identified as a way to increase power in association studies by aggregating weak individual effects. However, the choice of test statistic has been largely ignored even though it may play an important role in obtaining optimal power. We compared a standard statistical test—a score test—with a recently developed likelihood ratio (LR) test. Further, when correction for hidden structure is needed, or gene–gene interactions are sought, state-of-the art algorithms for both the score and LR tests can be computationally impractical. Thus we develop new computationally efficient methods. Results: After reviewing theoretical differences in performance between the score and LR tests, we find empirically on real data that the LR test generally has more power. In particular, on 15 of 17 real datasets, the LR test yielded at least as many associations as the score test—up to 23 more associations—whereas the score test yielded at most one more association than the LR test in the two remaining datasets. On synthetic data, we find that the LR test yielded up to 12% more associations, consistent with our results on real data, but also observe a regime of extremely small signal where the score test yielded up to 25% more associations than the LR test, consistent with theory. Finally, our computational speedups now enable (i) efficient LR testing when the background kernel is full rank, and (ii) efficient score testing when the background kernel changes with each test, as for gene–gene interaction tests. The latter yielded a factor of 2000 speedup on a cohort of size 13 500. Availability: Software available at http://research.microsoft.com/en-us/um/redmond/projects/MSCompBio/Fastlmm/. Contact: heckerma@microsoft.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25075117

  3. Partial-genome evaluation of postweaning feed intake and efficiency of crossbred beef cattle

    USDA-ARS?s Scientific Manuscript database

    Effects of individual single nucleotide polymorphisms (SNP), and variation explained by sets of SNP associated with dry matter intake (DMI), metabolic mid-test weight (MBW), BW gain (GN) and feed efficiency expressed as phenotypic and genetic residual feed intake (RFIp; RFIg) were estimated from wei...

  4. Routine development of objectively derived search strategies.

    PubMed

    Hausner, Elke; Waffenschmidt, Siw; Kaiser, Thomas; Simon, Michael

    2012-02-29

    Over the past few years, information retrieval has become more and more professionalized, and information specialists are considered full members of a research team conducting systematic reviews. Research groups preparing systematic reviews and clinical practice guidelines have been the driving force in the development of search strategies, but open questions remain regarding the transparency of the development process and the available resources. An empirically guided approach to the development of a search strategy provides a way to increase transparency and efficiency. Our aim in this paper is to describe the empirically guided development process for search strategies as applied by the German Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, or "IQWiG"). This strategy consists of the following steps: generation of a test set, as well as the development, validation and standardized documentation of the search strategy. We illustrate our approach by means of an example, that is, a search for literature on brachytherapy in patients with prostate cancer. For this purpose, a test set was generated, including a total of 38 references from 3 systematic reviews. The development set for the generation of the strategy included 25 references. After application of textual analytic procedures, a strategy was developed that included all references in the development set. To test the search strategy on an independent set of references, the remaining 13 references in the test set (the validation set) were used. The validation set was also completely identified. Our conclusion is that an objectively derived approach similar to that used in search filter development is a feasible way to develop and validate reliable search strategies. Besides creating high-quality strategies, the widespread application of this approach will result in a substantial increase in the transparency of the development process of search strategies.
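
    The split-and-derive workflow described above can be mimicked in a few lines: divide the known relevant references (the test set) into a development and a validation set, pull frequently occurring title terms from the development set as candidate query terms, and check recall on the validation set. The titles and the frequency threshold below are invented, and a real strategy would use database syntax, controlled vocabulary and the full text-analytic procedures mentioned in the abstract.

```python
import re
from collections import Counter

# Known relevant references (the "test set"); these titles are invented placeholders.
references = [
    "Brachytherapy versus radical prostatectomy for localized prostate cancer",
    "Long-term outcomes of low-dose-rate brachytherapy in prostate cancer",
    "Quality of life after permanent seed brachytherapy for prostate carcinoma",
    "Iodine-125 brachytherapy monotherapy for early prostate cancer",
    "External beam radiotherapy with brachytherapy boost in prostate cancer",
    "Salvage brachytherapy after radiotherapy failure in prostate carcinoma",
]
development, validation = references[:4], references[4:]

def terms(title):
    return set(re.findall(r"[a-z]+", title.lower()))

# Candidate search terms: words that occur in most development-set references.
counts = Counter(t for title in development for t in terms(title))
query_terms = {t for t, c in counts.items() if c >= 3 and len(t) > 4}
print("derived terms:", sorted(query_terms))

# Validation: does OR-ing the derived terms retrieve every validation reference?
recall = sum(bool(query_terms & terms(title)) for title in validation) / len(validation)
print("recall on validation set:", recall)
```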

  5. A conjugate gradient method with descent properties under strong Wolfe line search

    NASA Astrophysics Data System (ADS)

    Zull, N.; ‘Aini, N.; Shoid, S.; Ghani, N. H. A.; Mohamed, N. S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the optimization methods that are often used in practical applications. The continuous and numerous studies conducted on the CG method have led to vast improvements in its convergence properties and efficiency. In this paper, a new CG method possessing the sufficient descent and global convergence properties is proposed. The efficiency of the new CG algorithm relative to the existing CG methods is evaluated by testing them all on a set of test functions using MATLAB. The tests are measured in terms of iteration numbers and CPU time under strong Wolfe line search. Overall, this new method performs efficiently and comparable to the other famous methods.
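
    Since the abstract does not give the new method's coefficient formula, the sketch below shows the class of algorithm it belongs to: a Polak-Ribiere(+) nonlinear conjugate gradient method using scipy's strong-Wolfe line search, run on the Rosenbrock test function. It illustrates the general scheme only, not the specific CG variant proposed in the paper.

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def nonlinear_cg(f, grad, x0, max_iter=500, tol=1e-6):
    """Polak-Ribiere(+) conjugate gradient with a strong-Wolfe line search."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        alpha, *_ = line_search(f, grad, x, d, gfk=g)  # enforces strong Wolfe conditions
        if alpha is None:          # line search failed: restart along steepest descent
            d = -g
            alpha, *_ = line_search(f, grad, x, d, gfk=g)
            if alpha is None:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)  # PR+ formula with restart
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

print(nonlinear_cg(rosen, rosen_der, np.array([-1.2, 1.0])))  # should approach [1, 1]
```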

  6. Comment to: "Martini straight: Boosting performance using a shorter cutoff and GPUs" by D.H. de Jong, S. Baoukina, H.I. Ingólfsson, and S.J. Marrink

    NASA Astrophysics Data System (ADS)

    Benedetti, Florian; Loison, Claire

    2018-07-01

    In a recent study published in this journal, de Jong et al. investigated the efficiency improvement reached thanks to new parameter sets for molecular dynamics simulations using the coarse-grained Martini force-field and its implementation in the Gromacs simulation package (de Jong et al., 2016). The advantages of the new sets are the computational efficiency and the conservation of the equilibrium properties of the Martini model. This article reports additional tests on the total energy conservation for zwitterionic lipid bilayer membranes. The results show that the conclusion by de Jong et al. on the total energy conservation of the new parameter sets, based on short simulations and homogeneous systems, is not generalizable to long lipid bilayer simulations. The energy conservation of the three parameter sets compared in their article (common, new and new-RF) differ if one analyzes sufficiently long trajectories or if one measures the total energy drifts. In practice, when total energy conservation is important for a Martini lipid bilayer simulation, we would consider either keeping the common set, or carefully testing the new-RF set for energy leaks or sources before production use.

  7. Third-Order Incremental Dual-Basis Set Zero-Buffer Approach: An Accurate and Efficient Way To Obtain CCSD and CCSD(T) Energies.

    PubMed

    Zhang, Jun; Dolg, Michael

    2013-07-09

    An efficient way to obtain accurate CCSD and CCSD(T) energies for large systems, i.e., the third-order incremental dual-basis set zero-buffer approach (inc3-db-B0), has been developed and tested. This approach combines the powerful incremental scheme with the dual-basis set method and, along with the newly proposed K-means clustering (KM) method and zero-buffer (B0) approximation, can obtain very accurate absolute and relative energies efficiently. We tested the approach for 10 systems of different chemical nature, i.e., intermolecular interactions including hydrogen bonding, dispersion interaction, and halogen bonding; an intramolecular rearrangement reaction; aliphatic and conjugated hydrocarbon chains; three compact covalent molecules; and a water cluster. The results show that the errors are <1.94 kJ/mol (or 0.46 kcal/mol) for relative energies and <0.0026 hartree for absolute energies. By parallelization, our approach can be applied to molecules of more than 30 atoms and more than 100 correlated electrons with a high-quality basis set such as cc-pVDZ or cc-pVTZ, saving computational cost by a factor of more than 10-20 compared to a traditional implementation. The physical reasons for the success of the inc3-db-B0 approach are also analyzed.

  8. Optimal Settings for the Noncontact Holmium:YAG Stone Fragmentation Popcorn Technique.

    PubMed

    Emiliani, Esteban; Talso, Michele; Cho, Sung-Yong; Baghdadi, Mohammed; Mahmoud, Sadam; Pinheiro, Hugo; Traxer, Olivier

    2017-09-01

    The purpose of this study was to evaluate the popcorn technique using a wide range of holmium laser settings and fiber sizes in a systematic in vitro assessment. Evaluations were done with 4 artificial stones in a collection tube. A fixed ureteroscope was inserted through a ureteral access sheath to provide constant irrigation flow and the laser was placed 1 mm from the bottom. Combinations of 0.5 to 1.5 J, 10 to 20 and 40 Hz, and long and short pulses were tested for 2 and 4 minutes. We used 273 and 365 μm laser fibers. All tests were repeated 3 times. The stones were weighed before and after the experiments to evaluate the setting efficiency. Significant predictors of a highly efficient technique were assessed. A total of 144 tests were performed. Mean starting weight of the stones was 0.23 gm, which was consistent among the groups. After the experiment the median weight difference was 0.07 gm (range 0.01 to 0.24). When designating a 50% reduction in stone volume as the threshold indicating high efficiency, the significant predictors of an efficient popcorn technique were a long pulse (OR 2.7, 95% CI 1.05-7.15), a longer duration (OR 11.4, 95% CI 3.88-33.29), a small (273 μm) laser fiber (OR 0.23, 95% CI 0.08-0.70) and higher power (W) (OR 1.14, 95% CI 1.09-1.20). Higher energy, a longer pulse, frequencies higher than 10 Hz, a longer duration and a smaller laser fiber predict a popcorn technique that is more efficient at reducing stone volume. Copyright © 2017 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  9. ITC Guidelines on Quality Control in Scoring, Test Analysis, and Reporting of Test Scores

    ERIC Educational Resources Information Center

    Allalouf, Avi

    2014-01-01

    The Quality Control (QC) Guidelines are intended to increase the efficiency, precision, and accuracy of the scoring, analysis, and reporting process of testing. The QC Guidelines focus on large-scale testing operations where multiple forms of tests are created for use on set dates. However, they may also be used for a wide variety of other testing…

  10. DEVELOPMENT OF DIAGNOSTIC ANALYTICAL AND MECHANICAL ABILITY TESTS THROUGH FACET DESIGN AND ANALYSIS.

    ERIC Educational Resources Information Center

    Guttman, Louis; Schlesinger, I. M.

    Methodology based on facet theory (modified set theory) was used in test construction and analysis to provide an efficient tool of evaluation for vocational guidance and vocational school use. The type of test development undertaken was limited to the use of nonverbal pictorial items. Items for testing ability to identify elements belonging to an…

  11. Measurement of Energy Performances for General-Structured Servers

    NASA Astrophysics Data System (ADS)

    Liu, Ren; Chen, Lili; Li, Pengcheng; Liu, Meng; Chen, Haihong

    2017-11-01

    Energy consumption of servers in data centers increases rapidly along with the wide application of the Internet and connected devices. To improve the energy efficiency of servers, voluntary or mandatory energy efficiency programs for servers, including voluntary label programs or mandatory energy performance standards, have been adopted or are being prepared in the US, EU and China. However, the energy performance of servers and testing methods for servers are not well defined. This paper presents matrices to measure the energy performances of general-structured servers. The impacts of various components of servers on their energy performances are also analyzed. Based on a set of normalized workloads, the authors propose a standard method for testing the energy efficiency of servers. Pilot tests are conducted to assess the energy performance testing methods of servers. The findings of the tests are discussed in the paper.

  12. Comparison of a quasi-3D analysis and experimental performance for three compact radial turbines

    NASA Technical Reports Server (NTRS)

    Simonyi, P. S.; Boyle, R. J.

    1991-01-01

    An experimental aerodynamic evaluation of three compact radial turbine builds was performed. Two rotors which were 40-50 percent shorter in axial length than conventional state-of-the-art radial rotors were tested. A single nozzle design was used. One rotor was tested with the nozzle at two stagger angle settings. A second rotor was tested with the nozzle in only the closed down setting. Experimental results were compared to predicted results from a quasi-3D inviscid and boundary layer analysis, called MTSB (Meridl/Tsonic/Blayer). This analysis was used to predict turbine performance. It has previously been calibrated only for axial, not radial, turbomachinery. The predicted and measured efficiencies were compared at the design point for the three turbines. At the design points the analysis overpredicted the efficiency by less than 1.7 points. Comparisons were also made at off-design operating points. The results of these comparisons showed the importance of an accurate clearance model for efficiency predictions and also that there are deficiencies in the incidence loss model used.

  13. Comparison of a quasi-3D analysis and experimental performance for three compact radial turbines

    NASA Technical Reports Server (NTRS)

    Simonyi, P. S.; Boyle, R. J.

    1991-01-01

    An experimental aerodynamic evaluation of three compact radial turbine builds was performed. Two rotors which were 40 to 50 percent shorter in axial length than conventional state-of-the-art radial rotors were tested. A single nozzle design was used. One rotor was tested with the nozzle at two stagger angle settings. A second rotor was tested with the nozzle in only the closed down setting. Experimental results were compared to predicted results from a quasi-3D inviscid and boundary layer analysis, called Meridl/Tsonic/Blayer (MTSB). This analysis was used to predict turbine performance. It has previously been calibrated only for axial, not radial, turbomachinery. The predicted and measured efficiencies were compared at the design point for the three turbines. At the design points the analysis overpredicted the efficiency by less than 1.7 points. Comparisons were also made at off-design operating points. The results of these comparisons showed the importance of an accurate clearance model for efficiency predictions and also that there are deficiencies in the incidence loss model used.

  14. Technology’s present situation and the development prospects of energy efficiency monitoring as well as performance testing & analysis for process flow compressors

    NASA Astrophysics Data System (ADS)

    Li, L.; Zhao, Y.; Wang, L.; Yang, Q.; Liu, G.; Tang, B.; Xiao, J.

    2017-08-01

    In this paper, the background of performance testing of in-service process flow compressors installed at user sites is introduced, the main technical barriers faced in field testing are summarized, and the factors that cause the real efficiencies of most process flow compressors to be lower than those guaranteed by the manufacturer are analysed. The authors investigated the present operational situation of process flow compressors in China and found that low-efficiency operation occurs because the compressed gas is generally forced to flow back into the inlet pipe to adapt to variations in the process parameters; for example, the anti-surge valve of a centrifugal compressor is often kept open. To improve the operating efficiency of process compressors, energy efficiency monitoring technology is reviewed and some suggestions are proposed, which form the basis of research on energy efficiency evaluation and/or labelling of process compressors.

  15. 78 FR 35898 - Decision and Order Granting a Waiver to Samsung From the Department of Energy Residential...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-14

    ... Refrigerator-Freezer Test Procedures AGENCY: Office of Energy Efficiency and Renewable Energy, Department of... from the DOE electric refrigerator and refrigerator-freezer test procedures for specific basic models set forth in its petition for waiver. In its petition, Samsung provides an alternate test procedure...

  16. 78 FR 35901 - Decision and Order Granting a Waiver to Samsung From the Department of Energy Residential...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-14

    ... Refrigerator-Freezer Test Procedures AGENCY: Office of Energy Efficiency and Renewable Energy, Department of... from the DOE electric refrigerator and refrigerator-freezer test procedures for specific basic models set forth in its petition for waiver. In its petition, Samsung provides an alternate test procedure...

  17. A generalized memory test algorithm

    NASA Technical Reports Server (NTRS)

    Milner, E. J.

    1982-01-01

    A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
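
    The record does not give the algorithm itself, so the following is only a simplified sketch of the idea it describes: walk a single set (and then cleared) bit through every word and confirm that no other word is disturbed. Word size, block size and function names here are illustrative.

      # Simplified walking-bit sketch (not the exact NASA algorithm):
      # set and clear each bit of each word, verifying the rest of the block.
      def walking_bit_test(memory, word_bits=16):
          mask = (1 << word_bits) - 1
          for addr in range(len(memory)):
              saved = list(memory)                    # snapshot of the block
              for bit in range(word_bits):
                  memory[addr] = 1 << bit             # set a single bit
                  if memory[addr] != 1 << bit:
                      return False, addr, bit
                  memory[addr] = mask & ~(1 << bit)   # clear only that bit
                  if memory[addr] != mask & ~(1 << bit):
                      return False, addr, bit
                  # no other word may have changed
                  if any(memory[a] != saved[a] for a in range(len(memory)) if a != addr):
                      return False, addr, bit
              memory[addr] = saved[addr]              # restore the original word
          return True, None, None

      block = [0] * 256                               # a small block of 16-bit words
      print(walking_bit_test(block))                  # (True, None, None)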

  18. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.
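
    Parallel efficiency in scaling studies of this kind is conventionally defined as the speed-up (single-processor time divided by parallel time) divided by the number of processors. A minimal illustration with invented timings, not POY measurements:

      # Speed-up and parallel efficiency from wall-clock times.
      def speedup(t_serial, t_parallel):
          return t_serial / t_parallel

      def parallel_efficiency(t_serial, t_parallel, n_procs):
          return speedup(t_serial, t_parallel) / n_procs

      # Invented numbers, purely illustrative:
      t1, tp, p = 3600.0, 300.0, 16
      print(f"speed-up = {speedup(t1, tp):.1f}, "
            f"efficiency = {parallel_efficiency(t1, tp, p):.2f}")
      # speed-up = 12.0, efficiency = 0.75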

  19. Effect of wheelchair design on wheeled mobility and propulsion efficiency in less-resourced settings

    PubMed Central

    2017-01-01

    Background Wheelchair research includes both qualitative and quantitative approaches, primarily focuses on functionality and skill performance and is often limited to short testing periods. This is the first study to use the combination of a performance test (i.e. wheelchair propulsion test) and a multiple-day mobility assessment to evaluate wheelchair designs in rural areas of a developing country. Objectives Test the feasibility of using wheel-mounted accelerometers to document bouts of wheeled mobility data in rural settings and use these data to compare how patients respond to different wheelchair designs. Methods A quasi-experimental, pre- and post-test design was used to test the differences between locally manufactured wheelchairs (push rim and tricycle) and an imported intervention product (dual-lever propulsion wheelchair). A one-way repeated measures analysis of variance was used to interpret propulsion and wheeled mobility data. Results There were no statistical differences in bouts of mobility between the locally manufactured and intervention product, which was explained by high amounts of variability within the data. With regard to the propulsion test, push rim users were significantly more efficient when using the intervention product compared with tricycle users. Conclusion Use of wheel-mounted accelerometers as a means to test user mobility proved to be a feasible methodology in rural settings. Variability in wheeled mobility data could be decreased with longer acclimatisation periods. The data suggest that push rim users experience an easier transition to a dual-lever propulsion system. PMID:28936416

  20. Effect of wheelchair design on wheeled mobility and propulsion efficiency in less-resourced settings.

    PubMed

    Stanfill, Christopher J; Jensen, Jody L

    2017-01-01

    Wheelchair research includes both qualitative and quantitative approaches, primarily focuses on functionality and skill performance and is often limited to short testing periods. This is the first study to use the combination of a performance test (i.e. wheelchair propulsion test) and a multiple-day mobility assessment to evaluate wheelchair designs in rural areas of a developing country. Test the feasibility of using wheel-mounted accelerometers to document bouts of wheeled mobility data in rural settings and use these data to compare how patients respond to different wheelchair designs. A quasi-experimental, pre- and post-test design was used to test the differences between locally manufactured wheelchairs (push rim and tricycle) and an imported intervention product (dual-lever propulsion wheelchair). A one-way repeated measures analysis of variance was used to interpret propulsion and wheeled mobility data. There were no statistical differences in bouts of mobility between the locally manufactured and intervention product, which was explained by high amounts of variability within the data. With regard to the propulsion test, push rim users were significantly more efficient when using the intervention product compared with tricycle users. Use of wheel-mounted accelerometers as a means to test user mobility proved to be a feasible methodology in rural settings. Variability in wheeled mobility data could be decreased with longer acclimatisation periods. The data suggest that push rim users experience an easier transition to a dual-lever propulsion system.

  1. Practice makes proficient: pigeons (Columba livia) learn efficient routes on full-circuit navigational traveling salesperson problems.

    PubMed

    Baron, Danielle M; Ramirez, Alejandro J; Bulitko, Vadim; Madan, Christopher R; Greiner, Ariel; Hurd, Peter L; Spetch, Marcia L

    2015-01-01

    Visiting multiple locations and returning to the start via the shortest route, referred to as the traveling salesman (or salesperson) problem (TSP), is a valuable skill for both humans and non-humans. In the current study, pigeons were trained with increasing set sizes of up to six goals, with each set size presented in three distinct configurations, until consistency in route selection emerged. After training at each set size, the pigeons were tested with two novel configurations. All pigeons acquired routes that were significantly more efficient (i.e., shorter in length) than expected by chance selection of the goals. On average, the pigeons also selected routes that were more efficient than expected based on a local nearest-neighbor strategy and were as efficient as the average route generated by a crossing-avoidance strategy. Analysis of the routes taken indicated that they conformed to both a nearest-neighbor and a crossing-avoidance strategy significantly more often than expected by chance. Both the time taken to visit all goals and the actual distance traveled decreased from the first to the last trials of training in each set size. On the first trial with novel configurations, average efficiency was higher than chance, but was not higher than expected from a nearest-neighbor or crossing-avoidance strategy. These results indicate that pigeons can learn to select efficient routes on a TSP problem.
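
    The nearest-neighbor benchmark used above can be made concrete in a few lines. The sketch below builds a closed route by always visiting the closest unvisited goal and compares its length with a random ordering; the goal coordinates are invented for illustration.

      # Nearest-neighbor route versus a random route on a small goal set.
      import math, random

      def route_length(points, order):
          # closed tour: the route returns to the start, as in the full-circuit TSP
          return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
                     for i in range(len(order)))

      def nearest_neighbor(points, start=0):
          unvisited = set(range(len(points))) - {start}
          order = [start]
          while unvisited:
              last = points[order[-1]]
              nearest = min(unvisited, key=lambda j: math.dist(last, points[j]))
              order.append(nearest)
              unvisited.remove(nearest)
          return order

      goals = [(0, 0), (3, 1), (1, 4), (5, 5), (2, 2), (4, 0)]   # invented layout
      nn_order = nearest_neighbor(goals)
      random_order = random.sample(range(len(goals)), len(goals))
      print(route_length(goals, nn_order), route_length(goals, random_order))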

  2. Feasibility of an appliance energy testing and labeling program for Sri Lanka

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biermayer, Peter; Busch, John; Hakim, Sajid

    2000-04-01

    A feasibility study evaluated the costs and benefits of establishing a program for testing, labeling and setting minimum efficiency standards for appliances and lighting in Sri Lanka. The feasibility study included: refrigerators, air-conditioners, fluorescent lighting (ballasts & CFLs), ceiling fans, motors, and televisions.

  3. Individual sequences in large sets of gene sequences may be distinguished efficiently by combinations of shared sub-sequences

    PubMed Central

    Gibbs, Mark J; Armstrong, John S; Gibbs, Adrian J

    2005-01-01

    Background Most current DNA diagnostic tests for identifying organisms use specific oligonucleotide probes that are complementary in sequence to, and hence only hybridise with the DNA of one target species. By contrast, in traditional taxonomy, specimens are usually identified by 'dichotomous keys' that use combinations of characters shared by different members of the target set. Using one specific character for each target is the least efficient strategy for identification. Using combinations of shared bisectionally-distributed characters is much more efficient, and this strategy is most efficient when they separate the targets in a progressively binary way. Results We have developed a practical method for finding minimal sets of sub-sequences that identify individual sequences, and could be targeted by combinations of probes, so that the efficient strategy of traditional taxonomic identification could be used in DNA diagnosis. The sizes of minimal sub-sequence sets depended mostly on sequence diversity and sub-sequence length and interactions between these parameters. We found that 201 distinct cytochrome oxidase subunit-1 (CO1) genes from moths (Lepidoptera) were distinguished using only 15 sub-sequences 20 nucleotides long, whereas only 8–10 sub-sequences 6–10 nucleotides long were required to distinguish the CO1 genes of 92 species from the 9 largest orders of insects. Conclusion The presence/absence of sub-sequences in a set of gene sequences can be used like the questions in a traditional dichotomous taxonomic key; hybridisation probes complementary to such sub-sequences should provide a very efficient means for identifying individual species, subtypes or genotypes. Sequence diversity and sub-sequence length are the major factors that determine the numbers of distinguishing sub-sequences in any set of sequences. PMID:15817134
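
    Finding a small set of sub-sequences whose presence/absence patterns separate every pair of sequences is essentially a set-cover style problem. The authors' own algorithm is not described in this record, so the sketch below is only a greedy illustration of the idea on toy sequences; the sequences and the k-mer length are invented.

      # Greedy sketch (not the authors' method): repeatedly pick the k-mer
      # whose presence/absence separates the most still-indistinguishable pairs.
      from itertools import combinations

      def kmers(seq, k):
          return {seq[i:i + k] for i in range(len(seq) - k + 1)}

      def greedy_distinguishing_set(seqs, k=4):
          profiles = [kmers(s, k) for s in seqs]
          candidates = set.union(*profiles)
          unresolved = set(combinations(range(len(seqs)), 2))
          chosen = []
          while unresolved:
              # a k-mer separates a pair if it occurs in exactly one of the two
              best = max(candidates,
                         key=lambda m: sum((m in profiles[i]) != (m in profiles[j])
                                           for i, j in unresolved))
              resolved = {(i, j) for i, j in unresolved
                          if (best in profiles[i]) != (best in profiles[j])}
              if not resolved:
                  break                 # remaining pairs have identical k-mer profiles
              chosen.append(best)
              unresolved -= resolved
          return chosen

      toy = ["ATGCGTACGT", "ATGCGTTTTT", "CCGGAATGCA", "TTTTGGGCCA"]
      print(greedy_distinguishing_set(toy, k=4))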

  4. Automation of learning-set testing - The video-task paradigm

    NASA Technical Reports Server (NTRS)

    Washburn, David A.; Hopkins, William D.; Rumbaugh, Duane M.

    1989-01-01

    Researchers interested in studying discrimination learning in primates have typically utilized variations in the Wisconsin General Test Apparatus (WGTA). In the present experiment, a new testing apparatus for the study of primate learning is proposed. In the video-task paradigm, rhesus monkeys (Macaca mulatta) respond to computer-generated stimuli by manipulating a joystick. Using this apparatus, discrimination learning-set data for 2 monkeys were obtained. Performance on Trial 2 exceeded 80 percent within 200 discrimination learning problems. These data illustrate the utility of the video-task paradigm in comparative research. Additionally, the efficient learning and rich data that were characteristic of this study suggest several advantages of the present testing paradigm over traditional WGTA testing.

  5. REVERSAL LEARNING SET AND FUNCTIONAL EQUIVALENCE IN CHILDREN WITH AND WITHOUT AUTISM

    PubMed Central

    Lionello-DeNolf, Karen M.; McIlvane, William J.; Canovas, Daniela S.; de Souza, Deisy G.; Barros, Romariz S.

    2009-01-01

    To evaluate whether children with and without autism could exhibit (a) functional equivalence in the course of yoked repeated-reversal training and (b) reversal learning set, 6 children, in each of two experiments, were exposed to simple discrimination contingencies with three sets of stimuli. The discriminative functions of the set members were yoked and repeatedly reversed. In Experiment 1, all the children (of preschool age) showed gains in the efficiency of reversal learning across reversal problems and behavior that suggested formation of functional equivalence. In Experiment 2, 3 nonverbal children with autism exhibited strong evidence of reversal learning set and 2 showed evidence of functional equivalence. The data suggest a possible relationship between efficiency of reversal learning and functional equivalence test outcomes. Procedural variables may prove important in assessing the potential of young or nonverbal children to classify stimuli on the basis of shared discriminative functions. PMID:20186287

  6. Study on the removal efficiency of UF membranes using bacteriophages in bench-scale and semi-technical scale.

    PubMed

    Kreissel, K; Bösl, M; Lipp, P; Franzreb, M; Hambsch, B

    2012-01-01

    To determine the removal efficiency of ultrafiltration (UF) membranes for nano-particles in the size range of viruses, the state of the art uses challenge tests with virus-spiked water. This work focuses on bench-scale and semi-technical scale experiments. Different experimental parameters influencing the removal efficiency of the tested UF membrane modules were analyzed and evaluated for bench- and semi-technical scale experiments. Organic matter in the water matrix highly influenced the removal of the tested bacteriophages MS2 and phiX174. Less membrane fouling (low ΔTMP) led to reduced phage removal. Increased flux positively affected phage removal in natural waters. The tested bacteriophages MS2 and phiX174 revealed different removal properties. MS2, which is widely used as a model organism to determine virus removal efficiencies of membranes, mostly showed a better removal than phiX174 for the natural water qualities tested. It seems that MS2 is possibly a less conservative surrogate for human enteric virus removal than phiX174. In bench-scale experiments, log removal values (LRV) of 2.5-6.0 for MS2 and 2.5-4.5 for phiX174 were obtained over the examined range of parameters. Phage removal obtained with differently fabricated semi-technical modules was quite variable for comparable parameter settings, indicating that module fabrication can lead to differing results. Potting temperature and module size were identified as influencing factors. In conclusion, careful attention has to be paid to the choice of experimental settings and module potting when using bench-scale or semi-technical scale experiments for UF membrane challenge tests.
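
    The log removal value (LRV) reported for such challenge tests is the base-10 logarithm of the ratio of feed to permeate phage concentration, so an LRV of 4 corresponds to 99.99% removal. A minimal worked example with invented plaque counts, not the study's data:

      # LRV = log10(feed concentration / permeate concentration).
      import math

      def log_removal_value(feed_pfu_per_ml, permeate_pfu_per_ml):
          return math.log10(feed_pfu_per_ml / permeate_pfu_per_ml)

      # Invented example: 1e7 PFU/mL in the feed, 1e3 PFU/mL in the permeate.
      print(f"LRV = {log_removal_value(1e7, 1e3):.1f}")   # LRV = 4.0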

  7. International Experience in Standards and Labeling Programs for Rice Cookers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Nan; Zheng, Nina

    China has had an active program on energy efficiency standards for household appliances since the mid-1990s. Rice cookers were among the first products subject to such mandatory regulation, since they are among the most prevalent electric appliances in Chinese households. Since it was first introduced in 1989, the minimum energy efficiency standard for rice cookers has not been revised; therefore, the potential for energy saving is considerable. Initial analysis from CNIS indicates that potential carbon savings are likely to reach 7.6 million tons of CO2 by the 10th year of the standard implementation. Since September 2007, CNIS has been working with various groups to develop the new standard for rice cookers. With The Energy Foundation's support, LBNL has assisted CNIS in the revision of the minimum energy efficiency standard for rice cookers that is expected to be effective in 2009. Specifically, work has been in the following areas: assistance in developing a consumer survey on usage patterns of rice cookers, review of international standards, review of international test procedures, comparison of the international standards and test procedures, and assessment of technical options for reducing energy use. This report summarizes the findings of the review of international standards and of technical options for reducing energy consumption. The report consists of an overview of rice cooker standards and labeling programs and testing procedures in Hong Kong, South Korea, Japan and Thailand, and a case study of Japan's development of energy-efficient rice cooker technologies and rice cooker efficiency programs. The results from the analysis can be summarized as follows: Hong Kong has a Voluntary Energy Efficiency Labeling scheme for electric rice cookers, initiated in 2001, with a revision implemented in 2007; South Korea has both MEPS and a Mandatory Energy Efficiency Label targeting the same category of rice cookers as Hong Kong; Thailand's voluntary endorsement labeling program is similar to Hong Kong's in program design but has 5 efficiency grades; Japan's program is distinct in its adoption of the 'Top Runner' approach, in which future efficiency standards are set based on the efficiency levels of the most efficient product in the current domestic market. Although the standards are voluntary, penalties can still be invoked if the average efficiency target is not met. Both Hong Kong's and South Korea's tests involve pouring water into the inner pot equal to 80% of its rated volume; however, white rice is used as a load in Hong Kong's tests whereas no rice is used in South Korea's. In Japan's case, the water level specified by the manufacturers is used, and milled rice is used as a load in only part of the tests. Moreover, Japan does not conduct a heat efficiency test, but its energy consumption measurement tests are much more complex, with 4 different tests conducted to determine the annual average energy consumption. Hong Kong and Thailand both set a Minimum Allowable Heat Efficiency for different rated wattages; the energy efficiency requirements are identical except that the minimum heat efficiency in Thailand is 1 percentage point higher for all rated power categories. In South Korea, MEPS and the label's energy efficiency grades are determined by the rice cooker's Rated Energy Efficiency for induction, non-induction, pressure and non-pressure rice cookers. Japan's target standard values are set for electromagnetic induction heating products and non-electromagnetic induction heating products by different sizes of rice cookers, with specific formulas used by type and size depending on the mass of water evaporated by the rice cookers. Japan has been the leading country in technology development for various types of rice cookers and has developed concrete energy efficiency standards for them. However, as consumers in Japan emphasize the deliciousness of cooked rice over other factors, many types of models were developed to improve the taste of cooked rice. Nonetheless, the efficiency of electromagnetic induction heating (IH) rice cookers in warm mode improved by approximately 12 percent from 1993 to 2004 due to the 'low temperature warming method' developed by manufacturers. The Energy Conservation Center of Japan (IEEJ) regularly releases an energy-saving products database on the web, in which the energy-saving performance of each product is listed and ranked. Energy saving in rice cookers mostly rests with insulation of the pot. Technology developed to improve the energy efficiency of rice cookers includes providing vacuum layers on both sides of the pot, using copper-plated materials, and a double stainless-steel layer lid that can be heated, with steam running between the two layers to speed the heating process.

  8. An efficient, movable single-particle detector for use in cryogenic ultra-high vacuum environments.

    PubMed

    Spruck, Kaija; Becker, Arno; Fellenberger, Florian; Grieser, Manfred; von Hahn, Robert; Klinkhamer, Vincent; Novotný, Oldřich; Schippers, Stefan; Vogel, Stephen; Wolf, Andreas; Krantz, Claude

    2015-02-01

    A compact, highly efficient single-particle counting detector for ions of keV/u kinetic energy, movable by a long-stroke mechanical translation stage, has been developed at the Max-Planck-Institut für Kernphysik (Max Planck Institute for Nuclear Physics, MPIK). Both the detector and the translation mechanics can operate at ambient temperatures down to ∼10 K and consist fully of ultra-high vacuum compatible, high-temperature bakeable, and non-magnetic materials. The set-up is designed to meet the technical demands of MPIK's Cryogenic Storage Ring. We present a series of functional tests that demonstrate full suitability for this application and characterise the set-up with regard to its particle detection efficiency.

  9. Assessment of Linear Finite-Difference Poisson-Boltzmann Solvers

    PubMed Central

    Wang, Jun; Luo, Ray

    2009-01-01

    CPU time and memory usage are two vital issues that any numerical solvers for the Poisson-Boltzmann equation have to face in biomolecular applications. In this study we systematically analyzed the CPU time and memory usage of five commonly used finite-difference solvers with a large and diversified set of biomolecular structures. Our comparative analysis shows that modified incomplete Cholesky conjugate gradient and geometric multigrid are the most efficient in the diversified test set. For the two efficient solvers, our test shows that their CPU times increase approximately linearly with the numbers of grids. Their CPU times also increase almost linearly with the negative logarithm of the convergence criterion at very similar rate. Our comparison further shows that geometric multigrid performs better in the large set of tested biomolecules. However, modified incomplete Cholesky conjugate gradient is superior to geometric multigrid in molecular dynamics simulations of tested molecules. We also investigated other significant components in numerical solutions of the Poisson-Boltzmann equation. It turns out that the time-limiting step is the free boundary condition setup for the linear systems for the selected proteins if the electrostatic focusing is not used. Thus, development of future numerical solvers for the Poisson-Boltzmann equation should balance all aspects of the numerical procedures in realistic biomolecular applications. PMID:20063271

  10. Relations between basic and specific motor abilities and player quality of young basketball players.

    PubMed

    Marić, Kristijan; Katić, Ratko; Jelicić, Mario

    2013-05-01

    Subjects from 5 first league clubs from Herzegovina were tested with the purpose of determining the relations of basic and specific motor abilities, as well as the effect of specific abilities on player efficiency in young basketball players (cadets). A battery of 12 tests assessing basic motor abilities and 5 specific tests assessing basketball efficiency were used on a sample of 83 basketball players. Two significant canonical correlations, i.e. linear combinations explained the relation between the set of twelve variables of basic motor space and five variables of situational motor abilities. Underlying the first canonical linear combination is the positive effect of the general motor factor, predominantly defined by jumping explosive power, movement speed of the arms, static strength of the arms and coordination, on specific basketball abilities: movement efficiency, the power of the overarm throw, shooting and passing precision, and the skill of handling the ball. The impact of basic motor abilities of precision and balance on specific abilities of passing and shooting precision and ball handling is underlying the second linear combination. The results of regression correlation analysis between the variable set of specific motor abilities and game efficiency have shown that the ability of ball handling has the largest impact on player quality in basketball cadets, followed by shooting precision and passing precision, and the power of the overarm throw.

  11. A novel semi-transductive learning framework for efficient atypicality detection in chest radiographs

    NASA Astrophysics Data System (ADS)

    Alzubaidi, Mohammad; Balasubramanian, Vineeth; Patel, Ameet; Panchanathan, Sethuraman; Black, John A., Jr.

    2012-03-01

    Inductive learning refers to machine learning algorithms that learn a model from a set of training data instances. Any test instance is then classified by comparing it to the learned model. When the set of training instances lend themselves well to modeling, the use of a model substantially reduces the computation cost of classification. However, some training data sets are complex, and do not lend themselves well to modeling. Transductive learning refers to machine learning algorithms that classify test instances by comparing them to all of the training instances, without creating an explicit model. This can produce better classification performance, but at a much higher computational cost. Medical images vary greatly across human populations, constituting a data set that does not lend itself well to modeling. Our previous work showed that the wide variations seen across training sets of "normal" chest radiographs make it difficult to successfully classify test radiographs with an inductive (modeling) approach, and that a transductive approach leads to much better performance in detecting atypical regions. The problem with the transductive approach is its high computational cost. This paper develops and demonstrates a novel semi-transductive framework that can address the unique challenges of atypicality detection in chest radiographs. The proposed framework combines the superior performance of transductive methods with the reduced computational cost of inductive methods. Our results show that the proposed semitransductive approach provides both effective and efficient detection of atypical regions within a set of chest radiographs previously labeled by Mayo Clinic expert thoracic radiologists.

  12. 75 FR 76968 - Energy Conservation Program for Consumer Products: Decision and Order Granting a Waiver to the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-10

    ... III of the Energy Policy and Conservation Act (EPCA) sets forth a variety of provisions designed to... procedures that are reasonably designed to produce results that measure energy efficiency, energy use, or... more design characteristics that prevent testing according to the prescribed test procedure, or (2...

  13. 76 FR 56339 - Energy Conservation Program for Consumer Products: Test Procedures for Residential Furnaces and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-13

    ...). The new version of this IEC standard includes a number of methodological changes designed to increase... codified) sets forth a variety of provisions designed to improve energy efficiency and established the... prescribed or amended under this section shall be reasonably designed to produce test results which measure...

  14. 75 FR 29823 - Energy Conservation Program for Consumer Products: Test Procedures for Refrigerators...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-27

    ...'') sets forth a variety of provisions designed to improve energy efficiency. (All references to EPCA refer... amended under this section shall be reasonably designed to produce test results which measure energy... a cabinet designed for the refrigerated storage of food at temperatures above 32 [deg]F and below 39...

  15. Set size manipulations reveal the boundary conditions of perceptual ensemble learning.

    PubMed

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2017-11-01

    Recent evidence suggests that observers can grasp patterns of feature variations in the environment with surprising efficiency. During visual search tasks where all distractors are randomly drawn from a certain distribution rather than all being homogeneous, observers are capable of learning highly complex statistical properties of distractor sets. After only a few trials (learning phase), the statistical properties of distributions - mean, variance and crucially, shape - can be learned, and these representations affect search during a subsequent test phase (Chetverikov, Campana, & Kristjánsson, 2016). To assess the limits of such distribution learning, we varied the information available to observers about the underlying distractor distributions by manipulating set size during the learning phase in two experiments. We found that robust distribution learning only occurred for large set sizes. We also used set size to assess whether the learning of distribution properties makes search more efficient. The results reveal how a certain minimum of information is required for learning to occur, thereby delineating the boundary conditions of learning of statistical variation in the environment. However, the benefits of distribution learning for search efficiency remain unclear. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Generating quality word sense disambiguation test sets based on MeSH indexing.

    PubMed

    Fan, Jung-Wei; Friedman, Carol

    2009-11-14

    Word sense disambiguation (WSD) determines the correct meaning of a word that has more than one meaning, and is a critical step in biomedical natural language processing, as interpretation of information in text can be correct only if the meanings of their component terms are correctly identified first. Quality evaluation sets are important to WSD because they can be used as representative samples for developing automatic programs and as referees for comparing different WSD programs. To help create quality test sets for WSD, we developed a MeSH-based automatic sense-tagging method that preferentially annotates terms being topical of the text. Preliminary results were promising and revealed important issues to be addressed in biomedical WSD research. We also suggest that, by cross-validating with 2 or 3 annotators, the method should be able to efficiently generate quality WSD test sets. Online supplement is available at: http://www.dbmi.columbia.edu/~juf7002/AMIA09.

  17. Evaluation of roll designs on a roll-crusher/splitter biomass harvester: test bench results

    Treesearch

    Colin Ashmore; Donald L. Sirois; Bryce J. Stokes

    1987-01-01

    Four different roll designs were evaluated on a test bench roll crusher/splitter to determine feeding and crushing efficiencies. For each design, different gap settings for the primary and secondary rolls were tested at two hydraulic cylinder pressures on the primary crush roll to determine their ability to crush and/or feed tree bolts. Seven different diameter classes...

  18. 40 CFR 63.1209 - What are the monitoring requirements?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... efficiency during the test: (1) Scrubber blowdown must be minimized during a pretest conditioning period and... select a set of operating parameters appropriate for the control device design that you determine to be a...

  19. 40 CFR 63.1209 - What are the monitoring requirements?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... efficiency during the test: (1) Scrubber blowdown must be minimized during a pretest conditioning period and... select a set of operating parameters appropriate for the control device design that you determine to be a...

  20. 40 CFR 63.1209 - What are the monitoring requirements?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... efficiency during the test: (1) Scrubber blowdown must be minimized during a pretest conditioning period and... select a set of operating parameters appropriate for the control device design that you determine to be a...

  1. 40 CFR 63.1209 - What are the monitoring requirements?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... efficiency during the test: (1) Scrubber blowdown must be minimized during a pretest conditioning period and... select a set of operating parameters appropriate for the control device design that you determine to be a...

  2. 40 CFR 63.1209 - What are the monitoring requirements?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... efficiency during the test: (1) Scrubber blowdown must be minimized during a pretest conditioning period and... select a set of operating parameters appropriate for the control device design that you determine to be a...

  3. Fan and pump noise control

    NASA Technical Reports Server (NTRS)

    Misoda, J.; Magliozzi, B.

    1973-01-01

    The development of improved, low-noise fan and pump concepts for the space shuttle is described. In addition, a set of noise design criteria for small fans and pumps was derived. The concepts and criteria were created by obtaining Apollo hardware test data to correlate and modify existing noise estimating procedures. A set of space shuttle selection criteria was used to determine preliminary fan and pump concepts. These concepts were tested and modified to obtain noise sources and characteristics which yield the design criteria and quiet, efficient space shuttle fan and pump concepts.

  4. Plasma Accelerator and Energy Conversion Research

    DTIC Science & Technology

    1982-10-29

    performance tests have been accomplished. A self-contained recirculating AMTEC device with a thermal-to-electric conversion efficiency of 19% has been...combined efficiency. These two match up particularly well, because thermionic conversion is a high-temperature technique, whereas AMTEC is limited to...EXPERIMENTAL: Samples: The samples were prepared with a high-rate DC magnetron sputtering apparatus (SFI model 1). The sample set consisted of four

  5. 40 CFR Appendix B to Part 50 - Reference Method for the Determination of Suspended Particulate Matter in the Atmosphere (High...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... filters used are specified to have a minimum collection efficiency of 99 percent for 0.3 µm (DOP... electronic timers have much better set-point resolution than mechanical timers, but require a battery backup... Collection efficiency: 99 percent minimum as measured by the DOP test (ASTM-2986) for particles of 0.3 µm...

  6. 40 CFR Appendix B to Part 50 - Reference Method for the Determination of Suspended Particulate Matter in the Atmosphere (High...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... filters used are specified to have a minimum collection efficiency of 99 percent for 0.3 µm (DOP... electronic timers have much better set-point resolution than mechanical timers, but require a battery backup... Collection efficiency: 99 percent minimum as measured by the DOP test (ASTM-2986) for particles of 0.3 µm...

  7. Filtration Efficiency of Functionalized Ceramic Foam Filters for Aluminum Melt Filtration

    NASA Astrophysics Data System (ADS)

    Voigt, Claudia; Jäckel, Eva; Taina, Fabio; Zienert, Tilo; Salomon, Anton; Wolf, Gotthard; Aneziris, Christos G.; Le Brun, Pierre

    2017-02-01

    The influence of filter surface chemistry on the filtration efficiency of cast aluminum alloys was evaluated for four different filter coating compositions (Al2O3—alumina, MgAl2O4—spinel, 3Al2O3·2SiO2—mullite, and TiO2—rutile). The tests were conducted on a laboratory scale with a filtration pilot plant, which facilitates long-term filtration tests (40 to 76 minutes). This test set-up allows the simultaneous use of two LiMCAs (before and after the filter) for the determination of the efficiency of inclusion removal. The four tested filter surface chemistries exhibited good thermal stability and mechanical robustness after 750 kg of molten aluminum had been cast. All four filter types exhibited a mean filtration efficiency of at least 80 pct. However, differences were also observed. The highest filtration efficiencies were obtained with alumina- and spinel-coated filter surfaces (>90 pct), and the complete removal of the largest inclusions (>90 µm) was observed. The efficiency was slightly lower with mullite- and rutile-coated filter surfaces, in particular for large inclusions. These observations are discussed in relation to the properties of the filters, in particular in terms of, for example, the surface roughness.
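
    With inclusion concentrations measured upstream and downstream of the filter (here by the two LiMCAs), filtration efficiency is simply the fraction of inclusions removed. A tiny illustrative computation with invented counts:

      # Filtration efficiency from inclusion concentrations measured
      # before and after the filter (numbers invented, per size class).
      def filtration_efficiency_pct(upstream, downstream):
          return 100.0 * (1.0 - downstream / upstream)

      upstream_counts = {"20-30 um": 12.0, "30-90 um": 4.0, ">90 um": 0.8}
      downstream_counts = {"20-30 um": 1.1, "30-90 um": 0.3, ">90 um": 0.0}

      for size_class, up in upstream_counts.items():
          eff = filtration_efficiency_pct(up, downstream_counts[size_class])
          print(f"{size_class}: {eff:.0f} pct removed")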

  8. The efficiency of macroporous polystyrene ion-exchange resins in natural organic matter removal from surface water

    NASA Astrophysics Data System (ADS)

    Urbanowska, Agnieszka; Kabsch-Korbutowicz, Małgorzata

    2017-11-01

    Natural water sources used for water treatment contain various organic and inorganic compounds. Surface waters are commonly contaminated with natural organic matter (NOM). NOM removal from water is important, e.g. because it lowers the risk of disinfection by-product formation during chlorination. Ion exchange with synthetic ion-exchange resins is an alternative to the typical NOM removal approaches (e.g. coagulation, adsorption or oxidation), as most NOM compounds have an anionic character. Moreover, the neutral fraction can be removed from water through adsorption on the resin surface. In this study, the applicability of two macroporous polystyrene ion-exchange resins (BD400FD and A100) to NOM removal from water was assessed, including a comparison of treatment efficiency in various process set-ups and conditions. Resin regeneration effectiveness was also determined. The results show that the examined resins can be applied to NOM removal, and the column set-up yielded better results than the batch set-up. Of the two examined resins, A100 possessed the better properties. An increase in solution pH resulted in a slight decrease in treatment efficiency, while a higher temperature improved it. Regeneration efficiency was comparable for both tested methods, but the batch set-up required fewer reagents.

  9. Measurement system for diffraction efficiency of convex gratings

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Chen, Xin-hua; Zhou, Jian-kang; Zhao, Zhi-cheng; Liu, Quan; Luo, Chao; Wang, Xiao-feng; Tang, Min-xue; Shen, Wei-min

    2017-08-01

    A measurement system for the diffraction efficiency of convex gratings is designed. The measurement system mainly comprises four components: a light source, a front system, a dispersing system that contains a convex grating, and a detector. Based on the definition and measuring principle of diffraction efficiency, the optical scheme of the measurement system is analyzed and the design result is given. Then, in order to validate the feasibility of the designed system, the measurement system is set up and the diffraction efficiency of a convex grating with an aperture of 35 mm, a radius of curvature of 72 mm, a blaze angle of 6.4°, a grating period of 2.5 μm and a working waveband of 400-900 nm is tested. Based on the GUM (Guide to the Expression of Uncertainty in Measurement), the uncertainties in the measurement results are evaluated. The measured diffraction efficiency data are compared to theoretical values calculated from the grating groove parameters obtained with an atomic force microscope and Rigorous Coupled-Wave Analysis, and the reliability of the measurement system is demonstrated. Finally, the measurement performance of the system is analyzed and tested. The results show that the testing accuracy, the testing stability and the testing repeatability are 2.5%, 0.085% and 3.5%, respectively.

  10. Lessons Learned in Designing and Implementing a Computer-Adaptive Test for English

    ERIC Educational Resources Information Center

    Burston, Jack; Neophytou, Maro

    2014-01-01

    This paper describes the lessons learned in designing and implementing a computer-adaptive test (CAT) for English. The early identification of students with weak L2 English proficiency is of critical importance in university settings that have compulsory English language course graduation requirements. The most efficient means of diagnosing the L2…

  11. Improving the Accessibility and Efficiency of Point-of-Care Diagnostics Services in Low- and Middle-Income Countries: Lean and Agile Supply Chain Management.

    PubMed

    Kuupiel, Desmond; Bawontuo, Vitalis; Mashamba-Thompson, Tivani P

    2017-11-29

    Access to point-of-care (POC) diagnostics services is essential for ensuring rapid disease diagnosis, management, control, and surveillance. POC testing services can improve access to healthcare especially where healthcare infrastructure is weak and access to quality and timely medical care is a challenge. Improving the accessibility and efficiency of POC diagnostics services, particularly in resource-limited settings, may be a promising route to improving healthcare outcomes. In this review, the accessibility of POC testing is defined as the distance/proximity to the nearest healthcare facility for POC diagnostics service. This review provides an overview of the impact of POC diagnostics on healthcare outcomes in low- and middle-income countries (LMICs) and factors contributing to the accessibility of POC testing services in LMICs, focusing on characteristics of the supply chain management and quality systems management, characteristics of the geographical location, health infrastructure, and an enabling policy framework for POC diagnostics services. Barriers and challenges related to the accessibility of POC diagnostics in LMICs were also discussed. Bearing in mind the reported barriers and challenges as well as the disease epidemiology in LMICs, we propose a lean and agile supply chain management framework for improving the accessibility and efficiency of POC diagnostics services in these settings.

  12. Improving the Accessibility and Efficiency of Point-of-Care Diagnostics Services in Low- and Middle-Income Countries: Lean and Agile Supply Chain Management

    PubMed Central

    Kuupiel, Desmond; Bawontuo, Vitalis

    2017-01-01

    Access to point-of-care (POC) diagnostics services is essential for ensuring rapid disease diagnosis, management, control, and surveillance. POC testing services can improve access to healthcare especially where healthcare infrastructure is weak and access to quality and timely medical care is a challenge. Improving the accessibility and efficiency of POC diagnostics services, particularly in resource-limited settings, may be a promising route to improving healthcare outcomes. In this review, the accessibility of POC testing is defined as the distance/proximity to the nearest healthcare facility for POC diagnostics service. This review provides an overview of the impact of POC diagnostics on healthcare outcomes in low- and middle-income countries (LMICs) and factors contributing to the accessibility of POC testing services in LMICs, focusing on characteristics of the supply chain management and quality systems management, characteristics of the geographical location, health infrastructure, and an enabling policy framework for POC diagnostics services. Barriers and challenges related to the accessibility of POC diagnostics in LMICs were also discussed. Bearing in mind the reported barriers and challenges as well as the disease epidemiology in LMICs, we propose a lean and agile supply chain management framework for improving the accessibility and efficiency of POC diagnostics services in these settings. PMID:29186013

  13. Validation of the SimSET simulation package for modeling the Siemens Biograph mCT PET scanner

    NASA Astrophysics Data System (ADS)

    Poon, Jonathan K.; Dahlbom, Magnus L.; Casey, Michael E.; Qi, Jinyi; Cherry, Simon R.; Badawi, Ramsey D.

    2015-02-01

    Monte Carlo simulation provides a valuable tool in performance assessment and optimization of system design parameters for PET scanners. SimSET is a popular Monte Carlo simulation toolkit that features fast simulation time, as well as variance reduction tools to further enhance computational efficiency. However, SimSET has lacked the ability to simulate block detectors until its most recent release. Our goal is to validate new features of SimSET by developing a simulation model of the Siemens Biograph mCT PET scanner and comparing the results to a simulation model developed in the GATE simulation suite and to experimental results. We used the NEMA NU-2 2007 scatter fraction, count rates, and spatial resolution protocols to validate the SimSET simulation model and its new features. The SimSET model overestimated the experimental results of the count rate tests by 11-23% and the spatial resolution test by 13-28%, which is comparable to previous validation studies of other PET scanners in the literature. The difference between the SimSET and GATE simulation was approximately 4-8% for the count rate test and approximately 3-11% for the spatial resolution test. In terms of computational time, SimSET performed simulations approximately 11 times faster than GATE simulations. The new block detector model in SimSET offers a fast and reasonably accurate simulation toolkit for PET imaging applications.

  14. Embryogenic callus proliferation and regeneration conditions for genetic transformation of diverse sugarcane cultivars.

    PubMed

    Basnayake, Shiromani W V; Moyle, Richard; Birch, Robert G

    2011-03-01

    Amenability to tissue culture stages required for gene transfer, selection and plant regeneration are the main determinants of genetic transformation efficiency via particle bombardment into sugarcane. The technique is moving from the experimental phase, where it is sufficient to work in a few amenable genotypes, to practical application in a diverse and changing set of elite cultivars. Therefore, we investigated the response to callus initiation, proliferation, regeneration and selection steps required for microprojectile-mediated transformation, in a diverse set of Australian sugarcane cultivars. 12 of 16 tested cultivars were sufficiently amenable to existing routine tissue-culture conditions for practical genetic transformation. Three cultivars required adjustments to 2,4-D levels during callus proliferation, geneticin concentration during selection, and/or light intensity during regeneration. One cultivar gave an extreme necrotic response in leaf spindle explants and produced no callus tissue under the tested culture conditions. It was helpful to obtain spindle explants for tissue culture from plants with good water supply for growth, especially for genotypes that were harder to culture. It was generally possible to obtain several independent transgenic plants per bombardment, with time in callus culture limited to 11-15 weeks. A caution with this efficient transformation system is that separate shoots arose from different primary transformed cells in more than half of tested calli after selection for geneticin resistance. The results across this diverse cultivar set are likely to be a useful guide to key variables for rapid optimisation of tissue culture conditions for efficient genetic transformation of other sugarcane cultivars.

  15. Selected Performance Measurements of the F-15 Active Axisymmetric Thrust-vectoring Nozzle

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Sims, Robert L.

    1998-01-01

    Flight tests recently completed at the NASA Dryden Flight Research Center evaluated performance of a hydromechanically vectored axisymmetric nozzle onboard the F-15 ACTIVE. A flight-test technique whereby strain gages installed onto engine mounts provided for the direct measurement of thrust and vector forces has proven to be extremely valuable. Flow turning and thrust efficiency, as well as nozzle static pressure distributions were measured and analyzed. This report presents results from testing at an altitude of 30,000 ft and a speed of Mach 0.9. Flow turning and thrust efficiency were found to be significantly different than predicted, and moreover, varied substantially with power setting and pitch vector angle. Results of an in-flight comparison of the direct thrust measurement technique and an engine simulation fell within the expected uncertainty bands. Overall nozzle performance at this flight condition demonstrated the F100-PW-229 thrust-vectoring nozzles to be highly capable and efficient.

  16. Selected Performance Measurements of the F-15 ACTIVE Axisymmetric Thrust-Vectoring Nozzle

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Sims, Robert L.

    1999-01-01

    Flight tests recently completed at the NASA Dryden Flight Research Center evaluated performance of a hydromechanically vectored axisymmetric nozzle onboard the F-15 ACTIVE. A flight-test technique whereby strain gages installed onto engine mounts provided for the direct measurement of thrust and vector forces has proven to be extremely valuable. Flow turning and thrust efficiency, as well as nozzle static pressure distributions were measured and analyzed. This report presents results from testing at an altitude of 30,000 ft and a speed of Mach 0.9. Flow turning and thrust efficiency were found to be significantly different than predicted, and moreover, varied substantially with power setting and pitch vector angle. Results of an in-flight comparison of the direct thrust measurement technique and an engine simulation fell within the expected uncertainty bands. Overall nozzle performance at this flight condition demonstrated the F100-PW-229 thrust-vectoring nozzles to be highly capable and efficient.

  17. OPATs: Omnibus P-value association tests.

    PubMed

    Chen, Chia-Wei; Yang, Hsin-Chou

    2017-07-10

    Combining statistical significances (P-values) from a set of single-locus association tests in genome-wide association studies is a proof-of-principle method for identifying disease-associated genomic segments, functional genes and biological pathways. We review P-value combinations for genome-wide association studies and introduce an integrated analysis tool, Omnibus P-value Association Tests (OPATs), which provides popular analysis methods of P-value combinations. The software OPATs programmed in R and R graphical user interface features a user-friendly interface. In addition to analysis modules for data quality control and single-locus association tests, OPATs provides three types of set-based association test: window-, gene- and biopathway-based association tests. P-value combinations with or without threshold and rank truncation are provided. The significance of a set-based association test is evaluated by using resampling procedures. Performance of the set-based association tests in OPATs has been evaluated by simulation studies and real data analyses. These set-based association tests help boost the statistical power, alleviate the multiple-testing problem, reduce the impact of genetic heterogeneity, increase the replication efficiency of association tests and facilitate the interpretation of association signals by streamlining the testing procedures and integrating the genetic effects of multiple variants in genomic regions of biological relevance. In summary, P-value combinations facilitate the identification of marker sets associated with disease susceptibility and uncover missing heritability in association studies, thereby establishing a foundation for the genetic dissection of complex diseases and traits. OPATs provides an easy-to-use and statistically powerful analysis tool for P-value combinations. OPATs, examples, and user guide can be downloaded from http://www.stat.sinica.edu.tw/hsinchou/genetics/association/OPATs.htm. © The Author 2017. Published by Oxford University Press.
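
    A representative P-value combination of the kind aggregated by set-based tests is Fisher's method, which sums -2·ln(p) over the single-locus tests in a set and refers the sum to a chi-square distribution with 2k degrees of freedom. The snippet below is a generic illustration on hypothetical P-values, not the OPATs implementation (which additionally supports truncation and resampling-based significance).

      # Fisher's combination of single-locus P-values for one marker set.
      import numpy as np
      from scipy import stats

      p_values = [0.04, 0.20, 0.001, 0.35, 0.08]   # hypothetical single-locus tests

      # SciPy provides the combination directly ...
      stat, p_combined = stats.combine_pvalues(p_values, method="fisher")

      # ... which is equivalent to summing -2*ln(p) and using a chi-square
      # distribution with 2*k degrees of freedom.
      chi2_stat = -2.0 * np.log(p_values).sum()
      p_manual = stats.chi2.sf(chi2_stat, df=2 * len(p_values))
      print(p_combined, p_manual)                  # equal up to floating point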

  18. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    PubMed

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) it is a common phenomenon that the fault detection rate changes during the testing phase; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced in the meantime, which is referred to as the imperfect debugging phenomenon. In this study, a model that incorporates the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to model the fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data and five criteria. The results show that the model gives better fitting and predictive performance.
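
    For orientation, the sketch below fits the classic Goel-Okumoto NHPP mean value function to hypothetical cumulative failure counts; the paper's model additionally folds in testing coverage, fault removal efficiency and error generation, none of which are reproduced here.

```python
# Baseline-only sketch (not the authors' model): fit the Goel-Okumoto NHPP mean
# value function m(t) = a * (1 - exp(-b t)) to hypothetical cumulative fault counts.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    """Expected cumulative number of faults detected by time t."""
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 13)                                   # hypothetical test weeks
cum_faults = np.array([6, 11, 18, 22, 27, 30, 33, 35, 37, 38, 39, 40])

(a_hat, b_hat), _ = curve_fit(mean_value, weeks, cum_faults, p0=(50.0, 0.1))
print(f"estimated total fault content a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")

# One of many possible fit criteria: mean squared error against the data.
mse = np.mean((cum_faults - mean_value(weeks, a_hat, b_hat)) ** 2)
print(f"MSE = {mse:.2f}")
```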

  19. Target Classification of Canonical Scatterers Using Classical Estimation and Dictionary Based Techniques

    DTIC Science & Technology

    2012-03-22

    shapes tested, when the objective parameter set was confined to a dictionary's defined parameter space. These physical characteristics included ... The basis pursuit de-noising (BPDN) algorithm is chosen to perform extraction due to inherent efficiency and error tolerance. Multiple shape dictionaries ...

  20. A powerful and efficient set test for genetic markers that handles confounders

    PubMed Central

    Listgarten, Jennifer; Lippert, Christoph; Kang, Eun Yong; Xiang, Jing; Kadie, Carl M.; Heckerman, David

    2013-01-01

    Motivation: Approaches for testing sets of variants, such as a set of rare or common variants within a gene or pathway, for association with complex traits are important. In particular, set tests allow for aggregation of weak signal within a set, can capture interplay among variants and reduce the burden of multiple hypothesis testing. Until now, these approaches did not address confounding by family relatedness and population structure, a problem that is becoming more important as larger datasets are used to increase power. Results: We introduce a new approach for set tests that handles confounders. Our model is based on the linear mixed model and uses two random effects—one to capture the set association signal and one to capture confounders. We also introduce a computational speedup for two random-effects models that makes this approach feasible even for extremely large cohorts. Using this model with both the likelihood ratio test and score test, we find that the former yields more power while controlling type I error. Application of our approach to richly structured Genetic Analysis Workshop 14 data demonstrates that our method successfully corrects for population structure and family relatedness, whereas application of our method to a 15 000 individual Crohn’s disease case–control cohort demonstrates that it additionally recovers genes not recoverable by univariate analysis. Availability: A Python-based library implementing our approach is available at http://mscompbio.codeplex.com. Contact: jennl@microsoft.com or lippert@microsoft.com or heckerma@microsoft.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23599503
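
    A naive sketch of the two-random-effect idea follows: one variance component built from the tested SNP set, one from background markers that absorb relatedness and population structure, and a likelihood ratio test of whether the set component is zero. The data, kernels and Nelder-Mead fit are illustrative stand-ins, not the authors' optimized implementation.

```python
# Naive two-variance-component LMM set test (illustrative only, not FaST-LMM-Set).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def neg_loglik(log_var, y, X, kernels):
    """Negative log-likelihood of y ~ N(X beta, sum_k s_k K_k + s_e I)."""
    n = len(y)
    V = sum(np.exp(lv) * K for lv, K in zip(log_var[:-1], kernels))
    V = V + np.exp(log_var[-1]) * np.eye(n)
    Vi = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)  # GLS fixed effects
    r = y - X @ beta
    _, logdet = np.linalg.slogdet(V)
    return 0.5 * (logdet + r @ Vi @ r + n * np.log(2 * np.pi))

def fit(y, X, kernels):
    x0 = np.zeros(len(kernels) + 1)                      # log-variances start at 0
    res = minimize(neg_loglik, x0, args=(y, X, kernels), method="Nelder-Mead")
    return -res.fun                                      # maximized log-likelihood

# hypothetical data: G_set = SNPs in the tested set, G_bg = background SNPs
rng = np.random.default_rng(0)
n = 200
G_set = rng.standard_normal((n, 10))
G_bg = rng.standard_normal((n, 500))
K_set, K_bg = G_set @ G_set.T / 10, G_bg @ G_bg.T / 500
X = np.ones((n, 1))
y = rng.standard_normal(n)

ll_full = fit(y, X, [K_set, K_bg])
ll_null = fit(y, X, [K_bg])
lrt = max(2.0 * (ll_full - ll_null), 0.0)
# variance component tested on the boundary: 50:50 mixture of chi2_0 and chi2_1
p_value = 0.5 * chi2.sf(lrt, df=1)
print(f"LRT = {lrt:.3f}, p = {p_value:.3f}")
```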

  1. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
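
    The snippet below is not the authors' forward-tracing scheme; it only illustrates the task being solved, rebuilding a level-set field as a signed distance to its zero contour, here with off-the-shelf Euclidean distance transforms on a hypothetical circular interface.

```python
# Generic geometric reinitialization sketch (not the paper's single-step method):
# recompute the level-set field as a signed distance to the zero contour.
import numpy as np
from scipy import ndimage

def reinitialize(phi, dx=1.0):
    """Return a signed-distance approximation with the same zero level set."""
    inside = phi < 0.0
    # distance to the interface measured from each side of the zero contour
    dist_out = ndimage.distance_transform_edt(~inside, sampling=dx)
    dist_in = ndimage.distance_transform_edt(inside, sampling=dx)
    return np.where(inside, -dist_in, dist_out)

# example: a poorly scaled level-set field for a circle of radius 20
x, y = np.meshgrid(np.arange(128), np.arange(128), indexing="ij")
phi0 = 0.01 * ((x - 64.0) ** 2 + (y - 64.0) ** 2 - 20.0 ** 2)  # not a distance field
phi = reinitialize(phi0)
print(phi.min(), phi.max())   # gradient magnitudes are now close to one
```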

  2. Tests of Five Full-Scale Propellers in the Presence of a Radial and a Liquid-Cooled Engine Nacelle, Including Tests of Two Spinners

    NASA Technical Reports Server (NTRS)

    Biermann, David; Hartman, Edwin P

    1938-01-01

    Wind-tunnel tests are reported of five 3-blade 10-foot propellers operating in front of a radial and a liquid-cooled engine nacelle. The range of blade angles investigated extended from 15 degrees to 45 degrees. Two spinners were tested in conjunction with the liquid-cooled engine nacelle. Comparisons are made between propellers having different blade-shank shapes, blades of different thickness, and different airfoil sections. The results show that propellers operating in front of the liquid-cooled engine nacelle had higher take-off efficiencies than when operating in front of the radial engine nacelle; the peak efficiency was higher only when spinners were employed. One spinner increased the propulsive efficiency of the liquid-cooled unit 6 percent for the highest blade-angle setting investigated and less for lower blade angles. The propeller having airfoil sections extending into the hub was superior to one having round blade shanks. The thick propeller having a Clark y section had a higher take-off efficiency than the thinner one, but its maximum efficiency was possibly lower. Of the three blade sections tested, Clark y, R.A.F. 6, and NACA 2400-34, the Clark y was superior for the high-speed condition, but the R.A.F. 6 excelled for the take-off condition.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freitez, Juan A.; Sanchez, Morella; Ruette, Fernando

    Application of simulated annealing (SA) and simplified GSA (SGSA) techniques for parameter optimization of the parametric quantum chemistry method (CATIVIC) was performed. A set of organic molecules was selected to test these techniques. A comparison of the algorithms was carried out for error function minimization with respect to experimental values. Results show that SGSA is more efficient than SA with respect to computer time. Accuracy is similar in both methods; however, there are important differences in the final set of parameters.

  4. Pooled nucleic acid testing to identify antiretroviral treatment failure during HIV infection.

    PubMed

    May, Susanne; Gamst, Anthony; Haubrich, Richard; Benson, Constance; Smith, Davey M

    2010-02-01

    Pooling strategies have been used to reduce the costs of polymerase chain reaction-based screening for acute HIV infection in populations in which the prevalence of acute infection is low (less than 1%). Only limited research has been done for conditions in which the prevalence of screening positivity is higher (greater than 1%). We present data on a variety of pooling strategies that incorporate the use of polymerase chain reaction-based quantitative measures to monitor for virologic failure among HIV-infected patients receiving antiretroviral therapy. For a prevalence of virologic failure between 1% and 25%, we demonstrate relative efficiency and accuracy of various strategies. These results could be used to choose the best strategy based on the requirements of individual laboratory and clinical settings such as required turnaround time of results and availability of resources. Virologic monitoring during antiretroviral therapy is not currently being performed in many resource-constrained settings largely because of costs. The presented pooling strategies may be used to significantly reduce the cost compared with individual testing, make such monitoring feasible, and limit the development and transmission of HIV drug resistance in resource-constrained settings. They may also be used to design efficient pooling strategies for other settings with quantitative screening measures.
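
    As a back-of-the-envelope companion to the abstract, the sketch below computes the expected number of tests per specimen under simple two-stage (Dorfman) pooling for a range of prevalences, assuming a perfect qualitative assay; the paper's quantitative viral-load strategies are more elaborate.

```python
# Expected tests per specimen under two-stage (Dorfman) pooling: one pooled test
# shared by pool_size specimens, plus individual retests for every positive pool.
import numpy as np

def tests_per_specimen(p, pool_size):
    """p = prevalence of positivity; assumes a perfect qualitative assay."""
    return 1.0 / pool_size + 1.0 - (1.0 - p) ** pool_size

for p in (0.01, 0.05, 0.10, 0.25):
    sizes = np.arange(2, 21)
    costs = tests_per_specimen(p, sizes)
    best = sizes[np.argmin(costs)]
    print(f"prevalence {p:.0%}: best pool size {best}, "
          f"{costs.min():.2f} tests/specimen vs 1.00 for individual testing")
```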

  5. Efficient Skeletonization of Volumetric Objects.

    PubMed

    Zhou, Yong; Toga, Arthur W

    1999-07-01

    Skeletonization promises to become a powerful tool for compact shape description, path planning, and other applications. However, current techniques can seldom efficiently process real, complicated 3D data sets, such as MRI and CT data of human organs. In this paper, we present an efficient voxel-coding based algorithm for skeletonization of 3D voxelized objects. The skeletons are interpreted as connected centerlines, consisting of sequences of medial points of consecutive clusters. These centerlines are initially extracted as paths of voxels, followed by medial point replacement, refinement, smoothing, and connection operations. Voxel-coding techniques are proposed for each of these operations in a uniform and systematic fashion. In addition to preserving basic connectivity and centeredness, the algorithm is characterized by straightforward computation, no sensitivity to object boundary complexity, explicit extraction of ready-to-parameterize and branch-controlled skeletons, and efficient object hole detection. These issues are rarely discussed in traditional methods. A range of 3D medical MRI and CT data sets were used for testing the algorithm, demonstrating its utility.

  6. Fog collecting biomimetic surfaces: Influence of microstructure and wettability.

    PubMed

    Azad, M A K; Ellerbrok, D; Barthlott, W; Koch, K

    2015-01-19

    We analyzed the fog collection efficiency of three different sets of samples: replica (with and without microstructures), copper wire (smooth and microgrooved) and polyolefin mesh (hydrophilic, superhydrophilic and hydrophobic). The collection efficiency of the samples was compared within each set separately to investigate the influence of microstructures and/or the wettability of the surfaces on fog collection. Under the controlled experimental conditions chosen here, large differences in efficiency were found. Microstructured plant replica samples collected 2-3 times more water than unstructured (smooth) samples. Copper wire samples showed similar results; moreover, microgrooved wires shed water droplets faster than smooth wires. The superhydrophilic mesh tested here proved more efficient than any of the other mesh samples with different wettability. The amount of fog collected by the superhydrophilic mesh was about 5 times higher than that of the hydrophilic (untreated) mesh and about 2 times higher than that of the hydrophobic mesh.

  7. An Adaptive Fuzzy-Logic Traffic Control System in Conditions of Saturated Transport Stream

    PubMed Central

    Marakhimov, A. R.; Igamberdiev, H. Z.; Umarov, Sh. X.

    2016-01-01

    This paper considers the problem of building adaptive fuzzy-logic traffic control systems (AFLTCS) to deal with information fuzziness and uncertainty in case of heavy traffic streams. Methods of formal description of traffic control on the crossroads based on fuzzy sets and fuzzy logic are proposed. This paper also provides efficient algorithms for implementing AFLTCS and develops the appropriate simulation models to test the efficiency of suggested approach. PMID:27517081

  8. Free-electron laser simulations on the MPP

    NASA Technical Reports Server (NTRS)

    Vonlaven, Scott A.; Liebrock, Lorie M.

    1987-01-01

    Free electron lasers (FELs) are of interest because they provide high power, high efficiency, and broad tunability. FEL simulations can make efficient use of computers of the Massively Parallel Processor (MPP) class because most of the processing consists of applying a simple equation to a set of identical particles. A test version of the KMS Fusion FEL simulation, which resides mainly in the MPP's host computer and only partially in the MPP, has run successfully.

  9. QSAR Study for Carcinogenic Potency of Aromatic Amines Based on GEP and MLPs

    PubMed Central

    Song, Fucheng; Zhang, Anling; Liang, Hui; Cui, Lianhua; Li, Wenlian; Si, Hongzong; Duan, Yunbo; Zhai, Honglin

    2016-01-01

    A new analysis strategy was used to classify the carcinogenicity of aromatic amines. Physical-chemical parameters are closely related to the carcinogenicity of compounds. Quantitative structure-activity relationship (QSAR) modelling is a method of predicting the carcinogenicity of aromatic amines that can reveal the relationship between carcinogenicity and physical-chemical parameters. This study used gene expression programming (via APS software) and multilayer perceptrons (via Weka software) to predict the carcinogenicity of aromatic amines. Both methods relied on molecular descriptors calculated by CODESSA software, and eight molecular descriptors were selected to build the function equations. The accuracy of gene expression programming on the training and test sets is 0.92 and 0.82, respectively, while the accuracy of the multilayer perceptrons is 0.84 and 0.74. The gene expression programming clearly outperforms the multilayer perceptrons on both the training and test sets. QSAR is a highly efficient approach for identifying carcinogenic compounds. PMID:27854309

  10. Effects of portable computing devices on posture, muscle activation levels and efficiency.

    PubMed

    Werth, Abigail; Babski-Reeves, Kari

    2014-11-01

    Very little research exists on ergonomic exposures when using portable computing devices. This study quantified muscle activity (forearm and neck), posture (wrist, forearm and neck), and performance (gross typing speed and error rates) differences across three portable computing devices (laptop, netbook, and slate computer) and two work settings (desk and sofa) during data entry tasks. Twelve participants completed test sessions on a single computer using a test-rest-test protocol (30 min of work at one work setting, 15 min of rest, 30 min of work at the other work setting). The slate computer resulted in significantly more non-neutral wrist, elbow and neck postures, particularly when working on the sofa. Performance on the slate computer was four times lower than on the other computers, though lower muscle activity levels were also found. The potential for injury or illness may be elevated when working on smaller, portable computers in non-traditional work settings. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papajak, Ewa; Truhlar, Donald G.

    We present sets of convergent, partially augmented basis set levels corresponding to subsets of the augmented “aug-cc-pV(n+d)Z” basis sets of Dunning and co-workers. We show that for many molecular properties a basis set fully augmented with diffuse functions is computationally expensive and almost always unnecessary. On the other hand, unaugmented cc-pV(n+d)Z basis sets are insufficient for many properties that require diffuse functions. Therefore, we propose using intermediate basis sets. We developed an efficient strategy for partial augmentation, and in this article, we test it and validate it. Sequentially deleting diffuse basis functions from the “aug” basis sets yields the “jul”, “jun”, “may”, “apr”, etc. basis sets. Tests of these basis sets for Møller-Plesset second-order perturbation theory (MP2) show the advantages of using these partially augmented basis sets and allow us to recommend which basis sets offer the best accuracy for a given number of basis functions for calculations on large systems. Similar truncations in the diffuse space can be performed for the aug-cc-pVxZ, aug-cc-pCVxZ, etc. basis sets.

  12. ωB97M-V: A combinatorially optimized, range-separated hybrid, meta-GGA density functional with VV10 nonlocal correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mardirossian, Narbe; Head-Gordon, Martin

    2016-06-07

    A combinatorially optimized, range-separated hybrid, meta-GGA density functional with VV10 nonlocal correlation is presented in this paper. The final 12-parameter functional form is selected from approximately 10 × 10⁹ candidate fits that are trained on a training set of 870 data points and tested on a primary test set of 2964 data points. The resulting density functional, ωB97M-V, is further tested for transferability on a secondary test set of 1152 data points. For comparison, ωB97M-V is benchmarked against 11 leading density functionals including M06-2X, ωB97X-D, M08-HX, M11, ωM05-D, ωB97X-V, and MN15. Encouragingly, the overall performance of ωB97M-V on nearly 5000 data points clearly surpasses that of all of the tested density functionals. Finally, in order to facilitate the use of ωB97M-V, its basis set dependence and integration grid sensitivity are thoroughly assessed, and recommendations that take into account both efficiency and accuracy are provided.

  13. Microhard MHX2420 Orbital Performance Evaluation Using RT Logic T400CS

    NASA Technical Reports Server (NTRS)

    TintoreGazulla, Oriol; Lombardi, Mark

    2012-01-01

    RT Logic allows simulation of Ground Station - satellite communications: Static tests have been successful. Dynamic tests have been performed for simple passes. Future dynamic tests are needed to simulate real orbit communications. Satellite attitude changes antenna gain. Atmospheric and rain losses need to be added. STK Plug-in will be the next step to improve the dynamic tests. There is a possibility of running longer simulations. Simulation of different losses available in the STK Plug-in. Microhard optimization: The effect of Microhard settings on data throughput has been understood. Optimized settings improve data throughput for LEO communications. Longer hop intervals make transfer of larger packets more efficient (more time between hops in frequency). Use of FEC (Reed-Solomon) reduces the number of retransmissions for long-range or noisy communications.

  14. Test pattern generation for ILA sequential circuits

    NASA Technical Reports Server (NTRS)

    Feng, YU; Frenzel, James F.; Maki, Gary K.

    1993-01-01

    An efficient method of generating test patterns for sequential machines implemented using one-dimensional, unilateral, iterative logic arrays (ILA's) of BTS pass transistor networks is presented. Based on a transistor level fault model, the method affords a unique opportunity for real-time fault detection with improved fault coverage. The resulting test sets are shown to be equivalent to those obtained using conventional gate level models, thus eliminating the need for additional test patterns. The proposed method advances the simplicity and ease of the test pattern generation for a special class of sequential circuitry.

  15. Development and flight testing of UV optimized Photon Counting CCDs

    NASA Astrophysics Data System (ADS)

    Hamden, Erika T.

    2018-06-01

    I will discuss the latest results from the Hamden UV/Vis Detector Lab and our ongoing work using a UV optimized EMCCD in flight. Our lab is currently testing efficiency and performance of delta-doped, anti-reflection coated EMCCDs, in collaboration with JPL. The lab has been set up to test quantum efficiency, dark current, clock-induced charge, and read noise. I will describe our improvements to our circuit boards for lower noise, updates from a new, more flexible NUVU controller, and the integration of an EMCCD in the FIREBall-2 UV spectrograph. I will also briefly describe future plans to conduct radiation testing on delta-doped EMCCDs (both warm, unbiased and cold, biased configurations) this summer and longer-term plans for testing newer photon counting CCDs as I move the HUVD Lab to the University of Arizona in the Fall of 2018.

  16. Laboratory diagnostics in dog-mediated rabies: an overview of performance and a proposed strategy for various settings.

    PubMed

    Duong, Veasna; Tarantola, Arnaud; Ong, Sivuth; Mey, Channa; Choeung, Rithy; Ly, Sowath; Bourhy, Hervé; Dussart, Philippe; Buchy, Philippe

    2016-05-01

    The diagnosis of dog-mediated rabies in humans and animals has greatly benefited from technical advances in the laboratory setting. Approaches to diagnosis now include the detection of rabies virus (RABV), RABV RNA, or RABV antigens. These assays are important tools in the current efforts aimed at the global elimination of dog-mediated rabies. The assays available for use in laboratories are reviewed herein, as well as their strengths and weaknesses, which vary with the types of sample analyzed. Depending on the setting, however, the public health objectives and use of RABV diagnosis in the field will also vary. In non-endemic settings, the detection of all introduced or emergent animal or human cases justifies exhaustive testing. In dog RABV-endemic settings, such as rural areas of developing countries where most cases occur, the availability of or access to testing may be severely constrained. Thus, these issues are also discussed along with a proposed strategy to prioritize testing while access to rabies testing in the resource-poor, highly endemic setting is improved. As the epidemiological situation of rabies in a country evolves, the strategy should shift from that of an endemic setting to one more suitable for a decreased rabies incidence following the implementation of efficient control measures and when nearing the target of dog-mediated rabies elimination. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. Improving HVAC operational efficiency in small-and medium-size commercial buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Woohyun; Katipamula, Srinivas; Lutes, Robert

    Small- and medium-size (<100,000 sf) commercial buildings (SMBs) represent over 95% of the U.S. commercial building stock and consume over 60% of total site energy consumption. Many of these buildings use rudimentary controls that are mostly manual, with limited scheduling capability, no monitoring, or failure management. Therefore, many of these buildings are operated inefficiently and consume excess energy. SMBs typically use packaged rooftop units (RTUs) that are controlled by an individual thermostat. There is increased urgency to improve the operating efficiency of existing commercial building stock in the United States for many reasons, chief among them being to mitigate climate change impacts. Studies have shown that managing set points and schedules of the RTUs will result in up to 20% energy and cost savings. Another problem associated with RTUs is short cycling, when an RTU goes through ON and OFF cycles too frequently. Excessive cycling can lead to excessive wear and to premature failure of the compressor or its components. Also, short cycling can result in a significantly decreased average efficiency (up to 10%), even if there are no physical failures in the equipment. Ensuring correct use of the zone set points and eliminating frequent cycling of RTUs, thereby leading to persistent building operations, can significantly increase the operational efficiency of the SMBs. A growing trend is to use low-cost control infrastructure that can enable scalable and cost-effective intelligent building operations. The work reported in this paper describes two algorithms for detecting the zone set point temperature and RTU cycling rate that can be deployed on the low-cost infrastructure. These algorithms only require the zone temperature data for detection. The algorithms have been tested and validated using field data from a number of RTUs from six buildings in different climate locations. Overall, the algorithms were successful in detecting the set points and ON/OFF cycles accurately using the peak detection technique. The paper describes the two algorithms, results from testing the algorithms using field data, how the algorithms can be used to improve SMB efficiency, and presents related conclusions.
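
    The fragment below is only a conceptual stand-in for the cycling-detection idea: it counts RTU ON/OFF cycles in a hypothetical zone-temperature trace by locating the peaks and troughs of the thermostat sawtooth, which is the same peak-detection notion the paper's algorithms rely on.

```python
# Conceptual sketch only: count RTU ON/OFF cycles in a zone-temperature trace by
# peak detection. The temperature trace, deadband and prominence are hypothetical.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
t = np.arange(240)                                     # minutes (4 hours, 1-min data)
# sawtooth around a 72 F cooling set point with a 0.8 F deadband, plus sensor noise
zone_temp = 72.0 + 0.8 * np.abs((t % 20) / 10.0 - 1.0) + rng.normal(0.0, 0.05, t.size)

peaks, _ = find_peaks(zone_temp, prominence=0.4)       # compressor switches ON here
troughs, _ = find_peaks(-zone_temp, prominence=0.4)    # compressor switches OFF here

cycles_per_hour = len(peaks) / (t.size / 60.0)
set_point_estimate = np.median(zone_temp[troughs])     # crude lower-deadband proxy
print(f"~{cycles_per_hour:.1f} cycles/h, estimated set point {set_point_estimate:.1f} F")
```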

  18. Parallel ALLSPD-3D: Speeding Up Combustor Analysis Via Parallel Processing

    NASA Technical Reports Server (NTRS)

    Fricker, David M.

    1997-01-01

    The ALLSPD-3D Computational Fluid Dynamics code for reacting flow simulation was run on a set of benchmark test cases to determine its parallel efficiency. These test cases included non-reacting and reacting flow simulations with varying numbers of processors. Also, the tests explored the effects of scaling the simulation with the number of processors in addition to distributing a constant size problem over an increasing number of processors. The test cases were run on a cluster of IBM RS/6000 Model 590 workstations with ethernet and ATM networking plus a shared memory SGI Power Challenge L workstation. The results indicate that the network capabilities significantly influence the parallel efficiency, i.e., a shared memory machine is fastest and ATM networking provides acceptable performance. The limitations of ethernet greatly hamper the rapid calculation of flows using ALLSPD-3D.
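
    For reference, parallel efficiency in such studies is the measured speedup divided by the processor count; the few lines below show the arithmetic on hypothetical timings, not ALLSPD-3D data.

```python
# Parallel speedup and efficiency from wall-clock times (timings are hypothetical).
def speedup_and_efficiency(t_serial, t_parallel, n_procs):
    s = t_serial / t_parallel          # speedup S = T1 / Tp
    return s, s / n_procs              # efficiency E = S / p

t1 = 1150.0                            # serial run time, seconds (hypothetical)
timings = {2: 610.0, 4: 330.0, 8: 190.0}
for p, tp in timings.items():
    s, e = speedup_and_efficiency(t1, tp, p)
    print(f"{p} processors: speedup {s:.2f}, parallel efficiency {e:.0%}")
```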

  19. A Dynamic Neural Network Approach to CBM

    DTIC Science & Technology

    2011-03-15

    high efficiency water cooled heat exchanger positioned on the side of the engine. The air temperature was controlled at the desired set-point by...regulating the inlet water flow in the heat exchanger. The temperature of the cooling water was not regulated. The typical set-point for the air charge...temperature was 127 degF, as used in other durability tests carried out in these facilities. Because the heat exchanger controller was optimized for

  20. International Space Station (ISS) Bacterial Filter Elements (BFEs): Filter Efficiency and Pressure Drop Testing of Returned Units

    NASA Technical Reports Server (NTRS)

    Green, Robert D.; Agui, Juan H.; Vijayakumar, R.; Berger, Gordon M.; Perry, Jay L.

    2017-01-01

    The air quality control equipment aboard the International Space Station (ISS) and future deep space exploration vehicles provides the vital function of maintaining a clean cabin environment for the crew and the hardware. This becomes a serious challenge in pressurized space compartments since no outside air ventilation is possible, and a larger particulate load is imposed on the filtration system due to lack of sedimentation. The ISS Environmental Control and Life Support (ECLS) system architecture in the U.S. Segment uses a distributed particulate filtration approach consisting of traditional High-Efficiency Particulate Air (HEPA) filters deployed at multiple locations in each U.S. Segment module; these filters are referred to as Bacterial Filter Elements, or BFEs. In our previous work, we presented results of efficiency and pressure drop measurements for a sample set of two returned BFEs with a service life of 2.5 years. In this follow-on work, we present similar efficiency, pressure drop, and leak test results for a larger sample set of six returned BFEs. The results of this work can aid the ISS Program in managing BFE logistics inventory through the station's planned lifetime as well as provide insight for managing filter element logistics for future exploration missions. These results also can provide meaningful guidance for particulate filter designs under consideration for future deep space exploration missions.
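
    To make the reported quantities concrete, the sketch below derives filter efficiency and penetration from hypothetical upstream/downstream particle counts and fits a linear pressure-drop-versus-face-velocity resistance, mirroring the kind of data reduction applied to returned filter measurements.

```python
# Illustrative filter-test arithmetic; all counts and pressure drops are hypothetical.
import numpy as np

upstream_counts = np.array([1.2e6, 9.8e5, 1.1e6])     # particles per sample
downstream_counts = np.array([310.0, 260.0, 295.0])

penetration = downstream_counts / upstream_counts
efficiency = 1.0 - penetration
print(f"mean efficiency = {efficiency.mean() * 100:.4f} %")

# In the viscous-dominated regime, media pressure drop is roughly linear in face
# velocity; a least-squares fit of dP vs. velocity gives the flow resistance.
velocity = np.array([0.5, 1.0, 1.5, 2.0])             # m/s, hypothetical
dP = np.array([55.0, 112.0, 170.0, 226.0])            # Pa, hypothetical
resistance = np.polyfit(velocity, dP, 1)[0]
print(f"fitted resistance ~ {resistance:.0f} Pa per (m/s)")
```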

  1. Filter Efficiency and Pressure Testing of Returned ISS Bacterial Filter Elements (BFEs)

    NASA Technical Reports Server (NTRS)

    Green, Robert D.; Agui, Juan H.; Berger, Gordon M.; Vijayakumar, R.; Perry, Jay L.

    2017-01-01

    The air quality control equipment aboard the International Space Station (ISS) and future deep space exploration vehicles provides the vital function of maintaining a clean cabin environment for the crew and the hardware. This becomes a serious challenge in pressurized space compartments since no outside air ventilation is possible, and a larger particulate load is imposed on the filtration system due to lack of sedimentation. The ISS Environmental Control and Life Support (ECLS) system architecture in the U.S. Segment uses a distributed particulate filtration approach consisting of traditional High-Efficiency Particulate Air (HEPA) filters deployed at multiple locations in each U.S. Segment module; these filters are referred to as Bacterial Filter Elements, or BFEs. In our previous work, we presented results of efficiency and pressure drop measurements for a sample set of two returned BFEs with a service life of 2.5 years. In this follow-on work, we present similar efficiency, pressure drop, and leak test results for a larger sample set of six returned BFEs. The results of this work can aid the ISS Program in managing BFE logistics inventory through the station's planned lifetime as well as provide insight for managing filter element logistics for future exploration missions. These results also can provide meaningful guidance for particulate filter designs under consideration for future deep space exploration missions.

  2. Determining optimal torsional ultrasound power for cataract surgery with automatic longitudinal pulses at maximum vacuum ex vivo.

    PubMed

    Ronquillo, Cecinio C; Zaugg, Brian; Stagg, Brian; Kirk, Kevin R; Gupta, Isha; Barlow, William R; Pettey, Jeff H; Olson, Randall J

    2014-12-01

    To determine the optimal longitudinal power settings for Infiniti OZil Intelligent Phaco (IP) at varying torsional amplitude settings, and to test the hypothesis that increasing longitudinal power is more important at lower torsional amplitudes to achieve efficient phacoemulsification. Design: Laboratory investigation. Setting: John A. Moran Eye Center, University of Utah, Salt Lake City, Utah. Procedure: Individual porcine nuclei were fixed in formalin, then cut into 2.0 mm cubes. Lens cube phacoemulsification was done using OZil IP at 60%, 80%, and 100% torsional amplitude with 0%, 10%, 20%, 30%, 50%, 75%, or 100% longitudinal power. All experiments were done using a 20 gauge 0.9 mm bent reverse bevel phaco tip at constant vacuum (550 mm Hg), aspiration rate (40 mL/min), and bottle height (50 cm). Main outcome measure: Complete lens particle phacoemulsification (efficiency). Linear regression analysis showed a significant increase in efficiency with increasing longitudinal power at 60% torsional amplitude (R(2) = 0.7269, P = .01) and 80% torsional amplitude (R(2) = 0.6995, P = .02) but not at 100% amplitude (R(2) = 0.3053, P = .2). Baseline comparison of 60% or 80% vs 100% torsional amplitude without longitudinal power showed increased efficiency at 100% (P = .0004). Increasing longitudinal power to 20% abolished the efficiency difference between 80% vs 100% amplitudes. In contrast, 75% longitudinal power abolished the efficiency difference between 60% vs 100% torsional amplitudes. Results suggest that longitudinal power becomes more critical for phacoemulsification efficiency at torsional amplitudes less than 100%. Increasing longitudinal power does not further increase efficiency at maximal torsional amplitudes. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Test of the efficiency of three storm water quality models with a rich set of data.

    PubMed

    Ahyerre, M; Henry, F O; Gogien, F; Chabanel, M; Zug, M; Renaudet, D

    2005-01-01

    The objective of this article is to test the efficiency of three different storm water quality models (SWQMs) on the same data set (34 rain events, SS measurements) sampled on a 42 ha watershed in the center of Paris. The models were calibrated at the scale of the rain event. Considering the mass of pollution calculated per event, the results of the models are satisfactory, but they are of the same order of magnitude as a simple hydraulic approach associated with a constant concentration. Second, the mass of pollutant at the outlet of the catchment was calculated over all 34 events; this approach shows that the simple hydraulic calculation gives better results than the SWQMs. Finally, the pollutographs are analysed, showing that storm water quality models are interesting tools for representing the shape of the pollutographs and the dynamics of the phenomenon, which can be useful in some projects for managers.

  4. Full scale evaluation of diffuser ageing with clean water oxygen transfer tests.

    PubMed

    Krampe, J

    2011-01-01

    Aeration is a crucial part of biological wastewater treatment in activated sludge systems and the main energy user of WWTPs. Approximately 50 to 60% of the total energy consumption of a WWTP can be attributed to the aeration system. The performance of the aeration system, and in the case of fine bubble diffused aeration the diffuser performance, has a significant impact on the overall plant efficiency. This paper seeks to isolate the changes in diffuser performance over time by eliminating all other influencing parameters such as sludge retention time, surfactants and reactor layout. To achieve this, different diffusers were installed and tested in parallel treatment trains in two WWTPs. The diffusers were performance-tested in clean water under new conditions and after one year of operation. A set of material property tests describing the diffuser membrane quality was also performed. The results showed a significant drop in the performance of the EPDM diffusers in the first year, which resulted in similar oxygen transfer efficiencies of around 16 g/m3/m for all tested systems. Even though the tested silicone diffusers did not show a drop in performance, they had a low efficiency in the initial tests. The material properties indicate that the EPDM performance loss is partly due to the washout of additives.

  5. A testing-coverage software reliability model considering fault removal efficiency and error generation

    PubMed Central

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) it is a common phenomenon that the fault detection rate changes during the testing phase; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced in the meantime, which is referred to as the imperfect debugging phenomenon. In this study, a model that incorporates the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to model the fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data and five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091

  6. Efficient enumeration of monocyclic chemical graphs with given path frequencies

    PubMed Central

    2014-01-01

    Background The enumeration of chemical graphs (molecular graphs) satisfying given constraints is one of the fundamental problems in chemoinformatics and bioinformatics because it leads to a variety of useful applications including structure determination and development of novel chemical compounds. Results We consider the problem of enumerating chemical graphs with monocyclic structure (a graph structure that contains exactly one cycle) from a given set of feature vectors, where a feature vector represents the frequency of the prescribed paths in a chemical compound to be constructed and the set is specified by a pair of upper and lower feature vectors. To enumerate all tree-like (acyclic) chemical graphs from a given set of feature vectors, Shimizu et al. and Suzuki et al. proposed efficient branch-and-bound algorithms based on a fast tree enumeration algorithm. In this study, we devise a novel method for extending these algorithms to enumeration of chemical graphs with monocyclic structure by designing a fast algorithm for testing uniqueness. The results of computational experiments reveal that the computational efficiency of the new algorithm is as good as those for enumeration of tree-like chemical compounds. Conclusions We succeed in expanding the class of chemical graphs that are able to be enumerated efficiently. PMID:24955135

  7. An Efficient Functional Test Generation Method For Processors Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Hudec, Ján; Gramatová, Elena

    2015-07-01

    The paper presents a new functional test generation method for processor testing based on genetic algorithms and evolutionary strategies. The tests are generated over an instruction set architecture and a processor description. Such functional tests belong to software-oriented testing. Quality of the tests is evaluated by code coverage of the processor description using simulation. The presented test generation method uses VHDL models of processors and the professional simulator ModelSim. The rules, parameters and fitness functions were defined for various genetic algorithms used in automatic test generation. Functionality and effectiveness were evaluated using the RISC-type processor DP32.

  8. Economical and accurate protocol for calculating hydrogen-bond-acceptor strengths.

    PubMed

    El Kerdawy, Ahmed; Tautermann, Christofer S; Clark, Timothy; Fox, Thomas

    2013-12-23

    A series of density functional/basis set combinations and second-order Møller-Plesset calculations have been used to test their ability to reproduce the trends observed experimentally for the strengths of hydrogen-bond acceptors in order to identify computationally efficient techniques for routine use in the computational drug-design process. The effects of functionals, basis sets, counterpoise corrections, and constraints on the optimized geometries were tested and analyzed, and recommendations (M06-2X/cc-pVDZ and X3LYP/cc-pVDZ with single-point counterpoise corrections or X3LYP/aug-cc-pVDZ without counterpoise) were made for suitable moderately high-throughput techniques.

  9. Online Updating of Statistical Inference in the Big Data Setting.

    PubMed

    Schifano, Elizabeth D; Wu, Jing; Wang, Chun; Yan, Jun; Chen, Ming-Hui

    2016-01-01

    We present statistical methods for big data arising from online analytical processing, where large amounts of data arrive in streams and require fast analysis without storage/access to the historical data. In particular, we develop iterative estimating algorithms and statistical inferences for linear models and estimating equations that update as new data arrive. These algorithms are computationally efficient, minimally storage-intensive, and allow for possible rank deficiencies in the subset design matrices due to rare-event covariates. Within the linear model setting, the proposed online-updating framework leads to predictive residual tests that can be used to assess the goodness-of-fit of the hypothesized model. We also propose a new online-updating estimator under the estimating equation setting. Theoretical properties of the goodness-of-fit tests and proposed estimators are examined in detail. In simulation studies and real data applications, our estimator compares favorably with competing approaches under the estimating equation setting.
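
    A minimal sketch of the online-updating idea for the linear-model case is given below: the sufficient statistics X'X and X'y are accumulated block by block as data stream in, so the estimate can be refreshed without revisiting historical data. The class and data are illustrative; the paper's framework also covers estimating equations and predictive residual tests, which are not reproduced here.

```python
# Online (streaming) ordinary least squares via accumulated sufficient statistics.
import numpy as np

class OnlineOLS:
    def __init__(self, n_features):
        self.XtX = np.zeros((n_features, n_features))
        self.Xty = np.zeros(n_features)
        self.n = 0

    def update(self, X_block, y_block):
        """Fold one incoming data block into the running sufficient statistics."""
        self.XtX += X_block.T @ X_block
        self.Xty += X_block.T @ y_block
        self.n += len(y_block)

    def coef(self):
        # pinv tolerates rank deficiency caused by rare-event covariates in a block
        return np.linalg.pinv(self.XtX) @ self.Xty

rng = np.random.default_rng(1)
true_beta = np.array([2.0, -1.0, 0.5])
model = OnlineOLS(3)
for _ in range(50):                       # 50 streaming blocks of 100 rows each
    X = rng.standard_normal((100, 3))
    y = X @ true_beta + rng.standard_normal(100)
    model.update(X, y)
print(model.coef())                       # converges toward true_beta
```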

  10. Impact of Laboratory Test Use Strategies in a Turkish Hospital

    PubMed Central

    Yılmaz, Fatma Meriç; Kahveci, Rabia; Aksoy, Altan; Özer Kucuk, Emine; Akın, Tezcan; Mathew, Joseph Lazar; Meads, Catherine; Zengin, Nurullah

    2016-01-01

    Objectives Eliminating unnecessary laboratory tests is a good way to reduce costs while maintaining patient safety. The aim of this study was to define and process strategies to rationalize laboratory use in Ankara Numune Training and Research Hospital (ANH) and calculate potential savings in costs. Methods A collaborative plan was defined by hospital managers; joint meetings with ANHTA and laboratory professors were set; the joint committee invited relevant staff for input, and a laboratory efficiency committee was created. Literature was reviewed systematically to identify strategies used to improve laboratory efficiency. Strategies that would be applicable in local settings were identified for implementation, processed, and the impact on clinical use and costs assessed for 12 months. Results Laboratory use in ANH differed enormously among clinics. Major use was identified in internal medicine. The mean number of tests per patient was 15.8. Unnecessary testing for chloride, folic acid, free prostate specific antigen, hepatitis and HIV was observed. Test panel use was pinpointed as the main cause of overuse of the laboratory, and the Hospital Information System test ordering page was reorganized. A significant decrease (between 12.6-85.0%) was observed for the tests that were moved to an alternative page on the computer screen. The one-year saving was equivalent to 371,183 US dollars. Conclusion Hospital-based committees including laboratory professionals and clinicians can define hospital-based problems and lead to a standardized approach to test use that can help clinicians reduce laboratory costs through appropriate use of laboratory tests. PMID:27077653

  11. The Artificial Neural Networks Based on Scalarization Method for a Class of Bilevel Biobjective Programming Problem

    PubMed Central

    Chen, Zhong; Liu, June; Li, Xiong

    2017-01-01

    A two-stage artificial neural network (ANN) based on scalarization method is proposed for bilevel biobjective programming problem (BLBOP). The induced set of the BLBOP is firstly expressed as the set of minimal solutions of a biobjective optimization problem by using scalar approach, and then the whole efficient set of the BLBOP is derived by the proposed two-stage ANN for exploring the induced set. In order to illustrate the proposed method, seven numerical examples are tested and compared with results in the classical literature. Finally, a practical problem is solved by the proposed algorithm. PMID:29312446

  12. Overall and blade element performance of a 1.20-pressure-ratio fan stage with rotor blades reset -5 deg

    NASA Technical Reports Server (NTRS)

    Lewis, G. W., Jr.; Osborn, W. M.; Moore, R. D.

    1976-01-01

    A 51-cm-diam model of a fan stage for a short haul aircraft was tested in a single stage-compressor research facility. The rotor blades were set 5 deg toward the axial direction (opened) from design setting angle. Surveys of the air flow conditions ahead of the rotor, between the rotor and stator, and behind the stator were made over the stable operating range of the stage. At the design speed of 213.3 m/sec and a weight flow of 31.5 kg/sec, the stage pressure ratio and efficiency were 1.195 and 0.88, respectively. The design speed rotor peak efficiency of 0.91 occurred at the same flow rate.

  13. Overall and blade element performance of a 1.20 pressure ratio fan stage with rotor blades reset -7 deg

    NASA Technical Reports Server (NTRS)

    Lewis, G. W., Jr.; Kovich, G.

    1976-01-01

    A 51-cm-diam model of a fan stage for short haul aircraft was tested in a single stage compressor research facility. The rotor blades were set 7 deg toward the axial direction (opened) from the design setting angle. Surveys of the air flow conditions ahead of the rotor, between the rotor and stator, and behind the stator were made over the stable operating range of the stage. At the design speed and a weight flow of 30.9 kg/sec, the stage pressure ratio and efficiency were 1.205 and 0.85, respectively. The design speed rotor peak efficiency of 0.90 occurred at a flow rate of 32.5 kg/sec.

  14. A support vector machine based test for incongruence between sets of trees in tree space

    PubMed Central

    2012-01-01

    Background The increased use of multi-locus data sets for phylogenetic reconstruction has increased the need to determine whether a set of gene trees significantly deviate from the phylogenetic patterns of other genes. Such unusual gene trees may have been influenced by other evolutionary processes such as selection, gene duplication, or horizontal gene transfer. Results Motivated by this problem we propose a nonparametric goodness-of-fit test for two empirical distributions of gene trees, and we developed the software GeneOut to estimate a p-value for the test. Our approach maps trees into a multi-dimensional vector space and then applies support vector machines (SVMs) to measure the separation between two sets of pre-defined trees. We use a permutation test to assess the significance of the SVM separation. To demonstrate the performance of GeneOut, we applied it to the comparison of gene trees simulated within different species trees across a range of species tree depths. Applied directly to sets of simulated gene trees with large sample sizes, GeneOut was able to detect very small differences between two set of gene trees generated under different species trees. Our statistical test can also include tree reconstruction into its test framework through a variety of phylogenetic optimality criteria. When applied to DNA sequence data simulated from different sets of gene trees, results in the form of receiver operating characteristic (ROC) curves indicated that GeneOut performed well in the detection of differences between sets of trees with different distributions in a multi-dimensional space. Furthermore, it controlled false positive and false negative rates very well, indicating a high degree of accuracy. Conclusions The non-parametric nature of our statistical test provides fast and efficient analyses, and makes it an applicable test for any scenario where evolutionary or other factors can lead to trees with different multi-dimensional distributions. The software GeneOut is freely available under the GNU public license. PMID:22909268

  15. Efficient Detection of Copy Number Mutations in PMS2 Exons with a Close Homolog.

    PubMed

    Herman, Daniel S; Smith, Christina; Liu, Chang; Vaughn, Cecily P; Palaniappan, Selvi; Pritchard, Colin C; Shirts, Brian H

    2018-07-01

    Detection of 3' PMS2 copy-number mutations that cause Lynch syndrome is difficult because of highly homologous pseudogenes. To improve the accuracy and efficiency of clinical screening for these mutations, we developed a new method to analyze standard capture-based, next-generation sequencing data to identify deletions and duplications in PMS2 exons 9 to 15. The approach captures sequences using PMS2 targets, maps sequences randomly among regions with equal mapping quality, counts reads aligned to homologous exons and introns, and flags read count ratios outside of empirically derived reference ranges. The method was trained on 1352 samples, including 8 known positives, and tested on 719 samples, including 17 known positives. Clinical implementation of the first version of this method detected new mutations in the training (N = 7) and test (N = 2) sets that had not been identified by our initial clinical testing pipeline. The described final method showed complete sensitivity in both sample sets and false-positive rates of 5% (training) and 7% (test), dramatically decreasing the number of cases needing additional mutation evaluation. This approach leveraged the differences between gene and pseudogene to distinguish between PMS2 and PMS2CL copy-number mutations. These methods enable efficient and sensitive Lynch syndrome screening for 3' PMS2 copy-number mutations and may be applied similarly to other genomic regions with highly homologous pseudogenes. Copyright © 2018 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
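
    The sketch below is a generic stand-in for the read-count-ratio idea, not the authors' pipeline: per-exon normalized depth in a sample is compared against an empirically derived reference range from normal samples, and exons falling outside the range are flagged as candidate deletions or duplications.

```python
# Conceptual copy-number screen from read depth; all data and thresholds are hypothetical.
import numpy as np

def flag_copy_number(sample_depth, normals_depth, z_cut=3.5):
    """Return per-exon calls: -1 deletion, +1 duplication, 0 within reference range."""
    # normalize each sample by its own median depth to remove library-size effects
    s = sample_depth / np.median(sample_depth)
    n = normals_depth / np.median(normals_depth, axis=1, keepdims=True)
    mu, sd = n.mean(axis=0), n.std(axis=0, ddof=1)
    z = (s - mu) / sd
    return np.select([z < -z_cut, z > z_cut], [-1, 1], default=0), z

rng = np.random.default_rng(2)
normals = rng.normal(100.0, 8.0, size=(40, 7))        # 40 normals x 7 exons (e.g. 9-15)
sample = rng.normal(100.0, 8.0, size=7)
sample[3] *= 0.5                                       # simulate a one-copy deletion
calls, z = flag_copy_number(sample, normals)
print(calls)                                           # expect -1 at index 3
```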

  16. Validation of the Information/Communications Technology Literacy Test

    DTIC Science & Technology

    2016-10-01

    nested set. Table 11 presents the results of incremental validity analyses for job knowledge/performance criteria by MOS. Figure 7 presents much...Systems Operator-Analyst (25B) and Nodal Network Systems Operator-Maintainer (25N) MOS. This report documents technical procedures and results of the...research effort. Results suggest that the ICTL test has potential as a valid and highly efficient predictor of valued outcomes in Signal school MOS. Not

  17. A sensitivity analysis of central flat-plate photovoltaic systems and implications for national photovoltaics program planning

    NASA Technical Reports Server (NTRS)

    Crosetti, M. R.

    1985-01-01

    The sensitivity of the National Photovoltaic Research Program goals to changes in individual photovoltaic system parameters is explored. Using the relationship between lifetime cost and system performance parameters, tests were made to see how overall photovoltaic system energy costs are affected by changes in the goals set for module cost and efficiency, system component costs and efficiencies, operation and maintenance costs, and indirect costs. The results are presented in tables and figures for easy reference.

  18. An efficient transport solver for tokamak plasmas

    DOE PAGES

    Park, Jin Myung; Murakami, Masanori; St. John, H. E.; ...

    2017-01-03

    A simple approach to efficiently solve a coupled set of 1-D diffusion-type transport equations with a stiff transport model for tokamak plasmas is presented based on the 4th order accurate Interpolated Differential Operator scheme along with a nonlinear iteration method derived from a root-finding algorithm. Here, numerical tests using the Trapped Gyro-Landau-Fluid model show that the presented high order method provides an accurate transport solution using a small number of grid points with robust nonlinear convergence.

  19. A fuzzy-based data transformation for feature extraction to increase classification performance with small medical data sets.

    PubMed

    Li, Der-Chiang; Liu, Chiao-Wen; Hu, Susan C

    2011-05-01

    Medical data sets are usually small and have very high dimensionality. Too many attributes will make the analysis less efficient and will not necessarily increase accuracy, while too few data will decrease the modeling stability. Consequently, the main objective of this study is to extract the optimal subset of features to increase analytical performance when the data set is small. This paper proposes a fuzzy-based non-linear transformation method to extend classification related information from the original data attribute values for a small data set. Based on the new transformed data set, this study applies principal component analysis (PCA) to extract the optimal subset of features. Finally, we use the transformed data with these optimal features as the input data for a learning tool, a support vector machine (SVM). Six medical data sets: Pima Indians' diabetes, Wisconsin diagnostic breast cancer, Parkinson disease, echocardiogram, BUPA liver disorders dataset, and bladder cancer cases in Taiwan, are employed to illustrate the approach presented in this paper. This research uses the t-test to evaluate the classification accuracy for a single data set; and uses the Friedman test to show the proposed method is better than other methods over the multiple data sets. The experiment results indicate that the proposed method has better classification performance than either PCA or kernel principal component analysis (KPCA) when the data set is small, and suggest creating new purpose-related information to improve the analysis performance. This paper has shown that feature extraction is important as a function of feature selection for efficient data analysis. When the data set is small, using the fuzzy-based transformation method presented in this work to increase the information available produces better results than the PCA and KPCA approaches. Copyright © 2011 Elsevier B.V. All rights reserved.
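
    The downstream portion of the described pipeline, feature extraction with PCA followed by an SVM classifier, can be sketched with scikit-learn as below; the paper's fuzzy-based non-linear transformation is its own contribution and is not reproduced, and a public breast-cancer dataset stands in for the small medical data sets.

```python
# PCA feature extraction followed by an SVM classifier (illustrative pipeline only).
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

pipeline = make_pipeline(
    StandardScaler(),            # put attributes on a common scale
    PCA(n_components=5),         # extract a small subset of features
    SVC(kernel="rbf", C=1.0),    # learn the classifier on the reduced features
)

scores = cross_val_score(pipeline, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```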

  20. Prospective study of the feasibility and effectiveness of a second-trimester quadruple test for Down syndrome in Thailand.

    PubMed

    Kaewsuksai, Peeranan; Jitsurong, Siroj

    2017-11-01

    To evaluate the feasibility and effectiveness of a quadruple test for Down syndrome in the second trimester of pregnancy in clinical settings in Thailand. From October 2015 to September 2016, a prospective study was undertaken in 19 hospitals in Songkhla province, Thailand. Women with a singleton pregnancy of 14-18 weeks were enrolled and underwent the quadruple test. The risk cutoff value was set at 1:250. All women with a positive test (risk ≥1:250) were offered amniocentesis. Women were followed up until delivery. Among 2375 women, 206 (8.7%) had a positive quadruple test; 98 (47.6%) of these women voluntarily underwent amniocentesis. Overall, seven pregnancies were complicated with chromosomal abnormalities (2.9 cases in 1000), including four cases of Down syndrome (1.7 in 1000) and three of other abnormalities. The detection, false-positive, and accuracy rates of the quadruple test for Down syndrome were 75.0%, 8.6%, and 91.4%, respectively. The quadruple test was found to be a feasible and efficient method for screening for Down syndrome in the second trimester of pregnancy in a Thai clinical setting. The test should be performed for pregnant women before an invasive test for Down syndrome. © 2017 International Federation of Gynecology and Obstetrics.
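
    The screening metrics quoted above follow from a simple confusion table; the sketch below back-calculates approximate counts from the cohort size and rates reported in the abstract and recomputes the detection rate, false-positive rate and accuracy.

```python
# Screening-metric arithmetic; counts are approximate back-calculations from the
# abstract (2375 women, 4 Down syndrome cases, 75% detection, 8.6% false positives).
def screening_metrics(tp, fp, tn, fn):
    detection_rate = tp / (tp + fn)          # sensitivity
    false_positive_rate = fp / (fp + tn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return detection_rate, false_positive_rate, accuracy

tp, fn = 3, 1            # affected pregnancies: screen-positive / screen-missed
fp, tn = 204, 2167       # unaffected pregnancies: screen-positive / screen-negative
dr, fpr, acc = screening_metrics(tp, fp, tn, fn)
print(f"detection rate {dr:.1%}, false-positive rate {fpr:.1%}, accuracy {acc:.1%}")
```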

  1. Sub-Scale Testing and Development of the J-2X Fuel Turbopump Inducer

    NASA Technical Reports Server (NTRS)

    Sargent, Scott R.; Becht, David G.

    2011-01-01

    In the early stages of the J-2X upper stage engine program, various inducer configurations proposed for use in the fuel turbopump (FTP) were tested in water. The objectives of this test effort were twofold: first, to obtain a more comprehensive data set than that which existed in the Pratt & Whitney Rocketdyne (PWR) historical archives from the original J-2S program, and second, to supplement that data set with information regarding the cavitation-induced vibrations for both the historical J-2S configuration and those tested for the J-2X program. The J-2X FTP inducer, which actually consists of an inducer stage mechanically attached to a kicker stage, underwent four primary iterations utilizing sub-scale test articles manufactured and tested in PWR's Engineering Development Laboratory (EDL). The kicker remained unchanged throughout the test series. The four inducer configurations tested retained many of the basic design features of the J-2S inducer, but also included variations in leading-edge blade thickness and blade angle distribution, primarily aimed at improving suction performance at higher flow coefficients. From these data sets, the effects of the tested design variables on hydrodynamic performance and cavitation instabilities were discerned. A limited assessment of the impact on inducer efficiency was made as well.

  2. Descent advisor preliminary field test

    NASA Technical Reports Server (NTRS)

    Green, Steven M.; Vivona, Robert A.; Sanford, Beverly

    1995-01-01

    A field test of the Descent Advisor (DA) automation tool was conducted at the Denver Air Route Traffic Control Center in September 1994. DA is being developed to assist Center controllers in the efficient management and control of arrival traffic. DA generates advisories, based on trajectory predictions, to achieve accurate meter-fix arrival times in a fuel efficient manner while assisting the controller with the prediction and resolution of potential conflicts. The test objectives were to evaluate the accuracy of DA trajectory predictions for conventional- and flight-management-system-equipped jet transports, to identify significant sources of trajectory prediction error, and to investigate procedural and training issues (both air and ground) associated with DA operations. Various commercial aircraft (97 flights total) and a Boeing 737-100 research aircraft participated in the test. Preliminary results from the primary test set of 24 commercial flights indicate a mean DA arrival time prediction error of 2.4 sec late with a standard deviation of 13.1 sec. This paper describes the field test and presents preliminary results for the commercial flights.

  3. The association of debt financing with not-for-profit hospitals' operational and capital-investment efficiency.

    PubMed

    Magnus, Stephen A; Wheeler, John R C; Smith, Dean G

    2004-01-01

    Increased debt in companies can motivate both operational and capital-investment efficiency. This positive influence of debt is attributed to creditors' oversight of corporate behavior and the need to generate cash flows to service debt. Our study investigates whether debt has a similar relationship with efficiency in not-for-profit hospitals. Using statistical analysis of a database of audited financial statements of not-for-profit hospitals, we test whether debt is associated with six distinct measures of operational and capital-investment efficiency. We find that debt either has no association with efficiency or predicts decreased efficiency. Possible explanations are that creditors' oversight is less tight in the not-for-profit setting and that debt may at times motivate excessive capital investment because of a legal requirement to tie tax-exempt debt with a capital-investment project.

  4. Efficient Data Generation and Publication as a Test Tool

    NASA Technical Reports Server (NTRS)

    Einstein, Craig Jakob

    2017-01-01

    A tool to facilitate the generation and publication of test data was created to test the individual components of a command and control system designed to launch spacecraft. Specifically, this tool was built to ensure messages are properly passed between system components. The tool can also be used to test whether the appropriate groups have access (read/write privileges) to the correct messages. The messages passed between system components take the form of unique identifiers with associated values. These identifiers are alphanumeric strings that identify the type of message and the additional parameters that are contained within the message. The values that are passed with the message depend on the identifier. The data generation tool allows for the efficient creation and publication of these messages. A configuration file can be used to set the parameters of the tool and also specify which messages to pass.
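
    As a rough illustration of the kind of tool described above, the sketch below generates identifier/value messages from a small configuration and hands them to a publish hook. The identifier format, configuration keys, and publish mechanism are invented for illustration and do not reflect the real launch control system's interfaces.

      # Hypothetical sketch of a config-driven test-message generator/publisher.
      # Identifier naming, config keys, and the publish hook are invented for
      # illustration; they do not reflect the actual command and control system.
      import json
      import random

      CONFIG = json.loads("""
      {
        "messages": [
          {"identifier": "GSE.TANK1.PRESSURE", "type": "float", "min": 0.0, "max": 120.0},
          {"identifier": "GSE.VALVE3.STATE",   "type": "enum",  "values": ["OPEN", "CLOSED"]}
        ]
      }
      """)

      def generate_value(spec):
          """Draw a value consistent with the message specification."""
          if spec["type"] == "float":
              return random.uniform(spec["min"], spec["max"])
          return random.choice(spec["values"])

      def publish(identifier, value):
          """Stand-in for the real publication path (e.g. a message bus client)."""
          print(f"PUBLISH {identifier} = {value}")

      def run_once(config):
          for spec in config["messages"]:
              publish(spec["identifier"], generate_value(spec))

      if __name__ == "__main__":
          run_once(CONFIG)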

  5. Implementation of Chaotic Gaussian Particle Swarm Optimization for Optimize Learning-to-Rank Software Defect Prediction Model Construction

    NASA Astrophysics Data System (ADS)

    Buchari, M. A.; Mardiyanto, S.; Hendradjaya, B.

    2018-03-01

    Finding the existence of software defects as early as possible is the purpose of research on software defect prediction. Software defect prediction is required not only to state the existence of defects, but also to provide a prioritized list of which modules require more intensive testing, so that the allocation of test resources can be managed efficiently. Learning to rank is one of the approaches that can provide defect-module ranking data for the purposes of software testing. In this study, we propose a meta-heuristic chaotic Gaussian particle swarm optimization to improve the accuracy of the learning-to-rank software defect prediction approach. We have used 11 public benchmark data sets as experimental data. Our overall results demonstrate that the prediction models constructed using Chaotic Gaussian Particle Swarm Optimization achieve better accuracy on 5 data sets, tie on 5 data sets, and perform worse on 1 data set. Thus, we conclude that the application of Chaotic Gaussian Particle Swarm Optimization in the learning-to-rank approach can improve the accuracy of the defect-module ranking in data sets that have high-dimensional features.
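
    A minimal sketch of a particle swarm update that mixes a logistic-map chaotic coefficient with a Gaussian perturbation of the global best, of the general kind named above, is given below; the fitness function, coefficients, and exactly where chaos enters the update are assumptions for illustration, not the authors' formulation.

      # Illustrative chaotic Gaussian PSO minimizing a simple test function.
      # Coefficients and the roles of the chaotic/Gaussian terms are assumptions.
      import numpy as np

      rng = np.random.default_rng(0)

      def sphere(x):                      # stand-in fitness (lower is better)
          return float(np.sum(x ** 2))

      n_particles, dim, iters = 20, 5, 100
      w, c1, c2 = 0.7, 1.5, 1.5

      pos = rng.uniform(-5, 5, (n_particles, dim))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([sphere(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)].copy()

      chaos = rng.uniform(0.1, 0.9, n_particles)   # logistic-map state per particle
      for _ in range(iters):
          chaos = 4.0 * chaos * (1.0 - chaos)      # logistic map, r = 4 (chaotic regime)
          for i in range(n_particles):
              r1, r2 = chaos[i], rng.random()      # chaotic + stochastic coefficients
              vel[i] = (w * vel[i]
                        + c1 * r1 * (pbest[i] - pos[i])
                        + c2 * r2 * (gbest - pos[i]))
              pos[i] += vel[i]
              val = sphere(pos[i])
              if val < pbest_val[i]:
                  pbest[i], pbest_val[i] = pos[i].copy(), val
          # Gaussian perturbation of the global best to help escape local optima
          candidate = gbest + rng.normal(0.0, 0.1, dim)
          if sphere(candidate) < sphere(gbest):
              gbest = candidate
          best_idx = int(np.argmin(pbest_val))
          if pbest_val[best_idx] < sphere(gbest):
              gbest = pbest[best_idx].copy()

      print("best fitness:", sphere(gbest))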

  6. On the transferability of RegCM4: Europe, Africa and Asia

    NASA Astrophysics Data System (ADS)

    Belda, Michal; Halenka, Tomas

    2013-04-01

    Simulations driven by ERA-Interim reanalysis for CORDEX domains covering Europe, Africa and Asia have been performed using RegCM4 at 50 km resolution. The same settings are used in the basic simulations, and a preliminary evaluation of model performance for individual regions will be presented. Several settings of different options are tested, and the sensitivity of selected ones will be shown for individual regions. A secant Mercator projection is introduced for Africa, providing a more efficient model geometry setting, and the impact of proper emissivity inclusion is compared, especially for the deserts of Africa and Asia. CRU data are used for the validation.

  7. Effect of water quality on residential water heater life-cycle efficiency. Annual report, September 1983-August 1984

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stickford, G.H.; Talbert, S.G.; Newman, D.C.

    A 3-year field test program is under way for the Gas Research Institute to quantify the effect of scale buildup on the performance of residential water heaters, and to determine the benefits and limitations of common water-treatment methods. In this program, the performance of gas and electric heaters is being monitored in test laboratories set up in selected U.S. cities. The efficiency of heaters operating on hard water is measured and compared with the performance of heaters operating on treated water. Corrosion tests are also being conducted on each type of water tested to determine the effect of water treatment on the corrosion of the water heating system. During this reporting period Battelle has established operating hard water test facilities at four test sites: (1) Battelle, (2) the Roswell Test Facility in Roswell, New Mexico, (3) the Water Quality Association in Lisle, Illinois, and (4) the Marshall Municipal Utilities in Marshall, Minnesota. At each of these sites 12 water heaters have been installed and are operating on accelerated draw cycles. The recovery efficiency of each heater has been carefully measured, and the heaters have been operating from 4 months at one site to 7 months at another. At two of the test sites, the recovery efficiency of each heater has been remeasured after 6 months of operation. No significant degradation in heater performance due to scale buildup was observed in these heaters for the equivalent of 2 to 3 years of typical residential use.

  8. Development and implementation of energy efficiency standards and labeling programs in China: Progress and challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Nan; Khanna, Nina Zheng; Fridley, David

    Over the last twenty years, with growing policy emphasis on improving energy efficiency and reducing environmental pollution and carbon emissions, China has implemented a series of new minimum energy performance standards (MEPS) and mandatory and voluntary energy labels to improve appliance energy efficiency. As China begins planning for the next phase of standards and labeling (S&L) program development under the 12th Five Year Plan, an evaluation of recent program developments and future directions is needed to identify gaps that still exist when compared with international best practices. The review of China’s S&L program development and implementation in comparison with major findings from international experience reveals that there are still areas for improvement, particularly when compared to the success factors observed across leading international S&L programs. China currently lacks a formalized regulatory process for standard-setting and does not have any legal or regulatory guidance on elements of S&L development such as stakeholder participation or the issue of legal precedence between conflicting national, industrial and local standards. Consequently, China’s laws regarding standard-setting and management of the mandatory energy label program could be updated, as they have not been amended or revised recently and no longer reflect the current situation. While China uses similar principles for choosing target products as the U.S., Australia, EU and Japan, including high energy consumption, mature industry and testing procedures, and stakeholder support, recent MEPS revisions have generally aimed at eliminating only the bottom 20% of the market by efficiency. Setting a firm principle based on maximizing energy savings that are technically feasible and economically justified may help improve the stringency of China’s MEPS program and reduce the need for frequent revisions. China also lacks robust survey data and relies primarily on market research data in relatively simple techno-economic analyses used to determine its efficiency standard levels, rather than the specific sets of analyses and tools used internationally. Based on international experience, inclusion of more detailed energy consumption surveys in the Chinese national census surveys and statistical reporting systems could help provide the necessary data for more comprehensive standard-setting analyses. In terms of stakeholder participation in the standards development process, participation in China is limited to membership on the technical committees responsible for developing or revising standards and generally does not include environmental groups, consumer associations, utilities and other NGOs. Increasing stakeholder involvement to broader interest groups could help garner more support and feedback in the S&L implementation process. China has emerged as a leader in a national verification testing scheme with complementary pilot check-testing projects, but it still faces challenges with insufficient funding, low local awareness among some regulatory agencies and resistance to check-testing by some manufacturers, limited product sampling scope, and testing inconsistency and incomparability of results. Thus, further financial and staff resources and capacity building will be needed to overcome these remaining challenges and to expand impact evaluations to assess the actual effectiveness of implementation and enforcement.

  9. Advanced oxidation-based treatment of furniture industry wastewater.

    PubMed

    Tichonovas, Martynas; Krugly, Edvinas; Grybauskas, Arturas; Jankūnaitė, Dalia; Račys, Viktoras; Martuzevičius, Dainius

    2017-07-16

    The paper presents a study on the treatment of furniture industry wastewater in a bench-scale advanced oxidation reactor. The researched technology utilized a simultaneous application of ozone, ultraviolet radiation and a surface-immobilized TiO2 nanoparticle catalyst. Various combinations of processes, including photolysis, photocatalysis, ozonation, catalytic ozonation, photolytic ozonation and photocatalytic ozonation, were tested for degradation efficiency. The efficiency of the processes was primarily characterized by total organic carbon (TOC) analysis, indicating the organic material remaining in the wastewater after treatment, while toxicity changes in the wastewater were assessed by Daphnia magna toxicity tests. Photocatalytic ozonation was confirmed as the most effective combination of processes (99.3% TOC reduction during 180 min of treatment) and was also the most energy efficient (4.49-7.83 MJ/g). Photocatalytic ozonation and photolytic ozonation remained efficient across a wide range of pH (3-9), but pH was an important factor in photocatalysis. The toxicity of the wastewater depended on the duration of the treatment: half-treated water was highly toxic, while fully treated water did not show any toxicity. Our results indicate that photocatalytic ozonation has high potential for upscaling and application in industrial settings.

  10. Test set up description and performances for HAWAII-2RG detector characterization at ESTEC

    NASA Astrophysics Data System (ADS)

    Crouzet, P.-E.; ter Haar, J.; de Wit, F.; Beaufort, T.; Butler, B.; Smit, H.; van der Luijt, C.; Martin, D.

    2012-07-01

    In the framework of the European Space Agency's Cosmic Vision program, the Euclid mission has the objective to map the geometry of the Dark Universe. Galaxies and clusters of galaxies will be observed in the visible and near-infrared wavelengths by an imaging and spectroscopic channel. For the Near Infrared Spectrometer instrument (NISP), state-of-the-art HAWAII-2RG detectors will be used, associated with the SIDECAR ASIC readout electronics which will perform the image frame acquisitions. To characterize and validate the performance of these detectors, a test bench has been designed, tested and validated. This publication describes the pre-tests performed to build the set-up dedicated to dark current measurements and tests requiring reasonably uniform light levels (such as for conversion gain measurements). Successful cryogenic and vacuum tests on commercial LEDs and photodiodes are shown. An optimized feedthrough in stainless steel with a V-groove to pot the flex cable connecting the SIDECAR ASIC to the room-temperature board (JADE2) has been designed and tested. The test set-up for quantum efficiency measurements, consisting of a lamp, a monochromator, an integrating sphere and a set of cold filters, and which is currently under construction, will ensure uniform illumination across the detector with variations lower than 2%. A dedicated spot projector for intra-pixel measurements has been designed and built to reach a spot diameter of 5 μm at 920 nm with 2 nm of bandwidth [1].

  11. Considerations for setting up an order entry system for nuclear medicine tests.

    PubMed

    Hara, Narihiro; Onoguchi, Masahisa; Nishida, Toshihiko; Honda, Minoru; Houjou, Osamu; Yuhi, Masaru; Takayama, Teruhiko; Ueda, Jun

    2007-12-01

    Integrating the Healthcare Enterprise-Japan (IHE-J) was established in Japan in 2001 and has been working to standardize health information and make it accessible on the basis of the fundamental Integrating the Healthcare Enterprise (IHE) specifications. However, because specialized operations are used in nuclear medicine tests, online sharing of patient information and test order information from the order entry system as described by the scheduled workflow (SWF) is difficult, making information inconsistent throughout the facility and uniform management of patient information impossible. Therefore, we examined the basic design (subsystem design) for order entry systems, which are an important aspect of information management for nuclear medicine tests and need to be consistent with the system used throughout the rest of the facility. Many items are required of the subsystem when setting up an order entry system for nuclear medicine tests. Among these, the most important in the order entry system are exclusion settings, which account for differences in the conditions for using radiopharmaceuticals and contrast agents, and appointment frame settings, which account for differences in imaging methods and test items. To establish uniform management of patient information for nuclear medicine tests throughout the facility, it is necessary to develop an order entry system with exclusion settings and appointment frames as standard features. Thereby, integration of health information with the Radiology Information System (RIS) or Picture Archiving and Communication System (PACS) based on Digital Imaging and Communications in Medicine (DICOM) standards and real-time health care assistance can be attained, achieving the IHE agenda of improving health care service and sharing information efficiently.

  12. Intensity vs. Duration: Comparing the Effects of a Fluency-Based Reading Intervention Program, in After-School vs. Summer School Settings

    ERIC Educational Resources Information Center

    Katzir, Tami; Goldberg, Alyssa; Aryeh, Terry Joffe Ben; Donnelley, Katharine; Wolf, Maryanne

    2013-01-01

    Two versions of RAVE-O, a fluency-based reading intervention, were examined over two intervention periods: a 9-month, 44-hour after-school intervention program and a month-long, 44-hour summer intervention program. 80 children in grades 1-3 were tested on the two subtests of the Test of Word-Reading Efficiency and were assigned to one of 6 groups…

  13. Pettit works with the SLICE at the MSG in the U.S. Laboratory

    NASA Image and Video Library

    2012-03-09

    ISS030-E-128918 (9 March 2012) --- NASA astronaut Don Pettit, Expedition 30 flight engineer, works with the Structure and Liftoff In Combustion Experiment (SLICE) at the Microgravity Sciences Glovebox (MSG) in the Destiny laboratory of the International Space Station. Pettit conducted three sets of flame tests, followed by a fan calibration. This test will lead to increased efficiency and reduced pollutant emission for practical combustion devices.

  14. Feasibility study of current pulse induced 2-bit/4-state multilevel programming in phase-change memory

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Fan, Xi; Chen, Houpeng; Wang, Yueqing; Liu, Bo; Song, Zhitang; Feng, Songlin

    2017-08-01

    Multilevel data storage for phase-change memory (PCM) has attracted increasing attention in the memory market as a way to implement high-capacity memory systems and reduce cost per bit. In this work, we present a universal programming method using a SET stair-case current pulse in PCM cells, which can exploit the optimum programming scheme to achieve 2-bit/4-state resistance levels with equal logarithmic intervals. The SET stair-case waveform can be optimized by TCAD real-time simulation to realize multilevel data storage efficiently in an arbitrary phase-change material. Experimental results from a 1 k-bit PCM test chip have validated the proposed multilevel programming scheme. This multilevel programming scheme improves information storage density, the robustness of the resistance levels, and energy efficiency, while avoiding process complexity.

  15. Improving record linkage performance in the presence of missing linkage data.

    PubMed

    Ong, Toan C; Mannino, Michael V; Schilling, Lisa M; Kahn, Michael G

    2014-12-01

    Existing record linkage methods do not handle missing linking field values in an efficient and effective manner. The objective of this study is to investigate three novel methods for improving the accuracy and efficiency of record linkage when record linkage fields have missing values. By extending the Fellegi-Sunter scoring implementations available in the open-source Fine-grained Record Linkage (FRIL) software system we developed three novel methods to solve the missing data problem in record linkage, which we refer to as: Weight Redistribution, Distance Imputation, and Linkage Expansion. Weight Redistribution removes fields with missing data from the set of quasi-identifiers and redistributes the weight from the missing attribute based on relative proportions across the remaining available linkage fields. Distance Imputation imputes the distance between the missing data fields rather than imputing the missing data value. Linkage Expansion adds previously considered non-linkage fields to the linkage field set to compensate for the missing information in a linkage field. We tested the linkage methods using simulated data sets with varying field value corruption rates. The methods developed had sensitivity ranging from .895 to .992 and positive predictive values (PPV) ranging from .865 to 1 in data sets with low corruption rates. Increased corruption rates lead to decreased sensitivity for all methods. These new record linkage algorithms show promise in terms of accuracy and efficiency and may be valuable for combining large data sets at the patient level to support biomedical and clinical research. Copyright © 2014 Elsevier Inc. All rights reserved.
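
    The Weight Redistribution idea can be sketched as a small adjustment to Fellegi-Sunter-style field weights: fields whose linking value is missing are dropped and their weight is spread proportionally over the remaining fields. In the sketch below the field names, weights, and similarity scores are assumptions used only to show the mechanics, not values taken from FRIL.

      # Illustrative Weight Redistribution for record-pair scoring when some
      # linkage fields are missing (field names and weights are assumptions).
      def redistribute_weights(weights, missing_fields):
          """Drop missing fields and scale the remaining weights so they
          still sum to the original total."""
          available = {f: w for f, w in weights.items() if f not in missing_fields}
          if not available:
              return {}
          scale = sum(weights.values()) / sum(available.values())
          return {f: w * scale for f, w in available.items()}

      def pair_score(similarities, weights):
          """Weighted sum of per-field similarities over fields present in both records."""
          missing = {f for f, s in similarities.items() if s is None}
          adjusted = redistribute_weights(weights, missing)
          return sum(adjusted[f] * similarities[f] for f in adjusted)

      weights = {"last_name": 4.0, "first_name": 3.0, "dob": 5.0, "zip": 2.0}
      # 'dob' is missing in one of the records, so its similarity is None.
      similarities = {"last_name": 0.9, "first_name": 1.0, "dob": None, "zip": 0.5}

      print(round(pair_score(similarities, weights), 3))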

  16. Machine learning search for variable stars

    NASA Astrophysics Data System (ADS)

    Pashchenko, Ilya N.; Sokolovsky, Kirill V.; Gavras, Panagiotis

    2018-04-01

    Photometric variability detection is often considered as a hypothesis testing problem: an object is variable if the null hypothesis that its brightness is constant can be ruled out given the measurements and their uncertainties. The practical applicability of this approach is limited by uncorrected systematic errors. We propose a new variability detection technique sensitive to a wide range of variability types while being robust to outliers and underestimated measurement uncertainties. We consider variability detection as a classification problem that can be approached with machine learning. Logistic Regression (LR), Support Vector Machines (SVM), k Nearest Neighbours (kNN), Neural Nets (NN), Random Forests (RF), and Stochastic Gradient Boosting classifier (SGB) are applied to 18 features (variability indices) quantifying scatter and/or correlation between points in a light curve. We use a subset of Optical Gravitational Lensing Experiment phase two (OGLE-II) Large Magellanic Cloud (LMC) photometry (30 265 light curves) that was searched for variability using traditional methods (168 known variable objects) as the training set and then apply the NN to a new test set of 31 798 OGLE-II LMC light curves. Among 205 candidates selected in the test set, 178 are real variables, while 13 low-amplitude variables are new discoveries. The machine learning classifiers considered are found to be more efficient (select more variables and fewer false candidates) compared to traditional techniques using individual variability indices or their linear combination. The NN, SGB, SVM, and RF show a higher efficiency compared to LR and kNN.
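
    As a rough illustration of casting variability detection as supervised classification over variability indices, the sketch below trains a random forest on a synthetic, imbalanced table of index values; the features, data, and class balance are invented, since the OGLE-II light curves and the 18 indices used in the study are not reproduced here.

      # Illustrative variable-star classification from variability indices.
      # The synthetic "indices" below stand in for the 18 features in the study.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      n_const, n_var = 1000, 60          # imbalanced, as in real variability searches

      # Synthetic indices: variables tend to have larger scatter/correlation values.
      X_const = rng.normal(0.0, 1.0, (n_const, 5))
      X_var = rng.normal(1.5, 1.0, (n_var, 5))
      X = np.vstack([X_const, X_var])
      y = np.array([0] * n_const + [1] * n_var)

      clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
      scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
      print("cross-validated F1: %.2f +/- %.2f" % (scores.mean(), scores.std()))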

  17. Environmental testing of a diode-laser-pumped Nd:YAG laser and a set of diode-laser-arrays

    NASA Technical Reports Server (NTRS)

    Hemmati, H.; Lesh, J. R.

    1989-01-01

    Results of the environmental test of a compact, rigid and lightweight diode-laser-pumped Nd:YAG laser module are discussed. All optical elements are bonded onto the module using space applicable epoxy, and two 200 mW diode laser arrays for pump sources are used to achieve 126 mW of CW output with about 7 percent electrical-to-optical conversion efficiency. This laser assembly and a set of 20 semiconductor diode laser arrays were environmentally tested by being subjected to vibrational and thermal conditions similar to those experienced during launch of the Space Shuttle, and both performed well. Nevertheless, some damage to the laser front facet in diode lasers was observed. Significant degradation was observed only on lasers which performed poorly in the life test. Improvements in the reliability of the Nd:YAG laser are suggested.

  18. Biomolecular surface construction by PDE transform

    PubMed Central

    Zheng, Qiong; Yang, Siyang; Wei, Guo-Wei

    2011-01-01

    This work proposes a new framework for the surface generation based on the partial differential equation (PDE) transform. The PDE transform has recently been introduced as a general approach for the mode decomposition of images, signals, and data. It relies on the use of arbitrarily high order PDEs to achieve the time-frequency localization, control the spectral distribution, and regulate the spatial resolution. The present work provides a new variational derivation of high order PDE transforms. The fast Fourier transform is utilized to accomplish the PDE transform so as to avoid stringent stability constraints in solving high order PDEs. As a consequence, the time integration of high order PDEs can be done efficiently with the fast Fourier transform. The present approach is validated with a variety of test examples in two and three-dimensional settings. We explore the impact of the PDE transform parameters, such as the PDE order and propagation time, on the quality of resulting surfaces. Additionally, we utilize a set of 10 proteins to compare the computational efficiency of the present surface generation method and the MSMS approach in Cartesian meshes. Moreover, we analyze the present method by examining some benchmark indicators of biomolecular surface, i.e., surface area, surface enclosed volume, solvation free energy and surface electrostatic potential. A test set of 13 protein molecules is used in the present investigation. The electrostatic analysis is carried out via the Poisson-Boltzmann equation model. To further demonstrate the utility of the present PDE transform based surface method, we solve the Poisson-Nernst-Planck (PNP) equations with a PDE transform surface of a protein. Second order convergence is observed for the electrostatic potential and concentrations. Finally, to test the capability and efficiency of the present PDE transform based surface generation method, we apply it to the construction of an excessively large biomolecule, a virus surface capsid. Virus surface morphologies of different resolutions are attained by adjusting the propagation time. Therefore, the present PDE transform provides a multiresolution analysis in the surface visualization. Extensive numerical experiment and comparison with an established surface model indicate that the present PDE transform is a robust, stable and efficient approach for biomolecular surface generation in Cartesian meshes. PMID:22582140
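
    The core numerical device described above, integrating an arbitrarily high order diffusion-type PDE in time through the Fourier transform rather than by stiff explicit stepping, can be illustrated in one dimension as below; the PDE order, propagation time, and test signal are assumptions, and the actual PDE transform involves additional terms and parameters.

      # Illustrative 1-D sketch: integrating a high-order diffusion-type PDE
      #   u_t = (-1)^(m+1) d^(2m)u / dx^(2m)
      # exactly in Fourier space, u_hat(k, t) = u_hat(k, 0) * exp(-k^(2m) t).
      # The order m, propagation time, and test signal are assumptions.
      import numpy as np

      n, L = 256, 2 * np.pi
      x = np.linspace(0.0, L, n, endpoint=False)
      u0 = np.sin(x) + 0.3 * np.sin(8 * x) + 0.05 * np.random.default_rng(0).normal(size=n)

      m, t = 3, 1e-3                        # 6th-order PDE, short propagation time
      k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
      u_hat = np.fft.fft(u0) * np.exp(-(k ** (2 * m)) * t)
      u = np.real(np.fft.ifft(u_hat))

      # Higher-order operators suppress high wavenumbers much more sharply than
      # ordinary diffusion, which is what gives the transform its band control.
      print("residual high-frequency energy: %.3e" % np.sum((u - np.sin(x)) ** 2))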

  19. Efficient analytical implementation of the DOT Riemann solver for the de Saint Venant-Exner morphodynamic model

    NASA Astrophysics Data System (ADS)

    Carraro, F.; Valiani, A.; Caleffi, V.

    2018-03-01

    Within the framework of the de Saint Venant equations coupled with the Exner equation for morphodynamic evolution, this work presents a new efficient implementation of the Dumbser-Osher-Toro (DOT) scheme for non-conservative problems. The DOT path-conservative scheme is a robust upwind method based on a complete Riemann solver, but it has the drawback of requiring expensive numerical computations. Indeed, to compute the non-linear time evolution in each time step, the DOT scheme requires numerical computation of the flux matrix eigenstructure (the totality of eigenvalues and eigenvectors) several times at each cell edge. In this work, an analytical and compact formulation of the eigenstructure for the de Saint Venant-Exner (dSVE) model is introduced and tested in terms of numerical efficiency and stability. Using the original DOT and PRICE-C (a very efficient FORCE-type method) as reference methods, we present a convergence analysis (error against CPU time) to study the performance of the DOT method with our new analytical implementation of eigenstructure calculations (A-DOT). In particular, the numerical performance of the three methods is tested in three test cases: a movable bed Riemann problem with analytical solution; a problem with smooth analytical solution; a test in which the water flow is characterised by subcritical and supercritical regions. For a given target error, the A-DOT method is always the most efficient choice. Finally, two experimental data sets and different transport formulae are considered to test the A-DOT model in more practical case studies.
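
    For reference, a commonly used one-dimensional form of the de Saint Venant-Exner system discussed above is sketched below; the bedload closure q_s(u) and the porosity p are modelling choices, so this is a generic statement of the system rather than the exact formulation solved in the paper.

      % Generic 1-D de Saint Venant--Exner system: h is the water depth, u the
      % depth-averaged velocity, z_b the bed elevation, g gravity, p the bed
      % porosity and q_s(u) a bedload transport closure (both are assumptions).
      \begin{aligned}
      \partial_t h + \partial_x (h u) &= 0, \\
      \partial_t (h u) + \partial_x\!\left(h u^2 + \tfrac{1}{2} g h^2\right) &= -\, g\, h\, \partial_x z_b, \\
      \partial_t z_b + \frac{1}{1-p}\, \partial_x q_s(u) &= 0.
      \end{aligned}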

  20. Swab Protocol for Rapid Laboratory Diagnosis of Cutaneous Anthrax

    PubMed Central

    Marston, Chung K.; Bhullar, Vinod; Baker, Daniel; Rahman, Mahmudur; Hossain, M. Jahangir; Chakraborty, Apurba; Khan, Salah Uddin; Hoffmaster, Alex R.

    2012-01-01

    The clinical laboratory diagnosis of cutaneous anthrax is generally established by conventional microbiological methods, such as culture and directly staining smears of clinical specimens. However, these methods rely on recovery of viable Bacillus anthracis cells from swabs of cutaneous lesions and often yield negative results. This study developed a rapid protocol for detection of B. anthracis on clinical swabs. Three types of swabs, flocked-nylon, rayon, and polyester, were evaluated by 3 extraction methods, the swab extraction tube system (SETS), sonication, and vortex. Swabs were spiked with virulent B. anthracis cells, and the methods were compared for their efficiency over time by culture and real-time PCR. Viability testing indicated that the SETS yielded greater recovery of B. anthracis from 1-day-old swabs; however, reduced viability was consistent for the 3 extraction methods after 7 days and nonviability was consistent by 28 days. Real-time PCR analysis showed that the PCR amplification was not impacted by time for any swab extraction method and that the SETS method provided the lowest limit of detection. When evaluated using lesion swabs from cutaneous anthrax outbreaks, the SETS yielded culture-negative, PCR-positive results. This study demonstrated that swab extraction methods differ in their efficiency of recovery of viable B. anthracis cells. Furthermore, the results indicated that culture is not reliable for isolation of B. anthracis from swabs at ≥7 days. Thus, we recommend the use of the SETS method with subsequent testing by culture and real-time PCR for diagnosis of cutaneous anthrax from clinical swabs of cutaneous lesions. PMID:23035192

  1. Generalized Functional Linear Models for Gene-based Case-Control Association Studies

    PubMed Central

    Mills, James L.; Carter, Tonia C.; Lobach, Iryna; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Weeks, Daniel E.; Xiong, Momiao

    2014-01-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene are disease-related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease data sets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. PMID:25203683

  2. EUCLID/NISP GRISM qualification model AIT/AIV campaign: optical, mechanical, thermal and vibration tests

    NASA Astrophysics Data System (ADS)

    Caillat, A.; Costille, A.; Pascal, S.; Rossin, C.; Vives, S.; Foulon, B.; Sanchez, P.

    2017-09-01

    Dark matter and dark energy mysteries will be explored by the Euclid ESA M-class space mission which will be launched in 2020. Millions of galaxies will be surveyed through visible imagery and NIR imagery and spectroscopy in order to map in three dimensions the Universe at different evolution stages over the past 10 billion years. The massive NIR spectroscopic survey will be done efficiently by the NISP instrument thanks to the use of grisms (for "Grating pRISMs") developed under the responsibility of the LAM. In this paper, we present the verification philosophy applied to test and validate each grism before the delivery to the project. The test sequence covers a large set of verifications: optical tests to validate efficiency and WFE of the component, mechanical tests to validate the robustness to vibration, thermal tests to validate its behavior in cryogenic environment and a complete metrology of the assembled component. We show the test results obtained on the first grism Engineering and Qualification Model (EQM) which will be delivered to the NISP project in fall 2016.

  3. Simulation System of Car Crash Test in C-NCAP Analysis Based on an Improved Apriori Algorithm*

    NASA Astrophysics Data System (ADS)

    Xiang, LI

    In order to analyze car crash tests in C-NCAP, an improved algorithm based on the Apriori algorithm is presented in this paper. The new algorithm is implemented with a vertical data layout, breadth-first search, and intersection. It takes advantage of the efficiency of the vertical data layout and intersection, and prunes candidate frequent item sets as Apriori does. Finally, the new algorithm is applied in a simulation system for car crash test analysis. The results show that the discovered relations affect the C-NCAP test results and can provide a reference for automotive design.
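
    The vertical-layout, intersection-based counting named above can be sketched with transaction-ID sets: each item maps to the set of records containing it, candidate supports come from set intersections, and candidates below a minimum support are pruned as in Apriori. The toy transactions and threshold below are assumptions, not C-NCAP data.

      # Illustrative vertical-layout frequent item-set mining: items are stored as
      # TID-sets and candidate supports come from set intersection
      # (Eclat-style counting with Apriori-style pruning). Toy data, not crash data.
      from itertools import combinations

      transactions = [
          {"frontal_impact", "airbag_ok", "belt_used"},
          {"frontal_impact", "airbag_ok"},
          {"side_impact", "belt_used"},
          {"frontal_impact", "belt_used"},
      ]
      min_support = 2

      # Vertical layout: item -> set of transaction IDs containing it.
      tidsets = {}
      for tid, items in enumerate(transactions):
          for item in items:
              tidsets.setdefault(item, set()).add(tid)

      frequent = {frozenset([i]): t for i, t in tidsets.items() if len(t) >= min_support}
      level = dict(frequent)
      while level:
          next_level = {}
          for a, b in combinations(level, 2):
              candidate = a | b
              if len(candidate) != len(a) + 1:
                  continue                       # only join sets differing by one item
              tids = level[a] & level[b]         # support by TID-set intersection
              if len(tids) >= min_support:
                  next_level[candidate] = tids
          frequent.update(next_level)
          level = next_level

      for itemset, tids in sorted(frequent.items(), key=lambda kv: (len(kv[0]), sorted(kv[0]))):
          print(sorted(itemset), "support =", len(tids))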

  4. Performance of a lateral flow immunochromatography test for the rapid diagnosis of active tuberculosis in a large multicentre study in areas with different clinical settings and tuberculosis exposure levels.

    PubMed

    Manga, Selene; Perales, Rocio; Reaño, Maria; D'Ambrosio, Lia; Migliori, Giovanni Battista; Amicosante, Massimo

    2016-11-01

    Tuberculosis (TB) continues to cause an outsized burden of morbidity and mortality worldwide, and efficient, widely accessible diagnostic tools for appropriate control of the disease are still lacking. Serological tests have the potential to impact TB diagnosis, in particular in extreme clinical settings. The diagnostic performance of the TB-XT HEMA EXPRESS (HEMA-EXPRESS) immunochromatographic rapid test for active TB diagnosis, based on the use of multiple Mycobacterium tuberculosis (MTB)-specific antigens, was evaluated in a large multicentre TB case-finding study, in populations with different exposure levels to TB. A total of 1,386 subjects were enrolled in the six participating centres in Peru: 290 with active TB and 1,096 unaffected subjects. The TB prevalence (overall 20.5%) varied between 4.0% and 41.1% in the different study groups. Overall, the HEMA-EXPRESS test had 30.6% sensitivity (range 3.9-77.9%) and 84.6% specificity (range 51.6-97.3%). A significant inverse correlation between test accuracy (overall 73.5%, range 40.4-96.4%) and TB prevalence in the various study populations was observed (Pearson's r=-0.7985; P=0.05). HEMA-EXPRESS is a rapid and relatively inexpensive test suitable for routine use in TB diagnosis. In low-TB-prevalence conditions, test performance appears in line with the WHO Target Product Profile for TB diagnostics; performance appears suboptimal in high-TB-prevalence settings. Appropriate set-up in operative clinical settings has to be considered for novel serological tests for TB diagnosis, particularly for formats suitable for point-of-care use.

  5. swga: a primer design toolkit for selective whole genome amplification.

    PubMed

    Clarke, Erik L; Sundararaman, Sesh A; Seifert, Stephanie N; Bushman, Frederic D; Hahn, Beatrice H; Brisson, Dustin

    2017-07-15

    Population genomic analyses are often hindered by difficulties in obtaining sufficient numbers of genomes for analysis by DNA sequencing. Selective whole-genome amplification (SWGA) provides an efficient approach to amplify microbial genomes from complex backgrounds for sequence acquisition. However, the process of designing sets of primers for this method has many degrees of freedom and would benefit from an automated process to evaluate the vast number of potential primer sets. Here, we present swga, a program that identifies primer sets for SWGA and evaluates them for efficiency and selectivity. We used swga to design and test primer sets for the selective amplification of Wolbachia pipientis genomic DNA from infected Drosophila melanogaster and Mycobacterium tuberculosis from human blood. We identify primer sets that successfully amplify each against their backgrounds and describe a general method for using swga for arbitrary targets. In addition, we describe characteristics of primer sets that correlate with successful amplification, and present guidelines for implementation of SWGA to detect new targets. Source code and documentation are freely available at https://www.github.com/eclarke/swga. The program is implemented in Python and C and licensed under the GNU Public License. ecl@mail.med.upenn.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  6. Efficiency, equity and feasibility of strategies to identify the poor: an application to premium exemptions under National Health Insurance in Ghana.

    PubMed

    Jehu-Appiah, Caroline; Aryeetey, Genevieve; Spaan, Ernst; Agyepong, Irene; Baltussen, Rob

    2010-05-01

    This paper outlines potential strategies to identify the poor, and assesses their feasibility, efficiency and equity. Analyses are illustrated for the case of premium exemptions under National Health Insurance (NHI) in Ghana. A literature search in Medline was performed to identify strategies to identify the poor. Models were developed including information on demography and poverty, and the costs and errors of inclusion and exclusion of these strategies in two regions in Ghana. Proxy means testing (PMT), participatory welfare ranking (PWR), and geographic targeting (GT) are potentially useful strategies to identify the poor, and vary in terms of their efficiency, equity and feasibility. Costs to exempt one poor individual range between US$11.63 and US$66.67, and strategies may exclude up to 25% of the poor. The feasibility of strategies depends on their aptness in rural or urban settings and on the administrative capacity to implement them. A decision framework summarizes the above information to guide policy making. We recommend PMT as an optimal strategy in urbanized settings with relatively low poverty incidence, PWR as an optimal strategy in rural settings with relatively low poverty incidence, and GT as an optimal strategy in settings with high poverty incidence. This paper holds important lessons not only for NHI in Ghana but also for other countries implementing exemption policies. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.

  7. Design and Implementation of a Prototype Ontology Aided Knowledge Discovery Assistant (OAKDA) Application

    DTIC Science & Technology

    2006-12-01

    speed of search engines improves the efficiency of such methods, effectiveness is not improved. The objective of this thesis is to construct and test...interest, users are assisted in finding a relevant set of key terms that will aid the search engines in narrowing, widening, or refocusing a Web search

  8. Instrument Development and Validation of the Infant and Toddler Assessment for Quality Improvement

    ERIC Educational Resources Information Center

    Perlman, Michal; Brunsek, Ashley; Hepditch, Anne; Gray, Karen; Falenchuck, Olesya

    2017-01-01

    Research Findings: There is a growing need for accurate and efficient measures of classroom quality in early childhood education and care (ECEC) settings. Observational measures are costly, as their administration generally takes 3-5 hr per classroom. This article outlines the process of development and preliminary concurrent validity testing of…

  9. Robust and efficient method for matching features in omnidirectional images

    NASA Astrophysics Data System (ADS)

    Zhu, Qinyi; Zhang, Zhijiang; Zeng, Dan

    2018-04-01

    Binary descriptors have been widely used in many real-time applications due to their efficiency. These descriptors are commonly designed for perspective images but perform poorly on omnidirectional images, which are severely distorted. To address this issue, this paper proposes tangent plane BRIEF (TPBRIEF) and adapted log polar grid-based motion statistics (ALPGMS). TPBRIEF projects keypoints to a unit sphere and applies the fixed test set of the BRIEF descriptor on the tangent plane of the unit sphere. The fixed test set is then backprojected onto the original distorted images to construct the distortion-invariant descriptor. TPBRIEF directly enables keypoint detection and feature description on the original distorted images, whereas other approaches correct the distortion through image resampling, which introduces artifacts and adds time cost. With ALPGMS, omnidirectional images are divided into circular arches named adapted log polar grids. Whether a match is true or false is then determined by simply thresholding the match numbers in the grid pair where the two matched points are located. Experiments show that TPBRIEF greatly improves the feature matching accuracy and ALPGMS robustly removes wrong matches. Our proposed method outperforms the state-of-the-art methods.
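
    Binary descriptors such as the BRIEF-style test sets described above are typically matched by Hamming distance; the sketch below shows brute-force matching with a nearest/second-nearest ratio check on packed-bit descriptors. Random descriptors stand in for real TPBRIEF output, and the bit-flip rate and ratio threshold are assumptions.

      # Illustrative Hamming-distance matching of binary descriptors with a
      # nearest/second-nearest ratio check. Random bits stand in for TPBRIEF output.
      import numpy as np

      rng = np.random.default_rng(0)
      n_bytes = 32                                  # 256-bit descriptors
      desc_a = rng.integers(0, 256, (200, n_bytes), dtype=np.uint8)
      desc_b = desc_a.copy()
      # Flip a few bits in desc_b to mimic viewpoint/distortion noise (assumption).
      noise = (rng.random(desc_b.shape) < 0.02).astype(np.uint8)
      desc_b = np.bitwise_xor(desc_b, noise)

      # Pairwise Hamming distances via XOR + popcount over unpacked bits.
      xor = np.bitwise_xor(desc_a[:, None, :], desc_b[None, :, :])
      dist = np.unpackbits(xor, axis=2).sum(axis=2)

      matches = []
      for i, row in enumerate(dist):
          order = np.argsort(row)
          best, second = row[order[0]], row[order[1]]
          if best < 0.8 * second:                   # ratio test (threshold assumed)
              matches.append((i, int(order[0]), int(best)))

      print(f"{len(matches)} putative matches out of {len(desc_a)} descriptors")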

  10. Impact of heavy soiling on the power output of PV modules

    NASA Astrophysics Data System (ADS)

    Schill, Christian; Brachmann, Stefan; Heck, Markus; Weiss, Karl-Anders; Koehl, Michael

    2011-09-01

    Fraunhofer ISE is running a PV-module outdoor testing set-up on the Gran Canaria island, one of the Canary Island located west of Morroco in the Atlantic Ocean. The performance of the modules is assessed by IV-curve monitoring every 10 minutes. The electronic set-up of the monitoring system - consisting of individual electronic loads for each module which go into an MPP-tracking mode between the IV-measurements - will be described in detail. Soiling of the exposed modules happened because of building constructions nearby. We decided not to clean the modules, but the radiation sensors and recorded the decrease of the power output and the efficiency over time. The efficiency dropped to 20 % within 5 months before a heavy rain and subsequently the service personnel on site cleaned the modules. A smaller rain-fall in between washed the dust partly away and accumulated it at the lower part of the module, what could be concluded from the shape of the IV-curves, which were similar to partial shading by hot-spot-tests and by partial snow cover.

  11. Simulation of the transient processes of load rejection under different accident conditions in a hydroelectric generating set

    NASA Astrophysics Data System (ADS)

    Guo, W. C.; Yang, J. D.; Chen, J. P.; Peng, Z. Y.; Zhang, Y.; Chen, C. C.

    2016-11-01

    Load rejection test is one of the essential tests that carried out before the hydroelectric generating set is put into operation formally. The test aims at inspecting the rationality of the design of the water diversion and power generation system of hydropower station, reliability of the equipment of generating set and the dynamic characteristics of hydroturbine governing system. Proceeding from different accident conditions of hydroelectric generating set, this paper presents the transient processes of load rejection corresponding to different accident conditions, and elaborates the characteristics of different types of load rejection. Then the numerical simulation method of different types of load rejection is established. An engineering project is calculated to verify the validity of the method. Finally, based on the numerical simulation results, the relationship among the different types of load rejection and their functions on the design of hydropower station and the operation of load rejection test are pointed out. The results indicate that: The load rejection caused by the accident within the hydroelectric generating set is realized by emergency distributing valve, and it is the basis of the optimization for the closing law of guide vane and the calculation of regulation and guarantee. The load rejection caused by the accident outside the hydroelectric generating set is realized by the governor. It is the most efficient measure to inspect the dynamic characteristics of hydro-turbine governing system, and its closure rate of guide vane set in the governor depends on the optimization result in the former type load rejection.

  12. System Design Verification for Closed Loop Control of Oxygenation With Concentrator Integration.

    PubMed

    Gangidine, Matthew M; Blakeman, Thomas C; Branson, Richard D; Johannigman, Jay A

    2016-05-01

    Addition of an oxygen concentrator into a control loop furthers previous work in autonomous control of oxygenation. Software integrates concentrator and ventilator function from a single control point, ensuring maximum efficiency by placing a pulse of oxygen at the beginning of the breath. We sought to verify this system. In a test lung, fraction of inspired oxygen (FIO2) levels and additional data were monitored. Tests were run across a range of clinically relevant ventilator settings in volume control mode, for both continuous flow and pulse dose flow oxygenation. Results showed the oxygen concentrator could maintain maximum pulse output (192 mL) up to 16 breaths per minute. Functionality was verified across ranges of tidal volumes and respiratory rates, with and without positive end-expiratory pressure, in continuous flow and pulse dose modes. For a representative test at respiratory rate 16 breaths per minute, tidal volume 550 mL, without positive end-expiratory pressure, pulse dose oxygenation delivered peak FIO2 of 76.83 ± 1.41%, and continuous flow 47.81 ± 0.08%; pulse dose flow provided a higher FIO2 at all tested setting combinations compared to continuous flow (p < 0.001). These tests verify a system that provides closed loop control of oxygenation while integrating time-coordinated pulse-doses from an oxygen concentrator. This allows the most efficient use of resources in austere environments. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.

  13. Characterization of three new condensation particle counters for sub-3 nm particle detection during the Helsinki CPC workshop: the ADI versatile water CPC, TSI 3777 nano enhancer and boosted TSI 3010

    NASA Astrophysics Data System (ADS)

    Kangasluoma, Juha; Hering, Susanne; Picard, David; Lewis, Gregory; Enroth, Joonas; Korhonen, Frans; Kulmala, Markku; Sellegri, Karine; Attoui, Michel; Petäjä, Tuukka

    2017-06-01

    In this study we characterized the performance of three new particle counters able to detect particles smaller than 3 nm during the Helsinki condensation particle counter (CPC) workshop in summer 2016: the Aerosol Dynamics Inc. (ADI; Berkeley, USA) versatile water condensation particle counter (vWCPC), TSI 3777 nano enhancer (TSI Inc., Shoreview, USA) and modified and boosted TSI 3010-type CPC from Université Blaise Pascal called a B3010. The performance of all CPCs was first measured with charged tungsten oxide test particles at temperature settings which resulted in supersaturation low enough to not detect any ions produced by a radioactive source. Due to similar measured detection efficiencies, additional comparison between the 3777 and vWCPC were conducted using electrically neutral tungsten oxide test particles and with positively charged tetradodecylammonium bromide. Furthermore, the detection efficiencies of the 3777 and vWCPC were measured with boosted temperature settings yielding supersaturation which was at the onset of homogeneous nucleation for the 3777 or confined within the range of liquid water for the ADI vWCPC. Finally, CPC-specific tests were conducted to probe the response of the 3777 to various inlet flow relative humidities, of the B3010 to various inlet flow rates and of the vWCPC to various particle concentrations. For the 3777 and vWCPC the measured 50 % detection diameters (d50s) were in the range of 1.3-2.4 nm for the tungsten oxide particles, depending on the particle charging state and CPC temperature settings, between 2.5 and 3.3 nm for the organic test aerosol, and in the range of 3.2-3.4 nm for tungsten oxide for the B3010.

  14. Towards optimal experimental tests on the reality of the quantum state

    NASA Astrophysics Data System (ADS)

    Knee, George C.

    2017-02-01

    The Barrett-Cavalcanti-Lal-Maroney (BCLM) argument stands as the most effective means of demonstrating the reality of the quantum state. Its advantages include being derived from very few assumptions, and a robustness to experimental error. Finding the best way to implement the argument experimentally is an open problem, however, and involves cleverly choosing sets of states and measurements. I show that techniques from convex optimisation theory can be leveraged to numerically search for these sets, which then form a recipe for experiments that allow for the strongest statements about the ontology of the wavefunction to be made. The optimisation approach presented is versatile, efficient and can take account of the finite errors present in any real experiment. I find significantly improved low-cardinality sets which are guaranteed partially optimal for a BCLM test in low Hilbert space dimension. I further show that mixed states can be more optimal than pure states.

  15. Characterization of Scintillating X-ray Optical Fiber Sensors

    PubMed Central

    Sporea, Dan; Mihai, Laura; Vâţă, Ion; McCarthy, Denis; O'Keeffe, Sinead; Lewis, Elfed

    2014-01-01

    The paper presents a set of tests carried out in order to evaluate the design characteristics and the operating performance of a set of six X-ray extrinsic optical fiber sensors. The extrinsic sensor we developed is intended to be used as a low-energy X-ray detector for monitoring radiation levels in radiotherapy, industrial applications and personnel dosimetry. The reproducibility of the manufacturing process and the characteristics of the sensors were assessed. The sensors' dynamic range, linearity, sensitivity, and reproducibility are evaluated through radioluminescence measurements, X-ray fluorescence and X-ray imaging investigations. Their response to the operating conditions of the excitation source was estimated. The effect of the sensor design and implementation on the collecting efficiency of the radioluminescence signal was measured. The study indicated that the sensors are efficient only in the first 5 mm of the tip, and that a reflective coating can improve their response. Additional tests were done to investigate the concentricity of the sensor tip against the core of the optical fiber guiding the optical signal. The influence of the active material concentration on the sensor response to X-rays was studied. The tests were carried out by measuring the radioluminescence signal with an optical fiber spectrometer and with a Multi-Pixel Photon Counter. PMID:24556676

  16. An ``Openable,'' High-Strength Gradient Set for Orthopedic MRI

    NASA Astrophysics Data System (ADS)

    Crozier, Stuart; Roffmann, Wolfgang U.; Luescher, Kurt; Snape-Jenkinson, Christopher; Forbes, Lawrence K.; Doddrell, David M.

    1999-07-01

    A novel three-axis gradient set and RF resonator for orthopedic MRI has been designed and constructed. The set is openable and may be wrapped around injured joints. The design methodology used was the minimization of magnetic field spherical harmonics by simulated annealing. Splitting of the longitudinal coil presents the major design challenge to a fully openable gradient set and in order to efficiently design such coils, we have developed a new fast algorithm for determining the magnetic field spherical harmonics generated by an arc of multiturn wire. The algorithm allows a realistic impression of the effect of split longitudinal designs. A prototype set was constructed based on the new designs and tested in a 2-T clinical research system. The set generated 12 mT/m/A with a linear region of 12 cm and a switching time of 100 μs, conforming closely with theoretical predictions. Preliminary images from the set are presented.
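
    The design method named above, minimizing a spherical-harmonic-based cost by simulated annealing, can be illustrated with a generic annealing loop over a toy objective; the cooling schedule, step size, and cost function below are assumptions and not the actual coil-design objective.

      # Generic simulated annealing sketch of the kind used to minimize unwanted
      # field harmonics; the objective below is a toy surrogate, not a coil model.
      import math
      import random

      random.seed(0)

      def cost(x):
          """Toy surrogate for a sum of squared unwanted harmonic amplitudes."""
          return sum((xi - 0.3 * i) ** 2 for i, xi in enumerate(x))

      x = [random.uniform(-1, 1) for _ in range(6)]     # e.g. candidate wire positions
      best, best_cost = list(x), cost(x)
      T = 1.0
      for step in range(20000):
          T = 0.9995 * T                                # geometric cooling schedule
          trial = [xi + random.gauss(0.0, 0.05) for xi in x]
          delta = cost(trial) - cost(x)
          if delta < 0 or random.random() < math.exp(-delta / T):
              x = trial
              if cost(x) < best_cost:
                  best, best_cost = list(x), cost(x)

      print("best cost found: %.4f" % best_cost)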

  17. Descent Advisor Preliminary Field Test

    NASA Technical Reports Server (NTRS)

    Green, Steven M.; Vivona, Robert A.; Sanford, Beverly

    1995-01-01

    A field test of the Descent Advisor (DA) automation tool was conducted at the Denver Air Route Traffic Control Center in September 1994. DA is being developed to assist Center controllers in the efficient management and control of arrival traffic. DA generates advisories, based on trajectory predictions, to achieve accurate meter-fix arrival times in a fuel efficient manner while assisting the controller with the prediction and resolution of potential conflicts. The test objectives were: (1) to evaluate the accuracy of DA trajectory predictions for conventional and flight-management system equipped jet transports, (2) to identify significant sources of trajectory prediction error, and (3) to investigate procedural and training issues (both air and ground) associated with DA operations. Various commercial aircraft (97 flights total) and a Boeing 737-100 research aircraft participated in the test. Preliminary results from the primary test set of 24 commercial flights indicate a mean DA arrival time prediction error of 2.4 seconds late with a standard deviation of 13.1 seconds. This paper describes the field test and presents preliminary results for the commercial flights.

  18. Accurate, precise, and efficient theoretical methods to calculate anion-π interaction energies in model structures.

    PubMed

    Mezei, Pál D; Csonka, Gábor I; Ruzsinszky, Adrienn; Sun, Jianwei

    2015-01-13

    A correct description of the anion-π interaction is essential for the design of selective anion receptors and channels and important for advances in the field of supramolecular chemistry. However, it is challenging to do accurate, precise, and efficient calculations of this interaction, which are lacking in the literature. In this article, by testing sets of 20 binary anion-π complexes of fluoride, chloride, bromide, nitrate, or carbonate ions with hexafluorobenzene, 1,3,5-trifluorobenzene, 2,4,6-trifluoro-1,3,5-triazine, or 1,3,5-triazine and 30 ternary π-anion-π' sandwich complexes composed from the same monomers, we suggest domain-based local-pair natural orbital coupled cluster energies extrapolated to the complete basis-set limit as reference values. We give a detailed explanation of the origin of anion-π interactions, using the permanent quadrupole moments, static dipole polarizabilities, and electrostatic potential maps. We use symmetry-adapted perturbation theory (SAPT) to calculate the components of the anion-π interaction energies. We examine the performance of the direct random phase approximation (dRPA), the second-order screened exchange (SOSEX), local-pair natural-orbital (LPNO) coupled electron pair approximation (CEPA), and several dispersion-corrected density functionals (including generalized gradient approximation (GGA), meta-GGA, and double hybrid density functional). The LPNO-CEPA/1 results show the best agreement with the reference results. The dRPA method is only slightly less accurate and precise than the LPNO-CEPA/1, but it is considerably more efficient (6-17 times faster) for the binary complexes studied in this paper. For 30 ternary π-anion-π' sandwich complexes, we give dRPA interaction energies as reference values. The double hybrid functionals are much more efficient but less accurate and precise than dRPA. The dispersion-corrected double hybrid PWPB95-D3(BJ) and B2PLYP-D3(BJ) functionals perform better than the GGA and meta-GGA functionals for the present test set.

  19. Challenging local realism with human choices.

    PubMed

    2018-05-01

    A Bell test is a randomized trial that compares experimental observations against the philosophical worldview of local realism [1], in which the properties of the physical world are independent of our observation of them and no signal travels faster than light. A Bell test requires spatially distributed entanglement, fast and high-efficiency detection and unpredictable measurement settings [2,3]. Although technology can satisfy the first two of these requirements [4-7], the use of physical devices to choose settings in a Bell test involves making assumptions about the physics that one aims to test. Bell himself noted this weakness in using physical setting choices and argued that human 'free will' could be used rigorously to ensure unpredictability in Bell tests [8]. Here we report a set of local-realism tests using human choices, which avoids assumptions about predictability in physics. We recruited about 100,000 human participants to play an online video game that incentivizes fast, sustained input of unpredictable selections and illustrates Bell-test methodology [9]. The participants generated 97,347,490 binary choices, which were directed via a scalable web platform to 12 laboratories on five continents, where 13 experiments tested local realism using photons [5,6], single atoms [7], atomic ensembles [10] and superconducting devices [11]. Over a 12-hour period on 30 November 2016, participants worldwide provided a sustained data flow of over 1,000 bits per second to the experiments, which used different human-generated data to choose each measurement setting. The observed correlations strongly contradict local realism and other realistic positions in bipartite and tripartite [12] scenarios. Project outcomes include closing the 'freedom-of-choice loophole' (the possibility that the setting choices are influenced by 'hidden variables' to correlate with the particle properties [13]), the utilization of video-game methods [14] for rapid collection of human-generated randomness, and the use of networking techniques for global participation in experimental science.

  20. High-Efficiency Hall Thruster Discharge Power Converter

    NASA Technical Reports Server (NTRS)

    Jaquish, Thomas

    2015-01-01

    Busek Company, Inc., is designing, building, and testing a new printed circuit board converter. The new converter consists of two series or parallel boards (slices) intended to power a high-voltage Hall accelerator (HiVHAC) thruster or other similarly sized electric propulsion devices. The converter accepts 80- to 160-V input and generates 200- to 700-V isolated output while delivering continually adjustable 300-W to 3.5-kW power. Busek built and demonstrated one board that achieved nearly 94 percent efficiency the first time it was turned on, with projected efficiency exceeding 97 percent following timing software optimization. The board has a projected specific mass of 1.2 kg/kW, achieved through high-frequency switching. In Phase II, Busek optimized to exceed 97 percent efficiency and built a second prototype in a form factor more appropriate for flight. This converter then was integrated with a set of upgraded existing boards for powering magnets and the cathode. The program culminated with integrating the entire power processing unit and testing it on a Busek thruster and on NASA's HiVHAC thruster.

  1. Modelling submerged coastal environments: Remote sensing technologies, techniques, and comparative analysis

    NASA Astrophysics Data System (ADS)

    Dillon, Chris

    Building on remote sensing and GIS littoral zone characterization methodologies of the past decade, a series of loosely coupled models was developed to test, compare and synthesize multi-beam SONAR (MBES), Airborne LiDAR Bathymetry (ALB), and satellite-based optical data sets in the Gulf of St. Lawrence, Canada, eco-region. Bathymetry and relative intensity metrics for the MBES and ALB data sets were run through a quantitative and qualitative comparison, which included outputs from the Benthic Terrain Modeller (BTM) tool. Substrate classification based on the relative intensities of the respective data sets and textural indices generated using grey level co-occurrence matrices (GLCM) was investigated. A spatial modelling framework built in ArcGIS(TM) for the derivation of bathymetric data sets from optical satellite imagery was also tested for proof of concept and validation. Where possible, efficiency and semi-automation for repeatable testing were achieved using ArcGIS(TM) ModelBuilder. The findings from this study could assist future decision makers in the field of coastal management and hydrographic studies. Keywords: Seafloor terrain characterization, Benthic Terrain Modeller (BTM), Multi-beam SONAR, Airborne LiDAR Bathymetry, Satellite Derived Bathymetry, ArcGIS(TM) ModelBuilder, Textural analysis, Substrate classification.

  2. ICCD: interactive continuous collision detection between deformable models using connectivity-based culling.

    PubMed

    Tang, Min; Curtis, Sean; Yoon, Sung-Eui; Manocha, Dinesh

    2009-01-01

    We present an interactive algorithm for continuous collision detection between deformable models. We introduce multiple techniques to improve the culling efficiency and the overall performance of continuous collision detection. First, we present a novel formulation for continuous normal cones and use these normal cones to efficiently cull large regions of the mesh as part of self-collision tests. Second, we introduce the concept of "procedural representative triangles" to remove all redundant elementary tests between nonadjacent triangles. Finally, we exploit the mesh connectivity and introduce the concept of "orphan sets" to eliminate redundant elementary tests between adjacent triangle primitives. In practice, we can reduce the number of elementary tests by two orders of magnitude. These culling techniques have been combined with bounding volume hierarchies and can result in one order of magnitude performance improvement as compared to prior collision detection algorithms for deformable models. We highlight the performance of our algorithm on several benchmarks, including cloth simulations, N-body simulations, and breaking objects.
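
    The culling techniques named in the abstract (continuous normal cones, procedural representative triangles, orphan sets) are specific to the paper, but the principle they serve is generic: cheap conservative tests prune candidate pairs before any exact elementary tests are run. The sketch below illustrates only that broad-phase idea with axis-aligned bounding boxes; the names and data are illustrative and this is not the ICCD implementation:

      # Cheap bounding-volume overlap tests prune triangle pairs before the
      # expensive exact (elementary) tests; a minimal broad-phase sketch.
      import itertools

      def aabb(tri):
          xs, ys, zs = zip(*tri)
          return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

      def aabbs_overlap(a, b):
          (alo, ahi), (blo, bhi) = a, b
          return all(alo[i] <= bhi[i] and blo[i] <= ahi[i] for i in range(3))

      def candidate_pairs(triangles):
          boxes = [aabb(t) for t in triangles]
          # Only pairs whose boxes overlap reach the exact test stage.
          return [(i, j) for i, j in itertools.combinations(range(len(triangles)), 2)
                  if aabbs_overlap(boxes[i], boxes[j])]

      tris = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),
              ((5, 5, 5), (6, 5, 5), (5, 6, 5)),
              ((0.5, 0.2, 0.0), (1.5, 0.2, 0.0), (0.5, 1.2, 0.0))]
      print(candidate_pairs(tris))   # -> [(0, 2)]; two of three pairs culled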

  3. A technology mapping based on graph of excitations and outputs for finite state machines

    NASA Astrophysics Data System (ADS)

    Kania, Dariusz; Kulisz, Józef

    2017-11-01

    A new, efficient technology mapping method for FSMs, dedicated to PAL-based PLDs, is proposed. The essence of the method is to search for the minimal set of PAL-based logic blocks that covers a set of multiple-output implicants describing the transition and output functions of an FSM. The method is based on a new graph concept: the Graph of Excitations and Outputs. The proposed algorithm was tested on FSM benchmarks, and the results were compared with classical technology mapping of FSMs.
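
    The record frames the mapping problem as covering a set of multiple-output implicants with a minimal set of PAL-based logic blocks. As a rough illustration of that covering formulation only, and not of the paper's graph-based algorithm, the classical greedy set-cover heuristic could look as follows (block names and implicant sets are invented for the example):

      # Greedy set-cover sketch: repeatedly pick the block covering the most
      # still-uncovered implicants. Illustrative only; not the paper's method.
      def greedy_cover(universe, candidates):
          """candidates: dict name -> set of implicants the block would cover."""
          uncovered, chosen = set(universe), []
          while uncovered:
              name, covered = max(candidates.items(),
                                  key=lambda kv: len(kv[1] & uncovered))
              if not covered & uncovered:
                  raise ValueError("remaining implicants cannot be covered")
              chosen.append(name)
              uncovered -= covered
          return chosen

      implicants = {"i1", "i2", "i3", "i4", "i5"}
      blocks = {"palA": {"i1", "i2", "i3"}, "palB": {"i3", "i4"}, "palC": {"i4", "i5"}}
      print(greedy_cover(implicants, blocks))   # -> ['palA', 'palC']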

  4. Use of an auxiliary basis set to describe the polarization in the fragment molecular orbital method

    NASA Astrophysics Data System (ADS)

    Fedorov, Dmitri G.; Kitaura, Kazuo

    2014-03-01

    We developed a dual basis approach within the fragment molecular orbital formalism enabling efficient and accurate use of large basis sets. The method was tested on water clusters and polypeptides and applied to perform geometry optimization of chignolin (PDB: 1UAO) in solution at the level of DFT/6-31++G**, obtaining a structure in agreement with experiment (RMSD of 0.4526 Å). The polarization in polypeptides is discussed with a comparison of the α-helix and β-strand.

  5. An improved method to detect correct protein folds using partial clustering.

    PubMed

    Zhou, Jianjun; Wishart, David S

    2013-01-16

    Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient "partial" clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either Cα RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance.
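
    As a rough, hypothetical sketch of the "partial" clustering idea described above (not the HS-Forest code itself), one can sample a handful of seed decoys, count neighbours within a similarity threshold for each seed only, and return the densest seed as the selected model, leaving most decoys unassigned:

      # Partial clustering sketch: only a few seeds are compared against the
      # full decoy set, avoiding the all-vs-all clustering cost. Thresholds,
      # seed counts and the toy distance are illustrative assumptions.
      import random

      def partial_cluster(decoys, dist, threshold, n_seeds=10, rng=random.Random(0)):
          seeds = rng.sample(range(len(decoys)), min(n_seeds, len(decoys)))
          best_seed, best_count = None, -1
          for s in seeds:
              # Count neighbours of this seed only; other decoys stay unassigned.
              count = sum(1 for j in range(len(decoys))
                          if dist(decoys[s], decoys[j]) < threshold)
              if count > best_count:
                  best_seed, best_count = s, count
          return best_seed, best_count

      # Toy 1-D "decoys" with absolute difference standing in for RMSD:
      decoys = [0.1, 0.2, 0.15, 3.0, 3.1, 0.18, 5.0]
      print(partial_cluster(decoys, lambda a, b: abs(a - b), threshold=0.5))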

  6. An improved method to detect correct protein folds using partial clustering

    PubMed Central

    2013-01-01

    Background Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient “partial“ clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. Results We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either Cα RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. Conclusions The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance. PMID:23323835

  7. Diagnostic efficiency of the AUDIT-C in U.S. veterans with military service since September 11, 2001.

    PubMed

    Crawford, Eric F; Fulton, Jessica J; Swinkels, Cindy M; Beckham, Jean C; Calhoun, Patrick S

    2013-09-01

    Alcohol screening with the 3-item alcohol use disorders identification test (AUDIT-C) has been implemented throughout the U.S. Veterans Health Administration. Validation of the AUDIT-C, however, has been conducted with samples of primarily older veterans. This study examined the diagnostic efficiency of the AUDIT-C in a younger cohort of veterans who served during Operation Enduring Freedom and/or Operation Iraqi Freedom (OEF/OIF). Veteran participants (N=1775) completed the alcohol use disorders identification test (AUDIT) and underwent the structured clinical interview for DSM-IV-TR for Axis I disorders (SCID) in research settings within four VA medical centers. Areas under receiver operating characteristic curves (AUCs) measured the efficiency of the full AUDIT and AUDIT-C in identifying SCID-based diagnoses of past-year alcohol abuse or dependence. Both measures performed well in detecting alcohol use disorders. In the full sample, the AUDIT had a better AUC (.908; .881-.935) than the AUDIT-C (.859; .826-.893; p<.0001). It is notable that this same result was found among men but not women, perhaps due to reduced power. Diagnostic efficiency statistics for the AUDIT and AUDIT-C were consistent with results from older veteran samples. The diagnostic efficiency of both measures did not vary with race or age. Both the AUDIT and AUDIT-C appear to be valid instruments for identifying alcohol abuse or dependence among the most recent cohort of U.S. veterans with service during OEF/OIF within research settings. Published by Elsevier Ireland Ltd.

  8. Item Difficulty in the Evaluation of Computer-Based Instruction: An Example from Neuroanatomy

    ERIC Educational Resources Information Center

    Chariker, Julia H.; Naaz, Farah; Pani, John R.

    2012-01-01

    This article reports large item effects in a study of computer-based learning of neuroanatomy. Outcome measures of the efficiency of learning, transfer of learning, and generalization of knowledge diverged by a wide margin across test items, with certain sets of items emerging as particularly difficult to master. In addition, the outcomes of…

  9. The Complete Toolkit for Building High-Performance Work Teams.

    ERIC Educational Resources Information Center

    Golden, Nancy; Gall, Joyce P.

    This workbook is designed for leaders and members of work teams in educational and social-service systems. It presents in a systematic fashion a set of tested facilitation tools that will allow teams to work more efficiently and harmoniously, enabling them to achieve their goals, to deal directly with both personal and work-related issues that…

  10. Using HFire for spatial modeling of fire in shrublands

    Treesearch

    Seth H. Peterson; Marco E. Morais; Jean M. Carlson; Philip E. Dennison; Dar A. Roberts; Max A. Moritz; David R. Weise

    2009-01-01

    An efficient raster fire-spread model named HFire is introduced. HFire can simulate single-fire events or long-term fire regimes, using the same fire-spread algorithm. This paper describes the HFire algorithm, benchmarks the model using a standard set of tests developed for FARSITE, and compares historical and predicted fire spread perimeters for three southern...

  11. Utilizing Response Time Distributions for Item Selection in CAT

    ERIC Educational Resources Information Center

    Fan, Zhewen; Wang, Chun; Chang, Hua-Hua; Douglas, Jeffrey

    2012-01-01

    Traditional methods for item selection in computerized adaptive testing only focus on item information without taking into consideration the time required to answer an item. As a result, some examinees may receive a set of items that take a very long time to finish, and information is not accrued as efficiently as possible. The authors propose two…

  12. Automatic registration of terrestrial point clouds based on panoramic reflectance images and efficient BaySAC

    NASA Astrophysics Data System (ADS)

    Kang, Zhizhong

    2013-10-01

    This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds utilizing a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from 3D point clouds and then uses the SIFT algorithm to extract keypoints and identify corresponding image points. The 3D corresponding points, from which transformation parameters between point clouds are computed, are acquired by mapping the 2D ones onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as the hypothesis set and updates the inlier probability of each data point using a simplified Bayes' rule, for the purpose of improving computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC requires fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm can achieve high registration accuracy on all experimental datasets.
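
    As a hedged illustration of the BaySAC idea on a toy 2D line-fitting problem (not the registration pipeline of the paper), the hypothesis set is chosen deterministically as the points with the highest current inlier probabilities, and points belonging to a failed hypothesis are down-weighted in a simplified Bayes-style update; the priors, thresholds and update factors below are assumptions:

      # Toy BaySAC-style line fitting: deterministic hypothesis selection by
      # inlier probability, with down-weighting of failed hypothesis members.
      import random

      def fit_line(p, q):
          # Line through two points, y = a*x + b (distinct x assumed).
          (x1, y1), (x2, y2) = p, q
          a = (y2 - y1) / (x2 - x1)
          return a, y1 - a * x1

      def baysac_line(points, iters=20, tol=0.1):
          # Priors would normally come from problem-specific cues (the paper
          # uses distance invariance); here a flat prior with a tiny jitter.
          rng = random.Random(1)
          prob = [0.5 + 0.01 * rng.random() for _ in points]
          best, best_inliers = None, -1
          min_inliers = len(points) // 2
          for _ in range(iters):
              # Hypothesis set = the two points with the highest probability.
              i, j = sorted(range(len(points)), key=lambda k: -prob[k])[:2]
              a, b = fit_line(points[i], points[j])
              inliers = sum(abs(y - (a * x + b)) < tol for x, y in points)
              if inliers > best_inliers:
                  best, best_inliers = (a, b), inliers
              if inliers < min_inliers:
                  # Failed hypothesis: its members are now less likely inliers.
                  prob[i] *= 0.5
                  prob[j] *= 0.5
          return best, best_inliers

      pts = [(x, 2 * x + 1) for x in range(8)] + [(2.5, 9.0), (5.5, -3.0)]
      print(baysac_line(pts))   # recovers roughly a=2, b=1 with 8 inliers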

  13. Improving the efficiency of an Er:YAG laser on enamel and dentin.

    PubMed

    Rizcalla, Nicolas; Bader, Carl; Bortolotto, Tissiana; Krejci, Ivo

    2012-02-01

    To evaluate the influence of air pressure, water flow rate, and pulse frequency on the removal speed of enamel and dentin as well as on their surface morphology. Twenty-four bovine incisors were horizontally cut in slices. Each sample was mounted on an experimental assembly, allowing precise orientation. Eighteen cavities were prepared, nine in enamel and nine in dentin. Specific parameters for frequency, water flow rate, and air pressure were applied for each experimental group. Three groups were randomly formed according to the air pressure settings. Cavity depth was measured using a digital micrometer gauge, and surface morphology was checked by means of scanning electron microscopy. Data was analyzed with ANOVA and Duncan post hoc test. Irradiation at 25 Hz for enamel and 30 Hz for dentin provided the best ablation rates within this study, but efficiency decreased if the frequency was raised further. Greater tissue ablation was found with water flow rate set to low and dropped with higher values. Air pressure was found to have an interaction with the other settings, since ablation rates varied with different air pressure values. Fine-tuning of all parameters to get a good ablation rate with minimum surface damage seems to be key in achieving optimal efficiency for cavity preparation with an Er:YAG laser.

  14. Efficient organ localization using multi-label convolutional neural networks in thorax-abdomen CT scans

    NASA Astrophysics Data System (ADS)

    Efrain Humpire-Mamani, Gabriel; Arindra Adiyoso Setio, Arnaud; van Ginneken, Bram; Jacobs, Colin

    2018-04-01

    Automatic localization of organs and other structures in medical images is an important preprocessing step that can improve and speed up other algorithms such as organ segmentation, lesion detection, and registration. This work presents an efficient method for simultaneous localization of multiple structures in 3D thorax-abdomen CT scans. Our approach predicts the location of multiple structures using a single multi-label convolutional neural network for each orthogonal view. Each network takes extra slices around the current slice as input to provide extra context. A sigmoid layer is used to perform multi-label classification. The output of the three networks is subsequently combined to compute a 3D bounding box for each structure. We used our approach to locate 11 structures of interest. The neural network was trained and evaluated on a large set of 1884 thorax-abdomen CT scans from patients undergoing oncological workup. Reference bounding boxes were annotated by human observers. The performance of our method was evaluated by computing the wall distance to the reference bounding boxes. The bounding boxes annotated by the first human observer were used as the reference standard for the test set. Using the best configuration, we obtained an average wall distance of 3.20 ± 7.33 mm in the test set. The second human observer achieved 1.23 ± 3.39 mm. For all structures, the results were better than those reported in previously published studies. In conclusion, we proposed an efficient method for the accurate localization of multiple organs. Our method uses multiple slices as input to provide more context around the slice under analysis, and we have shown that this improves performance. This method can easily be adapted to handle more organs.
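
    The networks themselves are not reproduced here, but the output-combination step can be sketched: given per-slice presence scores for one structure from the axial, coronal and sagittal views, thresholding each score vector and taking the extent of positive slices on each axis yields a 3D bounding box. The threshold and toy scores below are illustrative assumptions:

      # Combine per-slice sigmoid scores from three orthogonal views into a
      # 3D bounding box for one structure. Threshold and data are toy values.
      import numpy as np

      def scores_to_box(axial, coronal, sagittal, thr=0.5):
          def extent(scores):
              idx = np.flatnonzero(np.asarray(scores) >= thr)
              return (int(idx.min()), int(idx.max())) if idx.size else None
          return {"z": extent(axial), "y": extent(coronal), "x": extent(sagittal)}

      axial    = [0.1, 0.2, 0.8, 0.9, 0.7, 0.1]   # toy sigmoid outputs per slice
      coronal  = [0.0, 0.6, 0.9, 0.4, 0.0, 0.0]
      sagittal = [0.3, 0.7, 0.8, 0.9, 0.2, 0.1]
      print(scores_to_box(axial, coronal, sagittal))
      # {'z': (2, 4), 'y': (1, 2), 'x': (1, 3)}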

  15. Error response test system and method using test mask variable

    NASA Technical Reports Server (NTRS)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.
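
    A minimal sketch of the test-mask idea described above, with hypothetical fault bits and function names: the mask is zero during normal operation, and setting a bit from the test harness forces the corresponding error path in the application under test so its error response can be observed:

      # Hypothetical test-mask error injection; names and bits are illustrative.
      TEST_MASK = 0                      # normal operation: no injected errors

      ERR_SENSOR_TIMEOUT = 1 << 0        # hypothetical fault bits
      ERR_CHECKSUM_BAD   = 1 << 1

      def read_sensor():
          if TEST_MASK & ERR_SENSOR_TIMEOUT:
              raise TimeoutError("injected sensor timeout")
          return 42.0

      def application_step():
          try:
              return ("ok", read_sensor())
          except TimeoutError:
              return ("fallback", None)   # the error response being verified

      print(application_step())           # ('ok', 42.0)
      TEST_MASK = ERR_SENSOR_TIMEOUT      # test harness injects the fault
      print(application_step())           # ('fallback', None)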

  16. Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams

    NASA Astrophysics Data System (ADS)

    Zhong, Xu; Kealy, Allison; Duckham, Matt

    2016-05-01

    Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that these two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach an average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
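
    For reference, the O(n³) baseline that the incremental and recursive strategies improve on can be sketched as a plain ordinary-Kriging solve; the exponential covariance model and its parameters below are assumptions for illustration, not the paper's configuration. The streaming strategies essentially reuse work on this linear system when source locations repeat between iterations:

      # Plain ordinary Kriging (covariance form) with a Lagrange multiplier;
      # np.linalg.solve is the O(n^3) step the paper's strategies amortize.
      import numpy as np

      def ordinary_kriging(xy, z, x0, sill=1.0, rng_par=2.0):
          n = len(xy)
          d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
          C = sill * np.exp(-d / rng_par)          # source-source covariances
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = C
          A[n, n] = 0.0
          d0 = np.linalg.norm(xy - x0, axis=-1)
          b = np.append(sill * np.exp(-d0 / rng_par), 1.0)
          w = np.linalg.solve(A, b)                # the O(n^3) solve
          return float(w[:n] @ z)

      xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
      z = np.array([1.0, 2.0, 2.0, 3.0])
      print(ordinary_kriging(xy, z, np.array([0.5, 0.5])))   # ~2.0 by symmetry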

  17. Control of diesel soot and NOx emissions with a particulate trap and EGR.

    PubMed

    Liu, Rui-xiang; Gao, Xi-yan; Yang, De-sheng; Xu, Xiao-guang

    2005-01-01

    Exhaust gas recirculation (EGR) coupled with a high-collection-efficiency particulate trap was studied as a means to simultaneously control smoke and NOx emissions from diesel engines. The ceramic trap developed previously provided a soot cleaning efficiency of 99%, a regeneration efficiency of 80%, and a regeneration success rate of 97%, which makes the use of EGR in diesel engines possible. With EGR present, the opening of the trap's regeneration control valve was re-optimized to compensate for the decrease in exhaust oxygen concentration caused by EGR. The results indicated that the cleaning efficiency and regeneration performance of the trap were maintained at the same level, except that the back pressure increased faster. A new EGR system was developed, based on a wide-range oxygen (UEGO) sensor. Experiments were carried out under steady-state conditions while maintaining the engine speed at 1600 r/min and setting the engine load at 0%, 25%, 50%, 75% and 100%, respectively. Throughout each test the EGR rate was set to nine different levels and data were taken with the gas analyzer and UEGO sensor. Then, EGR rate and engine load maps, which showed the trends of NOx, CO and HC emissions from the diesel engine, were constructed from the measured data. Using the maps, an EGR regulation was established: the relationship between the optimal amount of EGR flow and the equivalence ratio σ, where σ = 14.5/AFR.

  18. Pollinator Interactions with Yellow Starthistle (Centaurea solstitialis) across Urban, Agricultural, and Natural Landscapes

    PubMed Central

    Leong, Misha; Kremen, Claire; Roderick, George K.

    2014-01-01

    Pollinator-plant relationships are found to be particularly vulnerable to land use change. Yet despite extensive research in agricultural and natural systems, less attention has focused on these interactions in neighboring urban areas and its impact on pollination services. We investigated pollinator-plant interactions in a peri-urban landscape on the outskirts of the San Francisco Bay Area, California, where urban, agricultural, and natural land use types interface. We made standardized observations of floral visitation and measured seed set of yellow starthistle (Centaurea solstitialis), a common grassland invasive, to test the hypotheses that increasing urbanization decreases 1) rates of bee visitation, 2) viable seed set, and 3) the efficiency of pollination (relationship between bee visitation and seed set). We unexpectedly found that bee visitation was highest in urban and agricultural land use contexts, but in contrast, seed set rates in these human-altered landscapes were lower than in natural sites. An explanation for the discrepancy between floral visitation and seed set is that higher plant diversity in urban and agricultural areas, as a result of more introduced species, decreases pollinator efficiency. If these patterns are consistent across other plant species, the novel plant communities created in these managed landscapes and the generalist bee species that are favored by human-altered environments will reduce pollination services. PMID:24466050

  19. The charging security study of electric vehicle charging spot based on automatic testing platform

    NASA Astrophysics Data System (ADS)

    Li, Yulan; Yang, Zhangli; Zhu, Bin; Ran, Shengyi

    2018-03-01

    With the increasing number of charging spots, testing of charging security and interoperability becomes ever more urgent and important. In this paper, an interface simulator for AC charging tests is designed, and an automatic testing platform for electric vehicle charging spots is set up and used to test and analyze abnormal states during the charging process. On the platform, the charging security and interoperability of AC charging spots and IC-CPD can be checked efficiently, and the test report can be generated automatically with no manual reading errors. The test results show that the main reason a charging spot fails is that the power supply is not cut off within the prescribed time when a charging anomaly occurs.

  20. GEE-based SNP set association test for continuous and discrete traits in family-based association studies.

    PubMed

    Wang, Xuefeng; Lee, Seunggeun; Zhu, Xiaofeng; Redline, Susan; Lin, Xihong

    2013-12-01

    Family-based genetic association studies of related individuals provide opportunities to detect genetic variants that complement studies of unrelated individuals. Most statistical methods for family association studies for common variants are single marker based, testing one SNP at a time. In this paper, we consider testing the effect of an SNP set, e.g., SNPs in a gene, in family studies, for both continuous and discrete traits. Specifically, we propose a generalized estimating equation (GEE) based kernel association test, a variance component based testing method, to test for the association between a phenotype and multiple variants in an SNP set jointly using family samples. The proposed approach allows for both continuous and discrete traits, where the correlation among family members is taken into account through the use of an empirical covariance estimator. We derive the theoretical distribution of the proposed statistic under the null and develop analytical methods to calculate the P-values. We also propose an efficient resampling method for correcting for small sample size bias in family studies. The proposed method allows for easily incorporating covariates and SNP-SNP interactions. Simulation studies show that the proposed method properly controls for type I error rates under both random and ascertained sampling schemes in family studies. We demonstrate through simulation studies that our approach has superior performance for association mapping compared to the single marker based minimum P-value GEE test for an SNP-set effect over a range of scenarios. We illustrate the application of the proposed method using data from the Cleveland Family GWAS Study. © 2013 WILEY PERIODICALS, INC.
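
    A heavily simplified illustration of a variance-component kernel statistic of the form Q = (y − μ̂)ᵀK(y − μ̂), with a weighted linear kernel K = GWGᵀ, is sketched below. It deliberately ignores the family correlation structure, the GEE working covariance and the analytic null distribution that are the paper's contribution, using unrelated samples and a permutation p-value instead; all names and parameters are illustrative assumptions:

      # Toy variance-component kernel statistic with a permutation p-value.
      import numpy as np

      def kernel_score_stat(y, G, X=None, weights=None):
          y = np.asarray(y, float)
          mu = np.full_like(y, y.mean()) if X is None \
              else X @ np.linalg.lstsq(X, y, rcond=None)[0]
          r = y - mu
          w = np.ones(G.shape[1]) if weights is None else np.asarray(weights, float)
          K = (G * w) @ G.T                      # weighted linear kernel G W G'
          return float(r @ K @ r)

      def permutation_pvalue(y, G, n_perm=999, seed=0):
          rng = np.random.default_rng(seed)
          q_obs = kernel_score_stat(y, G)
          q_null = [kernel_score_stat(rng.permutation(y), G) for _ in range(n_perm)]
          return (1 + sum(q >= q_obs for q in q_null)) / (n_perm + 1)

      rng = np.random.default_rng(1)
      G = rng.integers(0, 3, size=(200, 10)).astype(float)   # toy genotypes (0/1/2)
      y = 0.4 * G[:, 0] + rng.normal(size=200)                # one causal SNP
      print(permutation_pvalue(y, G))                         # small p expected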

  1. A comparison of effectiveness of hepatitis B screening and linkage to care among foreign-born populations in clinical and nonclinical settings.

    PubMed

    Chandrasekar, Edwin; Kaur, Ravneet; Song, Sharon; Kim, Karen E

    2015-01-01

    Hepatitis B (HBV) is an urgent, unmet public health issue that affects Asian Americans disproportionately. Of the estimated 1.2 million living with chronic hepatitis B in the USA, more than 50% are of Asian ethnicity, despite the fact that Asian Americans constitute less than 6% of the total US population. The Centers for Disease Control and Prevention recommends HBV screening of persons who are at high risk for the disease. Yet, large numbers of Asian Americans have not been diagnosed or tested, in large part because of perceived cultural and linguistic barriers. Primary care physicians are at the front line of the US health care system, and are in a position to identify individuals and families at risk. Clinical settings integrated into Asian American communities, where physicians are on staff and wellness care is emphasized, can provide testing for HBV. In this study, the Asian Health Coalition and its community partners conducted HBV screenings and follow-up linkage to care in both clinical and nonclinical settings. The nonclinical settings included health fair events organized by churches and social services agencies, and were able to reach large numbers of individuals. Twice as many Asian Americans were screened in nonclinical settings as in health clinics. Chi-square and independent samples t-tests showed that participants from the two settings did not differ in test positivity, sex, insurance status, years of residence in the USA, or education. Additionally, the same proportion of individuals found to be infected in the two groups underwent successful linkage to care. Nonclinical settings were as effective as clinical settings in screening for HBV, as well as in making treatment options available to those who tested positive; demographic factors did not confound the similarities. Further research is needed to evaluate if linkage to care can be accomplished equally efficiently on a larger scale.

  2. Developing interpretable models with optimized set reduction for identifying high risk software components

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.

    1993-01-01

    Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high risk system components. We present experimental results obtained by classifying Ada components into two classes: is or is not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error making process.

  3. Plant X-tender: An extension of the AssemblX system for the assembly and expression of multigene constructs in plants.

    PubMed

    Lukan, Tjaša; Machens, Fabian; Coll, Anna; Baebler, Špela; Messerschmidt, Katrin; Gruden, Kristina

    2018-01-01

    Cloning multiple DNA fragments for delivery of several genes of interest into the plant genome is one of the main technological challenges in plant synthetic biology. Despite several modular assembly methods developed in recent years, the plant biotechnology community has not widely adopted them yet, probably due to the lack of appropriate vectors and software tools. Here we present Plant X-tender, an extension of the highly efficient, scar-free and sequence-independent multigene assembly strategy AssemblX, based on overlap-dependent cloning methods and rare-cutting restriction enzymes. Plant X-tender consists of a set of plant expression vectors and the protocols for most efficient cloning into the novel vector set needed for plant expression and thus introduces advantages of AssemblX into plant synthetic biology. The novel vector set covers different backbones and selection markers to allow full design flexibility. We have included ccdB counterselection, thereby allowing the transfer of multigene constructs into the novel vector set in a straightforward and highly efficient way. Vectors are available as empty backbones and are fully flexible regarding the orientation of expression cassettes and addition of linkers between them, if required. We optimised the assembly and subcloning protocol by testing different scar-less assembly approaches: the noncommercial SLiCE and TAR methods and the commercial Gibson assembly and NEBuilder HiFi DNA assembly kits. Plant X-tender was applicable even in combination with low-efficiency homemade chemically competent or electrocompetent Escherichia coli. We have further validated the developed procedure for plant protein expression by cloning two cassettes into the newly developed vectors and subsequently transferring them to Nicotiana benthamiana in a transient expression setup. Thereby we show that multigene constructs can be delivered into plant cells in a streamlined and highly efficient way. Our results will support faster introduction of synthetic biology into plant science.

  4. Plant X-tender: An extension of the AssemblX system for the assembly and expression of multigene constructs in plants

    PubMed Central

    Machens, Fabian; Coll, Anna; Baebler, Špela; Messerschmidt, Katrin; Gruden, Kristina

    2018-01-01

    Cloning multiple DNA fragments for delivery of several genes of interest into the plant genome is one of the main technological challenges in plant synthetic biology. Despite several modular assembly methods developed in recent years, the plant biotechnology community has not widely adopted them yet, probably due to the lack of appropriate vectors and software tools. Here we present Plant X-tender, an extension of the highly efficient, scar-free and sequence-independent multigene assembly strategy AssemblX, based on overlap-dependent cloning methods and rare-cutting restriction enzymes. Plant X-tender consists of a set of plant expression vectors and the protocols for most efficient cloning into the novel vector set needed for plant expression and thus introduces advantages of AssemblX into plant synthetic biology. The novel vector set covers different backbones and selection markers to allow full design flexibility. We have included ccdB counterselection, thereby allowing the transfer of multigene constructs into the novel vector set in a straightforward and highly efficient way. Vectors are available as empty backbones and are fully flexible regarding the orientation of expression cassettes and addition of linkers between them, if required. We optimised the assembly and subcloning protocol by testing different scar-less assembly approaches: the noncommercial SLiCE and TAR methods and the commercial Gibson assembly and NEBuilder HiFi DNA assembly kits. Plant X-tender was applicable even in combination with low-efficiency homemade chemically competent or electrocompetent Escherichia coli. We have further validated the developed procedure for plant protein expression by cloning two cassettes into the newly developed vectors and subsequently transferring them to Nicotiana benthamiana in a transient expression setup. Thereby we show that multigene constructs can be delivered into plant cells in a streamlined and highly efficient way. Our results will support faster introduction of synthetic biology into plant science. PMID:29300787

  5. Fuel-Conservation Guidance System for Powered-Lift Aircraft

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; McLean, John D.

    1981-01-01

    A technique is described for the design of fuel-conservative guidance systems and is applied to a system that was flight tested on board NASA's augmentor wing jet STOL research aircraft. An important operational feature of the system is its ability to rapidly synthesize fuel-efficient trajectories for a large set of initial aircraft positions, altitudes, and headings. This feature allows the aircraft to be flown efficiently under conditions of changing winds and air traffic control vectors. Rapid synthesis of fuel-efficient trajectories is accomplished in the airborne computer by fast-time trajectory integration using a simplified dynamic performance model of the aircraft. This technique also ensures optimum flap deployment and, for powered-lift STOL aircraft, optimum transition to low-speed flight. Also included in the design is accurate prediction of touchdown time for use in four-dimensional guidance applications. Flight test results have demonstrated that the automatically synthesized trajectories produce significant fuel savings relative to manually flown conventional approaches.

  6. Environmental diversity as a surrogate for species representation.

    PubMed

    Beier, Paul; de Albuquerque, Fábio Suzart

    2015-10-01

    Because many species have not been described and most species ranges have not been mapped, conservation planners often use surrogates for conservation planning, but evidence for surrogate effectiveness is weak. Surrogates are well-mapped features such as soil types, landforms, occurrences of an easily observed taxon (discrete surrogates), and well-mapped environmental conditions (continuous surrogate). In the context of reserve selection, the idea is that a set of sites selected to span diversity in the surrogate will efficiently represent most species. Environmental diversity (ED) is a rarely used surrogate that selects sites to efficiently span multivariate ordination space. Because it selects across continuous environmental space, ED should perform better than discrete surrogates (which necessarily ignore within-bin and between-bin heterogeneity). Despite this theoretical advantage, ED appears to have performed poorly in previous tests of its ability to identify 50 × 50 km cells that represented vertebrates in Western Europe. Using an improved implementation of ED, we retested ED on Western European birds, mammals, reptiles, amphibians, and combined terrestrial vertebrates. We also tested ED on data sets for plants of Zimbabwe, birds of Spain, and birds of Arizona (United States). Sites selected using ED represented European mammals no better than randomly selected cells, but they represented species in the other 7 data sets with 20% to 84% effectiveness. This far exceeds the performance in previous tests of ED, and exceeds the performance of most discrete surrogates. We believe ED performed poorly in previous tests because those tests considered only a few candidate explanatory variables and used suboptimal forms of ED's selection algorithm. We suggest future work on ED focus on analyses at finer grain sizes more relevant to conservation decisions, explore the effect of selecting the explanatory variables most associated with species turnover, and investigate whether nonclimate abiotic variables can provide useful surrogates in an ED framework. © 2015 Society for Conservation Biology.

  7. Randomized, Blinded Pilot Testing of Nonconventional Stimulation Patterns and Shapes in Parkinson's Disease and Essential Tremor: Evidence for Further Evaluating Narrow and Biphasic Pulses

    PubMed Central

    Akbar, Umer; Raike, Robert S.; Hack, Nawaz; Hess, Christopher W.; Skinner, Jared; Martinez‐Ramirez, Daniel; DeJesus, Sol

    2016-01-01

    Objectives Evidence suggests that nonconventional programming may improve deep brain stimulation (DBS) therapy for movement disorders. The primary objective was to assess feasibility of testing the tolerability of several nonconventional settings in Parkinson's disease (PD) and essential tremor (ET) subjects in a single office visit. Secondary objectives were to explore for potential efficacy signals and to assess the energy demand on the implantable pulse‐generators (IPGs). Materials and Methods A custom firmware (FW) application was developed and acutely uploaded to the IPGs of eight PD and three ET subjects, allowing delivery of several nonconventional DBS settings, including narrow pulse widths, square biphasic pulses, and irregular pulse patterns. Standard clinical rating scales and several objective measures were used to compare motor outcomes with sham, clinically‐optimal and nonconventional settings. Blinded and randomized testing was conducted in a traditional office setting. Results Overall, the nonconventional settings were well tolerated. Under these conditions it was also possible to detect clinically‐relevant differences in DBS responses using clinical rating scales but not objective measures. Compared to the clinically‐optimal settings, some nonconventional settings appeared to offer similar benefit (e.g., narrow pulse widths) and others lesser benefit. Moreover, the results suggest that square biphasic pulses may deliver greater benefit. No unexpected IPG efficiency disadvantages were associated with delivering nonconventional settings. Conclusions It is feasible to acutely screen nonconventional DBS settings using controlled study designs in traditional office settings. Simple IPG FW upgrades may provide more DBS programming options for optimizing therapy. Potential advantages of narrow and biphasic pulses deserve follow up. PMID:27000764

  8. Randomized, Blinded Pilot Testing of Nonconventional Stimulation Patterns and Shapes in Parkinson's Disease and Essential Tremor: Evidence for Further Evaluating Narrow and Biphasic Pulses.

    PubMed

    Akbar, Umer; Raike, Robert S; Hack, Nawaz; Hess, Christopher W; Skinner, Jared; Martinez-Ramirez, Daniel; DeJesus, Sol; Okun, Michael S

    2016-06-01

    Evidence suggests that nonconventional programming may improve deep brain stimulation (DBS) therapy for movement disorders. The primary objective was to assess feasibility of testing the tolerability of several nonconventional settings in Parkinson's disease (PD) and essential tremor (ET) subjects in a single office visit. Secondary objectives were to explore for potential efficacy signals and to assess the energy demand on the implantable pulse-generators (IPGs). A custom firmware (FW) application was developed and acutely uploaded to the IPGs of eight PD and three ET subjects, allowing delivery of several nonconventional DBS settings, including narrow pulse widths, square biphasic pulses, and irregular pulse patterns. Standard clinical rating scales and several objective measures were used to compare motor outcomes with sham, clinically-optimal and nonconventional settings. Blinded and randomized testing was conducted in a traditional office setting. Overall, the nonconventional settings were well tolerated. Under these conditions it was also possible to detect clinically-relevant differences in DBS responses using clinical rating scales but not objective measures. Compared to the clinically-optimal settings, some nonconventional settings appeared to offer similar benefit (e.g., narrow pulse widths) and others lesser benefit. Moreover, the results suggest that square biphasic pulses may deliver greater benefit. No unexpected IPG efficiency disadvantages were associated with delivering nonconventional settings. It is feasible to acutely screen nonconventional DBS settings using controlled study designs in traditional office settings. Simple IPG FW upgrades may provide more DBS programming options for optimizing therapy. Potential advantages of narrow and biphasic pulses deserve follow up. © 2016 The Authors. Neuromodulation: Technology at the Neural Interface published by Wiley Periodicals, Inc. on behalf of International Neuromodulation Society.

  9. Objective comparison of 4 nonlongitudinal ultrasound modalities regarding efficiency and chatter.

    PubMed

    DeMill, David L; Zaugg, Brian E; Pettey, Jeff H; Jensen, Jason D; Jardine, Griffin J; Wong, Gilbert; Olson, Randall J

    2012-06-01

    To compare efficiency and chatter of Infiniti Ozil with and without Intelligent Phacoemulsification (IP) and the Signature Ellips with and without FX. John A. Moran Eye Center, University of Utah, Salt Lake City, Utah, USA. Experimental study. Brunescent 2.0 mm human lens cubes were created by an instrument devised for this study. Cubes were tested (10 per test) for time of particle removal (efficiency) and for the number of times the lens particle bounced off the tip (chatter) at 300 mm Hg and 550 mm Hg, 50% and 100% power, and 50% and 100% amplitudes (amplitude for Ozil only). Of the ultrasound settings, efficiency varied from a mean of 3.3 seconds ± 1.4 (SD) to 50.4 ± 11.7 seconds and chatter from 0.0 to 52.0 ± 16.7 bounces per run. The Ozil-IP was generally more efficient than the Ozil and the Ellips FX more efficient than the Ellips. At optimized values, the Ozil-IP and Ellips-FX were similar. In general, efficiency and chatter were better at 550 mm Hg and at 50% power. The amplitude effect was complex. Efficiency closely correlated with chatter (Pearson r² = .31, P<.0001). Objective comparison of phacoemulsification efficiency and chatter found that optimized Ozil-IP and Ellips-FX were similar in both parameters and in general, both performed better than preceding technology. The study parameters can significantly affect efficiency and chatter, which strongly correlate with each other. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  10. Syringe test screening of microbial gas production activity: Cases denitrification and biogas formation.

    PubMed

    Østgaard, Kjetill; Kowarz, Viktoria; Shuai, Wang; Henry, Ingrid A; Sposob, Michal; Haugen, Hildegunn Hegna; Bakke, Rune

    2017-01-01

    Mass produced plastic syringes may be applied as vessels for cheap, simple and large scale batch culture testing. As illustrated for the cases of denitrification and of biogas formation, metabolic activity was monitored by direct reading of the piston movement due to the gas volume formed. Pressure buildup due to friction was shown to be moderate. A piston pull and slide back routine can be applied before recording gas volume to minimize experimental errors due to friction. Inoculum handling and activity may be conveniently standardized as illustrated by applying biofilm carriers. A robust set of positive as well as negative controls ("blanks") should be included to ensure quality of the actual testing. The denitrification test showed saturation response at increasing amounts of inoculum in the form of adapted moving bed biofilm reactor (MBBR) carriers, with well correlated nitrate consumption vs. gas volume formed. As shown, the denitrification test efficiently screened different inocula at standardized substrates. Also, different substrates were successfully screened and compared at standardized inocula. The biogas potential test showed efficient screening of different substrates with effects of relative amounts of carbohydrate, protein, fat. A second case with CO2 capture reclaimer waste as substrate demonstrated successful use of co-feeding to support waste treatment and how temperature effects on kinetics and stoichiometry can be observed. In total, syringe test screening of microbial gas production seems highly efficient at a low cost when properly applied. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. A test matrix sequencer for research test facility automation

    NASA Technical Reports Server (NTRS)

    Mccartney, Timothy P.; Emery, Edward F.

    1990-01-01

    The hardware and software configuration of a Test Matrix Sequencer, a general purpose test matrix profiler that was developed for research test facility automation at the NASA Lewis Research Center, is described. The system provides set points to controllers and contact closures to data systems during the course of a test. The Test Matrix Sequencer consists of a microprocessor controlled system which is operated from a personal computer. The software program, which is the main element of the overall system, is interactive and menu driven with pop-up windows and help screens. Analog and digital input/output channels can be controlled from a personal computer using the software program. The Test Matrix Sequencer provides more efficient use of aeronautics test facilities by automating repetitive tasks that were once done manually.

  12. The long-term performance of electrically charged filters in a ventilation system.

    PubMed

    Raynor, Peter C; Chae, Soo Jae

    2004-07-01

    The efficiency and pressure drop of filters made from polyolefin fibers carrying electrical charges were compared with efficiency and pressure drop for filters made from uncharged glass fibers to determine if the efficiency of the charged filters changed with use. Thirty glass fiber filters and 30 polyolefin fiber filters were placed in different, but nearly identical, air-handling units that supplied outside air to a large building. Using two kinds of real-time aerosol counting and sizing instruments, the efficiency of both sets of filters was measured repeatedly for more than 19 weeks while the air-handling units operated almost continuously. Pressure drop was recorded by the ventilation system's computer control. Measurements showed that the efficiency of the glass fiber filters remained almost constant with time. However, the charged polyolefin fiber filters exhibited large efficiency reductions with time before the efficiency began to increase again toward the end of the test. For particles 0.6 μm in diameter, the efficiency of the polyolefin fiber filters declined from 85% to 45% after 11 weeks before recovering to 65% at the end of the test. The pressure drops of the glass fiber filters increased by about 0.40 in. H2O, whereas the pressure drop of the polyolefin fiber filters increased by only 0.28 in. H2O. The results indicate that dust loading reduces the effectiveness of electrical charges on filter fibers. Copyright 2004 JOEH, LLC

  13. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.
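
    In the spirit of the transform coding described above (though not the project's actual quantizers or entropy coders), a block-DCT round trip with coarse quantization and an RMS-error check in grey levels might look like the following sketch, which assumes SciPy's dctn/idctn are available:

      # 8x8 block DCT, coarse quantization, inverse transform, RMS-error check.
      import numpy as np
      from scipy.fft import dctn, idctn

      def roundtrip(img, step=16):
          out = np.empty_like(img, dtype=float)
          for i in range(0, img.shape[0], 8):
              for j in range(0, img.shape[1], 8):
                  block = img[i:i+8, j:j+8].astype(float)
                  coeffs = np.round(dctn(block, norm='ortho') / step)  # quantize
                  out[i:i+8, j:j+8] = idctn(coeffs * step, norm='ortho')
          return out

      # Smooth synthetic 8-bit image (a diagonal ramp), 64x64 pixels.
      img = np.add.outer(np.arange(64), np.arange(64)) * 2.0
      rec = roundtrip(img)
      print("RMS error (grey levels):", np.sqrt(np.mean((rec - img) ** 2)))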

  14. Model checking for linear temporal logic: An efficient implementation

    NASA Technical Reports Server (NTRS)

    Sherman, Rivi; Pnueli, Amir

    1990-01-01

    This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula for a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models for the property. An experiment was done with a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.
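
    Full LTL model checking builds the product of the program's state graph with an automaton for the property; as a much smaller, hedged illustration, the sketch below handles only the safety fragment "G p" (p holds in every reachable state) by explicit breadth-first reachability over a toy transition system, returning a counterexample path when the invariant fails:

      # Safety-only sketch: invariant checking by reachability, not full LTL.
      from collections import deque

      def check_invariant(init, succ, p):
          """Return None if p holds in every reachable state, otherwise a
          counterexample path from the initial state to a violating state."""
          frontier, parent = deque([init]), {init: None}
          while frontier:
              s = frontier.popleft()
              if not p(s):
                  path = []
                  while s is not None:
                      path.append(s)
                      s = parent[s]
                  return list(reversed(path))
              for t in succ(s):
                  if t not in parent:
                      parent[t] = s
                      frontier.append(t)
          return None

      # Toy two-counter system; each step increments one counter modulo 3.
      def succ(s):
          a, b = s
          return [((a + 1) % 3, b), (a, (b + 1) % 3)]

      # The "bad" state (2, 2) is reachable, so a counterexample is returned.
      print(check_invariant((0, 0), succ, lambda s: s != (2, 2)))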

  15. Improving Gas Furnace Performance: A Field and Laboratory Study at End of Life

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brand, L.; Yee, S.; Baker, J.

    2015-02-01

    In 2010, natural gas provided 54% of total residential space heating energy in the U.S. on a source basis, or 3.5 quadrillion Btu. Natural gas burned in furnaces accounted for 92% of that total, and boilers and other equipment made up the remainder. A better understanding of installed furnace performance is a key to energy savings for this significant energy usage. Natural gas furnace performance can be measured in many ways. The annual fuel utilization efficiency (AFUE) rating provides a fixed value under specified conditions, akin to the EPA miles per gallon rating for new vehicles. The AFUE rating is provided by the manufacturer to the consumer and is a way to choose between models tested on the same basis. This value is commonly used in energy modeling calculations. ASHRAE 103 is a consensus furnace testing standard developed by the engineering community. The procedure provided in the standard covers heat-up, cool-down, condensate heat loss, and steady-state conditions, with an imposed oversize factor. The procedure can be used to evaluate furnace performance with specified conditions or with some variation chosen by the tester. In this report the ASHRAE 103 test result will be referred to as Annualized Efficiency (AE) to avoid confusion, and any non-standard test conditions will be noted. Aside from these two laboratory tests, steady-state or flue-loss efficiency can be measured in the field under many conditions, typically as found or tuned to the manufacturer's recommended settings. In this report, AE and steady-state efficiency will be used as measures of furnace performance.

  16. Use of Synthetic Single-Stranded Oligonucleotides as Artificial Test Soiling for Validation of Surgical Instrument Cleaning Processes

    PubMed Central

    Wilhelm, Nadja; Perle, Nadja; Simmoteit, Robert; Schlensak, Christian; Wendel, Hans P.; Avci-Adali, Meltem

    2014-01-01

    Surgical instruments are often strongly contaminated with patients' blood and tissues, possibly containing pathogens. The reuse of contaminated instruments without adequate cleaning and sterilization can cause postoperative inflammation and the transmission of infectious diseases from one patient to another. Thus, based on the stringent sterility requirements, the development of highly efficient, validated cleaning processes is necessary. Here, we use for the first time synthetic single-stranded DNA (ssDNA_ODN), which does not appear in nature, as a test soiling to evaluate the cleaning efficiency of routine washing processes. Stainless steel test objects were coated with a certain amount of ssDNA_ODN. After cleaning, the amount of residual ssDNA_ODN on the test objects was determined using quantitative real-time PCR. The established method is highly specific and sensitive, with a detection limit of 20 fg, and enables the determination of the cleaning efficiency of medical cleaning processes under different conditions to obtain optimal settings for the effective cleaning and sterilization of instruments. The use of this highly sensitive method for the validation of cleaning processes can prevent, to a significant extent, the insufficient cleaning of surgical instruments and thus the transmission of pathogens to patients. PMID:24672793

  17. Design and implementation of a controlled clinical trial to evaluate the effectiveness and efficiency of routine opt-out rapid human immunodeficiency virus screening in the emergency department.

    PubMed

    Haukoos, Jason S; Hopkins, Emily; Byyny, Richard L; Conroy, Amy A; Silverman, Morgan; Eisert, Sheri; Thrun, Mark; Wilson, Michael; Boyett, Brian; Heffelfinger, James D

    2009-08-01

    In 2006, the Centers for Disease Control and Prevention (CDC) released revised recommendations for performing human immunodeficiency virus (HIV) testing in health care settings, including implementing routine rapid HIV screening, the use of an integrated opt-out consent, and limited prevention counseling. Emergency departments (EDs) have been a primary focus of these efforts. These revised CDC recommendations were primarily based on feasibility studies and have not been evaluated through the application of rigorous research methods. This article describes the design and implementation of a large prospective controlled clinical trial to evaluate the CDC's recommendations in an ED setting. From April 15, 2007, through April 15, 2009, a prospective quasi-experimental equivalent time-samples clinical trial was performed to compare the clinical effectiveness and efficiency of routine (nontargeted) opt-out rapid HIV screening (intervention) to physician-directed diagnostic rapid HIV testing (control) in a high-volume urban ED. In addition, three nested observational studies were performed to evaluate the cost-effectiveness and patient and staff acceptance of the two rapid HIV testing methods. This article describes the rationale, methodologies, and study design features of this program evaluation clinical trial. It also provides details regarding the integration of the principal clinical trial and its nested observational studies. Such ED-based trials are rare, but serve to provide valid comparisons between testing approaches. Investigators should consider similar methodology when performing future ED-based health services research.

  18. Biomolecular surface construction by PDE transform.

    PubMed

    Zheng, Qiong; Yang, Siyang; Wei, Guo-Wei

    2012-03-01

    This work proposes a new framework for the surface generation based on the partial differential equation (PDE) transform. The PDE transform has recently been introduced as a general approach for the mode decomposition of images, signals, and data. It relies on the use of arbitrarily high-order PDEs to achieve the time-frequency localization, control the spectral distribution, and regulate the spatial resolution. The present work provides a new variational derivation of high-order PDE transforms. The fast Fourier transform is utilized to accomplish the PDE transform so as to avoid stringent stability constraints in solving high-order PDEs. As a consequence, the time integration of high-order PDEs can be done efficiently with the fast Fourier transform. The present approach is validated with a variety of test examples in two-dimensional and three-dimensional settings. We explore the impact of the PDE transform parameters, such as the PDE order and propagation time, on the quality of resulting surfaces. Additionally, we utilize a set of 10 proteins to compare the computational efficiency of the present surface generation method and a standard approach in Cartesian meshes. Moreover, we analyze the present method by examining some benchmark indicators of biomolecular surface, that is, surface area, surface-enclosed volume, solvation free energy, and surface electrostatic potential. A test set of 13 protein molecules is used in the present investigation. The electrostatic analysis is carried out via the Poisson-Boltzmann equation model. To further demonstrate the utility of the present PDE transform-based surface method, we solve the Poisson-Nernst-Planck equations with a PDE transform surface of a protein. Second-order convergence is observed for the electrostatic potential and concentrations. Finally, to test the capability and efficiency of the present PDE transform-based surface generation method, we apply it to the construction of an excessively large biomolecule, a virus surface capsid. Virus surface morphologies of different resolutions are attained by adjusting the propagation time. Therefore, the present PDE transform provides a multiresolution analysis in the surface visualization. Extensive numerical experiment and comparison with an established surface model indicate that the present PDE transform is a robust, stable, and efficient approach for biomolecular surface generation in Cartesian meshes. Copyright © 2012 John Wiley & Sons, Ltd.
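
    A minimal one-dimensional sketch of the core numerical idea, assuming a simple high-order diffusion-type PDE: integrating u_t = -(-d²/dx²)^m u exactly in Fourier space avoids the severe time-step restrictions of explicit schemes for high-order operators. This is a generic illustration, not the paper's surface-generation pipeline.

```python
# Spectral (FFT-based) integration of a high-order diffusion-type PDE in 1-D.
import numpy as np

def pde_transform_1d(signal, order=4, time=1e-3, dx=1.0):
    """Low-frequency mode of u_t = -(-d^2/dx^2)^order u, integrated exactly via FFT."""
    k = 2.0 * np.pi * np.fft.fftfreq(signal.size, d=dx)
    u_hat = np.fft.fft(signal)
    u_hat *= np.exp(-(k ** 2) ** order * time)     # exact integration, unconditionally stable
    return np.real(np.fft.ifft(u_hat))

# Toy usage: a noisy step; a longer propagation time gives a smoother (lower-resolution) mode.
x = np.linspace(0.0, 1.0, 256, endpoint=False)
u0 = (x > 0.5).astype(float) + 0.05 * np.random.default_rng(1).normal(size=x.size)
smooth = pde_transform_1d(u0, order=4, time=1e-8)
print(u0[:4].round(3), smooth[:4].round(3))
```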

  19. Item Bank Development for a Revised Pediatric Evaluation of Disability Inventory (PEDI)

    ERIC Educational Resources Information Center

    Dumas, Helene; Fragala-Pinkham, Maria; Haley, Stephen; Coster, Wendy; Kramer, Jessica; Kao, Ying-Chia; Moed, Richard

    2010-01-01

    The Pediatric Evaluation of Disability Inventory (PEDI) is a useful clinical and research assessment, but it has limitations in content, age range, and efficiency. The purpose of this article is to describe the development of the item bank for a new computer adaptive testing version of the PEDI (PEDI-CAT). An expanded item set and response options…

  20. The Effects of Judgment-Based Stratum Classifications on the Efficiency of Stratum Scored CATs.

    ERIC Educational Resources Information Center

    Finney, Sara J.; Smith, Russell W.; Wise, Steven L.

    Two operational item pools were used to investigate the performance of stratum computerized adaptive tests (CATs) when items were assigned to strata based on empirical estimates of item difficulty or human judgments of item difficulty. Items from the first data set consisted of 54 5-option multiple choice items from a form of the ACT mathematics…

  1. The NASA/industry Design Analysis Methods for Vibrations (DAMVIBS) program: McDonnell-Douglas Helicopter Company achievements

    NASA Technical Reports Server (NTRS)

    Toossi, Mostafa; Weisenburger, Richard; Hashemi-Kia, Mostafa

    1993-01-01

    This paper presents a summary of some of the work performed by McDonnell Douglas Helicopter Company under the NASA Langley-sponsored rotorcraft structural dynamics program known as DAMVIBS (Design Analysis Methods for VIBrationS). A set of guidelines applicable to dynamic modeling, analysis, testing, and correlation of both helicopter airframes and a large variety of structural finite element models is presented. Utilization of these guidelines and the key features of their applications to vibration modeling of helicopter airframes are discussed. Correlation studies with the test data, together with the development and applications of a set of efficient finite element model checkout procedures, are demonstrated on a large helicopter airframe finite element model. Finally, the lessons learned and the benefits resulting from this program are summarized.

  2. Heterosexual anal intercourse among community and clinical settings in Cape Town, South Africa.

    PubMed

    Kalichman, S C; Simbayi, L C; Cain, D; Jooste, S

    2009-10-01

    Anal intercourse is an efficient mode of HIV transmission and may play a role in the heterosexual HIV epidemics of southern Africa. However, little information is available on the anal sex practices of heterosexual individuals in South Africa. To examine the occurrence of anal intercourse in samples drawn from community and clinic settings. Anonymous surveys collected from convenience samples of 2593 men and 1818 women in two townships and one large city sexually transmitted infection (STI) clinic in Cape Town. Measures included demographics, HIV risk history, substance use and 3-month retrospective sexual behaviour. A total of 14% (n = 360) of men and 10% (n = 172) of women reported engaging in anal intercourse in the past 3 months. Men used condoms during 67% of anal intercourse occasions and women during 50%. Anal intercourse was associated with younger age, being unmarried, having a history of STI, exchanging sex, using substances, having been tested for HIV and testing HIV positive. Anal intercourse is reported relatively less frequently than unprotected vaginal intercourse among heterosexual individuals. The low prevalence of anal intercourse among heterosexual individuals may be offset by its greater efficiency for transmitting HIV. Anal sex should be discussed in heterosexual HIV prevention programming.

  3. Assessing the value of different data sets and modeling schemes for flow and transport simulations

    NASA Astrophysics Data System (ADS)

    Hyndman, D. W.; Dogan, M.; Van Dam, R. L.; Meerschaert, M. M.; Butler, J. J., Jr.; Benson, D. A.

    2014-12-01

    Accurate modeling of contaminant transport has been hampered by an inability to characterize subsurface flow and transport properties at a sufficiently high resolution. However, mathematical extrapolation combined with different measurement methods can provide realistic three-dimensional fields of highly heterogeneous hydraulic conductivity (K). This study demonstrates an approach to evaluate the time, cost, and efficiency of subsurface K characterization. We quantify the value of different data sets at the highly heterogeneous Macro Dispersion Experiment (MADE) Site in Mississippi, which is a flagship test site that has been used for several macro- and small-scale tracer tests that revealed non-Gaussian tracer behavior. Tracer data collected at the site are compared to models that are based on different types and resolution of geophysical and hydrologic data. We present a cost-benefit analysis of several techniques including: 1) flowmeter K data, 2) direct-push K data, 3) ground penetrating radar, and 4) two stochastic methods to generate K fields. This research provides an initial assessment of the level of data necessary to accurately simulate solute transport with the traditional advection dispersion equation; it also provides a basis to design lower cost and more efficient remediation schemes at highly heterogeneous sites.

  4. Item Difficulty in the Evaluation of Computer-Based Instruction: An Example from Neuroanatomy

    PubMed Central

    Chariker, Julia H.; Naaz, Farah; Pani, John R.

    2012-01-01

    This article reports large item effects in a study of computer-based learning of neuroanatomy. Outcome measures of the efficiency of learning, transfer of learning, and generalization of knowledge diverged by a wide margin across test items, with certain sets of items emerging as particularly difficult to master. In addition, the outcomes of comparisons between instructional methods changed with the difficulty of the items to be learned. More challenging items better differentiated between instructional methods. This set of results is important for two reasons. First, it suggests that instruction may be more efficient if sets of consistently difficult items are the targets of instructional methods particularly suited to them. Second, there is wide variation in the published literature regarding the outcomes of empirical evaluations of computer-based instruction. As a consequence, many questions arise as to the factors that may affect such evaluations. The present paper demonstrates that the level of challenge in the material that is presented to learners is an important factor to consider in the evaluation of a computer-based instructional system. PMID:22231801

  5. Level set method with automatic selective local statistics for brain tumor segmentation in MR images.

    PubMed

    Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak

    2013-01-01

    The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumor images from MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated. Using local statistics, the tumor objects were identified among different objects. In this level set method, the calculation of the parameters is a challenging task. The calculations of different parameters for different types of images were automatic. The basic thresholding value was updated and adjusted automatically for different MR images. This thresholding value was used to calculate the different parameters in the proposed algorithm. The proposed algorithm was tested on the magnetic resonance images of the brain for tumor segmentation and its performance was evaluated visually and quantitatively. Numerical experiments on some brain tumor images highlighted the efficiency and robustness of this method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
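
    As a simplified, hedged illustration of a signed-pressure-function (SPF) driven level set, the sketch below uses region means inside and outside the contour to build an SPF, evolves the level set with it, and regularizes with Gaussian smoothing. The parameters and the synthetic image are placeholders; the automatic, locally selective parameter scheme of the paper is not reproduced.

```python
# Generic SPF-driven level set sketch (illustrative parameters, synthetic image).
import numpy as np
from scipy.ndimage import gaussian_filter

def spf_level_set(image, phi, alpha=20.0, dt=1.0, iters=50, sigma=1.5):
    for _ in range(iters):
        inside = image[phi > 0]
        outside = image[phi <= 0]
        c1 = inside.mean() if inside.size else 0.0
        c2 = outside.mean() if outside.size else 0.0
        spf = image - (c1 + c2) / 2.0
        spf = spf / (np.abs(spf).max() + 1e-12)          # signed pressure in [-1, 1]
        gy, gx = np.gradient(phi)
        phi = phi + dt * alpha * spf * np.sqrt(gx**2 + gy**2)
        phi = gaussian_filter(phi, sigma)                # Gaussian regularization instead of re-initialization
    return phi

# Toy usage: a bright disc (the "tumor") on a dark, noisy background.
y, x = np.mgrid[0:64, 0:64]
img = ((x - 32)**2 + (y - 32)**2 < 100).astype(float) + 0.1 * np.random.default_rng(2).normal(size=(64, 64))
phi0 = 10.0 - np.sqrt((x - 20.0)**2 + (y - 20.0)**2)     # initial circular contour
seg = spf_level_set(img, phi0) > 0
print("segmented pixels:", int(seg.sum()))
```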

  6. Item difficulty in the evaluation of computer-based instruction: an example from neuroanatomy.

    PubMed

    Chariker, Julia H; Naaz, Farah; Pani, John R

    2012-01-01

    This article reports large item effects in a study of computer-based learning of neuroanatomy. Outcome measures of the efficiency of learning, transfer of learning, and generalization of knowledge diverged by a wide margin across test items, with certain sets of items emerging as particularly difficult to master. In addition, the outcomes of comparisons between instructional methods changed with the difficulty of the items to be learned. More challenging items better differentiated between instructional methods. This set of results is important for two reasons. First, it suggests that instruction may be more efficient if sets of consistently difficult items are the targets of instructional methods particularly suited to them. Second, there is wide variation in the published literature regarding the outcomes of empirical evaluations of computer-based instruction. As a consequence, many questions arise as to the factors that may affect such evaluations. The present article demonstrates that the level of challenge in the material that is presented to learners is an important factor to consider in the evaluation of a computer-based instructional system. Copyright © 2011 American Association of Anatomists.

  7. Efficient rehabilitation care for joint replacement patients: skilled nursing facility or inpatient rehabilitation facility?

    PubMed

    Tian, Wenqiang; DeJong, Gerben; Horn, Susan D; Putman, Koen; Hsieh, Ching-Hui; DaVanzo, Joan E

    2012-01-01

    There has been lengthy debate as to which setting, skilled nursing facility (SNF) or inpatient rehabilitation facility (IRF), is more efficient in treating joint replacement patients. This study aims to determine the efficiency of rehabilitation care provided by SNF and IRF to joint replacement patients with respect to both payment and length of stay (LOS). This study used a prospective multisite observational cohort design. Tobit models were used to examine the association between setting of care and efficiency. The study enrolled 948 knee replacement patients and 618 hip replacement patients from 11 IRFs and 7 SNFs between February 2006 and February 2007. Output was measured by motor functional independence measure (FIM) score at discharge. Efficiency was measured in 3 ways: payment efficiency, LOS efficiency, and stochastic frontier analysis efficiency. IRF patients incurred higher expenditures per case but also achieved larger motor FIM gains in shorter LOS than did SNF patients. Setting of care was not a strong predictor of overall efficiency of rehabilitation care. Great variation in characteristics existed within IRFs or SNFs and severity groups. Medium-volume facilities among both SNFs and IRFs were most efficient. Early rehabilitation was consistently predictive of efficient treatment. The advantage of either setting is not clear-cut. Definition of efficiency depends in part on preference between cost and time. SNFs are more payment efficient; IRFs are more LOS efficient. Variation within SNFs and IRFs blurred setting differences; a simple comparison between SNF and IRF may not be appropriate.

  8. Evaluation of the Treatment Process of Landfill Leachate Using the Toxicity Assessment Method

    PubMed Central

    Qiu, Aifeng; Cai, Qiang; Zhao, Yuan; Guo, Yingqing; Zhao, Liqian

    2016-01-01

    Landfill leachate has a complex composition with strong biological toxicity. The combined treatment process of coagulation and sedimentation, anaerobics, electrolysis, and aerobics was set up to treat landfill leachate. This paper explores the effect of different operational parameters of coagulation and sedimentation tanks and electrolytic cells, while investigating the combined process for the removal efficiency of physicochemical indices after processing the landfill leachate. Meanwhile, a battery of toxicity tests with Vibrio fischeri, zebrafish larvae, and embryos was conducted to evaluate acute toxicity and to calculate the toxicity reduction efficiency after each treatment process. The combined treatment process resulted in a 100% removal efficiency of Cu, Cd and Zn, and a 93.50% and an 87.44% removal efficiency of Ni and Cr, respectively. The overall removal efficiencies of chemical oxygen demand (COD), ammonium nitrogen (NH4+-N), and total nitrogen (TN) were 93.57%, 97.46% and 73.60%, respectively. In addition, toxicity test results showed that the acute toxicity of landfill leachate had also been reduced significantly: toxicity units (TU) decreased from 84.75 to 12.00 for zebrafish larvae, from 82.64 to 10.55 for zebrafish embryos, and from 3.41 to 0.63 for Vibrio fischeri. The combined treatment process proved to be an efficient treatment method to remove heavy metals, COD, NH4+-N, and acute bio-toxicity of landfill leachate. PMID:28009808

  9. Evaluation of the Treatment Process of Landfill Leachate Using the Toxicity Assessment Method.

    PubMed

    Qiu, Aifeng; Cai, Qiang; Zhao, Yuan; Guo, Yingqing; Zhao, Liqian

    2016-12-21

    Landfill leachate has a complex composition with strong biological toxicity. The combined treatment process of coagulation and sedimentation, anaerobics, electrolysis, and aerobics was set up to treat landfill leachate. This paper explores the effect of different operational parameters of coagulation and sedimentation tanks and electrolytic cells, while investigating the combined process for the removal efficiency of physicochemical indices after processing the landfill leachate. Meanwhile, a battery of toxicity tests with Vibrio fischeri, zebrafish larvae, and embryos was conducted to evaluate acute toxicity and to calculate the toxicity reduction efficiency after each treatment process. The combined treatment process resulted in a 100% removal efficiency of Cu, Cd and Zn, and a 93.50% and an 87.44% removal efficiency of Ni and Cr, respectively. The overall removal efficiencies of chemical oxygen demand (COD), ammonium nitrogen (NH₄⁺-N), and total nitrogen (TN) were 93.57%, 97.46% and 73.60%, respectively. In addition, toxicity test results showed that the acute toxicity of landfill leachate had also been reduced significantly: toxicity units (TU) decreased from 84.75 to 12.00 for zebrafish larvae, from 82.64 to 10.55 for zebrafish embryos, and from 3.41 to 0.63 for Vibrio fischeri. The combined treatment process proved to be an efficient treatment method to remove heavy metals, COD, NH₄⁺-N, and acute bio-toxicity of landfill leachate.

  10. Bi-fuel System - Gasoline/LPG in A Used 4-Stroke Motorcycle - Fuel Injection Type

    NASA Astrophysics Data System (ADS)

    Suthisripok, Tongchit; Phusakol, Nachaphat; Sawetkittirut, Nuttapol

    2017-10-01

    Bi-fuel Gasoline/LPG systems have been effectively and efficiently used in gasoline vehicles with lower pollutant emissions. The motorcycle tested was a used Honda AirBlade i110 - fuel injection type. A 3-litre LPG storage tank, an electronic fuel control unit, a 1-mm LPG injector and a regulator were securely installed. The converted motorcycle can be started with either gasoline or LPG. The safety relief valve was set below 48 kPa and over 110 kPa. The motorcycle was tuned at the relatively rich air-fuel ratio (λ) of 0.85-0.90 to attain the best power output. From dynamometer tests over the speed range of 65-100 km/h, the average power output when fuelling LPG was 5.16 hp, a drop of 3.9% from the use of gasoline 91. The average LPG consumption rate from the city road test at the average speed of 60 km/h was 40.1 km/l, about 17.7% more. This corresponded to LPG's lower energy density of about 16.2%. In emission, the CO and HC concentrations were 44.4% and 26.5% lower. Once a standard gas equipment set with ECU and LPG injector was securely installed and the engine properly tuned to suit LPG's characteristics, the converted bi-fuel motorcycle offered efficient, safe, and economical performance with environmentally friendly emissions.

  11. Common IED exploitation target set ontology

    NASA Astrophysics Data System (ADS)

    Russomanno, David J.; Qualls, Joseph; Wowczuk, Zenovy; Franken, Paul; Robinson, William

    2010-04-01

    The Common IED Exploitation Target Set (CIEDETS) ontology provides a comprehensive semantic data model for capturing knowledge about sensors, platforms, missions, environments, and other aspects of systems under test. The ontology also includes representative IEDs, modeled as explosives, camouflage, concealment objects, and other background objects, which comprise an overall threat scene. The ontology is represented using the Web Ontology Language and the SPARQL Protocol and RDF Query Language, which ensures portability of the acquired knowledge base across applications. The resulting knowledge base is a component of the CIEDETS application, which is intended to support the end user sensor test and evaluation community. CIEDETS associates a system under test to a subset of cataloged threats based on the probability that the system will detect the threat. The associations between systems under test, threats, and the detection probabilities are established based on a hybrid reasoning strategy, which applies a combination of heuristics and simplified modeling techniques. Besides supporting the CIEDETS application, which is focused on efficient and consistent system testing, the ontology can be leveraged in a myriad of other applications, including serving as a knowledge source for mission planning tools.
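
    A small, hypothetical sketch of the kind of query such a knowledge base supports, using rdflib in Python with an invented namespace and invented class and property names (SystemUnderTest, detects, detectionProbability); it does not reproduce the CIEDETS schema.

```python
# Hypothetical RDF/SPARQL sketch (toy schema, not the CIEDETS ontology).
from rdflib import Graph, Literal, Namespace, RDF

CIED = Namespace("http://example.org/ciedets#")   # hypothetical namespace

g = Graph()
# Two toy facts: a sensor system under test and a threat it is predicted to detect.
g.add((CIED.SystemA, RDF.type, CIED.SystemUnderTest))
g.add((CIED.Threat42, RDF.type, CIED.Threat))
g.add((CIED.SystemA, CIED.detects, CIED.Threat42))
g.add((CIED.SystemA, CIED.detectionProbability, Literal(0.87)))

query = """
PREFIX cied: <http://example.org/ciedets#>
SELECT ?system ?threat ?p WHERE {
    ?system a cied:SystemUnderTest ;
            cied:detects ?threat ;
            cied:detectionProbability ?p .
}
"""
for row in g.query(query):
    print(row.system, row.threat, row.p)
```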

  12. Combining multiple tools outperforms individual methods in gene set enrichment analyses.

    PubMed

    Alhamdoosh, Monther; Ng, Milica; Wilson, Nicholas J; Sheridan, Julie M; Huynh, Huy; Wilson, Michael J; Ritchie, Matthew E

    2017-02-01

    Gene set enrichment (GSE) analysis allows researchers to efficiently extract biological insight from long lists of differentially expressed genes by interrogating them at a systems level. In recent years, there has been a proliferation of GSE analysis methods and hence it has become increasingly difficult for researchers to select an optimal GSE tool based on their particular dataset. Moreover, the majority of GSE analysis methods do not allow researchers to simultaneously compare gene set level results between multiple experimental conditions. The ensemble of gene set enrichment analyses (EGSEA) is a method developed for RNA-sequencing data that combines results from twelve algorithms and calculates collective gene set scores to improve the biological relevance of the highest ranked gene sets. EGSEA's gene set database contains around 25 000 gene sets from sixteen collections. It has multiple visualization capabilities that allow researchers to view gene sets at various levels of granularity. EGSEA has been tested on simulated data and on a number of human and mouse datasets and, based on biologists' feedback, consistently outperforms the individual tools that have been combined. Our evaluation demonstrates the superiority of the ensemble approach for GSE analysis, and its utility to effectively and efficiently extrapolate biological functions and potential involvement in disease processes from lists of differentially regulated genes. EGSEA is available as an R package at http://www.bioconductor.org/packages/EGSEA/ . The gene sets collections are available in the R package EGSEAdata from http://www.bioconductor.org/packages/EGSEAdata/ . monther.alhamdoosh@csl.com.au mritchie@wehi.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  13. The multilingual matrix test: Principles, applications, and comparison across languages: A review.

    PubMed

    Kollmeier, Birger; Warzybok, Anna; Hochmuth, Sabine; Zokoll, Melanie A; Uslar, Verena; Brand, Thomas; Wagener, Kirsten C

    2015-01-01

    A review of the development, evaluation, and application of the so-called 'matrix sentence test' for speech intelligibility testing in a multilingual society is provided. The format allows for repeated use with the same patient in her or his native language even if the experimenter does not understand the language. Using a closed-set format, the syntactically fixed, semantically unpredictable sentences (e.g. 'Peter bought eight white ships') provide a vocabulary of 50 words (10 alternatives for each position in the sentence). The principles (i.e. construction, optimization, evaluation, and validation) for 14 different languages are reviewed. Studies of the influence of talker, language, noise, the training effect, open vs. closed conduct of the test, and the subjects' language proficiency are reported and application examples are discussed. The optimization principles result in a steep intelligibility function and a high homogeneity of the speech materials presented and test lists employed, yielding a high efficiency and excellent comparability across languages. The characteristics of speakers generally dominate the differences across languages. The matrix test format with the principles outlined here is recommended for producing efficient, reliable, and comparable speech reception thresholds across different languages.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Woohyun; Katipamula, Srinivas; Lutes, Robert G.

    Small- and medium-sized (<100,000 sf) commercial buildings (SMBs) represent over 95% of the U.S. commercial building stock and consume over 60% of total site energy consumption. Many of these buildings use rudimentary controls that are mostly manual, with limited scheduling capability and no monitoring or failure management. Therefore, many of these buildings are operated inefficiently and consume excess energy. SMBs typically utilize packaged rooftop units (RTUs) that are controlled by an individual thermostat. There is increased urgency to improve the operating efficiency of the existing commercial building stock in the U.S. for many reasons, chief among them the need to mitigate climate change impacts. Studies have shown that managing set points and schedules of the RTUs will result in up to 20% energy and cost savings. Another problem associated with RTUs is short-cycling, where an RTU goes through ON and OFF cycles too frequently. Excessive cycling can cause excessive wear and premature failure of the compressor or its components. Short cycling can result in a significantly decreased average efficiency (up to 10%), even if there are no physical failures in the equipment. Also, SMBs use time-of-day scheduling to start the RTUs before the building is occupied and to shut them off when it is unoccupied. Ensuring correct use of the zone set points and eliminating frequent cycling of RTUs, thereby sustaining persistent building operations, can significantly increase the operational efficiency of SMBs. A growing trend is to use low-cost control infrastructure that can enable scalable and cost-effective intelligent building operations. This report describes three algorithms, for detecting the zone set point temperature, the RTU cycling rate, and the occupancy schedule, that can be deployed on the low-cost infrastructure. These algorithms require only the zone temperature data for detection. The algorithms have been tested and validated using field data from a number of RTUs from six buildings in different climate locations. Overall, the algorithms were successful in detecting the set points and ON/OFF cycles accurately using the peak detection technique and the occupancy schedule using the symbolic aggregate approximation technique. The report describes the three algorithms, presents results from testing the algorithms using field data, explains how the algorithms can be used to improve SMB efficiency, and presents related conclusions.
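
    The sketch below illustrates, on synthetic data, how ON/OFF cycling could be inferred from zone temperature alone with a peak-detection step, in the spirit of the algorithms described above; the signal, prominence threshold, and set point proxy are assumptions, not the validated algorithms or their tuning.

```python
# Hedged sketch: cycle counting from zone temperature via peak detection (synthetic data).
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(3)
t = np.arange(0, 24 * 60, 1.0)                            # one day of 1-minute samples
zone_temp = 71.0 + 1.2 * np.sin(2 * np.pi * t / 30.0) + 0.1 * rng.normal(size=t.size)  # ~30-min cycles

peaks, _ = find_peaks(zone_temp, prominence=0.8)           # local maxima taken as ends of ON cycles (assumed)
cycles_per_hour = len(peaks) / 24.0
setpoint_estimate = np.median(zone_temp)                   # crude set point proxy

print(f"detected cycles: {len(peaks)} (~{cycles_per_hour:.1f}/h), set point estimate: {setpoint_estimate:.1f} F")
```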

  15. Retrieval of overviews of systematic reviews in MEDLINE was improved by the development of an objectively derived and validated search strategy.

    PubMed

    Lunny, Carole; McKenzie, Joanne E; McDonald, Steve

    2016-06-01

    Locating overviews of systematic reviews is difficult because of an absence of appropriate indexing terms and inconsistent terminology used to describe overviews. Our objective was to develop a validated search strategy to retrieve overviews in MEDLINE. We derived a test set of overviews from the references of two method articles on overviews. Two population sets were used to identify discriminating terms, that is, terms that appear frequently in the test set but infrequently in two population sets of references found in MEDLINE. We used text mining to conduct a frequency analysis of terms appearing in the titles and abstracts. Candidate terms were combined and tested in MEDLINE in various permutations, and the performance of strategies measured using sensitivity and precision. Two search strategies were developed: a sensitivity-maximizing strategy, achieving 93% sensitivity (95% confidence interval [CI]: 87, 96) and 7% precision (95% CI: 6, 8), and a sensitivity-and-precision-maximizing strategy, achieving 66% sensitivity (95% CI: 58, 74) and 21% precision (95% CI: 17, 25). The developed search strategies enable users to more efficiently identify overviews of reviews compared to current strategies. Consistent language in describing overviews would aid in their identification, as would a specific MEDLINE Publication Type. Copyright © 2015 Elsevier Inc. All rights reserved.
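
    For concreteness, the sensitivity and precision of a candidate search strategy follow directly from the retrieval counts, as in the small sketch below; the counts used are illustrative, not the study's data.

```python
# Sensitivity and precision of a search strategy from retrieval counts (illustrative numbers).
def sensitivity_precision(relevant_retrieved, relevant_total, retrieved_total):
    sensitivity = relevant_retrieved / relevant_total      # share of relevant records the strategy finds
    precision = relevant_retrieved / retrieved_total       # share of retrieved records that are relevant
    return sensitivity, precision

sens, prec = sensitivity_precision(relevant_retrieved=93, relevant_total=100, retrieved_total=1330)
print(f"sensitivity = {sens:.0%}, precision = {prec:.1%}")
```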

  16. Automation of diagnostic genetic testing: mutation detection by cyclic minisequencing.

    PubMed

    Alagrund, Katariina; Orpana, Arto K

    2014-01-01

    The rising role of nucleic acid testing in clinical decision making is creating a need for efficient and automated diagnostic nucleic acid test platforms. Clinical use of nucleic acid testing demands shorter turnaround times (TATs), lower production costs and robust, reliable methods that can easily adopt new test panels and run rare tests on a random-access principle. Here we present a novel home-brew laboratory automation platform for diagnostic mutation testing. This platform is based on cyclic minisequencing (cMS) and two-color near-infrared (NIR) detection. Pipetting is automated using Tecan Freedom EVO pipetting robots and all assays are performed in 384-well microplate format. The automation platform includes a data processing system, controlling all procedures, and automated patient result reporting to the hospital information system. We have found automated cMS to be a reliable, inexpensive and robust method for nucleic acid testing for a wide variety of diagnostic tests. The platform is currently in clinical use for over 80 mutations or polymorphisms. In addition to tests performed from blood samples, the system also performs an epigenetic test for the methylation of the MGMT gene promoter, and companion diagnostic tests for analysis of KRAS and BRAF gene mutations from formalin-fixed and paraffin-embedded tumor samples. Automation of genetic test reporting was found to be reliable and efficient, decreasing the workload of academic personnel.

  17. Understanding Patient Experience Using Internet-based Email Surveys: A Feasibility Study at Mount Sinai Hospital.

    PubMed

    Morgan, Matthew; Lau, Davina; Jivraj, Tanaz; Principi, Tania; Dietrich, Sandra; Bell, Chaim M

    2015-01-01

    Email is becoming a widely accepted communication tool in healthcare settings. This study sought to test the feasibility of Internet-based email surveys of patient experience in the ambulatory setting. We conducted a study of email Internet-based surveys sent to patients in selected ambulatory clinics at Mount Sinai Hospital in Toronto, Canada. Our findings suggest that email links to Internet surveys are a feasible, timely and efficient method to solicit patient feedback about their experience. Further research is required to optimally leverage Internet-based email surveys as a tool to better understand the patient experience.

  18. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Bo; Kowalski, Karol

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategy of integral tensors can significantly reduce the numerical overhead and consequently time-to-solution of these methods. In this paper, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular vector decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set N_b ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5-3}) versus O(N_b^{3-4}) of single CD in most other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic-orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled-cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C_{60} molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can be generally set to 10^{-4} to 10^{-3} to give an acceptable compromise between efficiency and accuracy.

  19. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations.

    PubMed

    Peng, Bo; Kowalski, Karol

    2017-09-12

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategy of integral tensors can significantly reduce the numerical overhead and consequently time-to-solution of these methods. In this work, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular vector decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set, N_b, ranging from ∼100 up to ∼2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5-3}) versus the O(N_b^{3-4}) cost of performing single CD on the two-electron integral tensor in most of the other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C_{60} molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can be generally set to 10^{-4} to 10^{-3} to give acceptable compromise between efficiency and accuracy.
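
    A hedged NumPy sketch of the two-step compression idea applied to a generic symmetric positive semi-definite matrix: a pivoted (incomplete) Cholesky factorization followed by a truncated SVD of the resulting Cholesky vectors. The random matrix stands in for an electron-repulsion-integral matrix, and the thresholds are placeholders.

```python
# Pivoted Cholesky + truncated SVD compression of a PSD matrix (stand-in for an ERI matrix).
import numpy as np

def pivoted_cholesky(M, tol=1e-6):
    """Return L (n x k) with M ~ L @ L.T, stopping when the largest diagonal residual < tol."""
    n = M.shape[0]
    d = np.diag(M).astype(float).copy()
    cols = []
    for _ in range(n):
        p = int(np.argmax(d))
        if d[p] < tol:
            break
        col = (M[:, p] - sum(c * c[p] for c in cols)) / np.sqrt(d[p])
        cols.append(col)
        d -= col ** 2
    return np.column_stack(cols)

rng = np.random.default_rng(4)
A = rng.normal(size=(200, 40))
M = A @ A.T                                    # rank-40 PSD "integral" matrix

L = pivoted_cholesky(M, tol=1e-8)              # step 1: pivoted Cholesky vectors
U, S, Vt = np.linalg.svd(L, full_matrices=False)
keep = S > 1e-4 * S[0]                         # step 2: truncate small singular values
L2 = U[:, keep] * S[keep]

print("CD rank:", L.shape[1], " SVD rank:", L2.shape[1],
      " max reconstruction error:", np.abs(M - L2 @ L2.T).max())
```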

  20. Phaco-efficiency test and re-aspiration analysis of repulsed particle in phacoemulsification.

    PubMed

    Kim, Jae-hyung; Ko, Dong-Ah; Kim, Jae Yong; Kim, Myoung Joon; Tchah, Hungwon

    2013-04-01

    To measure the efficiency of phacoemulsification, we have developed a new experimental model for testing phaco-efficiency and analyzed re-aspiration of repulsed particles. Using a Kitaro wetlab system, a piece of blood agar (BA) was placed in an artificial chamber and the phacoemulsifier was placed horizontally. The settings of the phacoemulsifier (Infiniti, Alcon Laboratories) were 26 cc/min for aspiration, 350 cc/min for vacuum, and 95 cm of bottle height. The time to remove BAs was measured using Ozil 100 %, Ozil 40 %, and longitudinal 40 % of phaco power. The angle between the re-aspirated BA particles and the axis of the phacoemulsifier (re-aspiration zone, degree) was analyzed. The average time (seconds) to remove BAs was lower in the Ozil 100 % and the Ozil 40 % mode than in the longitudinal mode (0.37 ± 0.39, 0.85 ± 0.57, and 2.22 ± 1.40 respectively, P value < 0.01). Repulsion exceeding 1 mm occurred more frequently in the longitudinal mode than in the Ozil 100 % mode (100 % vs 40 %, P value = 0.01, Fisher's exact test). The average of re-aspiration zone was 25.9 ± 14.5 in the longitudinal 40 % and 54.0 ± 23.0 in the Ozil 40 % (P value = 0.016). The Ozil mode was more efficient than the longitudinal mode. In addition, the Ozil mode provided less repulsion and wider aspiration zone.

  1. Efficiency measurement and the operationalization of hospital production.

    PubMed Central

    Magnussen, J

    1996-01-01

    OBJECTIVE. To discuss the usefulness of efficiency measures as instruments of monitoring and resource allocation by analyzing their invariance to changes in the operationalization of hospital production. STUDY SETTING. Norwegian hospitals over the three-year period 1989-1991. STUDY DESIGN. Efficiency is measured using Data Envelopment Analysis (DEA). The distribution of efficiency and the ranking of hospitals is compared across models using various distribution-free tests. DATA COLLECTION. Input and output data are collected by the Norwegian Central Bureau of Statistics. PRINCIPAL FINDINGS. The distribution of efficiency is found to be unaffected by changes in the specification of hospital output. Both the ranking of hospitals and the scale properties of the technology, however, are found to depend on the choice of output specification. CONCLUSION. Extreme care should be taken before resource allocation is based on DEA-type efficiency measures alone. Both the identification of efficient and inefficient hospitals and the cardinal measure of inefficiency will depend on the specification of output. Since the scale properties of the technology also vary with the specification of output, the search for an optimal hospital size may be futile. PMID:8617607
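
    As a minimal illustration of the DEA idea, the sketch below solves the input-oriented, constant-returns-to-scale envelopment linear program for each decision-making unit with scipy; the hospital inputs and outputs are invented toy numbers, and the study's actual model specification is not reproduced.

```python
# Input-oriented CCR DEA sketch via linear programming (toy data, not the study's model).
import numpy as np
from scipy.optimize import linprog

# rows = hospitals (DMUs); inputs (e.g. beds, staff) and outputs (e.g. discharges)
X = np.array([[20.0, 300.0], [30.0, 200.0], [40.0, 500.0]])   # inputs
Y = np.array([[1000.0], [900.0], [1600.0]])                    # outputs
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(o):
    """Efficiency of DMU o: min theta s.t. X'lam <= theta*x_o, Y'lam >= y_o, lam >= 0."""
    c = np.zeros(1 + n)                       # decision vector = [theta, lam_1..lam_n]
    c[0] = 1.0
    A_in = np.hstack([-X[o].reshape(-1, 1), X.T])      # sum_j lam_j x_ij - theta x_io <= 0
    b_in = np.zeros(m)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])        # -sum_j lam_j y_rj <= -y_ro
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```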

  2. An empirical model of human aspiration in low-velocity air using CFD investigations.

    PubMed

    Anthony, T Renée; Anderson, Kimberly R

    2015-01-01

    Computational fluid dynamics (CFD) modeling was performed to investigate the aspiration efficiency of the human head in low velocities to examine whether the current inhaled particulate mass (IPM) sampling criterion matches the aspiration efficiency of an inhaling human in airflows common to worker exposures. Data from both mouth and nose inhalation, averaged to assess omnidirectional aspiration efficiencies, were compiled and used to generate a unifying model to relate particle size to aspiration efficiency of the human head. Multiple linear regression was used to generate an empirical model to estimate human aspiration efficiency and included particle size as well as breathing and freestream velocities as dependent variables. A new set of simulated mouth and nose breathing aspiration efficiencies was generated and used to test the fit of empirical models. Further, empirical relationships between test conditions and CFD estimates of aspiration were compared to experimental data from mannequin studies, including both calm-air and ultra-low velocity experiments. While a linear relationship between particle size and aspiration is reported in calm air studies, the CFD simulations identified a more reasonable fit using the square of particle aerodynamic diameter, which better addressed the shape of the efficiency curve's decline toward zero for large particles. The ultimate goal of this work was to develop an empirical model that incorporates real-world variations in critical factors associated with particle aspiration to inform low-velocity modifications to the inhalable particle sampling criterion.
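
    A small sketch of fitting such an empirical model by ordinary least squares with a squared aerodynamic-diameter term, as the abstract describes in outline; the data and resulting coefficients are synthetic placeholders, not the published model.

```python
# Multiple linear regression sketch for aspiration efficiency (synthetic data and coefficients).
import numpy as np

rng = np.random.default_rng(7)
n = 200
d_ae = rng.uniform(1.0, 100.0, n)              # particle aerodynamic diameter, micrometres
u_free = rng.uniform(0.1, 0.4, n)              # freestream velocity, m/s
u_breath = rng.uniform(0.1, 1.0, n)            # breathing velocity proxy, m/s
eff = 1.0 - 6e-5 * d_ae**2 + 0.1 * u_free - 0.05 * u_breath + 0.02 * rng.normal(size=n)

X = np.column_stack([np.ones(n), d_ae**2, u_free, u_breath])   # squared-diameter regressor
beta, *_ = np.linalg.lstsq(X, eff, rcond=None)
print("fitted coefficients [intercept, d_ae^2, U_free, U_breath]:", beta.round(5))
```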

  3. Comparison of methods for estimating the cost of human immunodeficiency virus-testing interventions.

    PubMed

    Shrestha, Ram K; Sansom, Stephanie L; Farnham, Paul G

    2012-01-01

    The Centers for Disease Control and Prevention (CDC), Division of HIV/AIDS Prevention, spends approximately 50% of its $325 million annual human immunodeficiency virus (HIV) prevention funds for HIV-testing services. An accurate estimate of the costs of HIV testing in various settings is essential for efficient allocation of HIV prevention resources. To assess the costs of HIV-testing interventions using different costing methods. We used the microcosting-direct measurement method to assess the costs of HIV-testing interventions in nonclinical settings, and we compared these results with those from 3 other costing methods: microcosting-staff allocation, where the labor cost was derived from the proportion of each staff person's time allocated to HIV testing interventions; gross costing, where the New York State Medicaid payment for HIV testing was used to estimate program costs; and program budget, where the program cost was assumed to be the total funding provided by the Centers for Disease Control and Prevention. Total program cost, cost per person tested, and cost per person notified of new HIV diagnosis. The median costs per person notified of a new HIV diagnosis were $12,475, $15,018, $2,697, and $20,144 based on the microcosting-direct measurement, microcosting-staff allocation, gross costing, and program budget methods, respectively. Compared with the microcosting-direct measurement method, the cost was 78% lower with gross costing, and 20% and 61% higher using the microcosting-staff allocation and program budget methods, respectively. Our analysis showed that HIV-testing program cost estimates vary widely by costing method. However, the choice of a particular costing method may depend on the research question being addressed. Although program budget and gross-costing methods may be attractive because of their simplicity, only the microcosting-direct measurement method can identify important determinants of the program costs and provide guidance to improve efficiency.

  4. Initial Flight Test Evaluation of the F-15 ACTIVE Axisymmetric Vectoring Nozzle Performance

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Hathaway, Ross; Ferguson, Michael D.

    1998-01-01

    A full envelope database of thrust-vectoring axisymmetric nozzle performance for the Pratt & Whitney Pitch/Yaw Balance Beam Nozzle (P/YBBN) is being developed using the F-15 Advanced Control Technology for Integrated Vehicles (ACTIVE) aircraft. At this time, flight research has been completed for steady-state pitch vector angles up to 20° at an altitude of 30,000 ft from low power settings to maximum afterburner power. The nozzle performance database includes vector forces, internal nozzle pressures, and temperatures, all of which can be used for regression analysis modeling. The database was used to substantiate a set of nozzle performance data from wind tunnel testing and computational fluid dynamic analyses. Findings from initial flight research at Mach 0.9 and 1.2 are presented in this paper. The results show that vector efficiency is strongly influenced by power setting. A significant discrepancy in nozzle performance has been discovered between predicted and measured results during vectoring.

  5. Integrated Analysis of Pharmacologic, Clinical, and SNP Microarray Data using Projection onto the Most Interesting Statistical Evidence with Adaptive Permutation Testing

    PubMed Central

    Pounds, Stan; Cao, Xueyuan; Cheng, Cheng; Yang, Jun; Campana, Dario; Evans, William E.; Pui, Ching-Hon; Relling, Mary V.

    2010-01-01

    Powerful methods for integrated analysis of multiple biological data sets are needed to maximize interpretation capacity and acquire meaningful knowledge. We recently developed Projection Onto the Most Interesting Statistical Evidence (PROMISE). PROMISE is a statistical procedure that incorporates prior knowledge about the biological relationships among endpoint variables into an integrated analysis of microarray gene expression data with multiple biological and clinical endpoints. Here, PROMISE is adapted to the integrated analysis of pharmacologic, clinical, and genome-wide genotype data, incorporating knowledge about the biological relationships among pharmacologic and clinical response data. An efficient permutation-testing algorithm is introduced so that statistical calculations are computationally feasible in this higher-dimension setting. The new method is applied to a pediatric leukemia data set. The results clearly indicate that PROMISE is a powerful statistical tool for identifying genomic features that exhibit a biologically meaningful pattern of association with multiple endpoint variables. PMID:21516175
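
    The sketch below illustrates the adaptive permutation-testing idea on synthetic data: permute the endpoint to build the null distribution of a simple association statistic and stop early once the p-value is clearly non-significant. The statistic and stopping rule are generic placeholders, not the PROMISE procedure itself.

```python
# Adaptive permutation test sketch (generic statistic and stopping rule, synthetic data).
import numpy as np

def adaptive_perm_test(x, y, max_perm=10000, hits_to_stop=25, seed=0):
    rng = np.random.default_rng(seed)
    obs = abs(np.corrcoef(x, y)[0, 1])                    # observed association statistic
    hits = 0
    for b in range(1, max_perm + 1):
        stat = abs(np.corrcoef(x, rng.permutation(y))[0, 1])
        hits += stat >= obs
        if hits >= hits_to_stop:                          # p-value clearly large: stop permuting early
            break
    return (hits + 1) / (b + 1)                           # permutation p-value with add-one correction

rng = np.random.default_rng(5)
genotype = rng.integers(0, 3, size=60).astype(float)      # 0/1/2 allele counts
drug_response = 0.4 * genotype + rng.normal(size=60)      # endpoint correlated with genotype
print("p-value estimate:", adaptive_perm_test(genotype, drug_response))
```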

  6. Experimental Results of the First Two Stages of an Advanced Transonic Core Compressor Under Isolated and Multi-Stage Conditions.

    NASA Technical Reports Server (NTRS)

    Prahst, Patricia S.; Kulkarni, Sameer; Sohn, Ki H.

    2015-01-01

    NASA's Environmentally Responsible Aviation (ERA) Program calls for investigation of the technology barriers associated with improved fuel efficiency for large gas turbine engines. Under ERA, the highly loaded core compressor technology program attempts to realize the fuel burn reduction goal by increasing overall pressure ratio of the compressor to increase thermal efficiency of the engine. Study engines with overall pressure ratio of 60 to 70 are now being investigated. This means that the high pressure compressor would have to almost double in pressure ratio while keeping a high level of efficiency. NASA and GE teamed to address this challenge by testing the first two stages of an advanced GE compressor designed to meet the requirements of a very high pressure ratio core compressor. Previous test experience of a compressor which included these front two stages indicated a performance deficit relative to design intent. Therefore, the current rig was designed to run in 1-stage and 2-stage configurations in two separate tests to assess whether the bow shock of the second rotor interacting with the upstream stage contributed to the unpredicted performance deficit, or if the culprit was due to interaction of rotor 1 and stator 1. Thus, the goal was to fully understand the stage 1 performance under isolated and multi-stage conditions, and additionally to provide a detailed aerodynamic data set for CFD validation. Full use was made of steady and unsteady measurement methods to understand fluid dynamics loss source mechanisms due to rotor shock interaction and endwall losses. This paper will present the description of the compressor test article and its measured performance and operability, for both the single stage and two stage configurations. We focus the paper on measurements at 97% corrected speed with design intent vane setting angles.

  7. Micro-cost Analysis of ALK Rearrangement Testing by FISH to Determine Eligibility for Crizotinib Therapy in NSCLC: Implications for Cost Effectiveness of Testing and Treatment.

    PubMed

    Parker, David; Belaud-Rotureau, Marc-Antoine

    2014-01-01

    Break-apart fluorescence in situ hybridization (FISH) is the gold standard test for anaplastic lymphoma kinase (ALK) gene rearrangement. However, this methodology often is assumed to be expensive and potentially cost-prohibitive given the low prevalence of ALK-positive non-small cell lung cancer (NSCLC) cases. To more accurately estimate the cost of ALK testing by FISH, we developed a micro-cost model that accounts for all cost elements of the assay, including laboratory reagents, supplies, capital equipment, technical and pathologist labor, and the acquisition cost of the commercial test and associated reagent kits and controls. By applying a set of real-world base-case parameter values, we determined that the cost of a single ALK break-apart FISH test result is $278.01. Sensitivity analysis on the parameters of batch size, testing efficiency, and the cost of the commercial diagnostic testing products revealed that the cost per result is highly sensitive to batch size, but much less so to efficiency or product cost. This implies that ALK testing by FISH will be most cost effective when performed in high-volume centers. Our results indicate that testing cost may not be the primary determinant of crizotinib (Xalkori(®)) treatment cost effectiveness, and suggest that testing cost is an insufficient reason to limit the use of FISH testing for ALK rearrangement.

  8. Micro-cost Analysis of ALK Rearrangement Testing by FISH to Determine Eligibility for Crizotinib Therapy in NSCLC: Implications for Cost Effectiveness of Testing and Treatment

    PubMed Central

    Parker, David; Belaud-Rotureau, Marc-Antoine

    2014-01-01

    Break-apart fluorescence in situ hybridization (FISH) is the gold standard test for anaplastic lymphoma kinase (ALK) gene rearrangement. However, this methodology often is assumed to be expensive and potentially cost-prohibitive given the low prevalence of ALK-positive non-small cell lung cancer (NSCLC) cases. To more accurately estimate the cost of ALK testing by FISH, we developed a micro-cost model that accounts for all cost elements of the assay, including laboratory reagents, supplies, capital equipment, technical and pathologist labor, and the acquisition cost of the commercial test and associated reagent kits and controls. By applying a set of real-world base-case parameter values, we determined that the cost of a single ALK break-apart FISH test result is $278.01. Sensitivity analysis on the parameters of batch size, testing efficiency, and the cost of the commercial diagnostic testing products revealed that the cost per result is highly sensitive to batch size, but much less so to efficiency or product cost. This implies that ALK testing by FISH will be most cost effective when performed in high-volume centers. Our results indicate that testing cost may not be the primary determinant of crizotinib (Xalkori®) treatment cost effectiveness, and suggest that testing cost is an insufficient reason to limit the use of FISH testing for ALK rearrangement. PMID:25520569

  9. Evaluation of range and distortion tolerance for high Mach number transonic fan stages. Task 2: Performance of a 1500-foot-per-second tip speed transonic fan stage with variable geometry inlet guide vanes and stator

    NASA Technical Reports Server (NTRS)

    Bilwakesh, K. R.; Koch, C. C.; Prince, D. C.

    1972-01-01

    A 0.5 hub/tip radius ratio compressor stage consisting of a 1500 ft/sec tip speed rotor, a variable camber inlet guide vane and a variable stagger stator was designed and tested with undistorted inlet flow, flow with tip radial distortion, and flow with 90 degrees, one-per-rev, circumferential distortion. At the design speed and design IGV and stator setting, the design stage pressure ratio was achieved at a weight flow within 1% of the design flow. Analytical results on rotor tip shock structure, deviation angle, and part-span shroud losses at different operating conditions are presented. The variable geometry blading enabled efficient operation with adequate stall margin at the design condition and at 70% speed. Closing the inlet guide vanes to 40 degrees changed the speed-versus-weight-flow relationship along the stall line and thus provided the flexibility of operation at off-design conditions. Inlet flow distortion caused considerable losses in peak efficiency, efficiency on a constant throttle line through design pressure ratio at design speed, stall pressure ratio, and stall margin at the 0 degree IGV setting and high rotative speeds. The use of the 40 degree inlet guide vane setting enabled partial recovery of the stall margin over the standard constant throttle line.

  10. An efficient method for solving the steady Euler equations

    NASA Technical Reports Server (NTRS)

    Liou, M. S.

    1986-01-01

    An efficient numerical procedure for solving a set of nonlinear partial differential equations is given, specifically for the steady Euler equations. Solutions of the equations were obtained by Newton's linearization procedure, commonly used to solve the roots of nonlinear algebraic equations. In applying the same procedure to solve a set of differential equations, we give a theorem showing that a quadratic convergence rate can be achieved. While the domain of quadratic convergence depends on the problems studied and is unknown a priori, we show that first- and second-order derivatives of the flux vectors determine whether the condition for quadratic convergence is satisfied. The first derivatives enter as an implicit operator for yielding new iterates and the second derivatives indicate smoothness of the flows considered. Consequently, flows involving shocks are expected to require a larger number of iterations. First-order upwind discretization in conjunction with the Steger-Warming flux-vector splitting is employed on the implicit operator and a diagonally dominant matrix results. However, the explicit operator is represented by first- and second-order upwind differencings, using both Steger-Warming's and van Leer's splittings. We discuss treatment of boundary conditions and solution procedures for solving the resulting block matrix system. With a set of test problems for one- and two-dimensional flows, we present a detailed study of the efficiency, accuracy, and convergence of the present method.
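
    A toy algebraic sketch of the Newton linearization step described above: solve F(u) = 0 by repeatedly solving the linearized system J(u) du = -F(u). The 2x2 system is a stand-in for the discretized Euler equations and is included only to show the quadratic convergence of the residual.

```python
# Newton's linearization on a toy 2x2 nonlinear system, showing quadratic convergence.
import numpy as np

def F(u):
    x, y = u
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def J(u):
    x, y = u
    return np.array([[2.0 * x, 2.0 * y],
                     [y,       x     ]])

u = np.array([2.0, 0.2])                       # initial iterate
for it in range(8):
    r = F(u)
    print(f"iter {it}: ||F|| = {np.linalg.norm(r):.3e}")
    if np.linalg.norm(r) < 1e-12:
        break
    u = u + np.linalg.solve(J(u), -r)          # Newton update from the linearized system
```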

  11. Statistically Assessing Time-Averaged and Paleosecular Variation Field Models Against Paleomagnetic Directional Data Sets. Can Likely non-Zonal Features be Detected in a Robust way ?

    NASA Astrophysics Data System (ADS)

    Hulot, G.; Khokhlov, A.

    2007-12-01

    We recently introduced a method to rigorously test the statistical compatibility of combined time-averaged (TAF) and paleosecular variation (PSV) field models against any lava flow paleomagnetic database (Khokhlov et al., 2001, 2006). Applying this method to test (TAF+PSV) models against synthetic data produced from those models shows that the method is very efficient at discriminating models, and very sensitive, provided data errors are properly taken into account. This prompted us to test a variety of published combined (TAF+PSV) models against a test Brunhes stable polarity data set extracted from the Quidelleur et al. (1994) database. Not surprisingly, ignoring data errors leads all models to be rejected. But taking data errors into account leads to the stimulating conclusion that at least one (TAF+PSV) model appears to be compatible with the selected data set, this model being purely axisymmetric. This result shows that in practice also, and with the databases currently available, the method can discriminate various candidate models and decide which actually best fits a given data set. But it also shows that likely non-zonal signatures of non-homogeneous boundary conditions imposed by the mantle are difficult to identify as statistically robust from paleomagnetic directional data sets. In the present paper, we will discuss the possibility that such signatures could eventually be identified as robust with the help of more recent data sets (such as the one put together under the collaborative "TAFI" effort, see e.g. Johnson et al. abstract #GP21A-0013, AGU Fall Meeting, 2005) or by taking additional information into account (such as the possible coincidence of non-zonal time-averaged field patterns with analogous patterns in the modern field).

  12. A new modified conjugate gradient coefficient for solving system of linear equations

    NASA Astrophysics Data System (ADS)

    Hajar, N.; ‘Aini, N.; Shapiee, N.; Abidin, Z. Z.; Khadijah, W.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is an evolution of computational methods for solving unconstrained optimization problems. This approach is easy to implement due to its simplicity and has been proven to be effective in solving real-life applications. Although this field has received a copious amount of attention in recent years, some of the new approaches to the CG algorithm cannot surpass the efficiency of the previous versions. Therefore, in this paper, a new CG coefficient which retains the sufficient descent and global convergence properties of the original CG methods is proposed. This new CG is tested on a set of test functions under exact line search. Its performance is then compared to that of some well-known previous CG methods based on the number of iterations and CPU time. The results show that the new CG algorithm has the best efficiency among all the methods tested. This paper also includes an application of the new CG algorithm for solving large systems of linear equations.
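
    The sketch below is a hedged illustration of a classical CG iteration with the Fletcher-Reeves coefficient and an exact line search on a quadratic objective, so the minimizer solves a linear system; it is not the new coefficient proposed in the paper, and the test matrix is an arbitrary example.

    ```python
    import numpy as np

    def cg_fletcher_reeves(A, b, x0, tol=1e-10, max_iter=1000):
        """CG with the classical Fletcher-Reeves coefficient applied to
        f(x) = 0.5 x'Ax - b'x, so the minimizer solves Ax = b.
        For this quadratic, the exact line search step has a closed form."""
        x = np.asarray(x0, dtype=float)
        g = A @ x - b          # gradient of f
        d = -g                 # initial search direction
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            alpha = (g @ g) / (d @ A @ d)     # exact line search for quadratic f
            x = x + alpha * d
            g_new = A @ x - b
            beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
            d = -g_new + beta * d
            g = g_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(cg_fletcher_reeves(A, b, np.zeros(2)))   # approx [0.0909, 0.6364]
    ```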

  13. pyRMSD: a Python package for efficient pairwise RMSD matrix calculation and handling.

    PubMed

    Gil, Víctor A; Guallar, Víctor

    2013-09-15

    We introduce pyRMSD, an open source standalone Python package that aims at offering an integrative and efficient way of performing Root Mean Square Deviation (RMSD)-related calculations on large sets of structures. It is especially tuned to do fast collective RMSD calculations, such as pairwise RMSD matrices, implementing up to three well-known superposition algorithms. pyRMSD provides its own symmetric distance matrix class that, besides the fact that it can be used as a regular matrix, helps to save memory and increases memory access speed. This last feature can dramatically improve the overall performance of any Python algorithm using it. In addition, its extensibility, testing suites and documentation make it a good choice for those in need of a workbench for developing or testing new algorithms. The source code (under MIT license), installer, test suites and benchmarks can be found at https://pele.bsc.es/ under the tools section. victor.guallar@bsc.es Supplementary data are available at Bioinformatics online.
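
    A minimal sketch of what a pairwise RMSD matrix computation involves, assuming plain NumPy and the standard Kabsch superposition; this is not pyRMSD's API, and the condensed upper-triangular output only mirrors the memory-saving symmetric-matrix idea described above.

    ```python
    import numpy as np

    def kabsch_rmsd(P, Q):
        """RMSD between two (N, 3) coordinate sets after optimal superposition
        (Kabsch algorithm): center both, find the best rotation via SVD."""
        P = P - P.mean(axis=0)
        Q = Q - Q.mean(axis=0)
        V, S, Wt = np.linalg.svd(P.T @ Q)
        d = np.sign(np.linalg.det(V @ Wt))      # avoid improper rotations
        D = np.diag([1.0, 1.0, d])
        R = V @ D @ Wt
        diff = P @ R - Q
        return np.sqrt((diff ** 2).sum() / len(P))

    def pairwise_rmsd_matrix(coords):
        """Condensed upper-triangular pairwise RMSD values for a list of
        conformations (one entry per unordered pair)."""
        n = len(coords)
        out = []
        for i in range(n):
            for j in range(i + 1, n):
                out.append(kabsch_rmsd(coords[i], coords[j]))
        return np.array(out)

    rng = np.random.default_rng(0)
    frames = [rng.normal(size=(20, 3)) for _ in range(4)]
    print(pairwise_rmsd_matrix(frames))
    ```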

  14. Acoustic and aerodynamic testing of a scale model variable pitch fan

    NASA Technical Reports Server (NTRS)

    Jutras, R. R.; Kazin, S. B.

    1974-01-01

    A fully reversible pitch scale model fan with variable pitch rotor blades was tested to determine its aerodynamic and acoustic characteristics. The single-stage fan has a design tip speed of 1160 ft/sec (353.568 m/sec) at a bypass pressure ratio of 1.5. Three operating lines were investigated. Test results show that the blade pitch for minimum noise also resulted in the highest efficiency for all three operating lines at all thrust levels. The minimum perceived noise on a 200-ft (60.96 m) sideline was obtained with the nominal nozzle. At 44% of takeoff thrust, the PNL reduction between blade pitch and minimum noise blade pitch is 1.8 PNdB for the nominal nozzle and decreases with increasing thrust. The small nozzle (6% undersized) has the highest efficiency at all part thrust conditions for the minimum noise blade pitch setting; although, the noise is about 1.0 PNdB higher for the small nozzle at the minimum noise blade pitch position.

  15. A baroclinic quasigeostrophic open ocean model

    NASA Technical Reports Server (NTRS)

    Miller, R. N.; Robinson, A. R.; Haidvogel, D. B.

    1983-01-01

    A baroclinic quasigeostrophic open ocean model is presented, calibrated by a series of test problems, and demonstrated to be feasible and efficient for application to realistic mid-oceanic mesoscale eddy flow regimes. Two methods of treating the depth dependence of the flow, a finite difference method and a collocation method, are tested and intercompared. Sample Rossby wave calculations with and without advection are performed with constant stratification and two levels of nonlinearity, one weaker than and one typical of real ocean flows. Using exact analytical solutions for comparison, the accuracy and efficiency of the model are tabulated as functions of the computational parameters and stability limits set; typically, errors were controlled between 1 percent and 10 percent RMS after two wave periods. Further Rossby wave tests with realistic stratification and wave parameters chosen to mimic real ocean conditions were performed to determine computational parameters for use with real and simulated data. Finally, a prototype calculation with quasiturbulent simulated data was performed successfully, which demonstrates the practicality of the model for scientific use.

  16. Recognizing human actions by learning and matching shape-motion prototype trees.

    PubMed

    Jiang, Zhuolin; Lin, Zhe; Davis, Larry S

    2012-03-01

    A shape-motion prototype-based approach is introduced for action recognition. The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, an action prototype tree is learned in a joint shape and motion space via hierarchical K-means clustering and each training sequence is represented as a labeled prototype sequence; then a look-up table of prototype-to-prototype distances is generated. During testing, based on a joint probability model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint probability, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance measures used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. Our approach enables robust action matching in challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 92.86 percent on a large gesture data set (with dynamic backgrounds), 100 percent on the Weizmann action data set, 95.77 percent on the KTH action data set, 88 percent on the UCF sports data set, and 87.27 percent on the CMU action data set.
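
    The sketch below illustrates only the prototype-and-lookup-table idea, with a flat K-means standing in for the hierarchical prototype tree and a naive equal-length sequence comparison in place of dynamic matching; the feature dimension, number of prototypes, and random data are purely illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.spatial.distance import cdist

    # Toy stand-in features: each frame is a joint shape-motion descriptor.
    rng = np.random.default_rng(1)
    train_frames = rng.normal(size=(500, 16))

    # Learn K prototypes (a flat K-means stand-in for the hierarchical tree).
    K = 32
    km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(train_frames)
    prototypes = km.cluster_centers_

    # Precompute the prototype-to-prototype distance look-up table.
    proto_dist = cdist(prototypes, prototypes)

    def encode(frames):
        """Label each frame with its nearest prototype index."""
        return cdist(frames, prototypes).argmin(axis=1)

    def sequence_distance(seq_a, seq_b):
        """Frame-to-frame distances via table look-up (equal-length toy case);
        the paper aligns sequences with dynamic matching instead."""
        return proto_dist[seq_a, seq_b].mean()

    a = encode(rng.normal(size=(40, 16)))
    b = encode(rng.normal(size=(40, 16)))
    print(sequence_distance(a, b))
    ```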

  17. Quality control and batch testing of MRPC modules for BESIII ETOF upgrade

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Li, X.; Sun, Y. J.; Li, C.; Heng, Y. K.; Chen, T. X.; Dai, H. L.; Shao, M.; Sun, S. S.; Tang, Z. B.; Yang, R. X.; Wu, Z.; Wang, X. Z.

    2017-12-01

    The end-cap time-of-flight (ETOF) system for the Beijing Spectrometer III (BESIII) has been upgraded using the Multi-gap Resistive Plate Chamber (MRPC) technology (Williams et al., 1999; Li et al., 2001; Blanco et al., 2003; Fonte et al., 2013, [1-4]). A set of quality-assurance procedures has been developed to guarantee the performance of the 72 mass-produced MRPC modules installed. The cosmic ray batch testing shows that the average detection efficiency of the MRPC modules is about 95%. Two different calibration methods indicate that the MRPCs' time resolution can reach 60 ps in the cosmic ray test.

  18. Use of the azimuthal resistivity technique for determination of regional azimuth of transmissivity

    USGS Publications Warehouse

    Carlson, D.

    2010-01-01

    Many bedrock units contain joint sets that commonly act as preferred paths for the movement of water, electrical charge, and possible contaminants associated with production or transit of crude oil or refined products. To facilitate the development of remediation programs, a need exists to reliably determine regional-scale properties of these joint sets: azimuth of transmissivity ellipse, dominant set, and trend(s). The surface azimuthal electrical resistivity survey method used for local in situ studies can be a noninvasive, reliable, efficient, and relatively cost-effective method for regional studies. The azimuthal resistivity survey method combines the use of standard resistivity equipment with a Wenner array rotated about a fixed center point, at selected degree intervals, which yields an apparent resistivity ellipse from which joint-set orientation can be determined. Regional application of the azimuthal survey method was tested at 17 sites in an approximately 500 km2 (193 mi2) area around Milwaukee, Wisconsin, with less than 15 m (50 ft) overburden above the dolomite. Results of 26 azimuthal surveys were compared and determined to be consistent with the results of two other methods: direct observation of joint-set orientation and transmissivity ellipses from multiple-well-aquifer tests. The average of joint-set trend determined by azimuthal surveys is within 2.5° of the average of joint-set trend determined by direct observation of major joint sets at 24 sites. The average of maximum of transmissivity trend determined by azimuthal surveys is within 5.7° of the average of maximum of transmissivity trend determined for 14 multiple-well-aquifer tests. Copyright © 2010 The American Association of Petroleum Geologists/Division of Environmental Geosciences. All rights reserved.

  19. Efficiency of Health Care Production in Low-Resource Settings: A Monte-Carlo Simulation to Compare the Performance of Data Envelopment Analysis, Stochastic Distance Functions, and an Ensemble Model

    PubMed Central

    Giorgio, Laura Di; Flaxman, Abraham D.; Moses, Mark W.; Fullman, Nancy; Hanlon, Michael; Conner, Ruben O.; Wollum, Alexandra; Murray, Christopher J. L.

    2016-01-01

    Low-resource countries can greatly benefit from even small increases in efficiency of health service provision, supporting a strong case to measure and pursue efficiency improvement in low- and middle-income countries (LMICs). However, the knowledge base concerning efficiency measurement remains scarce for these contexts. This study shows that current estimation approaches may not be well suited to measure technical efficiency in LMICs and offers an alternative approach for efficiency measurement in these settings. We developed a simulation environment which reproduces the characteristics of health service production in LMICs, and evaluated the performance of Data Envelopment Analysis (DEA) and Stochastic Distance Function (SDF) for assessing efficiency. We found that an ensemble approach (ENS) combining efficiency estimates from a restricted version of DEA (rDEA) and restricted SDF (rSDF) is the preferable method across a range of scenarios. This is the first study to analyze efficiency measurement in a simulation setting for LMICs. Our findings aim to heighten the validity and reliability of efficiency analyses in LMICs, and thus inform policy dialogues about improving the efficiency of health service production in these settings. PMID:26812685

  20. Sample reproducibility of genetic association using different multimarker TDTs in genome-wide association studies: characterization and a new approach.

    PubMed

    Abad-Grau, Mara M; Medina-Medina, Nuria; Montes-Soldado, Rosana; Matesanz, Fuencisla; Bafna, Vineet

    2012-01-01

    Multimarker Transmission/Disequilibrium Tests (TDTs) are association tests that are very robust to population admixture and structure and may be used to identify susceptibility loci in genome-wide association studies. Multimarker TDTs using several markers may increase power by capturing high-degree associations. However, there is also a risk of spurious associations and power reduction due to the increase in degrees of freedom. In this study we show that associations found by tests built on simple null hypotheses are highly reproducible in a second independent data set regardless of the number of markers. As a test exhibiting this feature to its maximum, we introduce the multimarker 2-Groups TDT (mTDT(2G)), a test which, under the hypothesis of no linkage, asymptotically follows a χ2 distribution with 1 degree of freedom regardless of the number of markers. The statistic requires the division of parental haplotypes into two groups: disease susceptibility and disease protective haplotype groups. We assessed the test behavior by performing an extensive simulation study as well as a real-data study using several data sets of two complex diseases. We show that the mTDT(2G) test is highly efficient and achieves the highest power among all the tests used, even when the null hypothesis is tested in a second independent data set. Therefore, mTDT(2G) turns out to be a very promising multimarker TDT to perform genome-wide searches for disease susceptibility loci that may be used as a preprocessing step in the construction of more accurate genetic models to predict individual susceptibility to complex diseases.
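
    For background on the 1-degree-of-freedom chi-square form mentioned above, the sketch below computes the classic single-marker TDT statistic from transmission counts; it is not the paper's multimarker mTDT(2G), and the counts are made up.

    ```python
    from scipy.stats import chi2

    def tdt_statistic(b, c):
        """Classic transmission/disequilibrium test.

        b: heterozygous parents transmitting allele A to the affected child
        c: heterozygous parents transmitting the other allele
        Under no linkage the statistic is asymptotically chi-square, 1 df.
        """
        stat = (b - c) ** 2 / (b + c)
        p_value = chi2.sf(stat, df=1)
        return stat, p_value

    print(tdt_statistic(48, 30))   # e.g. 48 vs 30 transmissions
    ```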

  1. Sample Reproducibility of Genetic Association Using Different Multimarker TDTs in Genome-Wide Association Studies: Characterization and a New Approach

    PubMed Central

    Abad-Grau, Mara M.; Medina-Medina, Nuria; Montes-Soldado, Rosana; Matesanz, Fuencisla; Bafna, Vineet

    2012-01-01

    Multimarker Transmission/Disequilibrium Tests (TDTs) are association tests that are very robust to population admixture and structure and may be used to identify susceptibility loci in genome-wide association studies. Multimarker TDTs using several markers may increase power by capturing high-degree associations. However, there is also a risk of spurious associations and power reduction due to the increase in degrees of freedom. In this study we show that associations found by tests built on simple null hypotheses are highly reproducible in a second independent data set regardless of the number of markers. As a test exhibiting this feature to its maximum, we introduce the multimarker 2-Groups TDT (mTDT2G), a test which, under the hypothesis of no linkage, asymptotically follows a χ2 distribution with 1 degree of freedom regardless of the number of markers. The statistic requires the division of parental haplotypes into two groups: disease susceptibility and disease protective haplotype groups. We assessed the test behavior by performing an extensive simulation study as well as a real-data study using several data sets of two complex diseases. We show that the mTDT2G test is highly efficient and achieves the highest power among all the tests used, even when the null hypothesis is tested in a second independent data set. Therefore, mTDT2G turns out to be a very promising multimarker TDT to perform genome-wide searches for disease susceptibility loci that may be used as a preprocessing step in the construction of more accurate genetic models to predict individual susceptibility to complex diseases. PMID:22363405

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, L; Han, Y; Jin, M

    Purpose: To develop an iterative reconstruction method for X-ray CT in which the reconstruction can quickly converge to the desired solution with much reduced projection views. Methods: The reconstruction is formulated as a convex feasibility problem, i.e. the solution is an intersection of three convex sets: 1) the data fidelity (DF) set – the L2 norm of the difference between the observed projections and those from the reconstructed image is no greater than an error bound; 2) the non-negativity of image voxels (NN) set; and 3) the piecewise constant (PC) set - the total variation (TV) of the reconstructed image is no greater than an upper bound. The solution can be found by applying projection onto convex sets (POCS) sequentially for these three convex sets. Specifically, the algebraic reconstruction technique and setting negative voxels to zero are used for projection onto the DF and NN sets, respectively, while the projection onto the PC set is achieved by solving a standard Rudin, Osher, and Fatemi (ROF) model. The proposed method is named full sequential POCS (FS-POCS); it was tested using the Shepp-Logan phantom and the Catphan600 phantom and compared with two similar algorithms, TV-POCS and CP-TV. Results: Using the Shepp-Logan phantom, the root mean square error (RMSE) of the reconstructed images as a function of the number of iterations is used as the convergence measure. In general, FS-POCS converges faster than TV-POCS and CP-TV, especially with fewer projection views. FS-POCS can also achieve accurate reconstruction of cone-beam CT of the Catphan600 phantom using only 54 views, comparable to that of FDK using 364 views. Conclusion: We developed an efficient iterative reconstruction for sparse-view CT using full sequential POCS. The simulation and physical phantom data demonstrated the computational efficiency and effectiveness of FS-POCS.
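
    A hedged sketch of the sequential-POCS idea follows: a Kaczmarz/ART pass serves as the data-fidelity projection and clipping serves as the non-negativity projection, while the TV/ROF projection onto the piecewise-constant set is left as a commented placeholder because a full ROF solver is beyond a short example; the toy system matrix stands in for a CT projector.

    ```python
    import numpy as np

    def art_sweep(x, A, b, relax=0.5):
        """Kaczmarz/ART pass: move x toward each hyperplane A[i] x = b[i]
        (the data-fidelity set, up to an error bound)."""
        for i in range(A.shape[0]):
            a = A[i]
            x = x + relax * (b[i] - a @ x) / (a @ a) * a
        return x

    def project_nonneg(x):
        """Projection onto the non-negativity set: clip negative voxels."""
        return np.clip(x, 0.0, None)

    def fs_pocs_like(A, b, n_iter=50):
        """Sequential POCS loop: DF set, then NN set, then (in the paper) a
        TV/ROF projection onto the piecewise-constant set, stubbed out here."""
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = art_sweep(x, A, b)
            x = project_nonneg(x)
            # x = project_tv_ball(x, tv_bound)  # placeholder: solve an ROF model
        return x

    rng = np.random.default_rng(2)
    A = rng.normal(size=(60, 40))           # toy "projection" matrix
    x_true = np.abs(rng.normal(size=40))
    b = A @ x_true
    print(np.linalg.norm(fs_pocs_like(A, b) - x_true))
    ```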

  3. Thermal Transmittance and the Embodied Energy of Timber Frame Lightweight Walls Insulated with Straw and Reed

    NASA Astrophysics Data System (ADS)

    Miljan, M.; Miljan, J.

    2015-11-01

    Sustainable energy use has become topical throughout the world. Energy gives us the comfort we are used to. EU and national regulations determine the energy efficiency of buildings. This is one side of the problem - the energy efficiency of houses during exploitation. The other side is the primary energy content of the materials used and a more rational use of resources during the whole life cycle of a building. The latter value constitutes about 8 - 20% of the whole energy content. Calculations of the energy efficiency of materials lead us to the energy efficiency of insulation materials and to a comparison of natural and industrial materials, taking into account their thermal conductivity as well as their primary energy content. A case study of the test house (built in 2012) insulated with straw bales gave the result that the thermal transmittance of the investigated straw bale walls was in accordance with the minimum energy efficiency requirements set in Estonia, U = 0.12 - 0.22 W/m2K (for walls).

  4. Predicting Protein-protein Association Rates using Coarse-grained Simulation and Machine Learning

    NASA Astrophysics Data System (ADS)

    Xie, Zhong-Ru; Chen, Jiawen; Wu, Yinghao

    2017-04-01

    Protein-protein interactions dominate all major biological processes in living cells. We have developed a new Monte Carlo-based simulation algorithm to study the kinetic process of protein association. We tested our method on a previously used large benchmark set of 49 protein complexes. The predicted rate was overestimated in the benchmark test compared to the experimental results for a group of protein complexes. We hypothesized that this resulted from molecular flexibility at the interface regions of the interacting proteins. After applying a machine learning algorithm with input variables that accounted for both the conformational flexibility and the energetic factor of binding, we successfully identified most of the protein complexes with overestimated association rates and improved our final prediction by using a cross-validation test. This method was then applied to a new independent test set and resulted in a similar prediction accuracy to that obtained using the training set. It has been thought that diffusion-limited protein association is dominated by long-range interactions. Our results provide strong evidence that the conformational flexibility also plays an important role in regulating protein association. Our studies provide new insights into the mechanism of protein association and offer a computationally efficient tool for predicting its rate.

  5. Predicting Protein–protein Association Rates using Coarse-grained Simulation and Machine Learning

    PubMed Central

    Xie, Zhong-Ru; Chen, Jiawen; Wu, Yinghao

    2017-01-01

    Protein–protein interactions dominate all major biological processes in living cells. We have developed a new Monte Carlo-based simulation algorithm to study the kinetic process of protein association. We tested our method on a previously used large benchmark set of 49 protein complexes. The predicted rate was overestimated in the benchmark test compared to the experimental results for a group of protein complexes. We hypothesized that this resulted from molecular flexibility at the interface regions of the interacting proteins. After applying a machine learning algorithm with input variables that accounted for both the conformational flexibility and the energetic factor of binding, we successfully identified most of the protein complexes with overestimated association rates and improved our final prediction by using a cross-validation test. This method was then applied to a new independent test set and resulted in a similar prediction accuracy to that obtained using the training set. It has been thought that diffusion-limited protein association is dominated by long-range interactions. Our results provide strong evidence that the conformational flexibility also plays an important role in regulating protein association. Our studies provide new insights into the mechanism of protein association and offer a computationally efficient tool for predicting its rate. PMID:28418043

  6. Predicting Protein-protein Association Rates using Coarse-grained Simulation and Machine Learning.

    PubMed

    Xie, Zhong-Ru; Chen, Jiawen; Wu, Yinghao

    2017-04-18

    Protein-protein interactions dominate all major biological processes in living cells. We have developed a new Monte Carlo-based simulation algorithm to study the kinetic process of protein association. We tested our method on a previously used large benchmark set of 49 protein complexes. The predicted rate was overestimated in the benchmark test compared to the experimental results for a group of protein complexes. We hypothesized that this resulted from molecular flexibility at the interface regions of the interacting proteins. After applying a machine learning algorithm with input variables that accounted for both the conformational flexibility and the energetic factor of binding, we successfully identified most of the protein complexes with overestimated association rates and improved our final prediction by using a cross-validation test. This method was then applied to a new independent test set and resulted in a similar prediction accuracy to that obtained using the training set. It has been thought that diffusion-limited protein association is dominated by long-range interactions. Our results provide strong evidence that the conformational flexibility also plays an important role in regulating protein association. Our studies provide new insights into the mechanism of protein association and offer a computationally efficient tool for predicting its rate.

  7. ELIPS: Toward a Sensor Fusion Processor on a Chip

    NASA Technical Reports Server (NTRS)

    Daud, Taher; Stoica, Adrian; Tyson, Thomas; Li, Wei-te; Fabunmi, James

    1998-01-01

    The paper presents the concept and initial tests from the hardware implementation of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) processor is developed to seamlessly combine rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor data in compact low-power VLSI. The first demonstration of the ELIPS concept targets interceptor functionality; other applications, mainly in robotics and autonomous systems, are considered for the future. The main assumption behind ELIPS is that fuzzy, rule-based and neural forms of computation can serve as the main primitives of an "intelligent" processor. Thus, in the same way classic processors are designed to optimize the hardware implementation of a set of fundamental operations, ELIPS is developed as an efficient implementation of computational intelligence primitives, and relies on a set of fuzzy set, fuzzy inference and neural modules, built in programmable analog hardware. The hardware programmability allows the processor to reconfigure into different machines, taking the most efficient hardware implementation during each phase of information processing. Following software demonstrations on several interceptor data sets, three important ELIPS building blocks (a fuzzy set preprocessor, a rule-based fuzzy system and a neural network) have been fabricated in analog VLSI hardware and demonstrated microsecond processing times.

  8. The Contribution of Psychosocial Stress to the Obesity Epidemic

    PubMed Central

    Siervo, M.; Wells, J. C. K.; Cizza, G.

    2009-01-01

    The Thrifty Gene hypothesis theorizes that during evolution a set of genes has been selected to ensure survival in environments with limited food supply and marked seasonality. Contemporary environments have predictable and unlimited food availability, an attenuated seasonality due to artificial lighting, indoor heating during the winter and air conditioning during the summer, and promote sedentariness and overeating. In this setting the thrifty genes are constantly activated to enhance energy storage. Psychosocial stress and sleep deprivation are other features of modern societies. Stress-induced hypercortisolemia in the setting of unlimited food supply promotes adiposity. Modern man is becoming obese because these ancient mechanisms are efficiently promoting a positive energy balance. We propose that in today’s plentifully provisioned societies, where sedentariness and mental stress have become typical traits, chronic activation of the neuroendocrine systems may contribute to the increased prevalence of obesity. We suggest that some of the yet unidentified thrifty genes may be linked to highly conserved energy sensing mechanisms (AMP kinase, mTOR kinase). These hypotheses are testable. Rural societies that are becoming rapidly industrialized and are witnessing a dramatic increase in obesity may provide a historical opportunity to conduct epidemiological studies of the thrifty genotype. In experimental settings, the effects of various forms of psychosocial stress in increasing metabolic efficiency and gene expression can be further tested. PMID:19156597

  9. ReactionMap: an efficient atom-mapping algorithm for chemical reactions.

    PubMed

    Fooshee, David; Andronico, Alessio; Baldi, Pierre

    2013-11-25

    Large databases of chemical reactions provide new data-mining opportunities and challenges. Key challenges result from the imperfect quality of the data and the fact that many of these reactions are not properly balanced or atom-mapped. Here, we describe ReactionMap, an efficient atom-mapping algorithm. Our approach uses a combination of maximum common chemical subgraph search and minimization of an assignment cost function derived empirically from training data. We use a set of over 259,000 balanced atom-mapped reactions from the SPRESI commercial database to train the system, and we validate it on random sets of 1000 and 17,996 reactions sampled from this pool. These large test sets represent a broad range of chemical reaction types, and ReactionMap correctly maps about 99% of the atoms and about 96% of the reactions, with a mean time per mapping of 2 s. Most correctly mapped reactions are mapped with high confidence. Mapping accuracy compares favorably with ChemAxon's AutoMapper, versions 5 and 6.1, and the DREAM Web tool. These approaches correctly map 60.7%, 86.5%, and 90.3% of the reactions, respectively, on the same data set. A ReactionMap server is available on the ChemDB Web portal at http://cdb.ics.uci.edu .
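
    The sketch below illustrates only the assignment-cost-minimization ingredient, pairing reactant and product atoms with the Hungarian algorithm under a toy element-mismatch cost; ReactionMap's actual cost function is trained empirically and is combined with a maximum common chemical subgraph search, which is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def map_atoms(reactant_elems, product_elems, mismatch=10.0):
        """Toy atom mapping as a linear assignment problem: build a cost
        matrix that penalizes pairing atoms of different elements, then
        minimize the total cost (assumes a balanced reaction, so both
        sides have the same number of atoms)."""
        n = len(reactant_elems)
        cost = np.zeros((n, n))
        for i, e_r in enumerate(reactant_elems):
            for j, e_p in enumerate(product_elems):
                cost[i, j] = 0.0 if e_r == e_p else mismatch
        rows, cols = linear_sum_assignment(cost)
        return list(zip(rows.tolist(), cols.tolist()))

    # Ethanol -> acetaldehyde heavy atoms (hydrogens omitted for brevity).
    print(map_atoms(["C", "C", "O"], ["C", "C", "O"]))
    ```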

  10. An EMTP system level model of the PMAD DC test bed

    NASA Technical Reports Server (NTRS)

    Dravid, Narayan V.; Kacpura, Thomas J.; Tam, Kwa-Sur

    1991-01-01

    A power management and distribution direct current (PMAD DC) test bed was set up at the NASA Lewis Research Center to investigate Space Station Freedom Electric Power Systems issues. Efficiency of test bed operation significantly improves with a computer simulation model of the test bed as an adjunct tool of investigation. Such a model is developed using the Electromagnetic Transients Program (EMTP) and is available to the test bed developers and experimenters. The computer model is assembled on a modular basis. Device models of different types can be incorporated into the system model with only a few lines of code. A library of the various model types is created for this purpose. Simulation results and corresponding test bed results are presented to demonstrate model validity.

  11. The reliability of in-hospital diagnoses of diabetes mellitus in the setting of an acute myocardial infarction

    PubMed Central

    Arnold, Suzanne V; Lipska, Kasia J; Inzucchi, Silvio E; Li, Yan; Jones, Philip G; McGuire, Darren K; Goyal, Abhinav; Stolker, Joshua M; Lind, Marcus; Spertus, John A; Kosiborod, Mikhail

    2014-01-01

    Objective Incident diabetes mellitus (DM) is important to recognize in patients with acute myocardial infarction (AMI). To develop an efficient screening strategy, we explored the use of random plasma glucose (RPG) at admission and fasting plasma glucose (FPG) to select patients with AMI for glycosylated hemoglobin (HbA1c) testing. Design, setting, and participants Prospective registry of 1574 patients with AMI not taking glucose-lowering medication from 24 US hospitals. All patients had HbA1c measured at a core laboratory and admission RPG and ≥2 FPGs recorded during hospitalization. We examined potential combinations of RPG and FPG and compared these with HbA1c≥6.5%, considered the gold standard for DM diagnosis in these analyses. Results An RPG>140 mg/dL or FPG≥126 mg/dL had high sensitivity for DM diagnosis. Combining these into a screening protocol (if admission RPG>140, check HbA1c; or if FPG≥126 on a subsequent day, check HbA1c) led to HbA1c testing in 50% of patients and identified 86% with incident DM (number needed to screen (NNS)=3.3 to identify 1 case of DM; vs NNS=5.6 with universal HbA1c screening). Alternatively, using an RPG>180 led to HbA1c testing in 40% of patients with AMI and identified 82% of DM (NNS=2.7). Conclusions We have established two potential selective screening methods for DM in the setting of AMI that could identify the vast majority of incident DM by targeted screening of 40–50% of patients with AMI with HbA1c testing. Using these methods may efficiently identify patients with AMI with DM so that appropriate education and treatment can be promptly initiated. PMID:25452878
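
    A small, explicitly hypothetical helper showing the selective screening rule described above (admission RPG > 140 mg/dL, or any subsequent FPG ≥ 126 mg/dL, triggers HbA1c testing); the function name and structure are illustrative, not a clinical tool.

    ```python
    def needs_hba1c_screening(admission_rpg, fasting_pgs,
                              rpg_cutoff=140, fpg_cutoff=126):
        """Hypothetical helper for the abstract's screening protocol.
        All glucose values are in mg/dL."""
        if admission_rpg > rpg_cutoff:
            return True
        return any(fpg >= fpg_cutoff for fpg in fasting_pgs)

    print(needs_hba1c_screening(152, [110, 118]))   # True: RPG > 140 mg/dL
    print(needs_hba1c_screening(128, [119, 127]))   # True: an FPG >= 126 mg/dL
    print(needs_hba1c_screening(128, [110, 118]))   # False: no trigger
    ```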

  12. Influence of wire-coil inserts on the thermo-hydraulic performance of a flat-plate solar collector

    NASA Astrophysics Data System (ADS)

    Herrero Martín, R.; García, A.; Pérez-García, J.

    2012-11-01

    Enhancement techniques can be applied to flat-plate liquid solar collectors to achieve more compact and efficient designs. For the typical operating mass flow rates in flat-plate solar collectors, the most suitable technique is inserted devices. Based on previous studies by the authors, wire coils were selected for enhancing heat transfer. This type of inserted device provides better results in laminar, transitional and low-turbulence flow regimes. To test the enhanced solar collector and compare it with a standard one, an experimental side-by-side solar collector test bed was designed and constructed. The testing set-up was fully designed following the requirements of EN12975-2 and allows us to carry out performance tests under the same operating conditions (mass flow rate, inlet fluid temperature and weather conditions). This work presents the thermal efficiency curves of a commercial and an enhanced solar collector, for the standardized mass flow rate per unit of absorber area of 0.02 kg/(s m2) (in useful engineering units, 144 kg/h for water as the working fluid and a 2 m2 flat-plate solar collector absorber area). The enhanced collector was modified by inserting spiral wire coils of dimensionless pitch p/D = 1 and wire diameter e/D = 0.0717. The friction factor per tube has been computed from the overall pressure drop tests across the solar collectors. The thermal efficiency curves of both solar collectors, a standard and an enhanced collector, are presented. The enhanced solar collector increases the thermal efficiency by 15%. To account for the overall enhancement, a modified performance evaluation criterion (R3m) is proposed. The maximum value encountered reaches 1.105, which represents an increase in useful power of 10.5% for the same pumping power consumption.

  13. Efficient calculation of general Voigt profiles

    NASA Astrophysics Data System (ADS)

    Cope, D.; Khoury, R.; Lovett, R. J.

    1988-02-01

    An accurate and efficient program is presented for the computation of OIL profiles, generalizations of the Voigt profile resulting from the one-interacting-level model of Ward et al. (1974). These profiles have speed dependent shift and width functions and have asymmetric shapes. The program contains an adjustable error control parameter and includes the Voigt profile as a special case, although the general nature of this program renders it slower than a specialized Voigt profile method. Results on accuracy and computation time are presented for a broad set of test parameters, and a comparison is made with previous work on the asymptotic behavior of general Voigt profiles.
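
    For reference, the ordinary symmetric Voigt profile (the special case the program includes) can be evaluated with the Faddeeva function; the sketch below assumes SciPy's wofz and does not implement the speed-dependent, asymmetric OIL generalization of the paper.

    ```python
    import numpy as np
    from scipy.special import wofz

    def voigt_profile(x, sigma, gamma):
        """Voigt profile: convolution of a Gaussian of std sigma with a
        Lorentzian of half-width gamma, via the Faddeeva function w(z)."""
        z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
        return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

    x = np.linspace(-5.0, 5.0, 11)
    print(voigt_profile(x, sigma=1.0, gamma=0.5))
    ```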

  14. Novel method of using dynamic electrical impedance signals for noninvasive diagnosis of knee osteoarthritis.

    PubMed

    Gajre, Suhas S; Anand, Sneh; Singh, U; Saxena, Rajendra K

    2006-01-01

    Osteoarthritis (OA) of the knee is the most commonly occurring non-fatal irreversible disease, mainly in the elderly population and particularly in females. Various invasive and non-invasive methods have been reported for the diagnosis of this articular cartilage pathology. Well-known techniques such as X-ray, computed tomography, magnetic resonance imaging, arthroscopy and arthrography have their disadvantages, and diagnosis of OA in its early stages with a simple, effective noninvasive method is still a biomedical engineering problem. Analyzing noninvasive signals around the knee joint might give a simple solution for the diagnosis of knee OA. We used electrical impedance data from knees to compare normal and osteoarthritic subjects during the most common dynamic conditions of the knee, i.e. walking and knee swing. It was found that there is a substantial difference in the properties of the walking cycle (WC) and knee swing cycle (KS) signals. In experiments on 90 pathological (combined for KS and WC signals) and 72 normal signals (combined), suitable features were extracted, and the signals were then classified as normal or pathological. A multilayer feed-forward artificial neural network was trained using the back-propagation algorithm for the classification. For a training data set of 54 KS signals, the classification efficiency on a test set of 54 was 70.37% and 85.19% with and without normalization with respect to base impedance, respectively. Similarly, a training set of 27 WC signals and a test set of 27 signals resulted in 77.78% and 66.67% classification efficiency. The results indicate that dynamic electrical impedance signals have the potential to be used as a novel method for noninvasive diagnosis of knee OA.
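
    A generic sketch of the classification step, assuming scikit-learn's MLPClassifier and random stand-in features; the study's actual impedance features, network size, and training data are not reproduced.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Toy stand-in: feature vectors extracted from impedance signals.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(108, 8))            # 8 hypothetical features per signal
    y = rng.integers(0, 2, size=108)         # 0 = normal, 1 = pathological

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0)

    # Small feed-forward network trained with a back-propagation-based solver.
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    clf.fit(X_train, y_train)
    print("classification efficiency:", clf.score(X_test, y_test))
    ```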

  15. A bootstrap based Neyman-Pearson test for identifying variable importance.

    PubMed

    Ditzler, Gregory; Polikar, Robi; Rosen, Gail

    2015-04-01

    Selection of the most informative features that lead to a small loss on future data is arguably one of the most important steps in classification, data analysis and model selection. Several feature selection (FS) algorithms are available; however, due to the noise present in any data set, FS algorithms are typically accompanied by an appropriate cross-validation scheme. In this brief, we propose a statistical hypothesis test derived from the Neyman-Pearson lemma for determining if a feature is statistically relevant. The proposed approach can be applied as a wrapper to any FS algorithm, regardless of the FS criteria used by that algorithm, to determine whether a feature belongs in the relevant set. Perhaps more importantly, this procedure efficiently determines the number of relevant features given an initial starting point. We provide freely available software implementations of the proposed methodology.

  16. Performance assessment of multi-frequency processing of ICU chest images for enhanced visualization of tubes and catheters

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohui; Couwenhoven, Mary E.; Foos, David H.; Doran, James; Yankelevitz, David F.; Henschke, Claudia I.

    2008-03-01

    An image-processing method has been developed to improve the visibility of tube and catheter features in portable chest x-ray (CXR) images captured in the intensive care unit (ICU). The image-processing method is based on a multi-frequency approach, wherein the input image is decomposed into different spatial frequency bands, and those bands that contain the tube and catheter signals are individually enhanced by nonlinear boosting functions. Using a random sampling strategy, 50 cases were retrospectively selected for the study from a large database of portable CXR images that had been collected from multiple institutions over a two-year period. All images used in the study were captured using photo-stimulable, storage phosphor computed radiography (CR) systems. Each image was processed two ways. The images were processed with default image processing parameters such as those used in clinical settings (control). The 50 images were then separately processed using the new tube and catheter enhancement algorithm (test). Three board-certified radiologists participated in a reader study to assess differences in both detection-confidence performance and diagnostic efficiency between the control and test images. Images were evaluated on a diagnostic-quality, 3-megapixel monochrome monitor. Two scenarios were studied: the baseline scenario, representative of today's workflow (a single-control image presented with the window/level adjustments enabled) vs. the test scenario (a control/test image pair presented with a toggle enabled and the window/level settings disabled). The radiologists were asked to read the images in each scenario as they normally would for clinical diagnosis. Trend analysis indicates that the test scenario offers improved reading efficiency while providing as good or better detection capability compared to the baseline scenario.
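
    A toy sketch of the multi-frequency idea: split an image into a mid-frequency band using differences of Gaussian blurs, amplify that band, and recombine. The sigmas, gain, and the linear (rather than nonlinear) boosting are illustrative assumptions, not the processing used in the study.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def boost_band(image, low_sigma=1.0, high_sigma=4.0, gain=2.0):
        """Decompose image = fine residual + mid-frequency band + coarse base,
        amplify the band that would carry thin tube/catheter-like detail,
        and recombine (gain = 1 returns the original image)."""
        blur_low = gaussian_filter(image, low_sigma)
        blur_high = gaussian_filter(image, high_sigma)
        band = blur_low - blur_high          # mid-frequency detail band
        return blur_high + gain * band + (image - blur_low)

    img = np.random.default_rng(4).normal(size=(64, 64))
    print(boost_band(img).shape)
    ```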

  17. Beneficial aerodynamic effect of wing scales on the climbing flight of butterflies.

    PubMed

    Slegers, Nathan; Heilman, Michael; Cranford, Jacob; Lang, Amy; Yoder, John; Habegger, Maria Laura

    2017-01-30

    It is hypothesized that butterfly wing scale geometry and surface patterning may function to improve aerodynamic efficiency. In order to investigate this hypothesis, a method to measure butterfly flapping kinematics optically over long uninhibited flapping sequences was developed. Statistical results for the climbing flight flapping kinematics of 11 butterflies, based on a total of 236 individual flights, both with and without their wing scales, are presented. Results show that, for each of the 11 butterflies, the mean climbing efficiency decreased after scales were removed. Data were reduced to a single set of differences of climbing efficiency using a paired t-test. Results show a mean decrease in climbing efficiency of 32.2% occurred with a 95% confidence interval of 45.6%-18.8%. Similar analysis showed that the flapping amplitude decreased by 7% while the flapping frequency did not show a significant difference. Results provide strong evidence that butterfly wing scale geometry and surface patterning improve butterfly climbing efficiency. The authors hypothesize that the wing scales' effect on measured climbing efficiency may be due to an improved aerodynamic efficiency of the butterfly and could similarly be used on flapping wing micro air vehicles to potentially achieve similar gains in efficiency.
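
    The paired design described above reduces to a paired t-test on per-butterfly efficiency differences; the sketch below shows that computation on made-up numbers, not the study's data.

    ```python
    import numpy as np
    from scipy.stats import ttest_rel

    # Toy paired design: climbing efficiency for the same butterflies
    # with and without wing scales (illustrative numbers only).
    with_scales    = np.array([0.62, 0.55, 0.71, 0.48, 0.66, 0.59])
    without_scales = np.array([0.41, 0.43, 0.52, 0.35, 0.47, 0.44])

    t_stat, p_value = ttest_rel(with_scales, without_scales)
    mean_drop = 100 * (1 - without_scales.mean() / with_scales.mean())
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}, mean decrease = {mean_drop:.1f}%")
    ```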

  18. General, crystallized and fluid intelligence are not associated with functional global network efficiency: A replication study with the human connectome project 1200 data set.

    PubMed

    Kruschwitz, J D; Waller, L; Daedelow, L S; Walter, H; Veer, I M

    2018-05-01

    One hallmark example of a link between global topological network properties of complex functional brain connectivity and cognitive performance is the finding that general intelligence may depend on the efficiency of the brain's intrinsic functional network architecture. However, although this association has been featured prominently over the course of the last decade, the empirical basis for this broad association of general intelligence and global functional network efficiency is quite limited. In the current study, we set out to replicate the previously reported association between general intelligence and global functional network efficiency using the large sample size and high quality data of the Human Connectome Project, and extended the original study by testing for separate associations of crystallized and fluid intelligence with global efficiency, characteristic path length, and global clustering coefficient. We were unable to provide evidence for the proposed association between general intelligence and functional brain network efficiency, as was demonstrated by van den Heuvel et al. (2009), or for any other association with the global network measures employed. More specifically, across multiple network definition schemes, ranging from voxel-level networks to networks of only 100 nodes, no robust associations and only very weak non-significant effects with a maximal R2 of 0.01 could be observed. Notably, the strongest (non-significant) effects were observed in voxel-level networks. We discuss the possibility that the low power of previous studies and publication bias may have led to false positive results fostering the widely accepted notion of general intelligence being associated with functional global network efficiency. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Battery voltage-balancing applications of disk-type radial mode Pb(Zr • Ti)O3 ceramic resonator

    NASA Astrophysics Data System (ADS)

    Thenathayalan, Daniel; Lee, Chun-gu; Park, Joung-hu

    2017-10-01

    In this paper, we propose a novel technique to build a charge-balancing circuit for series-connected battery strings using various kinds of disk-type ceramic Pb(Zr • Ti)O3 piezoelectric resonators (PRs). The use of PRs replaces the whole external battery voltage-balancer circuit, which consists mainly of a bulky magnetic element. The proposed technique is validated using different ceramic PRs and the results are analyzed in terms of their physical properties. A series-connected battery string with a voltage rating of 61.5 V is set as a hardware prototype under test, then the power transfer efficiency of the system is measured at different imbalance voltages. The performance of the proposed battery voltage-balancer circuit employed with a PR is also validated through hardware implementation. Furthermore, the temperature distribution image of the PR is obtained to compare power transfer efficiency and thermal stress under different operating conditions. The test results show that the battery voltage-balancer circuit can be successfully implemented using PRs with the maximum power conversion efficiency of over 96% for energy storage systems.

  20. Are stock prices too volatile to be justified by the dividend discount model?

    NASA Astrophysics Data System (ADS)

    Akdeniz, Levent; Salih, Aslıhan Altay; Ok, Süleyman Tuluğ

    2007-03-01

    This study investigates excess stock price volatility using the variance bound framework of LeRoy and Porter [The present-value relation: tests based on implied variance bounds, Econometrica 49 (1981) 555-574] and of Shiller [Do stock prices move too much to be justified by subsequent changes in dividends? Am. Econ. Rev. 71 (1981) 421-436.]. The conditional variance bound relationship is examined using cross-sectional data simulated from the general equilibrium asset pricing model of Brock [Asset prices in a production economy, in: J.J. McCall (Ed.), The Economics of Information and Uncertainty, University of Chicago Press, Chicago (for N.B.E.R.), 1982]. Results show that the conditional variance bounds hold, hence, our hypothesis of the validity of the dividend discount model cannot be rejected. Moreover, in our setting, markets are efficient and stock prices are neither affected by herd psychology nor by the outcome of noise trading by naive investors; thus, we are able to control for market efficiency. Consequently, we show that one cannot infer any conclusions about market efficiency from the unconditional variance bounds tests.

  1. High Efficiency Centrifugal Compressor for Rotorcraft Applications

    NASA Technical Reports Server (NTRS)

    Medic, Gorazd; Sharma, Om P.; Jongwook, Joo; Hardin, Larry W.; McCormick, Duane C.; Cousins, William T.; Lurie, Elizabeth A.; Shabbir, Aamir; Holley, Brian M.; Van Slooten, Paul R.

    2017-01-01

    The report "High Efficiency Centrifugal Compressor for Rotorcraft Applications" documents the work conducted at UTRC under the NRA Contract NNC08CB03C, with cost share 2/3 NASA, and 1/3 UTRC, that has been extended to 4.5 years. The purpose of this effort was to identify key technical barriers to advancing the state-of-the-art of small centrifugal compressor stages; to delineate the measurements required to provide insight into the flow physics of the technical barriers; to design, fabricate, install, and test a state-of-the-art research compressor that is representative of the rear stage of an axial-centrifugal aero-engine; and to acquire detailed aerodynamic performance and research quality data to clarify flow physics and to establish detailed data sets for future application. The design activity centered on meeting the goal set outlined in the NASA solicitation-the design target was to increase efficiency at higher work factor, while also reducing the maximum diameter of the stage. To fit within the existing Small Engine Components Test Facility at NASA Glenn Research Center (GRC) and to facilitate component re-use, certain key design parameters were fixed by UTRC, including impeller tip diameter, impeller rotational speed, and impeller inlet hub and shroud radii. This report describes the design effort of the High Efficiency Centrifugal Compressor stage (HECC) and delineation of measurements, fabrication of the compressor, and the initial tests that were performed. A new High-Efficiency Centrifugal Compressor stage with a very challenging reduction in radius ratio was successfully designed, fabricated and installed at GRC. The testing was successful, with no mechanical problems and the running clearances were achieved without impeller rubs. Overall, measured pressure ratio of 4.68, work factor of 0.81, and at design exit corrected flow rate of 3 lbm/s met the target requirements. Polytropic efficiency of 85.5 percent and stall margin of 7.5 percent were measured at design flow rate and speed. The measured efficiency and stall margin were lower than pre-test CFD predictions by 2.4 percentage points (pt) and 4.5 pt, respectively. Initial impressions from the experimental data indicated that the loss in the efficiency and stall margin can be attributed to a design shortfall in the impeller. However, detailed investigation of experimental data and post-test CFD simulations of higher fidelity than pre-test CFD, and in particular the unsteady CFD simulations and the assessment with a wider range of turbulence models, have indicated that the loss in efficiency is most likely due to the impact of unfavorable unsteady impeller/diffuser interactions induced by diffuser vanes, an impeller/diffuser corrected flow-rate mismatch (and associated incidence levels), and, potentially, flow separation in the radial-to-axial bend. An experimental program with a vaneless diffuser is recommended to evaluate this observation. A subsequent redesign of the diffuser (and the radial-to-axial bend) is also recommended. The diffuser needs to be redesigned to eliminate the mismatching of the impeller and the diffuser, targeting a slightly higher flow capacity. Furthermore, diffuser vanes need to be adjusted to align the incidence angles, to optimize the splitter vane location (both radially and circumferentially), and to minimize the unsteady interactions with the impeller. 
The radial-to-axial bend needs to be redesigned to eliminate, or at least minimize, the flow separation at the inner wall, and its impact on the flow in the diffuser upstream. Lessons were also learned in terms of CFD methodology and the importance of unsteady CFD simulations for centrifugal compressors was highlighted. Inconsistencies in the implementation of a widely used two-equation turbulence model were identified and corrections are recommended. It was also observed that unsteady simulations for centrifugal compressors require significantly longer integration times than what is current practice in industry.

  2. High Efficiency Centrifugal Compressor for Rotorcraft Applications

    NASA Technical Reports Server (NTRS)

    Medic, Gorazd; Sharma, Om P.; Jongwook, Joo; Hardin, Larry W.; McCormick, Duane C.; Cousins, William T.; Lurie, Elizabeth A.; Shabbir, Aamir; Holley, Brian M.; Van Slooten, Paul R.

    2014-01-01

    The report "High Efficiency Centrifugal Compressor for Rotorcraft Applications" documents the work conducted at UTRC under the NRA Contract NNC08CB03C, with cost share 2/3 NASA, and 1/3 UTRC, that has been extended to 4.5 years. The purpose of this effort was to identify key technical barriers to advancing the state-of-the-art of small centrifugal compressor stages; to delineate the measurements required to provide insight into the flow physics of the technical barriers; to design, fabricate, install, and test a state-of-the-art research compressor that is representative of the rear stage of an axial-centrifugal aero-engine; and to acquire detailed aerodynamic performance and research quality data to clarify flow physics and to establish detailed data sets for future application. The design activity centered on meeting the goal set outlined in the NASA solicitation-the design target was to increase efficiency at higher work factor, while also reducing the maximum diameter of the stage. To fit within the existing Small Engine Components Test Facility at NASA Glenn Research Center (GRC) and to facilitate component re-use, certain key design parameters were fixed by UTRC, including impeller tip diameter, impeller rotational speed, and impeller inlet hub and shroud radii. This report describes the design effort of the High Efficiency Centrifugal Compressor stage (HECC) and delineation of measurements, fabrication of the compressor, and the initial tests that were performed. A new High-Efficiency Centrifugal Compressor stage with a very challenging reduction in radius ratio was successfully designed, fabricated and installed at GRC. The testing was successful, with no mechanical problems and the running clearances were achieved without impeller rubs. Overall, measured pressure ratio of 4.68, work factor of 0.81, and at design exit corrected flow rate of 3 lbm/s met the target requirements. Polytropic efficiency of 85.5 percent and stall margin of 7.5 percent were measured at design flow rate and speed. The measured efficiency and stall margin were lower than pre-test CFD predictions by 2.4 percentage points (pt) and 4.5 pt, respectively. Initial impressions from the experimental data indicated that the loss in the efficiency and stall margin can be attributed to a design shortfall in the impeller. However, detailed investigation of experimental data and post-test CFD simulations of higher fidelity than pre-test CFD, and in particular the unsteady CFD simulations and the assessment with a wider range of turbulence models, have indicated that the loss in efficiency is most likely due to the impact of unfavorable unsteady impeller/diffuser interactions induced by diffuser vanes, an impeller/diffuser corrected flow-rate mismatch (and associated incidence levels), and, potentially, flow separation in the radial-to-axial bend. An experimental program with a vaneless diffuser is recommended to evaluate this observation. A subsequent redesign of the diffuser (and the radial-to-axial bend) is also recommended. The diffuser needs to be redesigned to eliminate the mismatching of the impeller and the diffuser, targeting a slightly higher flow capacity. Furthermore, diffuser vanes need to be adjusted to align the incidence angles, to optimize the splitter vane location (both radially and circumferentially), and to minimize the unsteady interactions with the impeller. 
The radial-to-axial bend needs to be redesigned to eliminate, or at least minimize, the flow separation at the inner wall, and its impact on the flow in the diffuser upstream. Lessons were also learned in terms of CFD methodology and the importance of unsteady CFD simulations for centrifugal compressors was highlighted. Inconsistencies in the implementation of a widely used two-equation turbulence model were identified and corrections are recommended. It was also observed that unsteady simulations for centrifugal compressors require significantly longer integration times than what is current practice in industry.

  3. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1988-01-01

    Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require second-order information that is difficult to obtain or do not return reliable estimates of the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method and uses a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
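
    As a baseline for comparison, the simplest way to estimate a parameter sensitivity is a central difference of the re-solved optimum; the sketch below does this on a toy parametric problem (the objective and step size are illustrative), which is the kind of brute-force approach the RQP-based method aims to beat in function evaluations.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def solve(p):
        """Toy parametric problem: minimize (x1 - p)^2 + (x2 - 2p)^2 + x1*x2."""
        f = lambda x: (x[0] - p) ** 2 + (x[1] - 2 * p) ** 2 + x[0] * x[1]
        return minimize(f, x0=np.zeros(2)).x

    def parameter_sensitivity(p, h=1e-4):
        """Central-difference estimate of d x*(p) / dp: re-solve the problem
        at p + h and p - h and difference the optimal design variables."""
        return (solve(p + h) - solve(p - h)) / (2 * h)

    print(parameter_sensitivity(1.0))   # analytic sensitivity is [0, 2]
    ```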

  4. Experimental evaluation of a cooled radial-inflow turbine

    NASA Technical Reports Server (NTRS)

    Tirres, Lizet; Dicicco, L. D.; Nowlin, Brent C.

    1993-01-01

    Two 14.4 inch tip diameter rotors were installed and tested in the Small Engines Component Turbine Facility (SECTF) at NASA Lewis Research Center. The rotors, a solid and a cooled version of a radial-inflow turbine, were tested with a 15-vane stator over a set of rotational speeds ranging from 80 to 120 percent of design speed (17,500 to 21,500 rpm). The total-to-total stage pressure ratios ranged from 2.5 to 5.5. The data obtained at the equivalent conditions using the solid version of the rotor are presented with the cooled rotor data. A Reynolds number of 381,000 was maintained for both rotors, whose stages had a design mass flow of 4.0 lbm/sec, a design work level of 59.61 Btu/lbm, and a design efficiency of 87 percent. The results include mass flow data, turbine torque, turbine exit flow angles, stage efficiency, and rotor inlet and exit surveys.

  5. Experimental Evaluation of a Cooled Radial-inflow Turbine

    NASA Technical Reports Server (NTRS)

    Tirres, Lizet; Dicicco, L. Danielle; Nowlin, Brent C.

    1993-01-01

    Two 14.4 inch tip diameter rotors were installed and tested in the Small Engines Component Turbine Facility (SECTF) at NASA Lewis Research Center. The rotors, a solid and a cooled version of a radial-inflow turbine, were tested with a 15-vane stator over a set of rotational speeds ranging from 80 to 120 percent design speed (17,500 to 21,500 rpm). The total-to-total stage pressure ratios ranged from 2.5 to 5.5. The data obtained at the equivalent conditions using the solid version of the rotor are presented with the cooled rotor data. A Reynolds number of 381,000 was maintained for both rotors, whose stages had a design mass flow of 4.0 lbm/sec, a design work level of 59.61 Btu/lbm, and a design efficiency of 87 percent. The results include mass flow data, turbine torque, turbine exit flow angles, stage efficiency, and rotor inlet and exit surveys.

  6. A user-defined data type for the storage of time series data allowing efficient similarity screening.

    PubMed

    Sorokin, Anatoly; Selkov, Gene; Goryanin, Igor

    2012-07-16

    The volume of the experimentally measured time series data is rapidly growing, while storage solutions offering better data types than simple arrays of numbers or opaque blobs for keeping series data are sorely lacking. A number of indexing methods have been proposed to provide efficient access to time series data, but none has so far been integrated into a tried-and-proven database system. To explore the possibility of such integration, we have developed a data type for time series storage in PostgreSQL, an object-relational database system, and equipped it with an access method based on SAX (Symbolic Aggregate approXimation). This new data type has been successfully tested in a database supporting a large-scale plant gene expression experiment, and it was additionally tested on a very large set of simulated time series data. Copyright © 2011 Elsevier B.V. All rights reserved.
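
    For readers unfamiliar with SAX, the Python sketch below shows the core idea behind the access method named in the abstract: z-normalise the series, reduce it by piecewise aggregate approximation (PAA), and map the segment means to symbols using breakpoints that split the standard normal distribution into equiprobable regions. The alphabet size and breakpoints are illustrative assumptions, not the PostgreSQL implementation described in the paper.

        # Minimal SAX sketch (illustrative; not the PostgreSQL data type from the paper).
        import numpy as np

        def sax(series, n_segments=8, breakpoints=(-0.6745, 0.0, 0.6745)):
            """Return a SAX word over a 4-letter alphabet (breakpoints = N(0,1) quartiles)."""
            x = np.asarray(series, dtype=float)
            x = (x - x.mean()) / (x.std() + 1e-12)                                  # z-normalise
            paa = np.array([seg.mean() for seg in np.array_split(x, n_segments)])   # PAA
            symbols = np.searchsorted(breakpoints, paa)                             # 0..3 per segment
            return "".join("abcd"[s] for s in symbols)

        print(sax(np.sin(np.linspace(0.0, 6.28, 128))))   # prints an 8-letter SAX word

    Similarity screening then compares such words (or distances defined on them) instead of the raw series, which is what makes an index over the symbolic representation useful.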

  7. Experimental analysis of direct-expansion ground-coupled heat pump systems

    NASA Astrophysics Data System (ADS)

    Mei, V. C.; Baxter, V. D.

    1991-09-01

    Direct-expansion ground-coil-coupled (DXGC) heat pump systems have certain energy efficiency advantages over conventional ground-coupled heat pump (GCHP) systems. Principal among these advantages is the elimination of the secondary heat-transfer-fluid heat exchanger and circulating pump. While the DXGC concept can produce higher efficiencies, it also produces more system design and environmental problems (e.g., compressor starting, oil return, possible ground pollution, and more refrigerant charging). Furthermore, general design guidelines for DXGC systems are not well documented. A two-pronged approach was adopted for this study: (1) a literature survey, and (2) a laboratory study of a DXGC heat pump system with R-22 as the refrigerant, covering both heating- and cooling-mode tests with the ground coil tubes connected in parallel and in series. The results of each task are described in this paper. A set of general design guidelines was derived from the test results and is also presented.

  8. Knock-Limited Performance of Triptane and Xylidines Blended with 28-R Aviation Fuel at High Compression Ratios and Maximum-Economy Spark Setting

    NASA Technical Reports Server (NTRS)

    Held, Louis F.; Pritchard, Ernest I.

    1946-01-01

    An investigation was conducted to evaluate the possibilities of utilizing the high-performance characteristics of triptane and xylidines blended with 28-R fuel in order to increase fuel economy by the use of high compression ratios and maximum-economy spark setting. Full-scale single-cylinder knock tests were run with 20 deg B.T.C. and maximum-economy spark settings at compression ratios of 6.9, 8.0, and 10.0, and with two inlet-air temperatures. The fuels tested consisted of triptane, four triptane and one xylidines blend with 28-R, and 28-R fuel alone. Indicated specific fuel consumption at lean mixtures was decreased approximately 17 percent at a compression ratio of 10.0 and maximum-economy spark setting, as compared to that obtained with a compression ratio of 6.9 and normal spark setting. When compression ratio was increased from 6.9 to 10.0 at an inlet-air temperature of 150 F, normal spark setting, and a fuel-air ratio of 0.065, 55-percent triptane was required with 28-R fuel to maintain the knock-limited brake power level obtained with 28-R fuel at a compression ratio of 6.9. Brake specific fuel consumption was decreased 17.5 percent at a compression ratio of 10.0 relative to that obtained at a compression ratio of 6.9. Approximately similar results were noted at an inlet-air temperature of 250 F. For concentrations up through at least 20 percent, triptane can be more efficiently used at normal than at maximum-economy spark setting to maintain a constant knock-limited power output over the range of compression ratios tested.

  9. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    USGS Publications Warehouse

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
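
    For reference, the Nash-Sutcliffe efficiency used throughout this comparison can be computed as in the short Python sketch below (function and variable names are illustrative); a value of one indicates a perfect match, while values at or below zero indicate the model is no better than predicting the observed mean.

        # Nash-Sutcliffe efficiency: 1 - SSE / (variance of the observations about their mean).
        import numpy as np

        def nash_sutcliffe(observed, simulated):
            observed = np.asarray(observed, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

        obs = [2.1, 3.4, 5.0, 4.2, 3.9]   # observed daily streamflow (illustrative values)
        sim = [2.0, 3.1, 5.4, 4.0, 3.5]   # model-predicted streamflow (illustrative values)
        print(round(nash_sutcliffe(obs, sim), 2))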

  10. [Assessment of the efficiency of the auditory training in children with dyslalia and auditory processing disorders].

    PubMed

    Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam

    2011-01-01

    To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. The material consisted of 50 children aged 7-9 years. Children with articulation disorders stayed under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and speech therapists' and psychologist's consultations. Additionally, a set of electrophysiological examinations was performed - registration of the N2, P2, and P300 waves - together with a psychoacoustic test of central auditory functions, the frequency pattern test (FPT). Next, the children took part in regular auditory training and attended speech therapy. Speech assessment followed treatment and therapy; the psychoacoustic tests were repeated and P300 cortical potentials were recorded. Statistical analyses were then performed. The analyses revealed that the application of auditory training in patients with dyslalia and other central auditory disorders is very efficient. Auditory training may be a very efficient therapy supporting speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.

  11. Efficient culture of Chlamydia pneumoniae with cell lines derived from the human respiratory tract.

    PubMed Central

    Wong, K H; Skelton, S K; Chan, Y K

    1992-01-01

    Two established cell lines, H 292 and HEp-2, originating from the human respiratory tract, were found to be significantly more efficient and practical than the currently used HeLa 229 cells for growth of Chlamydia pneumoniae. Six strains of C. pneumoniae recently isolated from patients with respiratory ailments were used as test cultures. The H 292 and HEp-2 cells yielded much higher inclusion counts for all the test strains than did HeLa 229 cells. When they were compared with each other, H 292 cells yielded more inclusions than did HEp-2 cells, and the differences were statistically significant in 10 of 18 test sets. A simple system with these two cell lines appeared to be very efficient for culturing C. pneumoniae. It does not require treatment of tissue cells with DEAE-dextran before infection, and it may eliminate the need for serial subpassages of specimens to increase culture sensitivity. Monolayers of these cells remained intact and viable in the Chlamydia growth medium so that reinfection could take place, resulting in greatly increased inclusion counts for specimens containing few infectious units. This system may make it more practical for laboratories to culture for C. pneumoniae for treatment of infections and outbreak intervention and will facilitate studies on this recently recognized pathogen. PMID:1629316

  12. Identification of DNA-Binding Proteins Using Mixed Feature Representation Methods.

    PubMed

    Qu, Kaiyang; Han, Ke; Wu, Song; Wang, Guohua; Wei, Leyi

    2017-09-22

    DNA-binding proteins play vital roles in cellular processes, such as DNA packaging, replication, transcription, regulation, and other DNA-associated activities. The current main prediction approach is based on machine learning, and its accuracy mainly depends on the feature extraction method. Therefore, using an efficient feature representation method is important to enhance the classification accuracy. However, existing feature representation methods cannot efficiently distinguish DNA-binding proteins from non-DNA-binding proteins. In this paper, a multi-feature representation method, which combines three feature representation methods, namely K-Skip-N-Grams, information theory, and sequential and structural features (SSF), is used to represent the protein sequences and improve feature representation ability. The classifier is a support vector machine. The mixed-feature representation method is evaluated using 10-fold cross-validation and a test set. Feature vectors obtained from the combination of the three feature extractions show the best performance in 10-fold cross-validation, both without dimensionality reduction and with dimensionality reduction by max-relevance-max-distance. Moreover, the reduced mixed-feature method performs better than the non-reduced mixed-feature technique. Feature vectors combining SSF and K-Skip-N-Grams show the best performance on the test set. Among these methods, mixed features exhibit superiority over single features.
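
    A minimal scikit-learn sketch of the evaluation protocol described above (a support vector machine assessed by 10-fold cross-validation) is given below; the random matrix stands in for the mixed K-Skip-N-Grams/information-theory/SSF feature vectors, which are not reproduced here, so the numbers it prints are meaningless beyond showing the mechanics.

        # Sketch of the evaluation protocol: SVM + 10-fold cross-validation.
        # X is a synthetic placeholder for the mixed feature vectors described in the abstract.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import StratifiedKFold, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 50))      # placeholder feature matrix (200 proteins x 50 features)
        y = rng.integers(0, 2, size=200)    # 1 = DNA-binding, 0 = non-binding (synthetic labels)

        clf = SVC(kernel="rbf", C=1.0)
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        scores = cross_val_score(clf, X, y, cv=cv)
        print("10-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))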

  13. Performance study of personal inhalable aerosol samplers at ultra-low wind speeds.

    PubMed

    Sleeth, Darrah K; Vincent, James H

    2012-03-01

    The assessment of personal inhalable aerosol samplers in a controlled laboratory setting has not previously been carried out at the ultra-low wind speed conditions that represent most modern workplaces. There is currently some concern about whether the existing inhalable aerosol convention is appropriate at these low wind speeds, and an alternative has been suggested. It was therefore important to assess the performance of the most common personal samplers used to collect the inhalable aerosol fraction, especially those that were designed to match the original curve. The experimental set-up involved use of a hybrid ultra-low speed wind tunnel/calm air chamber and a rotating, heated, breathing mannequin to measure the inhalable fraction of aerosol exposure. The samplers that were tested included the Institute of Occupational Medicine (IOM), Button, and GSP inhalable samplers as well as the closed-face cassette sampler that has been (and still is) widely used by occupational hygienists in many countries. The results showed that, down to ∼0.2 m s⁻¹, the samplers matched the current inhalability criterion relatively well but exceeded it significantly at the lowest wind speed tested. Overall, there was a significant effect of wind speed on sampling efficiency, with lower wind speeds clearly associated with an increase in sampling efficiency.

  14. Development of a multiplex PCR assay for detection and discrimination of Theileria annulata and Theileria sergenti in cattle.

    PubMed

    Junlong, Liu; Li, Youquan; Liu, Aihong; Guan, Guiquan; Xie, Junren; Yin, Hong; Luo, Jianxun

    2015-07-01

    To construct a simple and efficient diagnostic assay for Theileria annulata and Theileria sergenti, a multiplex polymerase chain reaction (PCR) method was developed in this study. Following alignment of the related sequences, two primer sets were designed to specifically target the T. annulata cytochrome b (COB) gene and T. sergenti internal transcribed spacer (ITS) sequences. The designed primers could react in one PCR system, generating amplicons of 818 and 393 base pairs for T. sergenti and T. annulata, respectively. Standard genomic DNA of both Theileria species was serially diluted tenfold to test sensitivity, while specificity tests confirmed that neither primer set cross-reacts with other Theileria and Babesia species. In addition, 378 field samples were used to evaluate the utility of the multiplex PCR assay for detecting infection with these pathogens. The detection results were compared with two other published PCR methods targeting the T. annulata COB gene and the T. sergenti major piroplasm surface protein (MPSP) gene, respectively. The developed multiplex PCR assay detected infections about as efficiently as the COB and MPSP PCRs, indicating that this multiplex PCR may be a valuable assay for epidemiological studies of T. annulata and T. sergenti.

  15. Fast kinematic ray tracing of first- and later-arriving global seismic phases

    NASA Astrophysics Data System (ADS)

    Bijwaard, Harmen; Spakman, Wim

    1999-11-01

    We have developed a ray tracing algorithm that traces first- and later-arriving global seismic phases precisely (traveltime errors of the order of 0.1 s), and with great computational efficiency (15 rays s⁻¹). To achieve this, we have extended and adapted two existing ray tracing techniques: a graph method and a perturbation method. The two resulting algorithms are able to trace (critically) refracted, (multiply) reflected, some diffracted (Pdiff), and (multiply) converted seismic phases in a 3-D spherical geometry, thus including the largest part of seismic phases that are commonly observed on seismograms. We have tested and compared the two methods in 2-D and 3-D Cartesian and spherical models, for which both algorithms have yielded precise paths and traveltimes. These tests indicate that only the perturbation method is computationally efficient enough to perform 3-D ray tracing on global data sets of several million phases. To demonstrate its potential for non-linear tomography, we have applied the ray perturbation algorithm to a data set of 7.6 million P and pP phases used by Bijwaard et al. (1998) for linearized tomography. This showed that the expected heterogeneity within the Earth's mantle leads to significant non-linear effects on traveltimes for 10 per cent of the applied phases.

  16. Multilocus Association Mapping Using Variable-Length Markov Chains

    PubMed Central

    Browning, Sharon R.

    2006-01-01

    I propose a new method for association-based gene mapping that makes powerful use of multilocus data, is computationally efficient, and is straightforward to apply over large genomic regions. The approach is based on the fitting of variable-length Markov chain models, which automatically adapt to the degree of linkage disequilibrium (LD) between markers to create a parsimonious model for the LD structure. Edges of the fitted graph are tested for association with trait status. This approach can be thought of as haplotype testing with sophisticated windowing that accounts for extent of LD to reduce degrees of freedom and number of tests while maximizing information. I present analyses of two published data sets that show that this approach can have better power than single-marker tests or sliding-window haplotypic tests. PMID:16685642

  17. Multilocus association mapping using variable-length Markov chains.

    PubMed

    Browning, Sharon R

    2006-06-01

    I propose a new method for association-based gene mapping that makes powerful use of multilocus data, is computationally efficient, and is straightforward to apply over large genomic regions. The approach is based on the fitting of variable-length Markov chain models, which automatically adapt to the degree of linkage disequilibrium (LD) between markers to create a parsimonious model for the LD structure. Edges of the fitted graph are tested for association with trait status. This approach can be thought of as haplotype testing with sophisticated windowing that accounts for extent of LD to reduce degrees of freedom and number of tests while maximizing information. I present analyses of two published data sets that show that this approach can have better power than single-marker tests or sliding-window haplotypic tests.

  18. The Efficient Utilization of Open Source Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baty, Samuel R.

    These are a set of slides on the efficient utilization of open source information. Open source information consists of a vast set of information from a variety of sources. Not only does the quantity of open source information pose a problem, the quality of such information can hinder efforts. To show this, two case studies are mentioned: Iran and North Korea, in order to see how open source information can be utilized. The huge breadth and depth of open source information can complicate an analysis, especially because open information has no guarantee of accuracy. Open source information can provide key insights either directly or indirectly: looking at supporting factors (flow of scientists, products and waste from mines, government budgets, etc.); direct factors (statements, tests, deployments). Fundamentally, it is the independent verification of information that allows for a more complete picture to be formed. Overlapping sources allow for more precise bounds on times, weights, temperatures, yields or other issues of interest in order to determine capability. Ultimately, a "good" answer almost never comes from an individual, but rather requires the utilization of a wide range of skill sets held by a team of people.

  19. True Randomness from Big Data.

    PubMed

    Papakonstantinou, Periklis A; Woodruff, David P; Yang, Guang

    2016-09-26

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.

  20. True Randomness from Big Data

    NASA Astrophysics Data System (ADS)

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-09-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.

  1. True Randomness from Big Data

    PubMed Central

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-01-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests. PMID:27666514

  2. An efficient graph theory based method to identify every minimal reaction set in a metabolic network

    PubMed Central

    2014-01-01

    Background Development of cells with minimal metabolic functionality is gaining importance due to their efficiency in producing chemicals and fuels. Existing computational methods to identify minimal reaction sets in metabolic networks are computationally expensive. Further, they identify only one of the several possible minimal reaction sets. Results In this paper, we propose an efficient graph theory based recursive optimization approach to identify all minimal reaction sets. Graph theoretical insights offer systematic methods to not only reduce the number of variables in math programming and increase its computational efficiency, but also provide efficient ways to find multiple optimal solutions. The efficacy of the proposed approach is demonstrated using case studies from Escherichia coli and Saccharomyces cerevisiae. In case study 1, the proposed method identified three minimal reaction sets, each containing 38 reactions, in the Escherichia coli central metabolic network of 77 reactions. Analysis of these three minimal reaction sets revealed that one of them is more suitable for developing a minimal-metabolism cell than the other two because its internal flux distribution is practically achievable. In case study 2, the proposed method identified 256 minimal reaction sets from the Saccharomyces cerevisiae genome-scale metabolic network of 620 reactions. The proposed method required only 4.5 hours to identify all 256 minimal reaction sets and showed a significant reduction (approximately 80%) in solution time when compared to the existing methods for finding a minimal reaction set. Conclusions Identification of all minimal reaction sets in metabolic networks is essential since different minimal reaction sets have different properties that affect bioprocess development. The proposed method correctly identified all minimal reaction sets in both case studies. The proposed method is computationally efficient compared to other methods for finding minimal reaction sets and is useful to employ with genome-scale metabolic networks. PMID:24594118
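
    As a toy illustration of the underlying enumeration problem (not the graph-theoretic recursive optimization of the paper), the sketch below brute-forces every minimal reaction subset that can still produce a target metabolite from seed metabolites in a four-reaction example network; all reaction and metabolite names are hypothetical, and brute force is only feasible at this tiny scale, which is precisely why an efficient method is needed at genome scale.

        # Toy enumeration of ALL minimal reaction sets that still produce a target metabolite.
        # Hypothetical four-reaction network; brute force is infeasible for genome-scale networks.
        from itertools import combinations

        reactions = {                 # reaction -> (substrates, products)
            "R1": ({"A"}, {"B"}),
            "R2": ({"B"}, {"C"}),
            "R3": ({"A"}, {"C"}),
            "R4": ({"C"}, {"BIOMASS"}),
        }
        seeds, target = {"A"}, "BIOMASS"

        def produces(subset):
            """Expand the producible-metabolite pool using only the reactions in `subset`."""
            pool = set(seeds)
            changed = True
            while changed:
                changed = False
                for r in subset:
                    subs, prods = reactions[r]
                    if subs <= pool and not prods <= pool:
                        pool |= prods
                        changed = True
            return target in pool

        minimal_sets = []
        for k in range(1, len(reactions) + 1):
            for subset in combinations(reactions, k):
                if produces(subset) and not any(set(m) <= set(subset) for m in minimal_sets):
                    minimal_sets.append(subset)
        print(minimal_sets)           # [('R3', 'R4'), ('R1', 'R2', 'R4')]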

  3. Liquid hydrogen and liquid oxygen feedline passive recirculation analysis

    NASA Astrophysics Data System (ADS)

    Holt, Kimberly Ann; Cleary, Nicole L.; Nichols, Andrew J.; Perry, Gretchen L. E.

    The primary goal of the National Launch System (NLS) program was to design an operationally efficient, highly reliable vehicle with minimal recurring launch costs. To achieve this goal, trade studies of key main propulsion subsystems were performed to specify vehicle design requirements. These requirements include the use of passive recirculation to thermally condition the liquid hydrogen (LH2) and liquid oxygen (LO2) propellant feed systems and Space Transportation Main Engine (STME) fuel pumps. Rockwell International (RI) proposed a joint independent research and development (JIRAD) program with Marshall Space Flight Center (MSFC) to study the LH2 feed system passive recirculation concept. The testing was started in July 1992 and completed in November 1992. Vertical and sloped feedline designs were used. An engine simulator was attached at the bottom of the feedline. This simulator had strip heaters that were set to equal the corresponding heat input from different engines. A computer program is currently being used to analyze the passive recirculation concept in the LH2 vertical feedline tests. Four tests, where the heater setting is the independent variable, were chosen. While the JIRAD with RI was underway, General Dynamics Space Systems (GDSS) proposed a JIRAD with MSFC to explore passive recirculation in the LO2 feed system. Liquid nitrogen (LN2) is being used instead of LO2 for safety and economic concerns. To date, three sets of calibration tests have been completed on the sloped LN2 test article. The environmental heat was calculated from the calibration tests in which the strip heaters were turned off. During the LH2 testing, the environmental heat was assumed to be constant. Therefore, the total heat was equal to the environmental heat flux plus the heater input. However, the first two sets of LN2 calibration tests have shown that the environmental heat flux varies with heater input. A Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA/FLUINT) model is currently being built to determine if this variation in environmental heat is due to a change in the wall temperature.

  4. Minimally Invasive Surgery Survey: A Survey of Surgical Team Members' Perceptions for Successful Minimally Invasive Surgery.

    PubMed

    Yurteri-Kaplan, Ladin A; Andriani, Leslie; Kumar, Anagha; Saunders, Pamela A; Mete, Mihriye M; Sokol, Andrew I

    To develop a valid and reliable survey to measure surgical team members' perceptions regarding their institution's requirements for successful minimally invasive surgery (MIS). Questionnaire development and validation study (Canadian Task Force classification II-2). Three hospital types: rural, urban/academic, and community/academic. Minimally invasive staff (team members). Development and validation of a minimally invasive surgery survey (MISS). Using the Safety Attitudes questionnaire as a guide, we developed questions assessing study participants' attitudes regarding the requirements for successful MIS. The questions were closed-ended, with responses based on a 5-point Likert scale. The large pool of questions was then given to 4 focus groups made up of 3 to 6 individuals. Each focus group consisted of individuals from a specific profession (e.g., surgeons, anesthesiologists, nurses, and surgical technicians). Questions were revised based on focus group recommendations, resulting in a final 52-question set. The question set was then distributed to MIS team members. Individuals were included if they had participated in >10 MIS cases and worked in the MIS setting in the past 3 months. Participants in the trial population were asked to repeat the questionnaire 4 weeks later to evaluate internal consistency. Participants' demographics, including age, gender, specialty, profession, and years of experience, were captured in the questionnaire. Factor analysis with varimax rotation was performed to determine domains (questions evaluating similar themes). For internal consistency and reliability, domains were tested using interitem correlations and Cronbach's α. Cronbach's α > .6 was considered internally consistent. Kendall's correlation coefficient τ closer to 1 with p < .05 was considered significant for test-retest reliability. Two hundred fifty participants answered the initial question set. Of those, 53 were eliminated because they did not meet inclusion criteria or failed to answer all questions, leaving 197 participants. Most participants were women (68% vs 32%), and 42% were between the ages of 30 and 39 years. Factor analysis identified 6 domains: collaboration, error reporting, job proficiency/efficiency, problem-solving, job satisfaction, and situational awareness. Interitem correlations testing for redundancy for each domain ranged from .2 to .7, suggesting similar themed questions while avoiding redundancy. Cronbach's α, testing internal consistency, was .87. Sixty-two participants from the original cohort repeated the question set at 4 weeks. Forty-three were analyzed for test-retest reliability after excluding those who did not meet inclusion criteria. The final questions showed high test-retest reliability (τ = .3-.7, p < .05). The final questionnaire was made up of 29 questions from the original 52-question set. The MISS is a reliable and valid tool that can be used to measure how surgical team members conceptualize the requirements for successful MIS. The MISS revealed that participants identified 6 important domains of a successful work environment: collaboration, error reporting, job proficiency/efficiency, problem-solving, job satisfaction, and situational awareness. The questionnaire can be used to understand and align various surgical team members' goals and expectations and may help improve quality of care in the MIS setting. Copyright © 2017 American Association of Gynecologic Laparoscopists. Published by Elsevier Inc. All rights reserved.

  5. The Weighting Is The Hardest Part: On The Behavior of the Likelihood Ratio Test and the Score Test Under a Data-Driven Weighting Scheme in Sequenced Samples

    PubMed Central

    Minică, Camelia C.; Genovese, Giulio; Hultman, Christina M.; Pool, René; Vink, Jacqueline M.; Neale, Michael C.; Dolan, Conor V.; Neale, Benjamin M.

    2017-01-01

    Sequence-based association studies are at a critical inflexion point with the increasing availability of exome-sequencing data. A popular test of association is the sequence kernel association test (SKAT). Weights are embedded within SKAT to reflect the hypothesized contribution of the variants to the trait variance. Because the true weights are generally unknown, and so are subject to misspecification, we examined the efficiency of a data-driven weighting scheme. We propose the use of a set of theoretically defensible weighting schemes, of which, we assume, the one that gives the largest test statistic is likely to capture best the allele frequency-functional effect relationship. We show that the use of alternative weights obviates the need to impose arbitrary frequency thresholds in sequence data association analyses. As both the score test and the likelihood ratio test (LRT) may be used in this context, and may differ in power, we characterize the behavior of both tests. We found that the two tests have equal power if the set of weights resembled the correct ones. However, if the weights are badly specified, the LRT shows superior power (due to its robustness to misspecification). With this data-driven weighting procedure the LRT detected significant signal in genes located in regions already confirmed as associated with schizophrenia – the PRRC2A (P=1.020E-06) and the VARS2 (P=2.383E-06) – in the Swedish schizophrenia case-control cohort of 11,040 individuals with exome-sequencing data. The score test is currently preferred for its computational efficiency and power. Indeed, assuming correct specification, in some circumstances the score test is the most powerful. However, LRT has the advantageous properties of being generally more robust and more powerful under weight misspecification. This is an important result given that, arguably, misspecified models are likely to be the rule rather than the exception in weighting-based approaches. PMID:28238293

  6. Ranking Cognitive Flexibility in a Group Setting of Rhesus Monkeys with a Set-Shifting Procedure.

    PubMed

    Shnitko, Tatiana A; Allen, Daicia C; Gonzales, Steven W; Walter, Nicole A R; Grant, Kathleen A

    2017-01-01

    Attentional set-shifting ability is an executive function underlying cognitive flexibility in humans and animals. In humans, this function is typically observed during a single experimental session where dimensions of playing cards are used to measure flexibility in the face of changing rules for reinforcement (i.e., the Wisconsin Card Sorting Test (WCST)). In laboratory animals, particularly non-human primates, variants of the WCST involve extensive training and testing on a series of dimensional discriminations, usually in social isolation. In the present study, a novel experimental approach was used to assess attentional set-shifting simultaneously in 12 rhesus monkeys. Specifically, monkeys living in individual cages but in the same room were trained at the same time each day in a set-shifting task in the same housing environment. As opposed to the previous studies, each daily session began with a simple single-dimension discrimination regardless of the animal's performance on the previous session. A total of eight increasingly difficult discriminations (sets) were possible in each daily 45 min session. Correct responses were reinforced under a second-order schedule of flavored food pellet delivery, and the criterion for completing a set was 12 correct trials out of a running total of 15 trials. Monkeys progressed through the sets at their own pace and abilities. The results demonstrate that all 12 monkeys acquired the simple discrimination (the first set), but individual differences in the ability to progress through all eight sets were apparent. A performance index (PI) that encompassed progression through the sets, errors and session duration was calculated and used to rank each monkey's performance in relation to each other. Overall, this version of a set-shifting task results in an efficient assessment of reliable differences in cognitive flexibility in a group of monkeys.

  7. Ranking Cognitive Flexibility in a Group Setting of Rhesus Monkeys with a Set-Shifting Procedure

    PubMed Central

    Shnitko, Tatiana A.; Allen, Daicia C.; Gonzales, Steven W.; Walter, Nicole A. R.; Grant, Kathleen A.

    2017-01-01

    Attentional set-shifting ability is an executive function underlying cognitive flexibility in humans and animals. In humans, this function is typically observed during a single experimental session where dimensions of playing cards are used to measure flexibility in the face of changing rules for reinforcement (i.e., the Wisconsin Card Sorting Test (WCST)). In laboratory animals, particularly non-human primates, variants of the WCST involve extensive training and testing on a series of dimensional discriminations, usually in social isolation. In the present study, a novel experimental approach was used to assess attentional set-shifting simultaneously in 12 rhesus monkeys. Specifically, monkeys living in individual cages but in the same room were trained at the same time each day in a set-shifting task in the same housing environment. As opposed to the previous studies, each daily session began with a simple single-dimension discrimination regardless of the animal’s performance on the previous session. A total of eight increasingly difficult discriminations (sets) were possible in each daily 45 min session. Correct responses were reinforced under a second-order schedule of flavored food pellet delivery, and the criterion for completing a set was 12 correct trials out of a running total of 15 trials. Monkeys progressed through the sets at their own pace and abilities. The results demonstrate that all 12 monkeys acquired the simple discrimination (the first set), but individual differences in the ability to progress through all eight sets were apparent. A performance index (PI) that encompassed progression through the sets, errors and session duration was calculated and used to rank each monkey’s performance in relation to each other. Overall, this version of a set-shifting task results in an efficient assessment of reliable differences in cognitive flexibility in a group of monkeys. PMID:28386222

  8. Java web tools for PCR, in silico PCR, and oligonucleotide assembly and analysis.

    PubMed

    Kalendar, Ruslan; Lee, David; Schulman, Alan H

    2011-08-01

    The polymerase chain reaction is fundamental to molecular biology and is the most important practical molecular technique for the research laboratory. We have developed and tested efficient tools for PCR primer and probe design, which also predict oligonucleotide properties based on experimental studies of PCR efficiency. The tools provide comprehensive facilities for designing primers for most PCR applications and their combinations, including standard, multiplex, long-distance, inverse, real-time, unique, group-specific, bisulphite modification assays, Overlap-Extension PCR Multi-Fragment Assembly, as well as a programme to design oligonucleotide sets for long sequence assembly by ligase chain reaction. The in silico PCR primer or probe search includes comprehensive analyses of individual primers and primer pairs. It calculates the melting temperature for standard and degenerate oligonucleotides including LNA and other modifications, provides analyses for a set of primers with prediction of oligonucleotide properties, dimer and G-quadruplex detection, linguistic complexity, and provides a dilution and resuspension calculator. Copyright © 2011 Elsevier Inc. All rights reserved.
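
    As a simple illustration of one oligonucleotide property such tools predict, the sketch below estimates a primer's melting temperature with two widely used rule-of-thumb approximations (the Wallace rule for short oligos and a GC-content formula for longer ones); the tools described in the abstract rely on more detailed models, including modified bases such as LNA, so this is only a rough stand-in.

        # Rough melting-temperature estimates for an unmodified DNA primer.
        # Wallace rule (short oligos, < 14 nt):  Tm = 2*(A+T) + 4*(G+C)
        # GC-content formula (longer oligos):    Tm = 64.9 + 41*(G+C - 16.4)/N
        def melting_temperature(seq):
            seq = seq.upper()
            a, t, g, c = (seq.count(base) for base in "ATGC")
            n = len(seq)
            if n < 14:
                return 2 * (a + t) + 4 * (g + c)
            return 64.9 + 41.0 * (g + c - 16.4) / n

        print(melting_temperature("AGCGGATAACAATTTCACACAGGA"))   # about 54 C for this 24-mer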

  9. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    NASA Astrophysics Data System (ADS)

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood

    2015-10-01

    Artificial neural networks are efficient models in pattern recognition applications, but their performance is dependent on employing suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier based on gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering the features of speech signal that were related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on particle swarm optimisation (PSO) algorithm and its binary version, PSO and discrete firefly algorithm, and hybrid of error back-propagation and genetic algorithm that were used for optimisation. Experimental tests on Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.

  10. Binary Hypothesis Testing With Byzantine Sensors: Fundamental Tradeoff Between Security and Efficiency

    NASA Astrophysics Data System (ADS)

    Ren, Xiaoqiang; Yan, Jiaqi; Mo, Yilin

    2018-03-01

    This paper studies binary hypothesis testing based on measurements from a set of sensors, a subset of which can be compromised by an attacker. The measurements from a compromised sensor can be manipulated arbitrarily by the adversary. The asymptotic exponential rate, with which the probability of error goes to zero, is adopted to indicate the detection performance of a detector. In practice, we expect the attack on sensors to be sporadic, and therefore the system may operate with all the sensors being benign for extended periods of time. This motivates us to consider the trade-off between the detection performance of a detector, i.e., the probability of error, when the attacker is absent (defined as efficiency) and the worst-case detection performance when the attacker is present (defined as security). We first provide the fundamental limits of this trade-off, and then propose a detection strategy that achieves these limits. We then consider a special case, where there is no trade-off between security and efficiency. In other words, our detection strategy can achieve the maximal efficiency and the maximal security simultaneously. Two extensions of the secure hypothesis testing problem are also studied, and fundamental limits and achievability results are provided: 1) a subset of sensors, namely "secure" sensors, is assumed to be equipped with better security countermeasures and hence is guaranteed to be benign; 2) detection performance with an unknown number of compromised sensors. Numerical examples are given to illustrate the main results.
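
    As background for the fusion problem (this is the classical benign-case detector, not the attack-resilient strategy proposed in the paper), the sketch below implements a fused log-likelihood ratio test over independent Gaussian sensor measurements under H0: mean 0 versus H1: mean 1; the distributions, threshold, and sensor count are illustrative assumptions.

        # Classical fused log-likelihood ratio test with m benign sensors,
        # H0: x_i ~ N(0, 1) vs H1: x_i ~ N(1, 1), i.i.d. across sensors.
        # The paper's contribution is a strategy that stays robust when some sensors are compromised.
        import numpy as np
        from scipy.stats import norm

        def llr_decision(x, threshold=0.0):
            x = np.asarray(x, dtype=float)
            llr = np.sum(norm.logpdf(x, loc=1.0) - norm.logpdf(x, loc=0.0))
            return int(llr > threshold)      # 1 -> decide H1, 0 -> decide H0

        rng = np.random.default_rng(2)
        print(llr_decision(rng.normal(0.0, 1.0, size=10)))   # usually 0 (H0 is true)
        print(llr_decision(rng.normal(1.0, 1.0, size=10)))   # usually 1 (H1 is true)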

  11. Does integration of HIV and SRH services achieve economies of scale and scope in practice? A cost function analysis of the Integra Initiative.

    PubMed

    Obure, Carol Dayo; Guinness, Lorna; Sweeney, Sedona; Initiative, Integra; Vassall, Anna

    2016-03-01

    Policy-makers have long argued about the potential efficiency gains and cost savings from integrating HIV and sexual reproductive health (SRH) services, particularly in resource-constrained settings with generalised HIV epidemics. However, until now, little empirical evidence exists on whether the hypothesised efficiency gains associated with such integration can be achieved in practice. We estimated a quadratic cost function using data obtained from 40 health facilities, over a 2-year-period, in Kenya and Swaziland. The quadratic specification enables us to determine the existence of economies of scale and scope. The empirical results reveal that at the current output levels, only HIV counselling and testing services are characterised by service-specific economies of scale. However, no overall economies of scale exist as all outputs are increased. The results also indicate cost complementarities between cervical cancer screening and HIV care; post-natal care and HIV care and family planning and sexually transmitted infection treatment combinations only. The results from this analysis reveal that contrary to expectation, efficiency gains from the integration of HIV and SRH services, if any, are likely to be modest. Efficiency gains are likely to be most achievable in settings that are currently delivering HIV and SRH services at a low scale with high levels of fixed costs. The presence of cost complementarities for only three service combinations implies that careful consideration of setting-specific clinical practices and the extent to which they can be combined should be made when deciding which services to integrate. NCT01694862. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
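
    To make the estimation approach concrete, the sketch below fits a two-output quadratic cost function by ordinary least squares on synthetic facility data and reads off the cross-output coefficient, whose negative sign is the usual indication of cost complementarities (economies of scope); the data, coefficients, and variable names are illustrative assumptions, not the Integra data set or its exact specification.

        # Sketch: fit C = b0 + b1*q1 + b2*q2 + b11*q1^2 + b22*q2^2 + b12*q1*q2 by OLS.
        # A negative b12 (cost complementarity) suggests economies of scope between two services.
        import numpy as np

        rng = np.random.default_rng(3)
        q1 = rng.uniform(10, 100, 200)                      # output of service 1 (e.g. HIV testing)
        q2 = rng.uniform(10, 100, 200)                      # output of service 2 (e.g. family planning)
        cost = (500 + 8 * q1 + 6 * q2 + 0.02 * q1 ** 2 + 0.03 * q2 ** 2
                - 0.05 * q1 * q2 + rng.normal(0, 50, 200))  # synthetic total facility cost

        X = np.column_stack([np.ones_like(q1), q1, q2, q1 ** 2, q2 ** 2, q1 * q2])
        beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
        print("estimated b12 (q1*q2 term): %.3f" % beta[5])  # near -0.05 -> cost complementarity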

  12. Does integration of HIV and SRH services achieve economies of scale and scope in practice? A cost function analysis of the Integra Initiative

    PubMed Central

    Obure, Carol Dayo; Guinness, Lorna; Sweeney, Sedona; Initiative, Integra; Vassall, Anna

    2016-01-01

    Objective Policy-makers have long argued about the potential efficiency gains and cost savings from integrating HIV and sexual reproductive health (SRH) services, particularly in resource-constrained settings with generalised HIV epidemics. However, until now, little empirical evidence exists on whether the hypothesised efficiency gains associated with such integration can be achieved in practice. Methods We estimated a quadratic cost function using data obtained from 40 health facilities, over a 2-year-period, in Kenya and Swaziland. The quadratic specification enables us to determine the existence of economies of scale and scope. Findings The empirical results reveal that at the current output levels, only HIV counselling and testing services are characterised by service-specific economies of scale. However, no overall economies of scale exist as all outputs are increased. The results also indicate cost complementarities between cervical cancer screening and HIV care; post-natal care and HIV care and family planning and sexually transmitted infection treatment combinations only. Conclusions The results from this analysis reveal that contrary to expectation, efficiency gains from the integration of HIV and SRH services, if any, are likely to be modest. Efficiency gains are likely to be most achievable in settings that are currently delivering HIV and SRH services at a low scale with high levels of fixed costs. The presence of cost complementarities for only three service combinations implies that careful consideration of setting-specific clinical practices and the extent to which they can be combined should be made when deciding which services to integrate. Trial registration number NCT01694862. PMID:26438349

  13. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable.

    PubMed

    Korjus, Kristjan; Hebart, Martin N; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier's generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term "Cross-validation and cross-testing" improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do.
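
    For contrast with the proposed "cross-validation and cross-testing" scheme, the sketch below shows the standard partitioning the abstract describes as the starting point: cross-validated parameter selection on a training portion followed by a single evaluation on a held-out test set; the classifier, parameter grid, and synthetic data are illustrative assumptions, not the electrophysiological data analysed in the paper.

        # Baseline partitioning: cross-validation for parameter selection on the training
        # portion, then one evaluation on a held-out test set (synthetic data).
        import numpy as np
        from sklearn.model_selection import GridSearchCV, train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 20))
        y = rng.integers(0, 2, size=300)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
        search = GridSearchCV(SVC(), param_grid={"C": [0.1, 1.0, 10.0]}, cv=5)
        search.fit(X_train, y_train)                          # parameter selection via CV
        print("selected C:", search.best_params_["C"])
        print("held-out test accuracy:", search.score(X_test, y_test))

    The trade-off discussed above arises because every sample reserved for this final test is unavailable for choosing parameters and fitting the model.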

  14. Improvement of analytical dynamic models using modal test data

    NASA Technical Reports Server (NTRS)

    Berman, A.; Wei, F. S.; Rao, K. V.

    1980-01-01

    A method developed to determine maximum changes in analytical mass and stiffness matrices to make them consistent with a set of measured normal modes and natural frequencies is presented. The corrected model will be an improved base for studies of physical changes, boundary condition changes, and for prediction of forced responses. The method features efficient procedures not requiring solutions of the eigenvalue problem, and the ability to have more degrees of freedom than the test data. In addition, modal displacements are obtained for all analytical degrees of freedom, and the frequency dependence of the coordinate transformations is properly treated.

  15. Afterburner Performance of Circular V-Gutters and a Sector of Parallel V-Gutters for a Range of Inlet Temperatures to 1255 K (1800 F)

    NASA Technical Reports Server (NTRS)

    Brandstetter, J. Robert; Reck, Gregory M.

    1973-01-01

    Combustion tests of two V-gutter types were conducted in a 19.25-in. diameter duct using vitiated air. Fuel spraybars were mounted in line with the V-gutters. Combustor length was set by flame-quench water sprays which were part of a calorimeter for measuring combustion efficiency. Although the levels of performance of the parallel and circular array afterburners were different, the trends with geometry variations were consistent. Therefore, parallel arrays can be used for evaluating V-gutter geometry effects on combustion performance. For both arrays, the highest inlet temperature produced combustion efficiencies near 100 percent. A 5-in. spraybar - to - V-gutter spacing gave higher efficiency and better lean blowout performance than a spacing twice as large. Gutter durability was good.

  16. Small Projects Rapid Integration and Test Environment (SPRITE): Application for Increasing Robustness

    NASA Technical Reports Server (NTRS)

    Rakoczy, John; Heater, Daniel; Lee, Ashley

    2013-01-01

    Marshall Space Flight Center's (MSFC) Small Projects Rapid Integration and Test Environment (SPRITE) is a Hardware-In-The-Loop (HWIL) facility that provides rapid development, integration, and testing capabilities for small projects (CubeSats, payloads, spacecraft, and launch vehicles). This facility environment focuses on efficient processes and modular design to support rapid prototyping, integration, testing and verification of small projects at an affordable cost, especially compared to larger type HWIL facilities. SPRITE (Figure 1) consists of a "core" capability or "plant" simulation platform utilizing a graphical programming environment capable of being rapidly re-configured for any potential test article's space environments, as well as a standard set of interfaces (i.e. Mil-Std 1553, Serial, Analog, Digital, etc.). SPRITE also allows this level of interface testing of components and subsystems very early in a program, thereby reducing program risk.

  17. Comparative Study of Impedance Eduction Methods. Part 1; DLR Tests and Methodology

    NASA Technical Reports Server (NTRS)

    Busse-Gerstengarbe, Stefan; Bake, Friedrich; Enghardt, Lars; Jones, Michael G.

    2013-01-01

    The absorption efficiency of acoustic liners used in aircraft engines is characterized by the acoustic impedance. Worldwide, many grazing flow test rigs and eduction methods are available that provide values for that impedance. However, a direct comparison and assessment of the data from the different rigs and methods is often not possible because test objects and test conditions are quite different. Only a few papers provide a direct comparison. Therefore, this paper, together with a companion paper, presents data measured with a reference test object under similar conditions in the DLR and NASA grazing flow test rigs. Additionally, by applying the in-house methods, the Liner Impedance Non-Uniform flow Solving algorithm (LINUS, DLR) and the Convected Helmholtz Equation approach (CHE, NASA), to the data sets, similarities and differences due to the underlying theory are identified and discussed.

  18. A rolling-sliding bench test for investigating rear axle lubrication

    DOE PAGES

    Stump, Benjamin C.; Zhou, Yan; Viola, Michael B.; ...

    2018-02-07

    An automotive rear axle is composed of a set of hypoid gears, whose contact surfaces experience a complex combination of rolling contact fatigue damage and sliding wear. Full-scale rear axle dynamometer tests are used in the industry for efficiency and durability assessment. Here, this study developed a bench-scale rolling-sliding test protocol by simulating the contact pressure, oil temperature, and lubrication regime experienced in a dynamometer duty cycle test. Initial bench results have demonstrated the ability of generating both rolling contact-induced micropitting and sliding wear and the feasibility of investigating the impact of slide-to-roll ratio, surface roughness, test duration, and oil temperature on the friction behavior, vibration noise, and surface damage. Finally, this bench test will allow studying candidate rear axle lubricants and materials under relevant conditions.

  19. A rolling-sliding bench test for investigating rear axle lubrication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stump, Benjamin C.; Zhou, Yan; Viola, Michael B.

    An automotive rear axle is composed of a set of hypoid gears, whose contact surfaces experience a complex combination of rolling contact fatigue damage and sliding wear. Full-scale rear axle dynamometer tests are used in the industry for efficiency and durability assessment. Here, this study developed a bench-scale rolling-sliding test protocol by simulating the contact pressure, oil temperature, and lubrication regime experienced in a dynamometer duty cycle test. Initial bench results have demonstrated the ability of generating both rolling contact-induced micropitting and sliding wear and the feasibility of investigating the impact of slide-to-roll ratio, surface roughness, test duration, and oil temperature on the friction behavior, vibration noise, and surface damage. Finally, this bench test will allow studying candidate rear axle lubricants and materials under relevant conditions.

  20. The silicon-glass microreactor with embedded sensors—technology and results of preliminary qualitative tests, toward intelligent microreaction plant

    NASA Astrophysics Data System (ADS)

    Knapkiewicz, P.

    2013-03-01

    The technology and preliminary qualitative tests of silicon-glass microreactors with embedded pressure and temperature sensors are presented. The concept of microreactors for carrying out highly exothermic reactions, e.g. nitration of hydrocarbons, is described in detail, together with the design process, which included computer-aided simulations. A silicon-glass microreactor chip consisting of two micromixers (a multistream micromixer), reaction channels, and cooling/heating chambers has been proposed. The microreactor chip was equipped with a set of pressure and temperature sensors and packaged. Tests of mixing quality, pressure drops in the channels, heat exchange efficiency and the dynamic behavior of the pressure and temperature sensors were documented. Finally, two applications were described.

  1. A Worst-Case Approach for On-Line Flutter Prediction

    NASA Technical Reports Server (NTRS)

    Lind, Rick C.; Brenner, Martin J.

    1998-01-01

    Worst-case flutter margins may be computed for a linear model with respect to a set of uncertainty operators using the structured singular value. This paper considers an on-line implementation to compute these robust margins in a flight test program. Uncertainty descriptions are updated at test points to account for unmodeled time-varying dynamics of the airplane by ensuring the robust model is not invalidated by measured flight data. Robust margins computed with respect to this uncertainty remain conservative to the changing dynamics throughout the flight. A simulation clearly demonstrates this method can improve the efficiency of flight testing by accurately predicting the flutter margin to improve safety while reducing the necessary flight time.

  2. Increasing the Efficiency on Producing Radiology Reports for Breast Cancer Diagnosis by Means of Structured Reports. A Comparative Study.

    PubMed

    Segrelles, J Damian; Medina, Rosana; Blanquer, Ignacio; Martí-Bonmatí, Luis

    2017-05-18

    Radiology reports are commonly written as free text using voice recognition devices. Structured reports (SR) have a high potential, but they are usually considered more difficult to fill in, so their adoption in clinical practice can lower efficiency. However, some studies have demonstrated that in some cases producing SRs may take less time than plain-text ones. This work focuses on the definition and demonstration of a methodology to evaluate the productivity of software tools for producing radiology reports. A set of SRs for breast cancer diagnosis based on BI-RADS has been developed using this method, and an analysis of their efficiency with respect to free-text reports has been performed. The methodology proposed compares the Elapsed Time (ET) on a set of radiological reports. Free-text reports are produced with the speech recognition devices used in clinical practice. Structured reports are generated using a web application built with the TRENCADIS framework. A team of six radiologists with three different levels of experience in breast cancer diagnosis was recruited. These radiologists performed the evaluation, each one introducing 50 reports for mammography, 50 for ultrasound scan and 50 for MRI using both approaches. Also, the Relative Efficiency (REF) was computed for each report by dividing the ETs of the two methods. We applied Student's t-test (T-S) to compare the ETs and the ANOVA test to compare the REFs. Both tests were computed using the SPSS software. The study produced three DICOM-SR templates for breast cancer diagnosis on mammography, ultrasound and MRI, using RADLEX terms based on BI-RADS 5th edition. The T-S test on radiologists with a high or intermediate profile showed that the difference in ET was statistically significant only for mammography and ultrasound. The ANOVA test grouping the REF by modality indicated that there were no significant differences between mammograms and ultrasound scans, but both differed significantly from MRI. The ANOVA test of the REF for each modality indicated that there were significant differences only in mammography (ANOVA p = 0.024) and ultrasound (ANOVA p = 0.008). The ANOVA test for each radiologist profile indicated that there were significant differences for the high profile (ANOVA p = 0.028) and the medium profile (ANOVA p = 0.045). In this work, we have defined and demonstrated a methodology to evaluate the productivity of software tools for producing radiology reports in breast cancer. We have shown that adopting structured reporting in mammography and ultrasound studies in breast cancer diagnosis improves the performance in producing reports.
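
    To make the comparison concrete, the sketch below (not the authors' code; the data, group sizes and SciPy calls are illustrative assumptions) computes a per-report relative efficiency as the ratio of elapsed times and applies the two tests named in the abstract: a paired Student's t-test on the elapsed times and a one-way ANOVA on the REF values grouped by modality.

    ```python
    # Illustrative sketch only: compare elapsed times (ET) for free-text vs.
    # structured reports and the per-report relative efficiency REF = ET_free / ET_sr.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    et_free = rng.normal(180, 30, size=50)   # hypothetical ET (s) for 50 free-text reports
    et_sr = rng.normal(150, 30, size=50)     # hypothetical ET (s) for the matched structured reports

    ref = et_free / et_sr                    # relative efficiency per report

    # Paired Student's t-test on the elapsed times (the abstract's "T-S" test)
    t_stat, p_et = stats.ttest_rel(et_free, et_sr)

    # One-way ANOVA comparing REF across three hypothetical modality groups
    ref_mammo, ref_us, ref_mri = ref[:17], ref[17:34], ref[34:]
    f_stat, p_ref = stats.f_oneway(ref_mammo, ref_us, ref_mri)

    print(f"paired t-test on ET: p={p_et:.4f}; ANOVA on REF by modality: p={p_ref:.4f}")
    ```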

  3. Efficient forced vibration reanalysis method for rotating electric machines

    NASA Astrophysics Data System (ADS)

    Saito, Akira; Suzuki, Hiromitsu; Kuroishi, Masakatsu; Nakai, Hideo

    2015-01-01

    Rotating electric machines are subject to forced vibration by magnetic force excitation with wide-band frequency spectrum that are dependent on the operating conditions. Therefore, when designing the electric machines, it is inevitable to compute the vibration response of the machines at various operating conditions efficiently and accurately. This paper presents an efficient frequency-domain vibration analysis method for the electric machines. The method enables the efficient re-analysis of the vibration response of electric machines at various operating conditions without the necessity to re-compute the harmonic response by finite element analyses. Theoretical background of the proposed method is provided, which is based on the modal reduction of the magnetic force excitation by a set of amplitude-modulated standing-waves. The method is applied to the forced response vibration of the interior permanent magnet motor at a fixed operating condition. The results computed by the proposed method agree very well with those computed by the conventional harmonic response analysis by the FEA. The proposed method is then applied to the spin-up test condition to demonstrate its applicability to various operating conditions. It is observed that the proposed method can successfully be applied to the spin-up test conditions, and the measured dominant frequency peaks in the frequency response can be well captured by the proposed approach.

  4. Deliberation favours social efficiency by making people disregard their relative shares: evidence from USA and India

    PubMed Central

    Corgnet, Brice; Espín, Antonio M.; Hernán-González, Roberto

    2017-01-01

    Groups make decisions on both the production and the distribution of resources. These decisions typically involve a tension between increasing the total level of group resources (i.e. social efficiency) and distributing these resources among group members (i.e. individuals' relative shares). This is the case because the redistribution process may destroy part of the resources, thus resulting in socially inefficient allocations. Here we apply a dual-process approach to understand the cognitive underpinnings of this fundamental tension. We conducted a set of experiments to examine the extent to which different allocation decisions respond to intuition or deliberation. In a newly developed approach, we assess intuition and deliberation at both the trait level (using the Cognitive Reflection Test, henceforth CRT) and the state level (through the experimental manipulation of response times). To test for robustness, experiments were conducted in two countries: the USA and India. Despite absolute-level differences across countries, in both locations we show that: (i) time pressure and low CRT scores are associated with individuals' concerns for their relative shares and (ii) time delay and high CRT scores are associated with individuals' concerns for social efficiency. These findings demonstrate that deliberation favours social efficiency by overriding individuals' intuitive tendency to focus on relative shares. PMID:28386421

  5. Incineration of polychlorinated biphenyls in high-efficiency boilers: a viable disposal option

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, G.T.; Wolf, P.; Fennelly, P.F.

    1984-03-01

    Approximately 750 million pounds of polychlorinated biphenyls (PCBs) remain in service today in the United States. The eventual disposition of these materials and the vast stock piles already removed from commerce and use represents a formidable problem to both U.S. industry (e.g., utility companies) and federal and state environmental agencies. Despite the fact that available disposal options include the use of high-temperature incineration, disposal efforts have been significantly hampered by the lack of approved incineration facilities. The results of comprehensive PCB incineration programs conducted in accordance with EPA test protocols at each of three high-efficiency boiler sites are presented. Flue gas sampling procedures included the use of both the modified method 5 PCB train and the Source Assessment Sampling System (SASS). Analytical protocols included the use of gas chromatography (GC/ECD) and combined gas chromatography/mass spectrometry (GC/MS). PCB destruction efficiency data for each of nine test runs were in excess of the 99.9% values assumed by the EPA regulation. The cumulative data set lends further credibility to the use of high-efficiency boilers as a viable disposal option for PCB contaminated (50-500 ppm) waste oils when conducted in strict accordance with existing EPA protocols.
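
    For reference, the destruction efficiency quoted above is conventionally computed from the mass feed rate of PCBs into the boiler and the mass emission rate in the stack gas; a standard form of the relation (general practice, not taken from the paper) is

    ```latex
    \mathrm{DRE}\;(\%) = \frac{W_{\mathrm{in}} - W_{\mathrm{out}}}{W_{\mathrm{in}}} \times 100 ,
    ```

    where W_in is the mass feed rate of PCBs to the boiler and W_out is the mass emission rate of PCBs in the flue gas, so destruction efficiencies above 99.9% correspond to emitting less than one part in a thousand of the PCBs fed.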

  6. An Optimal Bahadur-Efficient Method in Detection of Sparse Signals with Applications to Pathway Analysis in Sequencing Association Studies.

    PubMed

    Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui

    2016-01-01

    Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency,[Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e. ε →0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
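
    As an illustration of the second-stage combination step, the sketch below implements a plain (independence-based) Lancaster combination of gene-level p-values, which generalizes Fisher's method by giving each gene a chi-square degrees-of-freedom weight; the weights and p-values are made up, and the correlation adjustment used in the paper's correlated Lancaster procedure is not reproduced here.

    ```python
    # Sketch of Lancaster's p-value combination (a weighted generalization of
    # Fisher's method). The paper's *correlated* Lancaster procedure additionally
    # adjusts for dependence between genes, which is omitted here.
    import numpy as np
    from scipy.stats import chi2

    def lancaster_combine(pvals, weights):
        """Combine per-gene p-values with df weights; returns a pathway p-value."""
        pvals = np.asarray(pvals, dtype=float)
        weights = np.asarray(weights, dtype=float)
        stat = np.sum(chi2.isf(pvals, df=weights))   # chi-square transform of each p-value
        return chi2.sf(stat, df=weights.sum())       # sum is chi-square under independence

    # Hypothetical SKAT gene-level p-values and weights (e.g. related to gene size)
    print(lancaster_combine([0.01, 0.20, 0.65], [4, 2, 2]))
    ```

    With all weights equal to 2 the combination reduces to Fisher's method, which is one way to see the role of the weights.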

  7. Quantum efficiency performances of the NIR European Large Format Array detectors tested at ESTEC

    NASA Astrophysics Data System (ADS)

    Crouzet, P.-E.; Duvet, L.; de Wit, F.; Beaufort, T.; Blommaert, S.; Butler, B.; Van Duinkerken, G.; ter Haar, J.; Heijnen, J.; van der Luijt, K.; Smit, H.

    2015-10-01

    The Payload Technology Validation Section (SRE-FV) at ESTEC has the goal of validating new technology for future or ongoing missions. In this framework, a test setup to characterize the quantum efficiency of near-infrared (NIR) detectors has been created. In the context of the NIR European Large Format Array ("LFA"), three deliverable detectors from SELEX-UK/ATC (UK) on one side, and CEA/LETI-CEA/IRFU-SOFRADIR (FR) on the other side, were characterized. The quantum efficiency of a HAWAII-2RG detector from Teledyne was measured as well. The capability to compare detectors from different manufacturers on the same setup is a unique asset for the future mission preparation office. This publication presents the quantum efficiency results of a HAWAII-2RG detector from Teledyne with a 2.5 um cut-off, compared to the LFA European detector prototypes developed independently by SELEX-UK/ATC (UK) on one side, and CEA/LETI-CEA/IRFU-SOFRADIR (FR) on the other side.

  8. The segmentation of bones in pelvic CT images based on extraction of key frames.

    PubMed

    Yu, Hui; Wang, Haijun; Shi, Yao; Xu, Ke; Yu, Xuyao; Cao, Yuzhen

    2018-05-22

    Bone segmentation is important in computed tomography (CT) imaging of the pelvis, which assists physicians in the early diagnosis of pelvic injury, in planning operations, and in evaluating the effects of surgical treatment. This study developed a new algorithm for the accurate, fast, and efficient segmentation of the pelvis. The proposed method consists of two main parts: the extraction of key frames and the segmentation of pelvic CT images. Key frames were extracted based on pixel difference, mutual information and normalized correlation coefficient. In the pelvis segmentation phase, skeleton extraction from CT images and a marker-based watershed algorithm were combined to segment the pelvis. To meet the requirements of clinical application, physician's judgment is needed. Therefore the proposed methodology is semi-automated. In this paper, 5 sets of CT data were used to test the overlapping area, and 15 CT images were used to determine the average deviation distance. The average overlapping area of the 5 sets was greater than 94%, and the minimum average deviation distance was approximately 0.58 pixels. In addition, the key frame extraction efficiency and the running time of the proposed method were evaluated on 20 sets of CT data. For each set, approximately 13% of the images were selected as key frames, and the average processing time was approximately 2 min (the time for manual marking was not included). The proposed method is able to achieve accurate, fast, and efficient segmentation of pelvic CT image sequences. Segmentation results not only provide an important reference for early diagnosis and decisions regarding surgical procedures, they also offer more accurate data for medical image registration, recognition and 3D reconstruction.
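
    The key-frame criteria named in the abstract (pixel difference, mutual information, normalized correlation coefficient) can be written down as generic image-similarity measures, as in the sketch below; any thresholds for declaring a slice a key frame are assumptions rather than the paper's values.

    ```python
    # Generic slice-similarity measures used for key-frame selection (illustrative).
    import numpy as np

    def pixel_difference(a, b):
        return float(np.mean(np.abs(a.astype(float) - b.astype(float))))

    def ncc(a, b):
        a = a.astype(float).ravel(); b = b.astype(float).ravel()
        a -= a.mean(); b -= b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def mutual_information(a, b, bins=32):
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

    # A slice might be flagged as a key frame when it differs strongly from the
    # previous one, e.g. low NCC or low mutual information (thresholds hypothetical).
    rng = np.random.default_rng(0)
    a, b = rng.integers(0, 255, (64, 64)), rng.integers(0, 255, (64, 64))
    print(pixel_difference(a, b), ncc(a, b), mutual_information(a, b))
    ```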

  9. Simulations of the 2.5D inviscid primitive equations in a limited domain

    NASA Astrophysics Data System (ADS)

    Chen, Qingshan; Temam, Roger; Tribbia, Joseph J.

    2008-12-01

    The primitive equations (PEs) of the atmosphere and the oceans without viscosity are considered. These equations are not well-posed for any set of local boundary conditions. In space dimension 2.5 a set of nonlocal boundary conditions has been proposed in Chen et al. [Q. Chen, J. Laminie, A. Rousseau, R. Temam, J. Tribbia, A 2.5D Model for the equations of the ocean and the atmosphere, Anal. Appl. 5(3) (2007) 199-229]. The present article is aimed at testing the validity of these boundary conditions with physically relevant data. The issues tested are the well-posedness in the nonlinear case and the computational efficiency of the boundary conditions for limited area models [T.T. Warner, R.A. Peterson, R.E. Treadon, A tutorial on lateral boundary conditions as a basic and potentially serious limitation to regional numerical weather prediction, Bull. Amer. Meteor. Soc. 78(11) (1997) 2599-2617].

  10. Economic assessments of small-scale drinking-water interventions in pursuit of MDG target 7C.

    PubMed

    Cameron, John; Jagals, Paul; Hunter, Paul R; Pedley, Steve; Pond, Katherine

    2011-12-01

    This paper uses an applied rural case study of a safer water intervention in South Africa to illustrate how three levels of economic assessment can be used to understand the impact of the intervention on people's well-being. It is set in the context of Millennium Development Goal 7 which sets a target (7C) for safe drinking-water provision and the challenges of reaching people in remote rural areas with relatively small-scale schemes. The assessment moves from cost efficiency to cost effectiveness to a full social cost-benefit analysis (SCBA) with an associated sensitivity test. In addition to demonstrating techniques of analysis, the paper brings out many of the challenges in understanding how safer drinking-water impacts on people's livelihoods. The SCBA shows the case study intervention is justified economically, though the sensitivity test suggests 'downside' vulnerability. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. The effects of competition on efficiency of electricity generation: A post-PURPA analysis

    NASA Astrophysics Data System (ADS)

    Jordan, Paula Faye

    1998-10-01

    The central issue of this research is the effect increased market competition has on production efficiency. Specifically, the research focuses upon measuring the relative level of efficiency in the generation of electricity in 1978 and 1993. It is hypothesized that the Public Utilities Regulatory Policy Act (PURPA), passed by Congress in 1978, made progress toward achieving its legislative intent of increasing competition, and therefore efficiency, in the generation of electricity. The methodology used to measure levels of efficiency in this research is the stochastic statistical estimator with the functional form of the translog production function. The models are estimated with the maximum likelihood technique using plant-level data on coal generating units in the U.S. for 1978 and 1993. Results from the estimation of these models indicate that: (a) for the technical efficiency measures, the 1978 data set outperformed the 1993 data set on the OTE and OTE of Fuel measures; (b) the 1993 data set was relatively more efficient in the OTE of Capital and the OTE of Labor when compared to the 1978 data set; (c) the 1993 observations indicated a relatively greater level of efficiency over 1978 in the OAE, OAE of Fuel, and OAE of Capital measures; (d) the OAE of Labor findings supported the 1978 observations as more efficient when compared to the 1993 set of observations; (e) when looking at the top- and bottom-ranked sites within each data set, sites that were top or poor performers for the technical and allocative efficiency measures tended to be top or poor performers for the overall, fuel, and capital measures. The sites that appeared as a top or poor performer on the labor measures within the technical and allocative groups were often unique and did not necessarily appear as a top or poor performer in the other efficiency measures.
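
    For context, a standard stochastic-frontier specification built on the translog production function referred to above can be written as (generic form, not the study's estimated model):

    ```latex
    \ln y_i = \beta_0 + \sum_{j}\beta_j \ln x_{ij}
            + \tfrac{1}{2}\sum_{j}\sum_{k}\beta_{jk}\,\ln x_{ij}\,\ln x_{ik}
            + v_i - u_i ,
    ```

    where y_i is plant output, the x_ij are inputs such as fuel, capital and labor, v_i is symmetric statistical noise, and u_i >= 0 is the inefficiency term whose estimate underlies the technical- and allocative-efficiency comparisons.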

  12. Efficiency of the spectral-spatial classification of hyperspectral imaging data

    NASA Astrophysics Data System (ADS)

    Borzov, S. M.; Potaturkin, O. I.

    2017-01-01

    The efficiency of methods for the spectral-spatial classification of similar-looking types of vegetation on the basis of hyperspectral Earth remote sensing data, which take into account local neighborhoods of the analyzed image pixels, is experimentally studied. Algorithms that involve spatial pre-processing of the raw data and post-processing of pixel-based spectral classification maps are considered. Results obtained both for a large-size hyperspectral image and for its test fragment with different methods of training set construction are reported. The classification accuracy in all cases is estimated through comparison of ground-truth data and the classification maps formed by the compared methods. The reasons for the differences in these estimates are discussed.

  13. Pooled HIV-1 viral load testing using dried blood spots to reduce the cost of monitoring antiretroviral treatment in a resource-limited setting.

    PubMed

    Pannus, Pieter; Fajardo, Emmanuel; Metcalf, Carol; Coulborn, Rebecca M; Durán, Laura T; Bygrave, Helen; Ellman, Tom; Garone, Daniela; Murowa, Michael; Mwenda, Reuben; Reid, Tony; Preiser, Wolfgang

    2013-10-01

    Rollout of routine HIV-1 viral load monitoring is hampered by high costs and logistical difficulties associated with sample collection and transport. New strategies are needed to overcome these constraints. Dried blood spots from finger pricks have been shown to be more practical than the use of plasma specimens, and pooling strategies using plasma specimens have been demonstrated to be an efficient method to reduce costs. This study found that combination of finger-prick dried blood spots and a pooling strategy is a feasible and efficient option to reduce costs, while maintaining accuracy in the context of a district hospital in Malawi.

  14. An intelligent system based on fuzzy probabilities for medical diagnosis– a study in aphasia diagnosis*

    PubMed Central

    Moshtagh-Khorasani, Majid; Akbarzadeh-T, Mohammad-R; Jahangiri, Nader; Khoobdel, Mehdi

    2009-01-01

    BACKGROUND: Aphasia diagnosis is particularly challenging due to the linguistic uncertainty and vagueness, inconsistencies in the definition of aphasic syndromes, large number of measurements with imprecision, natural diversity and subjectivity in test objects as well as in opinions of experts who diagnose the disease. METHODS: Fuzzy probability is proposed here as the basic framework for handling the uncertainties in medical diagnosis and particularly aphasia diagnosis. To efficiently construct this fuzzy probabilistic mapping, statistical analysis is performed that constructs input membership functions as well as determines an effective set of input features. RESULTS: Considering the high sensitivity of performance measures to different distribution of testing/training sets, a statistical t-test of significance is applied to compare fuzzy approach results with NN results as well as author's earlier work using fuzzy logic. The proposed fuzzy probability estimator approach clearly provides better diagnosis for both classes of data sets. Specifically, for the first and second type of fuzzy probability classifiers, i.e. spontaneous speech and comprehensive model, P-values are 2.24E-08 and 0.0059, respectively, strongly rejecting the null hypothesis. CONCLUSIONS: The technique is applied and compared on both comprehensive and spontaneous speech test data for diagnosis of four Aphasia types: Anomic, Broca, Global and Wernicke. Statistical analysis confirms that the proposed approach can significantly improve accuracy using fewer Aphasia features. PMID:21772867

  15. In vitro comparison of Günther Tulip and Celect filters: testing filtering efficiency and pressure drop.

    PubMed

    Nicolas, M; Malvé, M; Peña, E; Martínez, M A; Leask, R

    2015-02-05

    In this study, the trapping ability of the Günther Tulip and Celect inferior vena cava filters was evaluated. Thrombus capture rates of the filters were tested in vitro in horizontal position with thrombus diameters of 3 and 6mm and tube diameter of 19mm. The filters were tested in centered and tilted positions. Sets of 30 clots were injected into the model and the same process was repeated 20 times for each different condition simulated. Pressure drop experienced along the system was also measured and the percentage of clots captured was recorded. The Günther Tulip filter showed superiority in all cases, trapping almost 100% of 6mm clots both in an eccentric and tilted position and trapping 81.7% of the 3mm clots in a centered position and 69.3% in a maximum tilted position. The efficiency of all filters tested decreased as the size of the embolus decreased and as the filter was tilted. The injection of 6 clots raised the pressure drop to 4.1mmHg, which is a reasonable value that does not cause the obstruction of blood flow through the system. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Fabrication and Testing of a Thin-Film Heat Flux Sensor for a Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Fralick, Gustave; Wrbanek, John; Sayir, Ali

    2009-01-01

    The NASA Glenn Research Center (GRC) has been testing high-efficiency free-piston Stirling convertors for potential use in radioisotope power systems since 1999. Stirling convertors are being operated for many years to demonstrate a radioisotope power system capable of providing reliable power for potential multi-year missions. Techniques used to monitor the convertors for changes in performance include measurements of temperature, pressure, energy addition, and energy rejection. Micro-porous bulk insulation is used in the Stirling convertor test setup to minimize the loss of thermal energy from the electric heat source to the environment. The insulation is characterized before extended operation, enabling correlation of the net thermal energy addition to the convertor. Aging of the microporous bulk insulation changes the insulation efficiency, introducing errors in the correlation for net thermal energy addition. A thin-film heat flux sensor was designed and fabricated to directly measure the net thermal energy addition to the Stirling convertor. The fabrication techniques include slip casting and Physical Vapor Deposition (PVD). One-micron-thick noble metal thermocouples measure temperature on the surface of an alumina ceramic disc, and heat flux is calculated. Fabrication, integration, and test results of a thin-film heat flux sensor are presented.

  17. Measurement of non-sugar solids content in Chinese rice wine using near infrared spectroscopy combined with an efficient characteristic variables selection algorithm.

    PubMed

    Ouyang, Qin; Zhao, Jiewen; Chen, Quansheng

    2015-01-01

    The non-sugar solids (NSS) content is one of the most important nutrition indicators of Chinese rice wine. This study proposed a rapid method for the measurement of NSS content in Chinese rice wine using near infrared (NIR) spectroscopy. We also systemically studied the efficient spectral variables selection algorithms that have to go through modeling. A new algorithm of synergy interval partial least square with competitive adaptive reweighted sampling (Si-CARS-PLS) was proposed for modeling. The performance of the final model was back-evaluated using root mean square error of calibration (RMSEC) and correlation coefficient (Rc) in calibration set and similarly tested by mean square error of prediction (RMSEP) and correlation coefficient (Rp) in prediction set. The optimum model by Si-CARS-PLS algorithm was achieved when 7 PLS factors and 18 variables were included, and the results were as follows: Rc=0.95 and RMSEC=1.12 in the calibration set, Rp=0.95 and RMSEP=1.22 in the prediction set. In addition, Si-CARS-PLS algorithm showed its superiority when compared with the commonly used algorithms in multivariate calibration. This work demonstrated that NIR spectroscopy technique combined with a suitable multivariate calibration algorithm has a high potential in rapid measurement of NSS content in Chinese rice wine. Copyright © 2015 Elsevier B.V. All rights reserved.
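
    The back-evaluation metrics can be illustrated with the short sketch below; it fits scikit-learn's PLSRegression on made-up data purely to show how RMSEC/Rc and RMSEP/Rp are computed, and does not reproduce the Si-CARS-PLS variable selection itself.

    ```python
    # Sketch of the calibration/prediction metrics around a plain PLS model
    # (the Si-CARS-PLS interval and variable selection is not shown).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def rmse_and_r(y_true, y_fit):
        rmse = float(np.sqrt(np.mean((y_true - y_fit) ** 2)))
        r = float(np.corrcoef(y_true, y_fit)[0, 1])
        return rmse, r

    # Hypothetical NIR data: rows = samples, columns = 18 selected wavelengths
    rng = np.random.default_rng(1)
    X_cal, y_cal = rng.normal(size=(60, 18)), rng.normal(size=60)
    X_pred, y_pred_ref = rng.normal(size=(30, 18)), rng.normal(size=30)

    pls = PLSRegression(n_components=7).fit(X_cal, y_cal)
    rmsec, rc = rmse_and_r(y_cal, pls.predict(X_cal).ravel())        # calibration set
    rmsep, rp = rmse_and_r(y_pred_ref, pls.predict(X_pred).ravel())  # prediction set
    print(rmsec, rc, rmsep, rp)
    ```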

  18. On the Asymptotic Relative Efficiency of Planned Missingness Designs.

    PubMed

    Rhemtulla, Mijke; Savalei, Victoria; Little, Todd D

    2016-03-01

    In planned missingness (PM) designs, certain data are set a priori to be missing. PM designs can increase validity and reduce cost; however, little is known about the loss of efficiency that accompanies these designs. The present paper compares PM designs to reduced sample (RN) designs that have the same total number of data points concentrated in fewer participants. In 4 studies, we consider models for both observed and latent variables, designs that do or do not include an "X set" of variables with complete data, and a full range of between- and within-set correlation values. All results are obtained using asymptotic relative efficiency formulas, and thus no data are generated; this novel approach allows us to examine whether PM designs have theoretical advantages over RN designs removing the impact of sampling error. Our primary findings are that (a) in manifest variable regression models, estimates of regression coefficients have much lower relative efficiency in PM designs as compared to RN designs, (b) relative efficiency of factor correlation or latent regression coefficient estimates is maximized when the indicators of each latent variable come from different sets, and (c) the addition of an X set improves efficiency in manifest variable regression models only for the parameters that directly involve the X-set variables, but it substantially improves efficiency of most parameters in latent variable models. We conclude that PM designs can be beneficial when the model of interest is a latent variable model; recommendations are made for how to optimize such a design.

  19. Feasibility of Using Full Synthetic Low Viscosity Engine Oil at High Ambient Temperatures in U.S. Army Engines

    DTIC Science & Technology

    2011-06-01

    Advancements in lubricant technology over the last two decades...in particular, the availability of high quality synthetic base oils, has set the stage for the development of a new fuel efficient, multifunctional...were conducted following two standard military testing cycles; the 210 h Tactical Wheeled Vehicle Cycle, and the 400 h NATO Hardware Endurance

  20. Partitioning Rectangular and Structurally Nonsymmetric Sparse Matrices for Parallel Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    B. Hendrickson; T.G. Kolda

    1998-09-01

    A common operation in scientific computing is the multiplication of a sparse, rectangular or structurally nonsymmetric matrix and a vector. In many applications the matrix- transpose-vector product is also required. This paper addresses the efficient parallelization of these operations. We show that the problem can be expressed in terms of partitioning bipartite graphs. We then introduce several algorithms for this partitioning problem and compare their performance on a set of test matrices.
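
    A minimal sketch of the bipartite-graph view (illustrative only; the partitioning algorithms themselves are not reproduced): each row and each column of the sparse matrix becomes a vertex, every nonzero becomes an edge, and the two products of interest are y = Ax and w = A^T z.

    ```python
    # Build the bipartite graph of a rectangular sparse matrix: row vertices,
    # column vertices, and one edge per nonzero. Partitioning this graph assigns
    # rows/columns to processors; the partitioners themselves are not shown.
    import numpy as np
    from scipy import sparse

    A = sparse.random(6, 4, density=0.4, random_state=0, format="coo")
    edges = [(f"r{i}", f"c{j}") for i, j in zip(A.row, A.col)]   # bipartite edge list
    print(edges)

    # The two kernels the paper targets, for reference:
    x, z = np.ones(A.shape[1]), np.ones(A.shape[0])
    y = A @ x      # sparse matrix-vector product
    w = A.T @ z    # matrix-transpose-vector product
    ```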

  1. Advanced Lubrication for Energy Efficiency, Durability and Lower Maintenance Costs of Advanced Naval Components and Systems

    DTIC Science & Technology

    2010-08-20

    for transmitting the required power and torque. The proper gear set has also been sized to ensure the life expectancy of the test rig. The shaft design ...these at minimal cost and great environmental safety. These materials, specifically designed around antiwear and extreme pressure chemistries, can...nanolubricant additives are designed as surface-stabilized nanomaterials that are dispersed in a hydrocarbon medium for maximum effectiveness. This

  2. Visual Motor Integration as a Screener for Responders and Non-Responders in Preschool and Early School Years: Implications for Inclusive Assessment in Oman

    ERIC Educational Resources Information Center

    Emam, Mahmoud Mohamed; Kazem, Ali Mahdi

    2016-01-01

    Visual motor integration (VMI) is the ability of the eyes and hands to work together in smooth, efficient patterns. In Oman, there are few effective methods to assess VMI skills in children in inclusive settings. The current study investigated the performance of preschool and early school years responders and non-responders on a VMI test. The full…

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitanidis, Peter

    As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.

  4. Estimated cost per HIV infection diagnosed through routine HIV testing offered in acute general medical admission units and general practice settings in England.

    PubMed

    Ong, K J; Thornton, A C; Fisher, M; Hutt, R; Nicholson, S; Palfreeman, A; Perry, N; Stedman-Bryce, G; Wilkinson, P; Delpech, V; Nardone, A

    2016-04-01

    Following national guidelines to expand HIV testing in high-prevalence areas in England, a number of pilot studies were conducted in acute general medical admission units (ACUs) and general practices (GPs) to assess the feasibility and acceptability of testing in these settings. The aim of this study was to estimate the cost per HIV infection diagnosed through routine HIV testing in these settings. Resource use data from four 2009/2010 Department of Health pilot studies (two ACUs; two GPs) were analysed. Data from the pilots were validated and supplemented with information from other sources. We constructed possible scenarios to estimate the cost per test carried out through expanded HIV testing in ACUs and GPs, and the cost per diagnosis. In the pilots, cost per test ranged from £8.55 to £13.50, and offer time and patient uptake were 2 minutes and 90% in ACUs, and 5 minutes and 60% in GPs, respectively. In scenario analyses we fixed offer time, diagnostic test cost and uptake rate at 2 minutes, £6 and 80% for ACUs, and 5 minutes, £9.60 and 40% for GPs, respectively. The cost per new HIV diagnosis at a positivity of 2/1000 tests conducted was £3230 in ACUs and £7930 in GPs for tests performed by a Band 3 staff member, and £5940 in ACUs and £18 800 in GPs for tests performed by either hospital consultants or GPs. Expanded HIV testing may be more cost-efficient in ACUs than in GPs as a consequence of a shorter offer time, higher patient uptake, higher HIV positivity and lower diagnostic test costs. As cost per new HIV diagnosis reduces at higher HIV positivity, expanded HIV testing should be promoted in high HIV prevalence areas. © 2015 British HIV Association.
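
    The cost-per-diagnosis arithmetic behind these scenarios can be sketched as below; the staff-time figure is an assumed round number and the function is an illustration of the scenario logic, not the study's full costing model.

    ```python
    # Back-of-envelope sketch of cost per new HIV diagnosis (illustrative figures).
    def cost_per_diagnosis(offer_cost, uptake, test_cost, positivity):
        cost_per_test_done = offer_cost / uptake + test_cost  # cost attributable to each test performed
        return cost_per_test_done / positivity                # tests per diagnosis = 1 / positivity

    # e.g. a 2-minute offer by a Band 3 staff member (~GBP 0.40, assumed), 80% uptake,
    # GBP 6 diagnostic test, positivity of 2 per 1000 tests
    print(cost_per_diagnosis(offer_cost=0.40, uptake=0.80, test_cost=6.0, positivity=0.002))
    # -> roughly GBP 3250, in the region of the GBP 3230 reported for ACUs
    ```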

  5. The reliability of in-hospital diagnoses of diabetes mellitus in the setting of an acute myocardial infarction.

    PubMed

    Arnold, Suzanne V; Lipska, Kasia J; Inzucchi, Silvio E; Li, Yan; Jones, Philip G; McGuire, Darren K; Goyal, Abhinav; Stolker, Joshua M; Lind, Marcus; Spertus, John A; Kosiborod, Mikhail

    2014-01-01

    Incident diabetes mellitus (DM) is important to recognize in patients with acute myocardial infarction (AMI). To develop an efficient screening strategy, we explored the use of random plasma glucose (RPG) at admission and fasting plasma glucose (FPG) to select patients with AMI for glycosylated hemoglobin (HbA1c) testing. Prospective registry of 1574 patients with AMI not taking glucose-lowering medication from 24 US hospitals. All patients had HbA1c measured at a core laboratory and admission RPG and ≥2 FPGs recorded during hospitalization. We examined potential combinations of RPG and FPG and compared these with HbA1c≥6.5%-considered the gold standard for DM diagnosis in these analyses. An RPG>140 mg/dL or FPG≥126 mg/dL had high sensitivity for DM diagnosis. Combining these into a screening protocol (if admission RPG>140, check HbA1c; or if FPG≥126 on a subsequent day, check HbA1c) led to HbA1c testing in 50% of patients and identified 86% with incident DM (number needed to screen (NNS)=3.3 to identify 1 case of DM; vs NNS=5.6 with universal HbA1c screening). Alternatively, using an RPG>180 led to HbA1c testing in 40% of patients with AMI and identified 82% of DM (NNS=2.7). We have established two potential selective screening methods for DM in the setting of AMI that could identify the vast majority of incident DM by targeted screening of 40-50% of patients with AMI with HbA1c testing. Using these methods may efficiently identify patients with AMI with DM so that appropriate education and treatment can be promptly initiated.
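
    The two selective screening rules reduce to a simple decision function; the thresholds below come from the abstract, while the function itself is an illustrative sketch rather than published code.

    ```python
    # Selective HbA1c screening rules for AMI patients, per the abstract (sketch).
    def needs_hba1c(admission_rpg, subsequent_fpg=None, protocol="rpg140_fpg126"):
        """Return True if the patient should have HbA1c measured."""
        if protocol == "rpg140_fpg126":
            # Protocol 1: admission random glucose > 140 mg/dL, or a fasting
            # glucose >= 126 mg/dL on a subsequent day
            if admission_rpg > 140:
                return True
            return subsequent_fpg is not None and subsequent_fpg >= 126
        if protocol == "rpg180":
            # Alternative rule: admission random glucose > 180 mg/dL alone
            return admission_rpg > 180
        raise ValueError("unknown protocol")

    print(needs_hba1c(150))                        # True under protocol 1
    print(needs_hba1c(130, subsequent_fpg=128))    # True via the fasting criterion
    print(needs_hba1c(150, protocol="rpg180"))     # False under the RPG > 180 rule
    ```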

  6. Robust efficient video fingerprinting

    NASA Astrophysics Data System (ADS)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.

  7. An efficient approach to improve the usability of e-learning resources: the role of heuristic evaluation.

    PubMed

    Davids, Mogamat Razeen; Chikte, Usuf M E; Halperin, Mitchell L

    2013-09-01

    Optimizing the usability of e-learning materials is necessary to maximize their potential educational impact, but this is often neglected when time and other resources are limited, leading to the release of materials that cannot deliver the desired learning outcomes. As clinician-teachers in a resource-constrained environment, we investigated whether heuristic evaluation of our multimedia e-learning resource by a panel of experts would be an effective and efficient alternative to testing with end users. We engaged six inspectors, whose expertise included usability, e-learning, instructional design, medical informatics, and the content area of nephrology. They applied a set of commonly used heuristics to identify usability problems, assigning severity scores to each problem. The identification of serious problems was compared with problems previously found by user testing. The panel completed their evaluations within 1 wk and identified a total of 22 distinct usability problems, 11 of which were considered serious. The problems violated the heuristics of visibility of system status, user control and freedom, match with the real world, intuitive visual layout, consistency and conformity to standards, aesthetic and minimalist design, error prevention and tolerance, and help and documentation. Compared with user testing, heuristic evaluation found most, but not all, of the serious problems. Combining heuristic evaluation and user testing, with each involving a small number of participants, may be an effective and efficient way of improving the usability of e-learning materials. Heuristic evaluation should ideally be used first to identify the most obvious problems and, once these are fixed, should be followed by testing with typical end users.

  8. A Unique test for Hubble's new Solar Arrays

    NASA Astrophysics Data System (ADS)

    2000-10-01

    In mid-October, a team from the European Space Agency (ESA) and NASA will perform a difficult, never-before-done test on one of the Hubble Space Telescope's new solar array panels. Two of these panels, or arrays, will be installed by astronauts in November 2001, when the Space Shuttle Columbia visits Hubble on a routine service mission. The test will ensure that the new arrays are solid and vibration free before they are installed on orbit. The test will be conducted at ESA's European Space Research and Technology Center (ESTEC) in Noordwijk, The Netherlands. Because of the array's size, the facility's special features, and ESA's longstanding experience with Hubble's solar arrays, ESTEC is the only place in the world the test can be performed. This test is the latest chapter in a longstanding partnership between ESA and NASA on the Hubble Space Telescope. The Large Space Simulator at ESTEC, ESA's world-class test facility, features a huge vacuum chamber containing a bank of extremely bright lights that simulate the Sun's intensity - including sunrise and sunset. By exposing the solar wing to the light and temperature extremes of Hubble's orbit, engineers can verify how the new set of arrays will act in space. Hubble orbits the Earth once every 90 minutes. During each orbit, the telescope experiences 45 minutes of searing sunlight and 45 minutes of frigid darkness. This test will detect any tiny vibrations, or jitters, caused by these dramatic, repeated changes. Even a small amount of jitter can affect Hubble's sensitive instruments and interfere with observations. Hubble's first set of solar arrays experienced mild jitter and was replaced in 1993 with a much more stable pair. Since that time, advances in solar cell technology have led to the development of even more efficient arrays. In 2001, NASA will take advantage of these improvements, by fitting Hubble with a third-generation set of arrays. Though smaller, this new set generates more power than the previous pairs. The arrays use high efficiency solar cells and an advanced structural system to support the solar panels. Unlike the earlier sets, which roll up like window shades, the new arrays are rigid. ESA provided Hubble's first two sets of solar arrays, and built and tested the motors and electronics of the new set provided by NASA Goddard Space Flight Center. Now, this NASA/ESA test has benefits that extend beyond Hubble to the world-wide aerospace community. It will greatly expand basic knowledge of the jitter phenomenon. Engineers across the globe can apply these findings to other spacecraft that are subjected to regular, dramatic changes in sunlight and temperature. Note to editors The Hubble Project The Hubble Space Telescope is a project of international co-operation between the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The partnership agreement between ESA and NASA was signed on 7 October 1977. ESA has provided two pairs of solar panels and one of Hubble's scientific instruments (the Faint Object Camera), as well as a number of other components and supports NASA during routine Servicing Missions to the telescope. In addition, 15 European scientists are working at the Space Telescope Science Institute in Baltimore (STScI), which is responsible for the scientific operation of the Hubble Observatory and is managed by the Association of Universities for Research in Astronomy (AURA) for NASA. In return, European astronomers have guaranteed access to 15% of Hubble's observing time. 
The Space Telescope European Coordinating Facility (ST-ECF) hosted at the European Southern Observatory (ESO) in Garching bei München, Germany, supports European Hubble users. ESA and ESO jointly operate the ST-ECF.

  9. Cost-effective Diagnostic Checklists for Meningitis in Resource Limited Settings

    PubMed Central

    Durski, Kara N.; Kuntz, Karen M.; Yasukawa, Kosuke; Virnig, Beth A.; Meya, David B.; Boulware, David R.

    2013-01-01

    Background Checklists can standardize patient care, reduce errors, and improve health outcomes. For meningitis in resource-limited settings, with high patient loads and limited financial resources, CNS diagnostic algorithms may be useful to guide diagnosis and treatment. However, the cost-effectiveness of such algorithms is unknown. Methods We used decision analysis methodology to evaluate the costs, diagnostic yield, and cost-effectiveness of diagnostic strategies for adults with suspected meningitis in resource limited settings with moderate/high HIV prevalence. We considered three strategies: 1) comprehensive “shotgun” approach of utilizing all routine tests; 2) “stepwise” strategy with tests performed in a specific order with additional TB diagnostics; 3) “minimalist” strategy of sequential ordering of high-yield tests only. Each strategy resulted in one of four meningitis diagnoses: bacterial (4%), cryptococcal (59%), TB (8%), or other (aseptic) meningitis (29%). In model development, we utilized prevalence data from two Ugandan sites and published data on test performance. We validated the strategies with data from Malawi, South Africa, and Zimbabwe. Results The current comprehensive testing strategy resulted in 93.3% correct meningitis diagnoses costing $32.00/patient. A stepwise strategy had 93.8% correct diagnoses costing an average of $9.72/patient, and a minimalist strategy had 91.1% correct diagnoses costing an average of $6.17/patient. The incremental cost effectiveness ratio was $133 per additional correct diagnosis for the stepwise over minimalist strategy. Conclusions Through strategically choosing the order and type of testing coupled with disease prevalence rates, algorithms can deliver more care more efficiently. The algorithms presented herein are generalizable to East Africa and Southern Africa. PMID:23466647
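
    The incremental cost-effectiveness ratio quoted above follows directly from the per-strategy costs and diagnostic yields; a minimal sketch using the rounded figures in the abstract (which gives about $131, versus the reported $133 obtained from unrounded values):

    ```python
    # Incremental cost-effectiveness ratio (ICER) of the stepwise vs. minimalist strategy.
    def icer(cost_a, effect_a, cost_b, effect_b):
        """Incremental cost per additional unit of effect of strategy A over B."""
        return (cost_a - cost_b) / (effect_a - effect_b)

    # stepwise: $9.72/patient, 93.8% correct; minimalist: $6.17/patient, 91.1% correct
    print(icer(9.72, 0.938, 6.17, 0.911))   # ~131 USD per additional correct diagnosis
    ```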

  10. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable

    PubMed Central

    Korjus, Kristjan; Hebart, Martin N.; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier’s generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term “Cross-validation and cross-testing” improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do. PMID:27564393

  11. Setting the time and place for a hearing before an administrative law judge. Final rules.

    PubMed

    2010-07-08

    We are amending our rules to state that our agency is responsible for setting the time and place for a hearing before an administrative law judge (ALJ). This change creates a 3-year pilot program that will allow us to test this new authority. Our use of this authority, consistent with due process rights of claimants, may provide us with greater flexibility in scheduling both in-person and video hearings, lead to improved efficiency in our hearing process, and reduce the number of pending hearing requests. This change is a part of our broader commitment to maintaining a hearing process that results in accurate, high-quality decisions for claimants.

  12. Orthonormal vector polynomials in a unit circle, Part I: Basis set derived from gradients of Zernike polynomials.

    PubMed

    Zhao, Chunyu; Burge, James H

    2007-12-24

    Zernike polynomials provide a well known, orthogonal set of scalar functions over a circular domain, and are commonly used to represent wavefront phase or surface irregularity. A related set of orthogonal functions is given here which represent vector quantities, such as mapping distortion or wavefront gradient. These functions are generated from gradients of Zernike polynomials, made orthonormal using the Gram-Schmidt technique. This set provides a complete basis for representing vector fields that can be defined as a gradient of some scalar function. It is then efficient to transform from the coefficients of the vector functions to the scalar Zernike polynomials that represent the function whose gradient was fit. These new vector functions have immediate application for fitting data from a Shack-Hartmann wavefront sensor or for fitting mapping distortion for optical testing. A subsequent paper gives an additional set of vector functions consisting only of rotational terms with zero divergence. The two sets together provide a complete basis that can represent all vector distributions in a circular domain.
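
    A numerical sketch of the construction is given below (illustrative only): a few low-order Zernike terms are written directly in monomial form, their gradients are taken on a grid over the unit disc, and the resulting vector fields are orthonormalized with Gram-Schmidt under an assumed discrete inner product.

    ```python
    # Orthonormalize gradients of low-order Zernike-like terms over the unit disc
    # (numerical sketch; normalization conventions are illustrative).
    import numpy as np

    n = 200
    x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    inside = x**2 + y**2 <= 1.0

    # Tilt x, tilt y, defocus and astigmatism written as monomials
    zernikes = [x, y, 2 * (x**2 + y**2) - 1, x**2 - y**2]

    def grad(f):
        fy, fx = np.gradient(f, 2.0 / (n - 1))        # d/dy (rows), d/dx (columns)
        return np.stack([fx[inside], fy[inside]])      # vector field restricted to the disc

    def inner(u, v):
        return float(np.mean(u[0] * v[0] + u[1] * v[1]))   # discrete <u, v> over the disc

    basis = []
    for field in map(grad, zernikes):
        for b in basis:                                # Gram-Schmidt sweep
            field = field - inner(field, b) * b
        basis.append(field / np.sqrt(inner(field, field)))

    # Check orthonormality of the resulting vector basis
    print(np.round([[inner(u, v) for v in basis] for u in basis], 3))
    ```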

  13. Efficient expansion of global protected areas requires simultaneous planning for species and ecosystems

    PubMed Central

    Polak, Tal; Watson, James E. M.; Fuller, Richard A.; Joseph, Liana N.; Martin, Tara G.; Possingham, Hugh P.; Venter, Oscar; Carwardine, Josie

    2015-01-01

    The Convention on Biological Diversity (CBD)'s strategic plan advocates the use of environmental surrogates, such as ecosystems, as a basis for planning where new protected areas should be placed. However, the efficiency and effectiveness of this ecosystem-based planning approach to adequately capture threatened species in protected area networks is unknown. We tested the application of this approach in Australia according to the nation's CBD-inspired goals for expansion of the national protected area system. We set targets for ecosystems (10% of the extent of each ecosystem) and threatened species (variable extents based on persistence requirements for each species) and then measured the total land area required and opportunity cost of meeting those targets independently, sequentially and simultaneously. We discover that an ecosystem-based approach will not ensure the adequate representation of threatened species in protected areas. Planning simultaneously for species and ecosystem targets delivered the most efficient outcomes for both sets of targets, while planning first for ecosystems and then filling the gaps to meet species targets was the most inefficient conservation strategy. Our analysis highlights the pitfalls of pursuing goals for species and ecosystems non-cooperatively and has significant implications for nations aiming to meet their CBD mandated protected area obligations. PMID:26064645

  14. Surgical hand antisepsis in veterinary practice: evaluation of soap scrubs and alcohol based rub techniques.

    PubMed

    Verwilghen, Denis R; Mainil, Jacques; Mastrocicco, Emilie; Hamaide, Annick; Detilleux, Johann; van Galen, Gaby; Serteyn, Didier; Grulke, Sigrid

    2011-12-01

    Recent studies have shown that hydro-alcoholic solutions are more efficient than traditional medicated soaps in the pre-surgical hand antisepsis of human surgeons but there is little veterinary literature on the subject. The aim of this study was to compare the efficiency of medicated soaps and a hydro-alcoholic solution prior to surgery using an in-use testing method in a veterinary setting. A preliminary trial was performed that compared the mean log(10) number of bacterial colony forming units (CFU) and the reduction factors (RF) between two 5-min hand-scrubbing sessions using different soaps, namely, povidone iodine (PVP) and chlorhexidine gluconate (CHX), and the 1.5-min application of a hydro-alcoholic rub. A clinical in-use trial was then used to compare the hydro-alcoholic rub and CHX in a surgical setting. Sampling was performed using finger printing on agar plates. The hydro-alcoholic rub and CHX had a similar immediate effect, although the sustained effect was significantly better for the hydro-alcoholic rub, while PVP had a significantly lower immediate and sustained effect. The hydro-alcoholic rub showed good efficiency in the clinical trial and could be considered as a useful alternative method for veterinary surgical hand antisepsis. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. Performance evaluation of the intra compression in the video coding standards

    NASA Astrophysics Data System (ADS)

    Abramowski, Andrzej

    2015-09-01

    The article presents a comparison of the Intra prediction algorithms in the current state-of-the-art video coding standards, including MJPEG 2000, VP8, VP9, H.264/AVC and H.265/HEVC. The effectiveness of techniques employed by each standard is evaluated in terms of compression efficiency and average encoding time. The compression efficiency is measured using BD-PSNR and BD-RATE metrics with H.265/HEVC results as an anchor. Tests are performed on a set of video sequences, composed of sequences gathered by Joint Collaborative Team on Video Coding during the development of the H.265/HEVC standard and 4K sequences provided by Ultra Video Group. According to results, H.265/HEVC provides significant bit-rate savings at the expense of computational complexity, while VP9 may be regarded as a compromise between the efficiency and required encoding time.
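
    The BD-RATE metric used above can be sketched as follows (generic Bjøntegaard-delta calculation with made-up rate/PSNR points, not the paper's data): fit log-rate as a cubic in PSNR for each codec, integrate both fits over the common PSNR range, and convert the average difference into a percent bit-rate change relative to the anchor.

    ```python
    # Generic BD-RATE sketch: average log-rate difference over the overlapping PSNR range.
    import numpy as np

    def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
        la, lt = np.log10(rate_anchor), np.log10(rate_test)
        pa, pt = np.polyfit(psnr_anchor, la, 3), np.polyfit(psnr_test, lt, 3)  # cubic fits
        lo = max(min(psnr_anchor), min(psnr_test))
        hi = min(max(psnr_anchor), max(psnr_test))
        ia = np.polyval(np.polyint(pa), hi) - np.polyval(np.polyint(pa), lo)
        it = np.polyval(np.polyint(pt), hi) - np.polyval(np.polyint(pt), lo)
        avg_diff = (it - ia) / (hi - lo)
        return (10 ** avg_diff - 1) * 100     # percent bit-rate change vs. the anchor

    # Hypothetical rate (kbps) / PSNR (dB) points; the anchor plays the role of H.265/HEVC
    print(bd_rate([1000, 2000, 4000, 8000], [34, 37, 40, 43],
                  [1200, 2400, 4800, 9600], [34, 37, 40, 43]))   # -> ~20% more bits
    ```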

  16. The new and computationally efficient MIL-SOM algorithm: potential benefits for visualization and analysis of a large-scale high-dimensional clinically acquired geographic data.

    PubMed

    Oyana, Tonny J; Achenie, Luke E K; Heo, Joon

    2012-01-01

    The objective of this paper is to introduce an efficient algorithm, namely, the mathematically improved learning-self organizing map (MIL-SOM) algorithm, which speeds up the self-organizing map (SOM) training process. In the proposed MIL-SOM algorithm, the weights of Kohonen's SOM are based on the proportional-integral-derivative (PID) controller. Thus, in a typical SOM learning setting, this improvement translates to faster convergence. The basic idea is primarily motivated by the urgent need to develop algorithms with the competence to converge faster and more efficiently than conventional techniques. The MIL-SOM algorithm is tested on four training geographic datasets representing biomedical and disease informatics application domains. Experimental results show that the MIL-SOM algorithm provides a competitive, better updating procedure and performance, good robustness, and it runs faster than Kohonen's SOM.
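
    For context, the conventional Kohonen SOM update that MIL-SOM modifies is sketched below; the abstract describes the MIL-SOM weights as based on a PID controller, and that replacement is only indicated in a comment rather than implemented, since the exact update rule is not given here.

    ```python
    # Minimal Kohonen SOM training loop (sketch). MIL-SOM, per the abstract, would
    # derive the weight update from proportional-integral-derivative (PID) terms of
    # the error instead of the single proportional step used below.
    import numpy as np

    def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        rng = np.random.default_rng(seed)
        rows, cols = grid
        w = rng.normal(size=(rows, cols, data.shape[1]))
        gy, gx = np.mgrid[0:rows, 0:cols]
        for t in range(epochs):
            lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
            sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighbourhood
            for x in rng.permutation(data):
                d = np.linalg.norm(w - x, axis=2)
                by, bx = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
                h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
                # Standard proportional update; a PID variant would add integral
                # and derivative terms of the error (x - w) here.
                w += lr * h[..., None] * (x - w)
        return w

    w = train_som(np.random.default_rng(1).normal(size=(200, 3)))
    ```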

  17. Development of noSQL data storage for the ATLAS PanDA Monitoring System

    NASA Astrophysics Data System (ADS)

    Potekhin, M.; ATLAS Collaboration

    2012-06-01

    For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with a R&D effort aimed at implementing a scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as they were determined by testing with actual data and realistic loads.

  18. Microstructured silicon neutron detectors for security applications

    NASA Astrophysics Data System (ADS)

    Esteban, S.; Fleta, C.; Guardiola, C.; Jumilla, C.; Pellegrini, G.; Quirion, D.; Rodriguez, J.; Lozano, M.

    2014-12-01

    In this paper we present the design and performance of a perforated thermal neutron silicon detector with a 6LiF neutron converter. This device was manufactured within the REWARD project, whose aim is to develop and enhance technologies for the detection of nuclear and radiological materials. The sensor's perforated structure results in a higher efficiency than that obtained with an equivalent planar sensor. The detectors were tested in a thermal neutron beam at the nuclear reactor at the Instituto Superior Técnico in Lisbon, and the intrinsic detection efficiency for thermal neutrons and the gamma sensitivity were obtained. The Geant4 Monte Carlo code was used to simulate the experimental conditions, i.e. the thermal neutron beam and the whole detector geometry. An intrinsic thermal neutron detection efficiency of 8.6%±0.4% with a discrimination setting of 450 keV was measured.

  19. The New and Computationally Efficient MIL-SOM Algorithm: Potential Benefits for Visualization and Analysis of a Large-Scale High-Dimensional Clinically Acquired Geographic Data

    PubMed Central

    Oyana, Tonny J.; Achenie, Luke E. K.; Heo, Joon

    2012-01-01

    The objective of this paper is to introduce an efficient algorithm, namely, the mathematically improved learning-self organizing map (MIL-SOM) algorithm, which speeds up the self-organizing map (SOM) training process. In the proposed MIL-SOM algorithm, the weights of Kohonen's SOM are based on the proportional-integral-derivative (PID) controller. Thus, in a typical SOM learning setting, this improvement translates to faster convergence. The basic idea is primarily motivated by the urgent need to develop algorithms with the competence to converge faster and more efficiently than conventional techniques. The MIL-SOM algorithm is tested on four training geographic datasets representing biomedical and disease informatics application domains. Experimental results show that the MIL-SOM algorithm provides a competitive, better updating procedure and performance, good robustness, and it runs faster than Kohonen's SOM. PMID:22481977

  20. High Efficiency Room Air Conditioner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bansal, Pradeep

    This project was undertaken as a CRADA project between UT-Battelle and General Electric Company and was funded by the Department of Energy to design and develop a high efficiency room air conditioner. A number of novel elements were investigated to improve the energy efficiency of a state-of-the-art WAC with a base capacity of 10,000 BTU/h. One of the major modifications was to downgrade its capacity from 10,000 BTU/h to 8,000 BTU/h by replacing the original compressor with a lower capacity (8,000 BTU/h) but higher efficiency compressor having an EER of 9.7, as compared with 9.3 for the original compressor. However, all heat exchangers from the original unit were retained to provide a higher EER. The other major modifications included: (i) the AC fan motor was replaced by a brushless high efficiency ECM motor along with its fan housing, (ii) the capillary tube was replaced with a needle valve to better control the refrigerant flow and refrigerant set points, and (iii) the unit was tested with a drop-in environmentally friendly binary mixture of R32 (90% molar concentration)/R125 (10% molar concentration). The WAC was tested in the environmental chambers at ORNL as per the design rating conditions of AHAM/ASHRAE (outdoor: 95°F and 40% RH; indoor: 80°F and 51.5% RH). All these modifications resulted in enhancing the EER of the WAC by up to 25%.

  1. Mining Distance Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule

    NASA Technical Reports Server (NTRS)

    Bay, Stephen D.; Schwabacher, Mark

    2003-01-01

    Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
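
    As a concrete illustration of the scheme described above, here is a small, self-contained sketch of the randomized nested-loop search with the simple pruning rule (a simplification of the published algorithm; the block size, k and data are hypothetical). Because the data are scanned in random order, the cutoff rises quickly, so most non-outliers are abandoned after examining only a small fraction of the data set, which is the source of the near-linear behaviour.

```python
import numpy as np

def top_outliers(data, k=5, n_out=10, block=100, seed=0):
    """Sketch of the randomized nested-loop scheme: an example's outlier score
    is the distance to its k-th nearest neighbour; scanning the (shuffled) data
    in blocks, an example is pruned as soon as its running k-th-nearest distance
    falls below the score of the weakest outlier found so far."""
    rng = np.random.default_rng(seed)
    data = data[rng.permutation(len(data))]   # random order is essential for pruning
    cutoff, outliers = 0.0, []                # outliers: list of (score, shuffled index)

    for i, x in enumerate(data):
        knn = np.full(k, np.inf)              # k smallest distances seen so far
        pruned = False
        for start in range(0, len(data), block):
            d = np.linalg.norm(data[start:start + block] - x, axis=1)
            if start <= i < start + block:
                d[i - start] = np.inf         # ignore the distance to itself
            knn = np.sort(np.concatenate([knn, d]))[:k]
            if knn[-1] < cutoff:              # can no longer be a top outlier
                pruned = True
                break
        if not pruned:
            outliers.append((knn[-1], i))
            outliers = sorted(outliers, reverse=True)[:n_out]
            if len(outliers) == n_out:
                cutoff = outliers[-1][0]
    return outliers

# Hypothetical data: a dense cluster plus a few scattered points.
pts = np.vstack([np.random.default_rng(2).normal(size=(500, 3)),
                 np.random.default_rng(3).uniform(5, 10, size=(5, 3))])
print(top_outliers(pts))
```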

  2. A semiparametric graphical modelling approach for large-scale equity selection.

    PubMed

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.

  3. Empirical valence bond models for reactive potential energy surfaces: a parallel multilevel genetic program approach.

    PubMed

    Bellucci, Michael A; Coker, David F

    2011-07-28

    We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower level genetic programs are used to optimize coevolving populations in parallel while the higher level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increase the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency is tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in gas phase and protic solvent. © 2011 American Institute of Physics

  4. vitisFlower®: Development and Testing of a Novel Android-Smartphone Application for Assessing the Number of Grapevine Flowers per Inflorescence Using Artificial Vision Techniques.

    PubMed

    Aquino, Arturo; Millan, Borja; Gaston, Daniel; Diago, María-Paz; Tardaguila, Javier

    2015-08-28

    Grapevine flowering and fruit set greatly determine crop yield. This paper presents a new smartphone application for automatically counting, non-invasively and directly in the vineyard, the flower number in grapevine inflorescence photos by implementing artificial vision techniques. The application, called vitisFlower(®), firstly guides the user to appropriately take an inflorescence photo using the smartphone's camera. Then, by means of image analysis, the flowers in the image are detected and counted. vitisFlower(®) has been developed for Android devices and uses the OpenCV libraries to maximize computational efficiency. The application was tested on 140 inflorescence images of 11 grapevine varieties taken with two different devices. On average, more than 84% of flowers in the captures were found, with a precision exceeding 94%. Additionally, the application's efficiency on four different devices covering a wide range of the market's spectrum was also studied. The results of this benchmarking study showed significant differences among devices, although indicating that the application is efficiently usable even with low-range devices. vitisFlower is one of the first applications for viticulture that is currently freely available on Google Play.
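
    The published vitisFlower® pipeline is not reproduced in the abstract, so the snippet below is only a generic OpenCV blob-counting sketch of the same flavour (threshold the grey image, then count connected components within a plausible flower-size range); the blur kernel, size limits and file name are hypothetical.

```python
import cv2
import numpy as np

def count_bright_blobs(image_path, min_area=20, max_area=2000):
    """Illustrative blob counting (not the published vitisFlower pipeline):
    flowers appear as small bright regions, so the grey image is thresholded
    and connected components of plausible flower size are counted."""
    img = cv2.imread(image_path)
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    grey = cv2.GaussianBlur(grey, (5, 5), 0)
    _, mask = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    areas = stats[1:, cv2.CC_STAT_AREA]          # skip label 0 (background)
    return int(np.sum((areas >= min_area) & (areas <= max_area)))

# Usage with a hypothetical inflorescence photo:
# print(count_bright_blobs("inflorescence.jpg"))
```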

  5. Point-of-Care Test Equipment for Flexible Laboratory Automation.

    PubMed

    You, Won Suk; Park, Jae Jun; Jin, Sung Moon; Ryew, Sung Moo; Choi, Hyouk Ryeol

    2014-08-01

    Blood tests are some of the core clinical laboratory tests for diagnosing patients. In hospitals, an automated process called total laboratory automation, which relies on a set of sophisticated equipment, is normally adopted for blood tests. Noting that the total laboratory automation system typically requires a large footprint and significant amount of power, slim and easy-to-move blood test equipment is necessary for specific demands such as emergency departments or small-size local clinics. In this article, we present a point-of-care test system that can provide flexibility and portability with low cost. First, the system components, including a reagent tray, dispensing module, microfluidic disk rotor, and photometry scanner, and their functions are explained. Then, a scheduler algorithm to provide a point-of-care test platform with an efficient test schedule to reduce test time is introduced. Finally, the results of diagnostic tests are presented to evaluate the system. © 2014 Society for Laboratory Automation and Screening.

  6. Modal Survey of ETM-3, A 5-Segment Derivative of the Space Shuttle Solid Rocket Booster

    NASA Technical Reports Server (NTRS)

    Nielsen, D.; Townsend, J.; Kappus, K.; Driskill, T.; Torres, I.; Parks, R.

    2005-01-01

    The complex interactions between internal motor generated pressure oscillations and motor structural vibration modes associated with the static test configuration of a Reusable Solid Rocket Motor have potential to generate significant dynamic thrust loads in the 5-segment configuration (Engineering Test Motor 3). Finite element model load predictions for worst-case conditions were generated based on extrapolation of a previously correlated 4-segment motor model. A modal survey was performed on the largest rocket motor to date, Engineering Test Motor #3 (ETM-3), to provide data for finite element model correlation and validation of model generated design loads. The modal survey preparation included pretest analyses to determine an efficient analysis set selection using the Effective Independence Method and test simulations to assure critical test stand component loads did not exceed design limits. Historical Reusable Solid Rocket Motor modal testing, ETM-3 test analysis model development and pre-test loads analyses, as well as test execution, and a comparison of results to pre-test predictions are discussed.
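
    For readers unfamiliar with the Effective Independence Method mentioned above, the following is a minimal sketch of its usual backward-elimination form: candidate degrees of freedom that contribute least to the determinant of the Fisher information matrix are dropped one at a time. The mode-shape matrix here is random and purely illustrative.

```python
import numpy as np

def effective_independence(phi, n_sensors):
    """Sketch of Effective Independence (EfI) sensor-set reduction:
    phi is the (candidate DOFs x target modes) mode-shape matrix; the DOF with
    the smallest EfI value (diagonal of the projection matrix) is removed
    repeatedly until n_sensors DOFs remain."""
    keep = np.arange(phi.shape[0])
    while len(keep) > n_sensors:
        p = phi[keep]
        # EfI values: diagonal of p (p^T p)^-1 p^T
        efi = np.einsum('ij,ji->i', p, np.linalg.solve(p.T @ p, p.T))
        keep = np.delete(keep, np.argmin(efi))   # drop the least informative DOF
    return keep

# Hypothetical candidate set: 50 DOFs, 6 target modes.
phi = np.random.default_rng(4).normal(size=(50, 6))
print(effective_independence(phi, 12))
```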

  7. Population-centered Risk- and Evidence-based Dental Interprofessional Care Team (PREDICT): study protocol for a randomized controlled trial.

    PubMed

    Cunha-Cruz, Joana; Milgrom, Peter; Shirtcliff, R Michael; Bailit, Howard L; Huebner, Colleen E; Conrad, Douglas; Ludwig, Sharity; Mitchell, Melissa; Dysert, Jeanne; Allen, Gary; Scott, JoAnna; Mancl, Lloyd

    2015-06-20

    To improve the oral health of low-income children, innovations in dental delivery systems are needed, including community-based care, the use of expanded duty auxiliary dental personnel, capitation payments, and global budgets. This paper describes the protocol for PREDICT (Population-centered Risk- and Evidence-based Dental Interprofessional Care Team), an evaluation project to test the effectiveness of new delivery and payment systems for improving dental care and oral health. This is a parallel-group cluster randomized controlled trial. Fourteen rural Oregon counties with a publicly insured (Medicaid) population of 82,000 children (0 to 21 years old) and pregnant women served by a managed dental care organization are randomized into test and control counties. In the test intervention (PREDICT), allied dental personnel provide screening and preventive services in community settings and case managers serve as patient navigators to arrange referrals of children who need dentist services. The delivery system intervention is paired with a compensation system for high performance (pay-for-performance) with efficient performance monitoring. PREDICT focuses on the following: 1) identifying eligible children and gaining caregiver consent for services in community settings (for example, schools); 2) providing risk-based preventive and caries stabilization services efficiently at these settings; 3) providing curative care in dental clinics; and 4) incentivizing local delivery teams to meet performance benchmarks. In the control intervention, care is delivered in dental offices without performance incentives. The primary outcome is the prevalence of untreated dental caries. Other outcomes are related to process, structure and cost. Data are collected through patient and staff surveys, clinical examinations, and the review of health and administrative records. If effective, PREDICT is expected to substantially reduce disparities in dental care and oral health. PREDICT can be disseminated to other care organizations as publicly insured clients are increasingly served by large practice organizations. ClinicalTrials.gov NCT02312921 6 December 2014. The Robert Wood Johnson Foundation and Advantage Dental Services, LLC, are supporting the evaluation.

  8. Mining functionally relevant gene sets for analyzing physiologically novel clinical expression data.

    PubMed

    Turcan, Sevin; Vetter, Douglas E; Maron, Jill L; Wei, Xintao; Slonim, Donna K

    2011-01-01

    Gene set analyses have become a standard approach for increasing the sensitivity of transcriptomic studies. However, analytical methods incorporating gene sets require the availability of pre-defined gene sets relevant to the underlying physiology being studied. For novel physiological problems, relevant gene sets may be unavailable or existing gene set databases may bias the results towards only the best-studied of the relevant biological processes. We describe a successful attempt to mine novel functional gene sets for translational projects where the underlying physiology is not necessarily well characterized in existing annotation databases. We choose targeted training data from public expression data repositories and define new criteria for selecting biclusters to serve as candidate gene sets. Many of the discovered gene sets show little or no enrichment for informative Gene Ontology terms or other functional annotation. However, we observe that such gene sets show coherent differential expression in new clinical test data sets, even if derived from different species, tissues, and disease states. We demonstrate the efficacy of this method on a human metabolic data set, where we discover novel, uncharacterized gene sets that are diagnostic of diabetes, and on additional data sets related to neuronal processes and human development. Our results suggest that our approach may be an efficient way to generate a collection of gene sets relevant to the analysis of data for novel clinical applications where existing functional annotation is relatively incomplete.

  9. A weighted exact test for mutually exclusive mutations in cancer

    PubMed Central

    Leiserson, Mark D.M.; Reyna, Matthew A.; Raphael, Benjamin J.

    2016-01-01

    Motivation: The somatic mutations in the pathways that drive cancer development tend to be mutually exclusive across tumors, providing a signal for distinguishing driver mutations from a larger number of random passenger mutations. This mutual exclusivity signal can be confounded by high and highly variable mutation rates across a cohort of samples. Current statistical tests for exclusivity that incorporate both per-gene and per-sample mutational frequencies are computationally expensive and have limited precision. Results: We formulate a weighted exact test for assessing the significance of mutual exclusivity in an arbitrary number of mutational events. Our test conditions on the number of samples with a mutation as well as per-event, per-sample mutation probabilities. We provide a recursive formula to compute P-values for the weighted test exactly as well as a highly accurate and efficient saddlepoint approximation of the test. We use our test to approximate a commonly used permutation test for exclusivity that conditions on per-event, per-sample mutation frequencies. However, our test is more efficient and it recovers more significant results than the permutation test. We use our Weighted Exclusivity Test (WExT) software to analyze hundreds of colorectal and endometrial samples from The Cancer Genome Atlas, which are two cancer types that often have extremely high mutation rates. On both cancer types, the weighted test identifies sets of mutually exclusive mutations in cancer genes with fewer false positives than earlier approaches. Availability and Implementation: See http://compbio.cs.brown.edu/projects/wext for software. Contact: braphael@cs.brown.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27587696

  10. Behavioural evidence of colour vision in free flying stingless bees.

    PubMed

    Spaethe, J; Streinzer, M; Eckert, J; May, S; Dyer, A G

    2014-06-01

    Colour vision was first demonstrated with behavioural experiments in honeybees 100 years ago. Since that time a wealth of quality physiological data has shown a highly conserved set of trichromatic colour receptors in most bee species. Despite the subsequent wealth of behavioural research on honeybees and bumblebees, there currently is a relative dearth of data on stingless bees, which constitute the largest tribe of the eusocial bees, comprising more than 600 species. In our first experiment we tested Trigona cf. fuscipennis, a stingless bee species from Costa Rica, in a field setting using the von Frisch method and show functional colour vision. In a second experiment with these bees, we use a simultaneous colour discrimination test designed for honeybees to enable a comparative analysis of relative colour discrimination. In a third experiment, we test under laboratory conditions Tetragonula carbonaria, an Australian stingless bee species, using a similar simultaneous colour discrimination test. Both stingless bee species show relatively poorer colour discrimination compared to honeybees and bumblebees, and we discuss the value of being able to use these behavioural methods to efficiently extend our current knowledge of colour vision and discrimination in different bee species.

  11. An adaptive toolkit for image quality evaluation in system performance test of digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Zhang, Guozhi; Petrov, Dimitar; Marshall, Nicholas; Bosmans, Hilde

    2017-03-01

    Digital breast tomosynthesis (DBT) is a relatively new diagnostic imaging modality for women. Currently, various models of DBT systems are available on the market and the number of installations is rapidly increasing. EUREF, the European Reference Organization for Quality Assured Breast Screening and Diagnostic Services, has proposed a preliminary Guideline - protocol for the quality control of the physical and technical aspects of digital breast tomosynthesis systems, with an ultimate aim of providing limiting values guaranteeing proper performance for different applications of DBT. In this work, we introduce an adaptive toolkit developed in accordance with this guideline to facilitate the process of image quality evaluation in DBT performance test. This toolkit implements robust algorithms to quantify various technical parameters of DBT images and provides a convenient user interface in practice. Each test is built into a separate module with configurations set corresponding to the European guideline, which can be easily adapted to different settings and extended with additional tests. This toolkit largely improves the efficiency for image quality evaluation of DBT. It is also going to evolve with the development of protocols in quality control of DBT systems.

  12. Potential Use of BEST® Sediment Trap in Splash - Saltation Transport Process by Simultaneous Wind and Rain Tests.

    PubMed

    Basaran, Mustafa; Uzun, Oguzhan; Cornelis, Wim; Gabriels, Donald; Erpul, Gunay

    2016-01-01

    Research on the wind-driven rain (WDR) transport process of splash-saltation has increased over the last twenty years as wind tunnel experimental studies provide new insights into the mechanisms of simultaneous wind and rain (WDR) transport. The present study was conducted to investigate the efficiency of the BEST® sediment traps in catching the sand particles transported through the splash-saltation process under WDR conditions. Experiments were conducted in a wind tunnel rainfall simulator facility with water sprayed through sprinkler nozzles and free-flowing wind at different velocities to simulate the WDR conditions. In addition to the vertical sediment distribution, a series of experimental tests for the horizontal distribution of sediments was also performed using BEST® collectors to obtain the actual total sediment mass flow by splash-saltation in the center of the wind tunnel test section. Total mass transport (kg m-2) was estimated by analytically integrating the exponential functional relationship using the measured sediment amounts at the set trap heights for every run. Results revealed that the integrated efficiencies of the BEST® traps at 6, 9, 12 and 15 m s-1 wind velocities under 55.8, 50.5, 55.0 and 50.5 mm h-1 rain intensities were, respectively, 83, 106, 105, and 102%. The results also showed that the efficiencies of BEST® did not change much compared with those under rainless wind conditions.
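
    The integration step described above (fitting an exponential profile to the trap catches and integrating it analytically over height) can be sketched as follows; the heights, catches and integration limit are hypothetical.

```python
import numpy as np

def total_mass_transport(heights_m, catches, top=None):
    """Sketch of the integration step: fit the measured trap catches to an
    exponential profile q(z) = a*exp(-b*z) by linear regression on log(q),
    then integrate analytically from the surface to `top` (or to infinity)."""
    slope, log_a = np.polyfit(heights_m, np.log(catches), 1)
    a, b = np.exp(log_a), -slope              # q(z) = a * exp(-b*z)
    if top is None:
        return a / b                          # integral from 0 to infinity
    return a / b * (1.0 - np.exp(-b * top))   # integral from 0 to `top`

# Hypothetical catches measured at four trap heights (m).
print(total_mass_transport([0.05, 0.15, 0.30, 0.50], [4.1, 2.4, 1.0, 0.35]))
```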

  13. A practical model for the train-set utilization: The case of Beijing-Tianjin passenger dedicated line in China

    PubMed Central

    Li, Xiaomeng; Yang, Zhuo

    2017-01-01

    As a sustainable transportation mode, high-speed railway (HSR) has become an efficient way to meet huge travel demand. However, due to the high acquisition and maintenance costs, it is impossible to build enough infrastructure and purchase enough train-sets. Great efforts are required to improve the transport capability of HSR. The utilization efficiency of train-sets (the carrying tools of HSR) is one of the most important factors in the transport capacity of HSR. In order to enhance the utilization efficiency of the train-sets, this paper proposes a train-set circulation optimization model to minimize the total connection time. An innovative two-stage approach consisting of segment generation and segment combination was designed to solve this model. In order to verify the feasibility of the proposed approach, an experiment was carried out on the Beijing-Tianjin passenger dedicated line to fulfill a train diagram of 174 trips. The model results showed that, compared with the traditional Ant Colony Algorithm (ACA), the utilization efficiency of the train-sets can be increased from 43.4% (ACA) to 46.9% (Two-Stage), and one train-set can be saved while fulfilling the same transportation tasks. The approach proposed in the study is faster and more stable than the traditional ones; using it, HSR staff can draw up the train-set circulation plan more quickly, and the utilization efficiency of the HSR system is also improved. PMID:28489933
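
    The paper's two-stage segment generation/combination algorithm is not detailed in the abstract; as a much simpler point of reference, the sketch below shows a greedy circulation that connects each departure to the train-set that has waited longest, which is the kind of connection-time bookkeeping such models optimize. The trip times, the turnaround margin and the omission of origin/destination matching are all simplifying assumptions.

```python
from datetime import datetime, timedelta

def greedy_circulation(trips, min_turnaround=timedelta(minutes=20)):
    """Illustrative greedy sketch (not the paper's two-stage algorithm): connect
    each departing trip to an available train-set so that connection time stays
    small, and count how many train-sets the plan needs. Origin/destination
    matching is ignored for brevity."""
    trips = sorted(trips, key=lambda t: t["dep"])  # trips: dicts with dep/arr times
    open_arrivals = []                             # train-sets waiting for a next trip
    n_sets = 0
    for trip in trips:
        # Reuse the train-set that has been waiting the longest, if any is ready.
        ready = [a for a in open_arrivals if a + min_turnaround <= trip["dep"]]
        if ready:
            open_arrivals.remove(min(ready))
        else:
            n_sets += 1                            # a new train-set is needed
        open_arrivals.append(trip["arr"])
    return n_sets

day = datetime(2017, 1, 1)
demo = [{"dep": day + timedelta(hours=h), "arr": day + timedelta(hours=h, minutes=35)}
        for h in range(6, 12)]
print(greedy_circulation(demo))
```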

  14. Test of VPHGs in SHSG for use at cryogenic temperatures

    NASA Astrophysics Data System (ADS)

    Insaustia, Maider; Garzón, Francisco; Mas-Abellán, P.; Madrigal, R.; Fimia, A.

    2017-05-01

    Silver halide sensitized gelatin (SHSG) processes are interesting because they combine the spectral and energetic sensitivity of photographic emulsions with the good optical quality and high diffraction efficiency of dichromated gelatin (DCG). Previous papers have demonstrated that it is possible to obtain diffraction efficiencies near 90% with Agfa-Gevaert plates and Colour Holographic plates in SHSG transmission gratings. In this communication, we report on the performance measured at room temperature and in cryogenic conditions of a set of volume phase holographic gratings (VPHGs) manufactured with the SHSG process and aimed at use in astronomical instrumentation. Two sets of diffraction gratings were manufactured using different processing: the first with the SHSG process and the second with a typical bleached process (developed with AAC and bleached in R-10). In both cases the plates were BB640 ultrafine-grain emulsions with a nominal thickness of 9 μm. The recording was performed with an asymmetric geometry, 30° between the light beams, at a wavelength of 632.8 nm (He-Ne laser), giving rise to a spatial frequency of 800 l/mm. The exposure ranged from 46 to 2048 μJ/cm2. The results give information about Bragg plane modification and the reduction of diffraction efficiency when the VPHGs are brought to 77 K. In the case of the SHSG process, the final diffraction efficiency after exposure to cryogenic temperature is better at some exposure energies than the previous measurements at room temperature. These experimental results open the possibility of applying the SHSG process in astrophysical applications.

  15. Testing the Hydrological Coherence of High-Resolution Gridded Precipitation and Temperature Data Sets

    NASA Astrophysics Data System (ADS)

    Laiti, L.; Mallucci, S.; Piccolroaz, S.; Bellin, A.; Zardi, D.; Fiori, A.; Nikulin, G.; Majone, B.

    2018-03-01

    Assessing the accuracy of gridded climate data sets is highly relevant to climate change impact studies, since evaluation, bias correction, and statistical downscaling of climate models commonly use these products as reference. Among all impact studies those addressing hydrological fluxes are the most affected by errors and biases plaguing these data. This paper introduces a framework, coined Hydrological Coherence Test (HyCoT), for assessing the hydrological coherence of gridded data sets with hydrological observations. HyCoT provides a framework for excluding meteorological forcing data sets not complying with observations, as function of the particular goal at hand. The proposed methodology allows falsifying the hypothesis that a given data set is coherent with hydrological observations on the basis of the performance of hydrological modeling measured by a metric selected by the modeler. HyCoT is demonstrated in the Adige catchment (southeastern Alps, Italy) for streamflow analysis, using a distributed hydrological model. The comparison covers the period 1989-2008 and includes five gridded daily meteorological data sets: E-OBS, MSWEP, MESAN, APGD, and ADIGE. The analysis highlights that APGD and ADIGE, the data sets with highest effective resolution, display similar spatiotemporal precipitation patterns and produce the largest hydrological efficiency indices. Lower performances are observed for E-OBS, MESAN, and MSWEP, especially in small catchments. HyCoT reveals deficiencies in the representation of spatiotemporal patterns of gridded climate data sets, which cannot be corrected by simply rescaling the meteorological forcing fields, as often done in bias correction of climate model outputs. We recommend this framework to assess the hydrological coherence of gridded data sets to be used in large-scale hydroclimatic studies.
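
    A minimal sketch of the HyCoT idea follows: drive the same hydrological model with each candidate forcing data set and reject any data set whose chosen performance metric falls below a modeller-defined threshold. The hydrological model is a placeholder here, and the Nash-Sutcliffe efficiency is shown only as one common choice of metric, not necessarily the one used in the paper.

```python
def hycot(datasets, run_model, observed, metric, threshold):
    """Sketch of the HyCoT framework: a gridded forcing data set is rejected
    ("hydrologically incoherent") when the chosen performance metric of the
    hydrological model driven by it falls below a modeller-defined threshold.
    `run_model` and the data sets are placeholders, not the paper's model."""
    verdicts = {}
    for name, forcing in datasets.items():
        simulated = run_model(forcing)                 # e.g. simulated streamflow
        score = metric(observed, simulated)
        verdicts[name] = {"score": score, "coherent": score >= threshold}
    return verdicts

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency, a common choice for the HyCoT metric."""
    obs_mean = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - obs_mean) ** 2 for o in obs)
    return 1.0 - num / den

# Hypothetical usage with a user-supplied model function `my_model`:
# results = hycot({"set_A": forcing_a, "set_B": forcing_b}, my_model,
#                 observed_flow, nash_sutcliffe, threshold=0.6)
```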

  16. Quantum discord, local operations, and Maxwell's demons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodutch, Aharon; Terno, Daniel R.

    2010-06-15

    Quantum discord was proposed as a measure of the quantumness of correlations. There are at least three different discordlike quantities, two of which determine the difference between the efficiencies of a Szilard's engine under different sets of restrictions. The three discord measures vanish simultaneously. We introduce an easy way to test for zero discord, relate it to the Cerf-Adami conditional entropy and show that there is no simple relation between the discord and the local distinguishability.

  17. Algorithms for Zonal Methods and Development of Three Dimensional Mesh Generation Procedures.

    DTIC Science & Technology

    1984-02-01

    A more complete set of equations is used, but their effect is imposed by means of a right-hand-side forcing function, not by means of a zonal boundary condition. Modifications of flow-simulation algorithms are discussed, with computational tests in two dimensions using the explicit finite-difference code of Magnus. Zonal methods are used to simplify the task of grid generation and to achieve computational efficiency without an adverse effect on flow-field algorithms.

  18. The use of Optical Character Recognition (OCR) in the digitisation of herbarium specimen labels.

    PubMed

    Drinkwater, Robyn E; Cubey, Robert W N; Haston, Elspeth M

    2014-01-01

    At the Royal Botanic Garden Edinburgh (RBGE) the use of Optical Character Recognition (OCR) to aid the digitisation process has been investigated. This was tested using a herbarium specimen digitisation process with two stages of data entry. Records were initially batch-processed to add data extracted from the OCR text prior to being sorted based on Collector and/or Country. Using images of the specimens, a team of six digitisers then added data to the specimen records. To investigate whether the data from OCR aid the digitisation process, they completed a series of trials which compared the efficiency of data entry between sorted and unsorted batches of specimens. A survey was carried out to explore the opinion of the digitisation staff to the different sorting options. In total 7,200 specimens were processed. When compared to an unsorted, random set of specimens, those which were sorted based on data added from the OCR were quicker to digitise. Of the methods tested here, the most successful in terms of efficiency used a protocol which required entering data into a limited set of fields and where the records were filtered by Collector and Country. The survey and subsequent discussions with the digitisation staff highlighted their preference for working with sorted specimens, in which label layout, locations and handwriting are likely to be similar, and so a familiarity with the Collector or Country is rapidly established.

  19. An efficient indexing scheme for binary feature based biometric database

    NASA Astrophysics Data System (ADS)

    Gupta, P.; Sana, A.; Mehrotra, H.; Hwang, C. Jinshong

    2007-04-01

    The paper proposes an efficient indexing scheme for binary feature templates using a B+ tree. In this scheme the input image is decomposed into approximation, vertical, horizontal and diagonal coefficients using the discrete wavelet transform. The binarized approximation coefficient at the second level is divided into four quadrants of equal size, and the Hamming distance (HD) of each quadrant with respect to a sample template of all ones is measured. The HD value of each quadrant is used to generate upper and lower range values which are inserted into the B+ tree. The nodes of the tree at the first level contain the lower and upper range values generated from the HD of the first quadrant. Similarly, the lower and upper range values for the remaining three quadrants are stored at the second, third and fourth levels, respectively. Finally, the leaf nodes contain the sets of identifiers. At the time of identification, the test image is used to generate the HD for the four quadrants. The B+ tree is then traversed based on the value of the HD at every node and terminates at leaf nodes with a set of identifiers. The feature vector for each identifier is retrieved from the particular bin of secondary memory and matched with the test feature template to get the top matches. The proposed scheme is implemented on an ear biometric database collected at IIT Kanpur. The system gives an overall accuracy of 95.8% at a penetration rate of 34%.
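
    To make the quadrant-HD idea above concrete, here is a small sketch that computes the four quadrant Hamming distances against an all-ones reference and filters enrolled identifiers by a tolerance window on each quadrant; a flat list stands in for the B+ tree, and the template size, tolerance and data are hypothetical.

```python
import numpy as np

def quadrant_hd(template_2d):
    """Hamming distance of each quadrant of a binarized coefficient block
    against an all-ones reference of the same size."""
    h, w = template_2d.shape
    quads = [template_2d[:h//2, :w//2], template_2d[:h//2, w//2:],
             template_2d[h//2:, :w//2], template_2d[h//2:, w//2:]]
    return [int(q.size - q.sum()) for q in quads]   # zeros disagree with all-ones

class HDIndex:
    """Flat stand-in for the paper's B+ tree: identifiers are stored with their
    four quadrant HDs, and a query keeps only identifiers whose stored HDs fall
    inside [hd - tol, hd + tol] for every quadrant."""
    def __init__(self, tol=3):
        self.tol, self.entries = tol, []            # entries: (identifier, [hd1..hd4])

    def insert(self, identifier, template_2d):
        self.entries.append((identifier, quadrant_hd(template_2d)))

    def candidates(self, test_template_2d):
        q = quadrant_hd(test_template_2d)
        return [ident for ident, hds in self.entries
                if all(abs(h - t) <= self.tol for h, t in zip(hds, q))]

# Hypothetical 16x16 binary approximation coefficients for 100 enrolled subjects.
rng = np.random.default_rng(5)
index = HDIndex()
for i in range(100):
    index.insert(f"subject_{i}", rng.integers(0, 2, size=(16, 16)))
print(index.candidates(rng.integers(0, 2, size=(16, 16))))
```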

  20. Are Health State Valuations from the General Public Biased? A Test of Health State Reference Dependency Using Self-assessed Health and an Efficient Discrete Choice Experiment.

    PubMed

    Jonker, Marcel F; Attema, Arthur E; Donkers, Bas; Stolk, Elly A; Versteegh, Matthijs M

    2017-12-01

    Health state valuations of patients and non-patients are not the same, whereas health state values obtained from general population samples are a weighted average of both. The latter constitutes an often-overlooked source of bias. This study investigates the resulting bias and tests for the impact of reference dependency on health state valuations using an efficient discrete choice experiment administered to a Dutch nationally representative sample of 788 respondents. A Bayesian discrete choice experiment design consisting of eight sets of 24 (matched pairwise) choice tasks was developed, with each set providing full identification of the included parameters. Mixed logit models were used to estimate health state preferences with respondents' own health included as an additional predictor. Our results indicate that respondents with impaired health worse than or equal to the health state levels under evaluation have approximately 30% smaller health state decrements. This confirms that reference dependency can be observed in general population samples and affirms the relevance of prospect theory in health state valuations. At the same time, the limited number of respondents with severe health impairments does not appear to bias social tariffs as obtained from general population samples. Copyright © 2016 John Wiley & Sons, Ltd.

  1. Quality control for quantitative PCR based on amplification compatibility test.

    PubMed

    Tichopad, Ales; Bar, Tzachi; Pecen, Ladislav; Kitchen, Robert R; Kubista, Mikael; Pfaffl, Michael W

    2010-04-01

    Quantitative PCR (qPCR) is a routinely used method for the accurate quantification of nucleic acids. Yet it may generate erroneous results if the amplification process is obscured by inhibition or by generation of aberrant side-products such as primer dimers. Several methods have been established to control for pre-processing performance that rely on the introduction of a co-amplified reference sequence; however, there is currently no method to allow for reliable control of the amplification process without directly modifying the sample mix. Herein we present a statistical approach based on multivariate analysis of the amplification response data generated in real time. The amplification trajectory in its most resolved and dynamic phase is fitted with a suitable model. Two parameters of this model, related to amplification efficiency, are then used for calculation of Z-score statistics. Each studied sample is compared to a predefined reference set of reactions, typically calibration reactions. A probabilistic decision for each individual Z-score is then used to identify the majority of inhibited reactions in our experiments. We compare this approach to univariate methods that use only the sample-specific amplification efficiency as a reporter of compatibility, and demonstrate improved identification performance using the multivariate approach. Finally, we stress that the performance of the amplification compatibility test as a quality control procedure depends on the quality of the reference set. Copyright 2010 Elsevier Inc. All rights reserved.
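
    The Z-score screening step can be sketched as follows: the two efficiency-related model parameters of a test reaction are standardized against the calibration reactions and the reaction is flagged when either Z-score exceeds a chosen limit. The parameter names, limit and numbers below are hypothetical, and the published method additionally applies a probabilistic decision rule rather than a hard cut-off.

```python
import numpy as np

def compatibility_z(sample_params, reference_params, z_limit=3.0):
    """Sketch of the compatibility check: each reaction is described by two
    model parameters related to amplification efficiency; a test reaction is
    flagged as incompatible (e.g. inhibited) when its Z-score relative to the
    calibration (reference) reactions exceeds the limit in either parameter."""
    ref = np.asarray(reference_params, dtype=float)       # shape (n_ref, 2)
    mu, sd = ref.mean(axis=0), ref.std(axis=0, ddof=1)
    z = (np.asarray(sample_params, dtype=float) - mu) / sd
    return z, bool(np.any(np.abs(z) > z_limit))

# Hypothetical parameters (slope of the exponential phase, plateau-related term)
# for ten calibration reactions and one test reaction.
calib = np.random.default_rng(6).normal([0.95, 1.8], [0.03, 0.1], size=(10, 2))
print(compatibility_z([0.80, 1.7], calib))
```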

  2. The quadriceps muscle of knee joint modelling Using Hybrid Particle Swarm Optimization-Neural Network (PSO-NN)

    NASA Astrophysics Data System (ADS)

    Kamaruddin, Saadi Bin Ahmad; Marponga Tolos, Siti; Hee, Pah Chin; Ghani, Nor Azura Md; Ramli, Norazan Mohamed; Nasir, Noorhamizah Binti Mohamed; Ksm Kader, Babul Salam Bin; Saiful Huq, Mohammad

    2017-03-01

    Neural networks have long been known for their ability to model complex nonlinear systems without an analytical model and to learn sophisticated nonlinear relationships. Theoretically, the most widely used algorithm to train the network is the backpropagation (BP) algorithm, which relies on minimization of the mean square error (MSE). However, this algorithm is not fully efficient in the presence of outliers, which usually exist in dynamic data. This paper presents the modelling of the quadriceps muscle using artificial intelligence techniques, namely combined backpropagation neural network nonlinear autoregressive (BPNN-NAR) and backpropagation neural network nonlinear autoregressive moving average (BPNN-NARMA) models, based on functional electrical stimulation (FES). We adapted the particle swarm optimization (PSO) approach to enhance the performance of the backpropagation algorithm. In this research, a series of tests using FES was conducted, and the data obtained were used to develop the quadriceps muscle model. 934 training, 200 testing and 200 validation data sets were used in the development of the muscle model. It was found that both BPNN-NAR and BPNN-NARMA performed well in modelling this type of data. In conclusion, the neural network time series models performed reasonably efficiently for nonlinear modelling such as the active properties of the quadriceps muscle, with one input and one output, namely muscle force.
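
    Since the abstract only names the PSO-enhanced training scheme, the snippet below is a generic sketch of particle swarm optimization searching the weights of a tiny one-hidden-layer network against a mean-square-error objective; the swarm parameters, network size and sine-wave data are hypothetical and do not represent the paper's FES data or its NAR/NARMA structure.

```python
import numpy as np

def nn_forward(weights, X, n_hidden=5):
    """Tiny one-hidden-layer network; `weights` is a flat parameter vector."""
    n_in = X.shape[1]
    w1 = weights[:n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = weights[n_in * n_hidden:n_in * n_hidden + n_hidden]
    w2 = weights[-(n_hidden + 1):-1]
    b2 = weights[-1]
    return np.tanh(X @ w1 + b1) @ w2 + b2

def pso_train(X, y, n_hidden=5, n_particles=30, iters=200, seed=0):
    """Illustrative PSO weight search (the paper combines PSO with
    backpropagation-trained NAR/NARMA models; only the PSO part is shown)."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * n_hidden + n_hidden + n_hidden + 1
    pos = rng.normal(scale=0.5, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    mse = lambda w: np.mean((nn_forward(w, X, n_hidden) - y) ** 2)
    pbest, pbest_err = pos.copy(), np.array([mse(p) for p in pos])
    gbest = pbest[np.argmin(pbest_err)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        # inertia + cognitive (personal best) + social (global best) terms
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        err = np.array([mse(p) for p in pos])
        improved = err < pbest_err
        pbest[improved], pbest_err[improved] = pos[improved], err[improved]
        gbest = pbest[np.argmin(pbest_err)].copy()
    return gbest, pbest_err.min()

# Hypothetical one-input, one-output data set.
X = np.linspace(0, 1, 100).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel()
weights, err = pso_train(X, y)
print(err)
```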

  3. Creation and Implementation of an Environmental Scan to Assess Cancer Genetics Services at Three Oncology Care Settings.

    PubMed

    Bednar, Erica M; Walsh, Michael T; Baker, Ellen; Muse, Kimberly I; Oakley, Holly D; Krukenberg, Rebekah C; Dresbold, Cara S; Jenkinson, Sandra B; Eppolito, Amanda L; Teed, Kelly B; Klein, Molly H; Morman, Nichole A; Bowdish, Elizabeth C; Russ, Pauline; Wise, Emaline E; Cooper, Julia N; Method, Michael W; Henson, John W; Grainger, Andrew V; Arun, Banu K; Lu, Karen H

    2018-05-16

    An environmental scan (ES) is an efficient mixed-methods approach to collect and interpret relevant data for strategic planning and project design. To date, the ES has not been used nor evaluated in the clinical cancer genetics setting. We created and implemented an ES to inform the design of a quality improvement (QI) project to increase the rates of adherence to national guidelines for cancer genetic counseling and genetic testing at three unique oncology care settings (OCS). The ES collected qualitative and quantitative data from reviews of internal processes, past QI efforts, the literature, and each OCS. The ES used a data collection form and semi-structured interviews to aid in data collection. The ES was completed within 6 months, and sufficient data were captured to identify opportunities and threats to the QI project's success, as well as potential barriers to, and facilitators of guideline-based cancer genetics services at each OCS. Previously unreported barriers were identified, including inefficient genetic counseling appointment scheduling processes and the inability to track referrals, genetics appointments, and genetic test results within electronic medical record systems. The ES was a valuable process for QI project planning at three OCS and may be used to evaluate genetics services in other settings.

  4. DuraLith geopolymer waste form for Hanford secondary waste: correlating setting behavior to hydration heat evolution.

    PubMed

    Xu, Hui; Gong, Weiliang; Syltebo, Larry; Lutze, Werner; Pegg, Ian L

    2014-08-15

    The binary furnace slag-metakaolin DuraLith geopolymer waste form, which has been considered as one of the candidate waste forms for immobilization of certain Hanford secondary wastes (HSW) from the vitrification of nuclear wastes at the Hanford Site, Washington, was extended to a ternary fly ash-furnace slag-metakaolin system to improve workability, reduce hydration heat, and evaluate high HSW waste loading. A concentrated HSW simulant, consisting of more than 20 chemicals with a sodium concentration of 5 mol/L, was employed to prepare the alkaline activating solution. Fly ash was incorporated at up to 60 wt% into the binder materials, whereas metakaolin was kept constant at 26 wt%. The fresh waste form pastes were subjected to isothermal calorimetry and setting time measurement, and the cured samples were further characterized by compressive strength and TCLP leach tests. This study is the first to establish quantitative linear relationships between both initial and final setting times and hydration heat, relationships that had not previously been reported in the scientific literature for any cementitious waste form or geopolymeric material. The successful establishment of the correlations between setting times and hydration heat may make it possible to efficiently design and optimize cementitious waste forms and industrial-waste-based geopolymers using limited testing results. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. In search of memory tests equivalent for experiments on animals and humans.

    PubMed

    Brodziak, Andrzej; Kołat, Estera; Różyk-Myrta, Alicja

    2014-12-19

    Older people often exhibit memory impairments. Contemporary demographic trends cause aging of the society. In this situation, it is important to conduct clinical trials of drugs and use training methods to improve memory capacity. Development of new memory tests requires experiments on animals and then clinical trials in humans. Therefore, we decided to review the assessment methods and search for tests that evaluate analogous cognitive processes in animals and humans. This review has enabled us to propose 2 pairs of tests of the efficiency of working memory capacity in animals and humans. We propose a basic set of methods for complex clinical trials of drugs and training methods to improve memory, consisting of 2 pairs of tests: 1) the Novel Object Recognition Test - Sternberg Item Recognition Test and 2) the Object-Location Test - Visuospatial Memory Test. We postulate that further investigations of methods that are equivalent in animals experiments and observations performed on humans are necessary.

  6. Using the gene ontology to scan multilevel gene sets for associations in genome wide association studies.

    PubMed

    Schaid, Daniel J; Sinnwell, Jason P; Jenkins, Gregory D; McDonnell, Shannon K; Ingle, James N; Kubo, Michiaki; Goss, Paul E; Costantino, Joseph P; Wickerham, D Lawrence; Weinshilboum, Richard M

    2012-01-01

    Gene-set analyses have been widely used in gene expression studies, and some of the developed methods have been extended to genome wide association studies (GWAS). Yet, complications due to linkage disequilibrium (LD) among single nucleotide polymorphisms (SNPs), and variable numbers of SNPs per gene and genes per gene-set, have plagued current approaches, often leading to ad hoc "fixes." To overcome some of the current limitations, we developed a general approach to scan GWAS SNP data for both gene-level and gene-set analyses, building on score statistics for generalized linear models, and taking advantage of the directed acyclic graph structure of the gene ontology when creating gene-sets. However, other types of gene-set structures can be used, such as the popular Kyoto Encyclopedia of Genes and Genomes (KEGG). Our approach combines SNPs into genes, and genes into gene-sets, but assures that positive and negative effects of genes on a trait do not cancel. To control for multiple testing of many gene-sets, we use an efficient computational strategy that accounts for LD and provides accurate step-down adjusted P-values for each gene-set. Application of our methods to two different GWAS provide guidance on the potential strengths and weaknesses of our proposed gene-set analyses. © 2011 Wiley Periodicals, Inc.

  7. Desired Precision in Multi-Objective Optimization: Epsilon Archiving or Rounding Objectives?

    NASA Astrophysics Data System (ADS)

    Asadzadeh, M.; Sahraei, S.

    2016-12-01

    Multi-objective optimization (MO) aids in supporting the decision making process in water resources engineering and design problems. One of the main goals of solving a MO problem is to archive a set of solutions that is well-distributed across a wide range of all the design objectives. Modern MO algorithms use the epsilon dominance concept to define a mesh with pre-defined grid-cell size (often called epsilon) in the objective space and archive at most one solution at each grid-cell. Epsilon can be set to the desired precision level of each objective function to make sure that the difference between each pair of archived solutions is meaningful. This epsilon archiving process is computationally expensive in problems that have quick-to-evaluate objective functions. This research explores the applicability of a similar but computationally more efficient approach to respect the desired precision level of all objectives in the solution archiving process. In this alternative approach each objective function is rounded to the desired precision level before comparing any new solution to the set of archived solutions that already have rounded objective function values. This alternative solution archiving approach is compared to the epsilon archiving approach in terms of efficiency and quality of archived solutions for solving mathematical test problems and hydrologic model calibration problems.
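
    The two archiving options discussed above can be contrasted with a small sketch: epsilon archiving maps each solution to a grid cell of size epsilon and keeps at most one representative per non-dominated cell, while the cheaper alternative simply rounds the objectives to the desired precision before ordinary Pareto archiving. Both implementations below are simplified (real epsilon-archives handle within-box dominance more carefully), and the random two-objective points are purely illustrative.

```python
import numpy as np

def dominates(a, b):
    """True if solution a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def eps_archive(solutions, eps):
    """Epsilon archiving: at most one solution per grid cell of size eps,
    with dominated grid cells discarded (simplified box-level dominance)."""
    archive = {}
    for s in solutions:
        box = tuple(np.floor(np.asarray(s) / eps).astype(int))
        if any(dominates(np.array(b), np.array(box)) for b in archive):
            continue                                   # an archived box dominates this one
        archive = {b: v for b, v in archive.items()
                   if not dominates(np.array(box), np.array(b))}
        archive.setdefault(box, s)                     # keep one representative per box
    return list(archive.values())

def round_then_archive(solutions, decimals):
    """Alternative: round objectives to the desired precision first, then apply
    plain Pareto archiving to the rounded values."""
    rounded = [tuple(np.round(s, decimals)) for s in solutions]
    archive = []
    for s in set(rounded):
        if not any(dominates(np.array(o), np.array(s)) for o in rounded if o != s):
            archive.append(s)
    return archive

pts = np.random.default_rng(7).random((200, 2))
print(len(eps_archive(pts, eps=0.1)), len(round_then_archive(pts, decimals=1)))
```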

  8. Utilizing the ultrasensitive Schistosoma up-converting phosphor lateral flow circulating anodic antigen (UCP-LF CAA) assay for sample pooling-strategies.

    PubMed

    Corstjens, Paul L A M; Hoekstra, Pytsje T; de Dood, Claudia J; van Dam, Govert J

    2017-11-01

    Methodological applications of the high sensitivity genus-specific Schistosoma CAA strip test, allowing detection of single worm active infections (ultimate sensitivity), are discussed for efficient utilization in sample pooling strategies. Besides relevant cost reduction, pooling of samples rather than individual testing can provide valuable data for large scale mapping, surveillance, and monitoring. The laboratory-based CAA strip test utilizes luminescent quantitative up-converting phosphor (UCP) reporter particles and a rapid user-friendly lateral flow (LF) assay format. The test includes a sample preparation step that permits virtually unlimited sample concentration with urine, reaching ultimate sensitivity (single worm detection) at 100% specificity. This facilitates testing large urine pools from many individuals with minimal loss of sensitivity and specificity. The test determines the average CAA level of the individuals in the pool, thus indicating overall worm burden and prevalence. When test results are required at the individual level, smaller pools need to be analysed, with the pool size based on the expected prevalence or, when unknown, on the average CAA level of a larger group; CAA-negative pools do not require individual test results and thus reduce the number of tests. Straightforward pooling strategies indicate that at the sub-population level the CAA strip test is an efficient assay for general mapping, identification of hotspots, determination of stratified infection levels, and accurate monitoring of mass drug administrations (MDA). At the individual level, the number of tests can be reduced, e.g. in low-endemicity settings, since the pool size can be increased as prevalence decreases. At the sub-population level, average CAA concentrations determined in urine pools can be an appropriate measure indicating worm burden. Pooling strategies allowing this type of large scale testing are feasible with the various CAA strip test formats and do not affect sensitivity and specificity. This allows cost-efficient stratified testing and monitoring of worm burden at the sub-population level, ideally for large-scale surveillance generating hard data for the performance of MDA programs and strategic planning when moving towards transmission-stop and elimination.
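
    The cost argument for pooling can be made concrete with generic two-stage (Dorfman-style) pooling arithmetic, sketched below. This is offered only to illustrate why larger pools pay off at low prevalence; it ignores the quantitative average-CAA read-out that the strategies above additionally exploit.

```python
def tests_per_person(prevalence, pool_size):
    """Expected number of tests per individual under simple two-stage pooling:
    every pool is tested once, and only positive pools are retested individually."""
    p_pool_positive = 1.0 - (1.0 - prevalence) ** pool_size
    return 1.0 / pool_size + p_pool_positive

# Best pool size and expected tests per person at three hypothetical prevalences.
for prev in (0.01, 0.05, 0.20):
    best = min(range(2, 51), key=lambda n: tests_per_person(prev, n))
    print(prev, best, round(tests_per_person(prev, best), 3))
```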

  9. In search of standards to support circularity in product policies: A systematic approach.

    PubMed

    Tecchio, Paolo; McAlister, Catriona; Mathieux, Fabrice; Ardente, Fulvio

    2017-12-01

    The aspiration of a circular economy is to shift material flows toward a zero waste and pollution production system. The process of shifting to a circular economy has been initiated by the European Commission in their action plan for the circular economy. The EU Ecodesign Directive is a key policy in this transition. However, to date the focus of access to market requirements on products has primarily been upon energy efficiency. The absence of adequate metrics and standards has been a key barrier to the inclusion of resource efficiency requirements. This paper proposes a framework to boost sustainable engineering and resource use by systematically identifying standardization needs and features. Standards can then support the setting of appropriate material efficiency requirements in EU product policy. Three high-level policy goals concerning material efficiency of products were identified: embodied impact reduction, lifetime extension and residual waste reduction. Through a lifecycle perspective, a matrix of interactions among material efficiency topics (recycled content, re-used content, relevant material content, durability, upgradability, reparability, re-manufacturability, reusability, recyclability, recoverability, relevant material separability) and policy goals was created. The framework was tested on case studies for electronic displays and washing machines. For potential material efficiency requirements, specific standardization needs were identified, such as adequate metrics for performance measurements, reliable and repeatable tests, and calculation procedures. The proposed novel framework aims to provide a method by which to identify key material efficiency considerations within the policy context, and to map out the generic and product-specific standardisation needs to support ecodesign. Via such an approach, many different stakeholders (industry, academics, policy makers, non-governmental organizations etc.) can be involved in material efficiency standards and regulations. Requirements and standards concerning material efficiency would compel product manufacturers, but also help designers and interested parties in addressing the sustainable resource use issue.

  10. Bandwidth-Efficient Communication through 225 MHz Ka-band Relay Satellite Channel

    NASA Technical Reports Server (NTRS)

    Downey, Joseph; Downey, James; Reinhart, Richard C.; Evans, Michael Alan; Mortensen, Dale John

    2016-01-01

    The communications and navigation space infrastructure of the National Aeronautics and Space Administration (NASA) consists of a constellation of relay satellites (called Tracking and Data Relay Satellites (TDRS)) and a global set of ground stations to receive and deliver data to researchers around the world from mission spacecraft throughout the solar system. Planning is underway to enhance and transform the infrastructure over the coming decade. Key to the upgrade will be the simultaneous and efficient use of relay transponders to minimize cost and operations while supporting science and exploration spacecraft. Efficient use of transponders necessitates bandwidth-efficient communications to best use and maximize data throughput within the allocated spectrum. Experiments conducted with NASA's Space Communication and Navigation (SCaN) Testbed on the International Space Station provide a unique opportunity to evaluate advanced communication techniques, such as bandwidth-efficient modulations, in an operational flight system. Demonstrations of these new techniques in realistic flight conditions provide critical experience and reduce the risk of using these techniques in future missions. Efficient use of spectrum is enabled by using high-order modulations coupled with efficient forward error correction codes. This paper presents a high-rate, bandwidth-efficient waveform operating over the 225 MHz Ka-band service of the TDRS System (TDRSS). The testing explores the application of Gaussian Minimum Shift Keying (GMSK), 2/4/8-phase shift keying (PSK) and 16/32-amplitude PSK (APSK) providing over three bits-per-second-per-Hertz (3 b/s/Hz) modulation combined with various LDPC encoding rates to maximize throughput. With a symbol rate of 200 Mbaud, coded data rates of 1000 Mbps were tested in the laboratory and up to 800 Mbps over the TDRS 225 MHz channel. This paper will present the high-rate waveform design, channel characteristics, performance results, compensation techniques for filtering and equalization, and architecture considerations going forward for efficient use of NASA's infrastructure.
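
    The headline numbers above (200 Mbaud, roughly 3 b/s/Hz, up to 800 Mbps in a 225 MHz allocation) follow from simple link arithmetic, sketched below; the 32-APSK/rate-0.8 combination is an illustrative pairing chosen to reproduce those figures, not necessarily the exact mode flown.

```python
def coded_throughput_mbps(symbol_rate_mbaud, bits_per_symbol, code_rate):
    """Back-of-the-envelope link arithmetic:
    information rate = symbol rate x bits per symbol x FEC code rate."""
    return symbol_rate_mbaud * bits_per_symbol * code_rate

# 200 Mbaud with 32-APSK (5 bits/symbol) and a rate-0.8 LDPC code gives 800 Mbps,
# i.e. about 3.6 b/s/Hz within a 225 MHz channel allocation.
rate = coded_throughput_mbps(200, 5, 0.8)
print(rate, rate / 225)
```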

  11. Bandwidth-Efficient Communication through 225 MHz Ka-band Relay Satellite Channel

    NASA Technical Reports Server (NTRS)

    Downey, Joseph A.; Downey, James M.; Reinhart, Richard C.; Evans, Michael A.; Mortensen, Dale J.

    2016-01-01

    The communications and navigation space infrastructure of the National Aeronautics and Space Administration (NASA) consists of a constellation of relay satellites (called Tracking and Data Relay Satellites (TDRS)) and a global set of ground stations to receive and deliver data to researchers around the world from mission spacecraft throughout the solar system. Planning is underway to enhance and transform the infrastructure over the coming decade. Key to the upgrade will be the simultaneous and efficient use of relay transponders to minimize cost and operations while supporting science and exploration spacecraft. Efficient use of transponders necessitates bandwidth-efficient communications to best use and maximize data throughput within the allocated spectrum. Experiments conducted with NASA's Space Communication and Navigation (SCaN) Testbed on the International Space Station provide a unique opportunity to evaluate advanced communication techniques, such as bandwidth-efficient modulations, in an operational flight system. Demonstrations of these new techniques in realistic flight conditions provide critical experience and reduce the risk of using these techniques in future missions. Efficient use of spectrum is enabled by using high-order modulations coupled with efficient forward error correction codes. This paper presents a high-rate, bandwidth-efficient waveform operating over the 225 MHz Ka-band service of the TDRS System (TDRSS). The testing explores the application of Gaussian Minimum Shift Keying (GMSK), 2/4/8-phase shift keying (PSK) and 16/32-amplitude PSK (APSK) providing over three bits-per-second-per-Hertz (3 b/s/Hz) modulation combined with various LDPC encoding rates to maximize throughput. With a symbol rate of 200 Mbaud, coded data rates of 1000 Mbps were tested in the laboratory and up to 800 Mbps over the TDRS 225 MHz channel. This paper will present the high-rate waveform design, channel characteristics, performance results, compensation techniques for filtering and equalization, and architecture considerations going forward for efficient use of NASA's infrastructure.

  12. Performance Evaluation Tests for Environmental Research (PETER): evaluation of 114 measures

    NASA Technical Reports Server (NTRS)

    Bittner, A. C. Jr; Carter, R. C.; Kennedy, R. S.; Harbeson, M. M.; Krause, M.

    1986-01-01

    The goal of the Performance Evaluation Tests for Environmental Research (PETER) Program was to identify a set of measures of human capabilities for use in the study of environmental and other time-course effects. 114 measures studied in the PETER Program were evaluated and categorized into four groups based upon task stability and task definition. The Recommended category contained 30 measures that clearly obtained total stabilization and had an acceptable level of reliability efficiency. The Acceptable-But-Redundant category contained 15 measures. The 37 measures in the Marginal category, which included an inordinate number of slope and other derived measures, usually had desirable features which were outweighed by faults. The 32 measures in the Unacceptable category had either differential instability or weak reliability efficiency. It is our opinion that the 30 measures in the Recommended category should be given first consideration for environmental research applications. Further, it is recommended that information pertaining to preexperimental practice requirements and stabilized reliabilities should be utilized in repeated-measures environmental studies.

  13. Efficient design of CMOS TSC checkers

    NASA Technical Reports Server (NTRS)

    Biddappa, Anita; Shamanna, Manjunath K.; Maki, Gary; Whitaker, Sterling

    1990-01-01

    This paper considers the design of an efficient, robustly testable, CMOS Totally Self-Checking (TSC) Checker for k-out-of-2k codes. Most existing implementations use primitive gates and assume the single stuck-at fault model. The self-testing property has been found to fail for CMOS TSC checkers under the stuck-open fault model due to timing skews and arbitrary delays in the circuit. A new four level design using CMOS primitive gates (NAND, NOR, INVERTERS) is presented. This design retains its properties under the stuck-open fault model. Additionally, this method offers an impressive reduction (greater than 70 percent) in gate count, gate inputs, and test set size when compared to the existing method. This implementation is easily realizable and is based on Anderson's technique. A thorough comparative study has been made on the proposed implementation and Kundu's implementation and the results indicate that the proposed one is better than Kundu's in all respects for k-out-of-2k codes.

  14. Advanced dc motor controller for battery-powered electric vehicles

    NASA Technical Reports Server (NTRS)

    Belsterling, C. A.

    1981-01-01

    A motor-generator set is connected to run from the dc source and generate a voltage in the traction motor armature circuit that normally opposes the source voltage. The functional feasibility of the concept is demonstrated with tests on a Proof of Principle System. An analog computer simulation is developed, validated with the results of the tests, and applied to predict the performance of a full-scale Functional Model dc Controller. The results indicate high efficiencies over wide operating ranges and exceptional recovery of regenerated energy. The new machine integrates both motor and generator on a single two-bearing shaft. The control strategy produces a controlled, bidirectional plus or minus 48 volt dc output from the generator, permitting full control of a 96 volt dc traction motor from a 48 volt battery; the system was designed to control a 20 hp traction motor. The controller weighs 63.5 kg (140 lb) and has a peak efficiency of 90% in random driving modes and 96% during the SAE J 227a/D driving cycle.

  15. Evaluation the course of the vehicle braking process in case of hydraulic circuit malfunction

    NASA Astrophysics Data System (ADS)

    Szczypiński-Sala, W.; Lubas, J.

    2016-09-01

    The paper discusses the results of research aimed at evaluating vehicle braking performance and the course of the braking process when a malfunction occurs in the hydraulic brake circuit. As part of the research, on-road tests were conducted. During the research, the deceleration of the vehicle when braking was measured with the use of a set of sensors placed along the parallel and perpendicular axes of the vehicle. All the tests were conducted on the same flat section of asphalt road with a wet surface. Conditions of diminished tire-to-road adhesion were chosen in order to force the activity of the anti-lock braking system. The research was conducted comparatively for the vehicle with an active anti-lock braking system and subsequently for the vehicle without the system. In both cases, the course of braking was evaluated both with an efficient braking system and with a dysfunctional hydraulic circuit.

  16. Is domestication driven by reduced fear of humans? Boldness, metabolism and serotonin levels in divergently selected red junglefowl (Gallus gallus).

    PubMed

    Agnvall, Beatrix; Katajamaa, Rebecca; Altimiras, Jordi; Jensen, Per

    2015-09-01

    Domesticated animals tend to develop a coherent set of phenotypic traits. Tameness could be a central underlying factor driving this, and we therefore selected red junglefowl, ancestors of all domestic chickens, for high or low fear of humans during six generations. We measured basal metabolic rate (BMR), feed efficiency, boldness in a novel object (NO) test, corticosterone reactivity and basal serotonin levels (related to fearfulness) in birds from the fifth and sixth generation of the high- and low-fear lines, respectively (44-48 individuals). Corticosterone response to physical restraint did not differ between selection lines. However, BMR was higher in low-fear birds, as was feed efficiency. Low-fear males had higher plasma levels of serotonin and both low-fear males and females were bolder in an NO test. The results show that many aspects of the domesticated phenotype may have developed as correlated responses to reduced fear of humans, an essential trait for successful domestication. © 2015 The Author(s).

  17. Gas engine heat recovery unit

    NASA Astrophysics Data System (ADS)

    Kubasco, A. J.

    1991-07-01

    The objective of the Gas Engine Heat Recovery Unit project was to design, fabricate, and test an efficient, compact, and corrosion-resistant heat recovery unit (HRU) for use on the exhaust of natural gas-fired reciprocating engine-generator sets in the 50-500 kW range. The HRU would be a core component of a factory pre-packaged cogeneration system designed around component optimization, reliability, and efficiency. The HRU uses finned, high-alloy stainless steel tubing wound into a compact helical coil heat exchanger. The corrosion resistance of the tubing allows more heat to be taken from the exhaust gas without fear of the effects of acid condensation. One HRU is currently installed in a cogeneration system at the Henry Ford Hospital Complex in Dearborn, Michigan. A second unit underwent successful endurance testing for 850 hours. The plan was to commercialize the HRU through its incorporation into a Caterpillar pre-packaged cogeneration system. Caterpillar is not proceeding with the concept at this time because of a downturn in the small-size cogeneration market.

  18. Efficient summary statistical representation when change localization fails.

    PubMed

    Haberman, Jason; Whitney, David

    2011-10-01

    People are sensitive to the summary statistics of the visual world (e.g., average orientation/speed/facial expression). We readily derive this information from complex scenes, often without explicit awareness. Given the fundamental and ubiquitous nature of summary statistical representation, we tested whether this kind of information is subject to the attentional constraints imposed by change blindness. We show that information regarding the summary statistics of a scene is available despite limited conscious access. In a novel experiment, we found that while observers can suffer from change blindness (i.e., not localize where change occurred between two views of the same scene), observers could nevertheless accurately report changes in the summary statistics (or "gist") about the very same scene. In the experiment, observers saw two successively presented sets of 16 faces that varied in expression. Four of the faces in the first set changed from one emotional extreme (e.g., happy) to another (e.g., sad) in the second set. Observers performed poorly when asked to locate any of the faces that changed (change blindness). However, when asked about the ensemble (which set was happier, on average), observer performance remained high. Observers were sensitive to the average expression even when they failed to localize any specific object change. That is, even when observers could not locate the very faces driving the change in average expression between the two sets, they nonetheless derived a precise ensemble representation. Thus, the visual system may be optimized to process summary statistics in an efficient manner, allowing it to operate despite minimal conscious access to the information presented.

  19. Performance of heated humidifiers with a heated wire according to ventilatory settings.

    PubMed

    Nishida, T; Nishimura, M; Fujino, Y; Mashimo, T

    2001-01-01

    Delivering warm, humidified gas to patients is important during mechanical ventilation. Heated humidifiers are effective and popular. The humidifying efficiency is influenced not only by the performance and settings of the devices but also by the ventilator settings. We compared the efficiency of humidifying devices with a heated wire and servo-controlled function under a variety of ventilator settings. A bench study was done with a TTL model lung. The study took place in the laboratory of the University Hospital, Osaka, Japan. Four devices (MR290 with MR730 and MR310 with MR730, both Fisher & Paykel; ConchaTherm IV, Hudson RCI; and Hummax II, METRAN) were tested. The Hummax II has been developed recently; it consists of a heated wire and polyethylene microporous hollow fiber. Both wire and fiber were placed inside the inspiratory circuit, and water vapor is delivered throughout the circuit. The Servo 300 was connected to the TTL with a standard ventilator circuit. The ventilator settings were as follows: minute ventilation (V(E)) of 5, 10, and 15 L/min, a respiratory rate of 10 breaths/min, I:E ratios of 1:1, 1:2, and 1:4, and no applied PEEP. Humidifying devices were set to maintain the temperature at the airway opening at 32 degrees C and 37 degrees C. The greater the V(E), the lower the humidity with all devices except the Hummax II. The Hummax II delivered 100% relative humidity at all ventilator and humidifier settings. When the airway temperature control of the devices was set at 32 degrees C, the ConchaTherm IV did not deliver 30 mg/L of vapor, which is the value recommended by American National Standards, at any V(E) setting. At V(E) settings of 10 and 15 L/min, the MR310 with MR730 did not deliver the recommended vapor either. In conclusion, the airway temperature setting of the humidifying devices greatly influenced the humidity of the inspiratory gas. Ventilator settings also influenced the humidity of the inspiratory gas. The Hummax II delivered sufficient water vapor under a variety of minute ventilation settings.

  20. Evaluating the Auto-MODS Assay, a Novel Tool for Tuberculosis Diagnosis for Use in Resource-Limited Settings

    PubMed Central

    Wang, Linwei; Mohammad, Sohaib H.; Li, Qiaozhi; Rienthong, Somsak; Rienthong, Dhanida; Nedsuwan, Supalert; Mahasirimongkol, Surakameth; Yasui, Yutaka

    2014-01-01

    There is an urgent need for simple, rapid, and affordable diagnostic tests for tuberculosis (TB) to combat the great burden of the disease in developing countries. The microscopic observation drug susceptibility assay (MODS) is a promising tool to fill this need, but it is not widely used due to concerns regarding its biosafety and efficiency. This study evaluated the automated MODS (Auto-MODS), which operates on principles similar to those of MODS but with several key modifications, making it an appealing alternative to MODS in resource-limited settings. In the operational setting of Chiang Rai, Thailand, we compared the performance of Auto-MODS with the gold standard liquid culture method in Thailand, mycobacterial growth indicator tube (MGIT) 960 plus the SD Bioline TB Ag MPT64 test, in terms of accuracy and efficiency in differentiating TB and non-TB samples as well as distinguishing TB and multidrug-resistant (MDR) TB samples. Sputum samples from clinically diagnosed TB and non-TB subjects across 17 hospitals in Chiang Rai were consecutively collected from May 2011 to September 2012. A total of 360 samples were available for evaluation, of which 221 (61.4%) were positive and 139 (38.6%) were negative for mycobacterial cultures according to MGIT 960. Of the 221 true-positive samples, Auto-MODS identified 212 as positive and 9 as negative (sensitivity, 95.9%; 95% confidence interval [CI], 92.4% to 98.1%). Of the 139 true-negative samples, Auto-MODS identified 135 as negative and 4 as positive (specificity, 97.1%; 95% CI, 92.8% to 99.2%). The median time to culture positivity was 10 days, with an interquartile range of 8 to 13 days for Auto-MODS. Auto-MODS is an effective and cost-sensitive alternative diagnostic tool for TB diagnosis in resource-limited settings. PMID:25378569
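
    The sensitivity and specificity figures quoted above follow directly from the reported counts. A minimal sketch of the calculation is shown below, using a Wilson score interval as an approximation; the abstract's intervals may have been computed with an exact method and can differ slightly.

```python
from math import sqrt
from scipy.stats import norm

def wilson_ci(k, n, alpha=0.05):
    """Wilson score confidence interval for a binomial proportion k/n."""
    z = norm.ppf(1 - alpha / 2)
    p = k / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

# Counts reported in the abstract (MGIT 960 as the reference standard).
lo, hi = wilson_ci(212, 221)
print(f"sensitivity = {212/221:.1%}, 95% CI ~ {lo:.1%}-{hi:.1%}")
lo, hi = wilson_ci(135, 139)
print(f"specificity = {135/139:.1%}, 95% CI ~ {lo:.1%}-{hi:.1%}")
```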

  1. A three-lead, programmable, and microcontroller-based electrocardiogram generator with frequency domain characteristics of heart rate variability.

    PubMed

    Wei, Ying-Chieh; Wei, Ying-Yu; Chang, Kai-Hsiung; Young, Ming-Shing

    2012-04-01

    The objective of this study is to design and develop a programmable electrocardiogram (ECG) generator with frequency domain characteristics of heart rate variability (HRV) which can be used to test the efficiency of ECG algorithms and to calibrate and maintain ECG equipment. We simplified and modified the three coupled ordinary differential equations in McSharry's model to a single differential equation to obtain the ECG signal. This system not only allows the signal amplitude, heart rate, QRS-complex slopes, and P- and T-wave position parameters to be adjusted, but can also be used to adjust the very low frequency, low frequency, and high frequency components of HRV frequency domain characteristics. The system can be tuned to function with HRV or not. When the HRV function is on, the average heart rate can be set to a value ranging from 20 to 122 beats per minute (BPM) with an adjustable variation of 1 BPM. When the HRV function is off, the heart rate can be set to a value ranging from 20 to 139 BPM with an adjustable variation of 1 BPM. The amplitude of the ECG signal can be set from 0.0 to 330 mV at a resolution of 0.005 mV. These parameters can be adjusted either via input through a keyboard or through a graphical user interface (GUI) control panel that was developed using LABVIEW. The GUI control panel depicts a preview of the ECG signal such that the user can adjust the parameters to establish a desired ECG morphology. A complete set of parameters can be stored in the flash memory of the system via a USB 2.0 interface. Our system can generate three different types of synthetic ECG signals for testing the efficiency of an ECG algorithm or calibrating and maintaining ECG equipment. © 2012 American Institute of Physics
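
    The dynamical-model approach described above can be illustrated with a much-simplified sketch: a phase variable advances at the chosen heart rate, and the ECG amplitude is formed from Gaussian bumps placed at the P, Q, R, S and T positions along the cycle. The wave parameters below are illustrative guesses, not the authors' values, and the HRV modulation is omitted.

```python
import numpy as np

# Illustrative P/Q/R/S/T parameters: angular position (rad), amplitude (mV), width (rad).
# These are assumed values for the sketch, not the parameters used by the authors.
WAVES = {
    "P": (-np.pi / 3, 0.15, 0.25),
    "Q": (-np.pi / 12, -0.10, 0.10),
    "R": (0.0, 1.20, 0.10),
    "S": (np.pi / 12, -0.25, 0.10),
    "T": (np.pi / 2, 0.35, 0.40),
}

def synthetic_ecg(heart_rate_bpm=60.0, duration_s=5.0, fs=500.0):
    """Generate a crude synthetic ECG by evaluating Gaussian bumps along a phase cycle."""
    t = np.arange(0.0, duration_s, 1.0 / fs)
    omega = 2.0 * np.pi * heart_rate_bpm / 60.0             # angular heart rate
    theta = np.mod(omega * t + np.pi, 2.0 * np.pi) - np.pi  # wrapped phase in (-pi, pi]
    z = np.zeros_like(t)
    for _, (theta_i, a_i, b_i) in WAVES.items():
        dtheta = np.mod(theta - theta_i + np.pi, 2.0 * np.pi) - np.pi
        z += a_i * np.exp(-dtheta**2 / (2.0 * b_i**2))
    return t, z

t, ecg = synthetic_ecg(heart_rate_bpm=72)
print(f"{len(t)} samples, peak amplitude {ecg.max():.2f} mV")
```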

  2. A three-lead, programmable, and microcontroller-based electrocardiogram generator with frequency domain characteristics of heart rate variability

    NASA Astrophysics Data System (ADS)

    Wei, Ying-Chieh; Wei, Ying-Yu; Chang, Kai-Hsiung; Young, Ming-Shing

    2012-04-01

    The objective of this study is to design and develop a programmable electrocardiogram (ECG) generator with frequency domain characteristics of heart rate variability (HRV) which can be used to test the efficiency of ECG algorithms and to calibrate and maintain ECG equipment. We simplified and modified the three coupled ordinary differential equations in McSharry's model to a single differential equation to obtain the ECG signal. This system not only allows the signal amplitude, heart rate, QRS-complex slopes, and P- and T-wave position parameters to be adjusted, but can also be used to adjust the very low frequency, low frequency, and high frequency components of HRV frequency domain characteristics. The system can be tuned to function with HRV or not. When the HRV function is on, the average heart rate can be set to a value ranging from 20 to 122 beats per minute (BPM) with an adjustable variation of 1 BPM. When the HRV function is off, the heart rate can be set to a value ranging from 20 to 139 BPM with an adjustable variation of 1 BPM. The amplitude of the ECG signal can be set from 0.0 to 330 mV at a resolution of 0.005 mV. These parameters can be adjusted either via input through a keyboard or through a graphical user interface (GUI) control panel that was developed using LABVIEW. The GUI control panel depicts a preview of the ECG signal such that the user can adjust the parameters to establish a desired ECG morphology. A complete set of parameters can be stored in the flash memory of the system via a USB 2.0 interface. Our system can generate three different types of synthetic ECG signals for testing the efficiency of an ECG algorithm or calibrating and maintaining ECG equipment.

  3. Measuring Efficiency in the Community College Sector. CCRC Working Paper No. 43

    ERIC Educational Resources Information Center

    Belfield, Clive R.

    2012-01-01

    Community colleges are increasingly being pressed to demonstrate efficiency and improve productivity even as these concepts are not clearly defined and require a significant set of assumptions to determine. This paper sets out a preferred economic definition of efficiency: fiscal and social cost per degree. It then assesses the validity of using…

  4. Highly Efficient Training, Refinement, and Validation of a Knowledge-based Planning Quality-Control System for Radiation Therapy Clinical Trials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Nan; Carmona, Ruben; Sirak, Igor

    Purpose: To demonstrate an efficient method for training and validation of a knowledge-based planning (KBP) system as a radiation therapy clinical trial plan quality-control system. Methods and Materials: We analyzed 86 patients with stage IB through IVA cervical cancer treated with intensity modulated radiation therapy at 2 institutions according to the standards of the INTERTECC (International Evaluation of Radiotherapy Technology Effectiveness in Cervical Cancer, National Clinical Trials Network identifier: 01554397) protocol. The protocol used a planning target volume and 2 primary organs at risk: pelvic bone marrow (PBM) and bowel. Secondary organs at risk were rectum and bladder. Initial unfiltered dose-volume histogram (DVH) estimation models were trained using all 86 plans. Refined training sets were created by removing sub-optimal plans from the unfiltered sample, and DVH estimation models were constructed by identifying 30 of 86 plans emphasizing PBM sparing (comparing protocol-specified dosimetric cutpoints V10 (percentage volume of PBM receiving at least 10 Gy dose) and V20 (percentage volume of PBM receiving at least 20 Gy dose) with unfiltered predictions) and another 30 of 86 plans emphasizing bowel sparing (comparing V40 (absolute volume of bowel receiving at least 40 Gy dose) and V45 (absolute volume of bowel receiving at least 45 Gy dose), 9 in common with the PBM set). To obtain deliverable KBP plans, refined models must inform patient-specific optimization objectives and/or priorities (an auto-planning “routine”). Four candidate routines emphasizing different tradeoffs were composed, and a script was developed to automatically re-plan multiple patients with each routine. After selection of the routine that best met protocol objectives in the 51-patient training sample (KBP-FINAL), protocol-specific DVH metrics and normal tissue complication probability were compared for original versus KBP-FINAL plans across the 35-patient validation set. Paired t tests were used to test differences between planning sets. Results: KBP-FINAL plans outperformed manual planning across the validation set in all protocol-specific DVH cutpoints. The mean normal tissue complication probability for gastrointestinal toxicity was lower for KBP-FINAL versus validation-set plans (48.7% vs 53.8%, P<.001). Similarly, the estimated mean white blood cell count nadir was higher (2.77 vs 2.49 k/mL, P<.001) with KBP-FINAL plans, indicating a lowered probability of hematologic toxicity. Conclusions: This work demonstrates that a KBP system can be efficiently trained and refined for use in radiation therapy clinical trials with minimal effort. This patient-specific plan quality control resulted in improvements on protocol-specific dosimetric endpoints.

  5. Drug residues in urban water: A database for ecotoxicological risk management.

    PubMed

    Destrieux, Doriane; Laurent, François; Budzinski, Hélène; Pedelucq, Julie; Vervier, Philippe; Gerino, Magali

    2017-12-31

    Human-use drug residues (DR) are only partially eliminated by waste water treatment plants (WWTPs), so that residual amounts can reach natural waters and cause environmental hazards. In order to properly manage these hazards in the aquatic environment, a database is made available that integrates the concentration ranges for DR which cause adverse effects for aquatic organisms, together with the temporal variations of the ecotoxicological risks. To implement this database for the ecotoxicological risk assessment (ERA database), the required information for each DR is the predicted no effect concentration (PNEC), along with the predicted environmental concentration (PEC). The risk assessment is based on the ratio between the PNECs and the PECs. Adverse effect data or PNECs have been found in the publicly available literature for 45 substances. These ecotoxicity test data have been extracted from 125 different sources. This ERA database contains 1157 adverse effect data and 287 PNECs. The efficiency of this ERA database was tested with a data set coming from a simultaneous survey of WWTPs and the natural environment. In this data set, 26 DR were searched for in two WWTPs and in the river. On five sampling dates, the concentrations measured in the river for 10 DR could pose environmental problems; 7 of these were measured only downstream of WWTP outlets. Compiling data from the scientific literature and from measurements in a single database, with units homogenised, facilitates the actual ecotoxicological risk assessment and may be useful for assessing risks from data arising in future field surveys. Moreover, the accumulation of a large ecotoxicity data set in a single database should not only improve knowledge of higher-risk molecules but also supply an objective tool to help the rapid and efficient evaluation of the risk. Copyright © 2017 Elsevier B.V. All rights reserved.
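
    The risk characterisation described above reduces to a simple ratio for each substance: a risk quotient RQ = PEC / PNEC (or measured concentration / PNEC), with RQ >= 1 flagging a potential environmental problem. The sketch below uses made-up substance names and concentrations purely for illustration; they are not values from the ERA database.

```python
# Minimal risk-quotient screening, RQ = PEC / PNEC.
# Substance names and concentrations are illustrative, not values from the ERA database.
pnec_ng_per_l = {"diclofenac": 50.0, "carbamazepine": 2500.0, "ibuprofen": 10.0}
pec_ng_per_l = {"diclofenac": 120.0, "carbamazepine": 300.0, "ibuprofen": 45.0}

for drug, pnec in pnec_ng_per_l.items():
    rq = pec_ng_per_l[drug] / pnec
    status = "potential risk" if rq >= 1.0 else "no concern at this level"
    print(f"{drug:14s} RQ = {rq:6.2f} -> {status}")
```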

  6. An Efficient Approach for the Development of Locus Specific Primers in Bread Wheat (Triticum aestivum L.) and Its Application to Re-Sequencing of Genes Involved in Frost Tolerance

    PubMed Central

    Babben, Steve; Perovic, Dragan; Koch, Michael; Ordon, Frank

    2015-01-01

    Recent declines in costs have accelerated the sequencing of many species with large genomes, including hexaploid wheat (Triticum aestivum L.). Although the draft sequence of bread wheat is known, it is still one of the major challenges to develop locus-specific primers suitable for use in marker-assisted selection procedures, due to the high homology of the three genomes. In this study we describe an efficient approach for the development of locus-specific primers comprising four steps, i.e. (i) identification of genomic and coding sequences (CDS) of candidate genes, (ii) intron- and exon-structure reconstruction, (iii) identification of wheat A, B and D sub-genome sequences and primer development based on sequence differences between the three sub-genomes, and (iv) testing of primers for functionality, correct size and localisation. This approach was applied to single-, low- and high-copy genes involved in frost tolerance in wheat. In summary, for 27 of these genes for which sequences were derived from Triticum aestivum, Triticum monococcum and Hordeum vulgare, a set of 119 primer pairs was developed and, after testing on Nulli-tetrasomic (NT) lines, a set of 65 primer pairs (54.6%), corresponding to 19 candidate genes, turned out to be specific. Out of these, a set of 35 fragments was selected for validation via Sanger amplicon re-sequencing. All fragments, with the exception of one, could be assigned to the original reference sequence. The approach presented here showed a much higher specificity in primer development in comparison to techniques used so far in bread wheat and can be applied to other polyploid species with a known draft sequence. PMID:26565976

  7. Efficiency assessment of bi-radiated screens and improved convective set of tubes during the modernization of PTVM-100 tower hot-water boiler based on controlled all-mode mathematic models of boilers on Boiler Designer software

    NASA Astrophysics Data System (ADS)

    Orumbayev, R. K.; Kibarin, A. A.; Khodanova, T. V.; Korobkov, M. S.

    2018-03-01

    This work analyses the technical performance of the PTVM-100 tower hot-water boiler when operating on gas and residual fuel oil. Testing showed that, owing to design deficiencies, the boiler cannot sustain long-term heat production when burning residual fuel oil. A short review of PTVM-100 hot-water boiler modernization is also given. Calculations based on controlled all-mode mathematical models of hot-water boilers in the BOILER DESIGNER software showed that modernizing the boiler with bi-radiated screens and a new convective set of tubes substantially lowers the temperature of the gases leaving the furnace and increases the reliability of boiler operation. The design changes to the boiler unit suggested by the authors of this work, along with increasing the boiler's operational reliability, also improve its heat production and raise its efficiency to 90.5% when operating on fuel oil with an outdoor installation option.

  8. Multi-Level Bitmap Indexes for Flash Memory Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Madduri, Kamesh; Canon, Shane

    2010-07-23

    Due to their low access latency, high read speed, and power-efficient operation, flash memory storage devices are rapidly emerging as an attractive alternative to traditional magnetic storage devices. However, tests show that the most efficient indexing methods are not able to take advantage of the flash memory storage devices. In this paper, we present a set of multi-level bitmap indexes that can effectively take advantage of flash storage devices. These indexing methods use coarsely binned indexes to answer queries approximately, and then use finely binned indexes to refine the answers. Our new methods read significantly lower volumes of data at the expense of an increased disk access count, thus taking full advantage of the improved read speed and low access latency of flash devices. To demonstrate the advantage of these new indexes, we measure their performance on a number of storage systems using a standard data warehousing benchmark called the Set Query Benchmark. We observe that multi-level strategies on flash drives are up to 3 times faster than traditional indexing strategies on magnetic disk drives.
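
    The two-level strategy described above — answer a range query approximately from coarsely binned bitmaps, then refine only the boundary bins — can be sketched in a few lines. The binning scheme and the refinement by candidate check below are assumptions made for illustration, not the implementation used in the paper.

```python
import numpy as np

class TwoLevelBitmapIndex:
    """Toy two-level binned bitmap index over integer values in [0, domain)."""

    def __init__(self, values, domain, coarse_width):
        self.values = np.asarray(values)
        self.coarse_width = coarse_width
        n_coarse = -(-domain // coarse_width)  # ceiling division
        # One boolean "bitmap" per coarse bin.
        self.coarse = [(self.values // coarse_width) == b for b in range(n_coarse)]

    def range_query(self, lo, hi):
        """Return a boolean mask of rows with lo <= value < hi."""
        hit = np.zeros(len(self.values), dtype=bool)
        for b, bitmap in enumerate(self.coarse):
            b_lo, b_hi = b * self.coarse_width, (b + 1) * self.coarse_width
            if lo <= b_lo and b_hi <= hi:
                hit |= bitmap                      # bin fully inside: coarse answer suffices
            elif b_hi > lo and b_lo < hi:
                # Boundary bin: refine candidate rows (stand-in for the fine bitmaps).
                cand = np.where(bitmap)[0]
                hit[cand] |= (self.values[cand] >= lo) & (self.values[cand] < hi)
        return hit

rng = np.random.default_rng(0)
idx = TwoLevelBitmapIndex(rng.integers(0, 100, size=1_000), domain=100, coarse_width=10)
print(int(idx.range_query(23, 57).sum()), "rows match")
```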

  9. Trends in computer applications in science assessment

    NASA Astrophysics Data System (ADS)

    Kumar, David D.; Helgeson, Stanley L.

    1995-03-01

    Seven computer applications to science assessment are reviewed. Conventional test administration includes record keeping, grading, and managing test banks. Multiple-choice testing involves forced selection of an answer from a menu, whereas constructed-response testing involves options for students to present their answers within a set standard deviation. Adaptive testing attempts to individualize the test to minimize the number of items and time needed to assess a student's knowledge. Figural response testing assesses science proficiency in pictorial or graphic mode and requires the student to construct a mental image rather than selecting a response from a multiple-choice menu. Simulations have been found useful for performance assessment on a large-scale basis in part because they make it possible to independently specify different aspects of a real experiment. An emerging approach to performance assessment is solution pathway analysis, which permits the analysis of the steps a student takes in solving a problem. Virtually all computer-based testing systems improve the quality and efficiency of record keeping and data analysis.

  10. Extending LMS to Support IRT-Based Assessment Test Calibration

    NASA Astrophysics Data System (ADS)

    Fotaris, Panagiotis; Mastoras, Theodoros; Mavridis, Ioannis; Manitsaris, Athanasios

    Developing unambiguous and challenging assessment material for measuring educational attainment is a time-consuming, labor-intensive process. As a result Computer Aided Assessment (CAA) tools are becoming widely adopted in academic environments in an effort to improve the assessment quality and deliver reliable results of examinee performance. This paper introduces a methodological and architectural framework which embeds a CAA tool in a Learning Management System (LMS) so as to assist test developers in refining items to constitute assessment tests. An Item Response Theory (IRT) based analysis is applied to a dynamic assessment profile provided by the LMS. Test developers define a set of validity rules for the statistical indices given by the IRT analysis. By applying those rules, the LMS can detect items with various discrepancies which are then flagged for review of their content. Repeatedly executing the aforementioned procedure can improve the overall efficiency of the testing process.
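
    The flagging step described above — apply validity rules to item statistics produced by an IRT analysis and mark discrepant items for content review — can be sketched as follows. The specific indices and thresholds are illustrative assumptions; an actual deployment would use whatever rules the test developers define in the LMS.

```python
from dataclasses import dataclass

@dataclass
class ItemStats:
    item_id: str
    discrimination: float   # 2PL "a" parameter
    difficulty: float       # 2PL "b" parameter
    point_biserial: float   # classical item-total correlation

def flag_items(items, a_min=0.5, b_abs_max=3.0, pb_min=0.2):
    """Return (item_id, reasons) for items violating the validity rules."""
    flagged = []
    for it in items:
        reasons = []
        if it.discrimination < a_min:
            reasons.append(f"low discrimination ({it.discrimination:.2f})")
        if abs(it.difficulty) > b_abs_max:
            reasons.append(f"extreme difficulty ({it.difficulty:.2f})")
        if it.point_biserial < pb_min:
            reasons.append(f"weak item-total correlation ({it.point_biserial:.2f})")
        if reasons:
            flagged.append((it.item_id, reasons))
    return flagged

items = [ItemStats("Q1", 1.1, 0.3, 0.45), ItemStats("Q2", 0.2, -3.4, 0.05)]
for item_id, reasons in flag_items(items):
    print(item_id, "->", "; ".join(reasons))
```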

  11. A method to screen and evaluate tissue adhesives for joint repair applications

    PubMed Central

    2012-01-01

    Background: Tissue adhesives are a useful means for various medical procedures. Since varying requirements mean that a single adhesive cannot meet all needs, bond strength testing remains one of the key applications used to screen for new products and study the influence of experimental variables. This study was conducted to develop an easy-to-use method to screen and evaluate tissue adhesives for tissue engineering applications. Method: Tissue grips were designed to facilitate the reproducible production of substrate tissue and adhesive strength measurements in universal testing machines. Porcine femoral condyles were used to generate osteochondral test tissue cylinders (substrates) of different shapes. Viability of substrates was tested using PI/FDA staining. Self-bonding properties were determined to examine reusability of substrates (n = 3). Serial measurements (n = 5) in different operation modes (OM) were performed to analyze the bonding strength of tissue adhesives in bone (OM-1) and cartilage tissue either in isolation (OM-2) or under specific requirements in joint repair, such as filling cartilage defects with clinically applied fibrin/PLGA-cell-transplants (OM-3) or tissues (OM-4). The efficiency of the method was determined on the basis of the adhesive properties of fibrin glue for different assembly times (30 s, 60 s). Seven randomly generated collagen formulations were analyzed to examine the potential of the method to identify new tissue adhesives. Results: Viability analysis of test tissue cylinders revealed vital cells (>80%) in cartilage components even 48 h post preparation. Reuse (n = 10) of test substrates did not significantly change adhesive characteristics. Adhesive strength of fibrin varied in different test settings (OM-1: 7.1 kPa, OM-2: 2.6 kPa, OM-3: 32.7 kPa, OM-4: 30.1 kPa) and increased with assembly time on average (2.4-fold). The screening of the different collagen formulations revealed a substance with significantly higher adhesive strength on cartilage (14.8 kPa) and bone tissue (11.8 kPa) compared to fibrin, and also considerable adhesive properties when filling defects with cartilage tissue (23.2 kPa). Conclusion: The method confirmed the adhesive properties of fibrin and demonstrated the dependence of adhesive properties on the applied settings. Furthermore, the method was suitable to screen for potential adhesives and to identify a promising candidate for cartilage and bone applications. The method can offer simple, replicable and efficient evaluation of adhesive properties in ex vivo specimens and may be a useful supplement to existing methods in clinically relevant settings. PMID:22984926

  12. On-board computer progress in development of A 310 flight testing program

    NASA Technical Reports Server (NTRS)

    Reau, P.

    1981-01-01

    Onboard computer progress in development of an Airbus A 310 flight testing program is described. Minicomputers were installed onboard three A 310 airplanes in 1979 in order to: (1) assure the flight safety by exercising a limit check of a given set of parameters; (2) improve the efficiency of flight tests and allow cost reduction; and (3) perform test analysis on an external basis by utilizing onboard flight types. The following program considerations are discussed: (1) conclusions based on simulation of an onboard computer system; (2) brief descriptions of A 310 airborne computer equipment, specifically the onboard universal calculator (CUB) consisting of a ROLM 1666 system and visualization system using an AFIGRAF CRT; (3) the ground system and flight information inputs; and (4) specifications and execution priorities for temporary and permanent programs.

  13. Toxcast and the Use of Human Relevant In Vitro Exposures ...

    EPA Pesticide Factsheets

    The path for incorporating new approach methods and technologies into quantitative chemical risk assessment poses a diverse set of scientific challenges. These challenges include sufficient coverage of toxicological mechanisms to meaningfully interpret negative test results, development of increasingly relevant test systems, computational modeling to integrate experimental data, putting results in a dose and exposure context, characterizing uncertainty, and efficient validation of the test systems and computational models. The presentation will cover progress at the U.S. EPA in systematically addressing each of these challenges and delivering more human-relevant risk-based assessments. This abstract does not necessarily reflect U.S. EPA policy. Presentation at the British Toxicological Society Annual Congress on ToxCast and the Use of Human Relevant In Vitro Exposures: Incorporating high-throughput exposure and toxicity testing data for 21st century risk assessments .

  14. Experimental comparisons of hypothesis test and moving average based combustion phase controllers.

    PubMed

    Gao, Jinwu; Wu, Yuhu; Shen, Tielong

    2016-11-01

    For engine control, combustion phase is the most effective and direct parameter for improving fuel efficiency. In this paper, a statistical control strategy based on a hypothesis-test criterion is discussed. Taking the location of peak pressure (LPP) as the combustion phase indicator, the statistical model of LPP is first proposed, and then the controller design method is discussed on the basis of both Z and T tests. For comparison, a moving-average-based control strategy is also presented and implemented in this study. The experiments on a spark ignition gasoline engine at various operating conditions show that the hypothesis test based controller is able to regulate LPP close to the set point while maintaining a rapid transient response, and the variance of LPP is also well constrained. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
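
    A minimal sketch of the hypothesis-test idea is given below: LPP is sampled over a window of combustion cycles, a one-sample Z test is run against the set point, and the spark-timing command is only corrected when the deviation is statistically significant. The window size, assumed LPP standard deviation and gain are illustrative values, not those used in the paper.

```python
import math
from scipy.stats import norm

def z_test_lpp_correction(lpp_samples, setpoint_deg, sigma_deg=2.0,
                          alpha=0.05, gain=0.5):
    """Return a spark-timing correction (deg) only if the mean LPP deviates
    significantly from the set point under a one-sample Z test."""
    n = len(lpp_samples)
    mean_lpp = sum(lpp_samples) / n
    z = (mean_lpp - setpoint_deg) / (sigma_deg / math.sqrt(n))
    p_value = 2.0 * (1.0 - norm.cdf(abs(z)))
    if p_value < alpha:                      # deviation is significant: act on it
        return -gain * (mean_lpp - setpoint_deg)
    return 0.0                               # otherwise hold the current timing

# Example: LPP (deg ATDC) over a 20-cycle window, target 14 deg ATDC.
window = [15.8, 16.2, 14.9, 15.5, 16.0, 15.1, 15.7, 16.4, 15.3, 15.9,
          16.1, 15.0, 15.6, 16.3, 15.2, 15.8, 16.0, 15.4, 15.7, 16.2]
print(f"correction = {z_test_lpp_correction(window, setpoint_deg=14.0):+.2f} deg")
```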

  15. Periodical capacity setting methods for make-to-order multi-machine production systems

    PubMed Central

    Altendorfer, Klaus; Hübl, Alexander; Jodlbauer, Herbert

    2014-01-01

    The paper presents different periodical capacity setting methods for make-to-order, multi-machine production systems with stochastic customer required lead times and stochastic processing times to improve service level and tardiness. These methods are developed as decision support when capacity flexibility exists, such as a certain range of possible working hours per week. The methods differ in the amount of information used, whereby all are based on the cumulated capacity demand at each machine. In a simulation study, the methods' impact on service level and tardiness is compared to a constant provided capacity for a single- and a multi-machine setting. It is shown that the tested capacity setting methods can lead to an increase in service level and a decrease in average tardiness in comparison to a constant provided capacity. The methods using information on the processing time and customer required lead time distributions perform best. The results found in this paper can help practitioners to make efficient use of their flexible capacity. PMID:27226649

  16. Atomic Cholesky decompositions: a route to unbiased auxiliary basis sets for density fitting approximation with tunable accuracy and efficiency.

    PubMed

    Aquilante, Francesco; Gagliardi, Laura; Pedersen, Thomas Bondo; Lindh, Roland

    2009-04-21

    Cholesky decomposition of the atomic two-electron integral matrix has recently been proposed as a procedure for automated generation of auxiliary basis sets for the density fitting approximation [F. Aquilante et al., J. Chem. Phys. 127, 114107 (2007)]. In order to increase computational performance while maintaining accuracy, we propose here to reduce the number of primitive Gaussian functions of the contracted auxiliary basis functions by means of a second Cholesky decomposition. Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible. We also demonstrate theoretically as well as computationally that the locality of the fitting coefficients can be controlled by means of the decomposition threshold even with the long-ranged Coulomb metric. Cholesky decomposition-based auxiliary basis sets are thus ideally suited for local density fitting approximations.
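
    The core numerical ingredient referenced above is a threshold-controlled (pivoted) Cholesky decomposition, which retains only as many Cholesky vectors as are needed to reproduce a positive semidefinite matrix to a chosen accuracy. The sketch below shows the generic algorithm on an arbitrary symmetric positive semidefinite matrix; it is not the authors' atomic-integral implementation.

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-8):
    """Incomplete Cholesky of a symmetric PSD matrix M with full pivoting.

    Returns L (n x k) with M ~ L @ L.T to within `tol` on the diagonal,
    plus the list of pivot indices chosen."""
    M = np.array(M, dtype=float)
    d = np.diag(M).copy()            # remaining diagonal (decomposition error)
    cols, pivots = [], []
    while True:
        j = int(np.argmax(d))
        if d[j] <= tol:
            break
        # New Cholesky vector from column j of the current Schur complement.
        col = M[:, j].copy()
        for c in cols:
            col -= c * c[j]
        col /= np.sqrt(d[j])
        d -= col**2
        cols.append(col)
        pivots.append(j)
    return np.column_stack(cols), pivots

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))
M = A @ A.T                          # rank-3 PSD test matrix
L, piv = pivoted_cholesky(M, tol=1e-10)
print("rank found:", L.shape[1], "max error:", np.abs(M - L @ L.T).max())
```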

  17. Atomic Cholesky decompositions: A route to unbiased auxiliary basis sets for density fitting approximation with tunable accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Aquilante, Francesco; Gagliardi, Laura; Pedersen, Thomas Bondo; Lindh, Roland

    2009-04-01

    Cholesky decomposition of the atomic two-electron integral matrix has recently been proposed as a procedure for automated generation of auxiliary basis sets for the density fitting approximation [F. Aquilante et al., J. Chem. Phys. 127, 114107 (2007)]. In order to increase computational performance while maintaining accuracy, we propose here to reduce the number of primitive Gaussian functions of the contracted auxiliary basis functions by means of a second Cholesky decomposition. Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible. We also demonstrate theoretically as well as computationally that the locality of the fitting coefficients can be controlled by means of the decomposition threshold even with the long-ranged Coulomb metric. Cholesky decomposition-based auxiliary basis sets are thus ideally suited for local density fitting approximations.

  18. Filter Efficiency and Leak Testing of Returned ISS Bacterial Filter Elements After 2.5 Years of Continuous Operation

    NASA Technical Reports Server (NTRS)

    Green, Robert D.; Agui, Juan H.; Berger, Gordon M.; Vijayakumar, R.; Perry, Jay L.

    2016-01-01

    The atmosphere revitalization equipment aboard the International Space Station (ISS) and future deep space exploration vehicles provides the vital functions of maintaining a habitable environment for the crew as well as protecting the hardware from fouling by suspended particulate matter. Providing these functions is challenging in pressurized spacecraft cabins because no outside air ventilation is possible and a larger particulate load is imposed on the filtration system due to the lack of sedimentation in reduced gravity conditions. The ISS Environmental Control and Life Support (ECLS) system architecture in the U.S. Segment uses a distributed particulate filtration approach consisting of traditional High-Efficiency Particulate Adsorption (HEPA) filters deployed at multiple locations in each module. These filters are referred to as Bacteria Filter Elements (BFEs). As more experience has been gained with ISS operations, the BFE service life, which was initially one year, has been extended to two to five years, depending on the location in the U.S. Segment. In previous work we developed a test facility and test protocol for leak testing the ISS BFEs. For this work, we present results of leak testing a sample set of returned BFEs with a service life of 2.5 years, along with particulate removal efficiency and pressure drop measurements. The results can potentially be utilized by the ISS Program to ascertain whether the present replacement interval can be maintained or extended to balance the on-ground filter inventory with the extension of the lifetime of ISS to 2024. These results can also provide meaningful guidance for particulate filter designs under consideration for future deep space exploration missions.

  19. High-Frequency Dusting Versus Conventional Holmium Laser Lithotripsy for Intrarenal and Ureteral Calculi.

    PubMed

    Li, Roger; Ruckle, David; Keheila, Mohamed; Maldonado, Jonathan; Lightfoot, Michelle; Alsyouf, Muhannad; Yeo, Alexander; Abourbih, Samuel R; Olgin, Gaudencio; Arenas, Javier L; Baldwin, D Duane

    2017-03-01

    The efficiency of holmium laser lithotripsy for urolithiasis depends upon several factors, including laser pulse energy and frequency and stone composition and retropulsion. This study investigates the complex interplay between these factors and quantifies lithotripsy efficiency using different laser settings in a benchtop kidney and ureter model. In vitro caliceal and ex vivo porcine ureteral models were constructed. Calcium oxalate monohydrate stones were fragmented using a 200-μm laser fiber. In the caliceal model, stone fragmentation and vaporization rates at settings of 0.6 J/5 Hz, 0.2 J/15 Hz, and 0.2 J/50 Hz were compared. In the ureteral model, fragmentation time, retropulsion rate, fragmentation rate, and fragmented stone weight were compared at settings of 0.6 J/5 Hz and 0.2 J/15 Hz. Retropulsive forces generated at 0.6 J/5 Hz, 0.2 J/15 Hz, and 0.2 J/50 Hz settings were compared. Analysis was performed using Student's t-test and one-way ANOVA. In the caliceal model, the 0.6 J/5 Hz setting fragmented and vaporized stones at a higher rate than the 0.2 J/15 Hz setting (0.072 vs. 0.049 mg/s; p < 0.001). However, when the 0.2 J energy setting was combined with the 50 Hz frequency, the fragmentation rate (0.069 mg/s) was similar to the fragmentation rate at 0.6 J/5 Hz (0.072 mg/s; p = 0.677). In the ureteral model, the 0.6 J/5 Hz setting produced higher fragmentation rates (0.089 vs. 0.049 mg/s; p < 0.001), but resulted in significantly lower fragmented stone weight overall (16.815 vs. 25.485 mg; p = 0.009) due to higher retropulsion rates (0.732 vs. 0.213 mm/s; p < 0.001). Retropulsive forces decreased significantly when pulse energy decreased from 0.6 to 0.2 J (0.907 vs. 0.223 N; p < 0.001). Frequency did not affect retropulsive force at 15 and 50 Hz settings (0.223 vs. 0.288 N; p = 0.509). Laser lithotripsy of calcium oxalate monohydrate stones in the ureter should be performed using the low-energy, moderate-frequency dusting setting to minimize retropulsion and maximize efficiency. In the renal calix, the low-energy high-frequency setting performed similarly to the high-energy low-frequency setting.

  20. The effect of physician practice organization on efficient utilization of hospital resources.

    PubMed Central

    Burns, L R; Chilingerian, J A; Wholey, D R

    1994-01-01

    OBJECTIVE. This study examines variations in the efficient use of hospital resources across individual physicians. DATA SOURCES AND SETTING. The study is conducted over a two-year period (1989-1990) in all short-term general hospitals with 50 or more beds in Arizona. We examine hospital discharge data for 43,625 women undergoing cesarean sections and vaginal deliveries without complications. These data include physician identifiers that permit us to link patient information with information on physicians provided by the state medical association. STUDY DESIGN. The study first measures the contribution of physician characteristics to the explanatory power of regression models that predict resource use. It then tests hypothesized effects on resource utilization exerted by two sets of physician level factors: physician background and physician practice organization. The latter includes effects of hospital practice volume, concentration of hospital practice, percent managed care patients in one's hospital practice, and diversity of patients treated. Efficiency (inefficiency) is measured as the degree of variation in patient charges and length of stay below (above) the average of treating all patients with the same condition in the same hospital in the same year with the same severity of illness, controlling for discharge status and the presence of complications. PRINCIPAL FINDINGS. After controlling for patient factors, physician characteristics explain a significant amount of the variability in hospital charges and length of stay in the two maternity conditions. Results also support hypotheses that efficiency is influenced by practice organization factors such as patient volume and managed care load. Physicians with larger practices and a higher share of managed care patients appear to be more efficient. CONCLUSIONS. The results suggest that health care reform efforts to develop physician-hospital networks and managed competition may promote greater parsimony in physicians' practice behavior. PMID:8002351

  1. Landmark Estimation of Survival and Treatment Effect in a Randomized Clinical Trial

    PubMed Central

    Parast, Layla; Tian, Lu; Cai, Tianxi

    2013-01-01

    Summary In many studies with a survival outcome, it is often not feasible to fully observe the primary event of interest. This often leads to heavy censoring and thus, difficulty in efficiently estimating survival or comparing survival rates between two groups. In certain diseases, baseline covariates and the event time of non-fatal intermediate events may be associated with overall survival. In these settings, incorporating such additional information may lead to gains in efficiency in estimation of survival and testing for a difference in survival between two treatment groups. If gains in efficiency can be achieved, it may then be possible to decrease the sample size of patients required for a study to achieve a particular power level or decrease the duration of the study. Most existing methods for incorporating intermediate events and covariates to predict survival focus on estimation of relative risk parameters and/or the joint distribution of events under semiparametric models. However, in practice, these model assumptions may not hold and hence may lead to biased estimates of the marginal survival. In this paper, we propose a semi-nonparametric two-stage procedure to estimate and compare t-year survival rates by incorporating intermediate event information observed before some landmark time, which serves as a useful approach to overcome semi-competing risks issues. In a randomized clinical trial setting, we further improve efficiency through an additional calibration step. Simulation studies demonstrate substantial potential gains in efficiency in terms of estimation and power. We illustrate our proposed procedures using an AIDS Clinical Trial Protocol 175 dataset by estimating survival and examining the difference in survival between two treatment groups: zidovudine and zidovudine plus zalcitabine. PMID:24659838

  2. Neural networks for computer-aided diagnosis: detection of lung nodules in chest radiograms.

    PubMed

    Coppini, Giuseppe; Diciotti, Stefano; Falchini, Massimo; Villari, Natale; Valli, Guido

    2003-12-01

    The paper describes a neural-network-based system for the computer aided detection of lung nodules in chest radiograms. Our approach is based on multiscale processing and artificial neural networks (ANNs). The problem of nodule detection is faced by using a two-stage architecture including: 1) an attention focusing subsystem that processes whole radiographs to locate possible nodular regions ensuring high sensitivity; 2) a validation subsystem that processes regions of interest to evaluate the likelihood of the presence of a nodule, so as to reduce false alarms and increase detection specificity. Biologically inspired filters (both LoG and Gabor kernels) are used to enhance salient image features. ANNs of the feedforward type are employed, which allow an efficient use of a priori knowledge about the shape of nodules, and the background structure. The images from the public JSRT database, including 247 radiograms, were used to build and test the system. We performed a further test by using a second private database with 65 radiograms collected and annotated at the Radiology Department of the University of Florence. Both data sets include nodule and nonnodule radiographs. The use of a public data set along with independent testing with a different image set makes the comparison with other systems easier and allows a deeper understanding of system behavior. Experimental results are described by ROC/FROC analysis. For the JSRT database, we observed that by varying sensitivity from 60 to 75% the number of false alarms per image lies in the range 4-10, while accuracy is in the range 95.7-98.0%. When the second data set was used comparable results were obtained. The observed system performances support the undertaking of system validation in clinical settings.
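
    The attention-focusing stage described above relies on biologically inspired filters to enhance round, nodule-like structures before the ANN stages. A minimal sketch of the Laplacian-of-Gaussian part of that idea is given below; the scales and threshold are illustrative assumptions, and the real system feeds such responses into feedforward ANNs rather than thresholding them directly.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def nodule_candidates(image, sigmas=(2.0, 4.0, 8.0), percentile=99.5):
    """Multi-scale LoG enhancement: bright blobs give strong negative LoG responses."""
    image = np.asarray(image, dtype=float)
    # Scale-normalised responses; keep the strongest (most negative) over scales.
    responses = np.stack([sigma**2 * gaussian_laplace(image, sigma) for sigma in sigmas])
    blobness = -responses.min(axis=0)          # large where blob-like structures sit
    threshold = np.percentile(blobness, percentile)
    return blobness > threshold                # boolean mask of candidate pixels

# Synthetic example: a faint Gaussian "nodule" on a noisy background.
yy, xx = np.mgrid[0:256, 0:256]
img = 0.3 * np.exp(-((xx - 128)**2 + (yy - 96)**2) / (2 * 5.0**2))
img += 0.05 * np.random.default_rng(0).standard_normal(img.shape)
mask = nodule_candidates(img)
print("candidate pixels:", int(mask.sum()))
```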

  3. FOCCoS for Subaru PFS

    NASA Astrophysics Data System (ADS)

    Cesar de Oliveira, Antonio; Souza de Oliveira, Ligia; de Arruda, Marcio V.; Bispo dos Santos, Jesulino; Souza Marrara, Lucas; Bawden de Paula Macanhan, Vanessa; Batista de Carvalho Oliveira, João.; de Paiva Vilaça, Rodrigo; Dominici, Tania P.; Sodré, Laerte; Mendes de Oliveira, Claudia; Karoji, Hiroshi; Sugai, Hajime; Shimono, Atsushi; Tamura, Naoyuki; Takato, Naruhisa; Ueda, Akitoshi

    2012-09-01

    The Fiber Optical Cable and Connector System (FOCCoS) provides optical connection between 2400 positioners and a set of spectrographs by an optical fiber cable, as part of the Subaru PFS instrument. Each positioner holds one fiber entrance attached to a microlens, which is responsible for transforming the F-ratio into a larger one so that the difficulties of spectrograph design are eased. The optical fiber cable will be segmented into 3 parts along its length, cable A, cable B and cable C, connected by a set of multi-fiber connectors. Cable B will be permanently attached to the Subaru telescope. The first set of multi-fiber connectors will connect cable A to cable C from the spectrograph system at the Nasmyth platform. Cable A is an extension of a pseudo-slit device obtained with the linear disposition of the extremities of the optical fibers, fixed by epoxy to a base of composite substrate. The second set of multi-fiber connectors will connect the other extremity of cable A to cable B, which is part of the positioner's device structure. The optical fiber under study for this project is the Polymicro FBP120170190, which has shown very encouraging results. The tests involve FRD measurements caused by stress induced by rotation and twist of the fiber extremity, under conditions similar to those produced by the positioners of the PFS instrument. The multi-fiber connector under study is produced by the USCONEC Company and can connect 32 optical fibers. The tests also involve throughput of light and stability after many connections and disconnections. This paper will review the general design of the FOCCoS subsystem, the methods used to fabricate the devices involved, and the test results necessary to evaluate the total efficiency of the set.

  4. Fast structure similarity searches among protein models: efficient clustering of protein fragments

    PubMed Central

    2012-01-01

    Background: For many predictive applications a large number of models is generated and later clustered in subsets based on structure similarity. In most clustering algorithms an all-vs-all root mean square deviation (RMSD) comparison is performed. Most of the time is typically spent on comparison of non-similar structures. For sets with more than, say, 10,000 models this procedure is very time-consuming, and alternative faster algorithms, restricting comparisons only to most similar structures, would be useful. Results: We exploit the inverse triangle inequality on the RMSD between two structures given the RMSDs with a third structure. The lower bound on RMSD may be used, when restricting the search of similarity to a reasonably low RMSD threshold value, to speed up similarity searches significantly. Tests are performed on large sets of decoys which are widely used as test cases for predictive methods, with a speed-up of up to 100 times with respect to all-vs-all comparison, depending on the set and parameters used. Sample applications are shown. Conclusions: The algorithm presented here allows fast comparison of large data sets of structures with limited memory requirements. As an example of application we present clustering of more than 100,000 fragments of length 5 from the top500H dataset into a few hundred representative fragments. A more realistic scenario is provided by the search of similarity within the very large decoy sets used for the tests. Other applications regard filtering nearly-identical conformations in selected CASP9 datasets and clustering molecular dynamics snapshots. Availability: A linux executable and a Perl script with examples are given in the supplementary material (Additional file 1). The source code is available upon request from the authors. PMID:22642815
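
    The key speed-up described above follows from the (inverse) triangle inequality: if the RMSDs of structures a and b to a common reference c are known, then RMSD(a, b) >= |RMSD(a, c) - RMSD(b, c)|, so pairs whose lower bound already exceeds the clustering threshold never need an explicit superposition. The sketch below shows only this pruning logic, with a plain coordinate RMSD as a placeholder; it is not the paper's full clustering code.

```python
import numpy as np

def rmsd(a, b):
    """Placeholder: plain coordinate RMSD without superposition, for illustration only."""
    return float(np.sqrt(np.mean(np.sum((a - b)**2, axis=1))))

def neighbours_within(structures, threshold):
    """Find all pairs with RMSD below `threshold`, pruning with the triangle inequality."""
    ref = structures[0]                                   # reference structure c
    d_ref = [rmsd(s, ref) for s in structures]            # precomputed RMSDs to c
    pairs, full_evals = [], 0
    for i in range(len(structures)):
        for j in range(i + 1, len(structures)):
            if abs(d_ref[i] - d_ref[j]) > threshold:      # lower bound already too large
                continue
            full_evals += 1
            if rmsd(structures[i], structures[j]) <= threshold:
                pairs.append((i, j))
    return pairs, full_evals

rng = np.random.default_rng(0)
structs = [rng.standard_normal((5, 3)) for _ in range(50)]
pairs, evals = neighbours_within(structs, threshold=1.0)
print(len(pairs), "pairs found with", evals, "explicit RMSD evaluations")
```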

  5. Integrating Science and Engineering to Implement Evidence-Based Practices in Health Care Settings.

    PubMed

    Wu, Shinyi; Duan, Naihua; Wisdom, Jennifer P; Kravitz, Richard L; Owen, Richard R; Sullivan, J Greer; Wu, Albert W; Di Capua, Paul; Hoagwood, Kimberly Eaton

    2015-09-01

    Integrating two distinct and complementary paradigms, science and engineering, may produce more effective outcomes for the implementation of evidence-based practices in health care settings. Science formalizes and tests innovations, whereas engineering customizes and optimizes how the innovation is applied, tailoring it to accommodate local conditions. Together they may accelerate the creation of an evidence-based healthcare system that works effectively in specific health care settings. We give examples of applying engineering methods for better-quality, more efficient, and safer implementation of clinical practices, medical devices, and health services systems. A specific example was applying systems engineering design that orchestrated people, process, data, decision-making, and communication through a technology application to implement evidence-based depression care among low-income patients with diabetes. We recommend that leading journals recognize the fundamental role of engineering in implementation research, to improve understanding of design elements that create a better fit between program elements and local context.

  6. Standardized development of computer software. Part 1: Methods

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.

  7. Efficient G0W0 using localized basis sets: a benchmark for molecules

    NASA Astrophysics Data System (ADS)

    Koval, Petr; Per Ljungberg, Mathias; Sanchez-Portal, Daniel

    Electronic structure calculations within Hedin's GW approximation are becoming increasingly accessible to the community. In particular, as it has been shown earlier and we confirm by calculations using our MBPT_LCAO package, the computational cost of the so-called G0W0 can be made comparable to the cost of a regular Hartree-Fock calculation. In this work, we study the performance of our new implementation of G0W0 to reproduce the ionization potentials of all 117 closed-shell molecules belonging to the G2/97 test set, using a pseudo-potential starting point provided by the popular density-functional package SIESTA. Moreover, the ionization potentials and electron affinities of a set of 24 acceptor molecules are compared to experiment and to reference all-electron calculations. PK: Guipuzcoa Fellow; PK,ML,DSP: Deutsche Forschungsgemeinschaft (SFB1083); PK,DSP: MINECO MAT2013-46593-C6-2-P.

  8. Fuzzy logic controllers for electrotechnical devices - On-site tuning approach

    NASA Astrophysics Data System (ADS)

    Hissel, D.; Maussion, P.; Faucher, J.

    2001-12-01

    Fuzzy logic nowadays offers an interesting alternative to designers of non-linear control laws for electrical or electromechanical systems. However, due to the large number of tuning parameters, this kind of control is only used in a few industrial applications. This paper proposes a new, very simple, on-site tuning strategy for a PID-like fuzzy logic controller. Using the experimental designs methodology, we propose sets of optimized pre-established settings for this kind of fuzzy controller. The proposed parameters depend only on a single on-site open-loop identification test. In this way, this on-site tuning methodology is comparable to the Ziegler-Nichols method for conventional controllers. Experimental results (on a permanent magnet synchronous motor and on a DC/DC converter) underline the efficiency of this tuning methodology. Finally, the field of validity of the proposed pre-established settings is given.

  9. An extension of the directed search domain algorithm to bilevel optimization

    NASA Astrophysics Data System (ADS)

    Wang, Kaiqiang; Utyuzhnikov, Sergey V.

    2017-08-01

    A method is developed for generating a well-distributed Pareto set for the upper level in bilevel multiobjective optimization. The approach is based on the Directed Search Domain (DSD) algorithm, which is a classical approach for generation of a quasi-evenly distributed Pareto set in multiobjective optimization. The approach contains a double-layer optimizer designed in a specific way under the framework of the DSD method. The double-layer optimizer is based on bilevel single-objective optimization and aims to find a unique optimal Pareto solution rather than generate the whole Pareto frontier on the lower level in order to improve the optimization efficiency. The proposed bilevel DSD approach is verified on several test cases, and a relevant comparison against another classical approach is made. It is shown that the approach can generate a quasi-evenly distributed Pareto set for the upper level with relatively low time consumption.

  10. A semiparametric graphical modelling approach for large-scale equity selection

    PubMed Central

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption. PMID:28316507

  11. All the noncontextuality inequalities for arbitrary prepare-and-measure experiments with respect to any fixed set of operational equivalences

    NASA Astrophysics Data System (ADS)

    Schmid, David; Spekkens, Robert W.; Wolfe, Elie

    2018-06-01

    Within the framework of generalized noncontextuality, we introduce a general technique for systematically deriving noncontextuality inequalities for any experiment involving finitely many preparations and finitely many measurements, each of which has a finite number of outcomes. Given any fixed sets of operational equivalences among the preparations and among the measurements as input, the algorithm returns a set of noncontextuality inequalities whose satisfaction is necessary and sufficient for a set of operational data to admit of a noncontextual model. Additionally, we show that the space of noncontextual data tables always defines a polytope. Finally, we provide a computationally efficient means for testing whether any set of numerical data admits of a noncontextual model, with respect to any fixed operational equivalences. Together, these techniques provide complete methods for characterizing arbitrary noncontextuality scenarios, both in theory and in practice. Because a quantum prepare-and-measure experiment admits of a noncontextual model if and only if it admits of a positive quasiprobability representation, our techniques also determine the necessary and sufficient conditions for the existence of such a representation.

  12. Quiet, Efficient Fans for Spaceflight: An Overview of NASA's Technology Development Plan

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle

    2010-01-01

    A Technology Development Plan to improve the aerodynamic and acoustic performance of spaceflight fans has been submitted to NASA's Exploration Technology Development Program. The plan describes a research program intended to make broader use of the technology developed at NASA Glenn to increase the efficiency and reduce the noise of aircraft engine fans. The goal is to develop a set of well-characterized government-owned fans nominally suited for spacecraft ventilation and cooling systems. NASA's Exploration Life Support community will identify design point conditions for the fans in this study. Computational Fluid Dynamics codes will be used in the design and analysis process. The fans will be built and used in a series of tests. Data from aerodynamic and acoustic performance tests will be used to validate performance predictions. These performance maps will also be entered into a database to help spaceflight fan system developers make informed design choices. Velocity measurements downstream of fan rotor blades and stator vanes will also be collected and used for code validation. Details of the fan design, analysis, and testing will be publicly reported. With access to fan geometry and test data, the small fan industry can independently evaluate design and analysis methods and work towards improvement.

  13. Clinical laboratory: bigger is not always better.

    PubMed

    Plebani, Mario

    2018-06-27

    Laboratory services around the world are undergoing substantial consolidation and changes through mechanisms ranging from mergers, acquisitions and outsourcing, primarily based on expectations to improve efficiency, increasing volumes and reducing the cost per test. However, the relationship between volume and costs is not linear and numerous variables influence the end cost per test. In particular, the relationship between volumes and costs does not span the entire platter of clinical laboratories: high costs are associated with low volumes up to a threshold of 1 million tests per year. Over this threshold, there is no linear association between volumes and costs, as laboratory organization rather than test volume more significantly affects the final costs. Currently, data on laboratory errors and associated diagnostic errors and risk for patient harm emphasize the need for a paradigmatic shift: from a focus on volumes and efficiency to a patient-centered vision restoring the nature of laboratory services as an integral part of the diagnostic and therapy process. Process and outcome quality indicators are effective tools to measure and improve laboratory services, by stimulating a competition based on intra- and extra-analytical performance specifications, intermediate outcomes and customer satisfaction. Rather than competing on economic value alone, clinical laboratories should adopt a strategy based on a set of harmonized quality indicators and performance specifications, active laboratory stewardship, and improved patient safety.

  14. Restoring a smooth function from its noisy integrals

    NASA Astrophysics Data System (ADS)

    Goulko, Olga; Prokof'ev, Nikolay; Svistunov, Boris

    2018-05-01

    Numerical (and experimental) data analysis often requires the restoration of a smooth function from a set of sampled integrals over finite bins. We present the bin hierarchy method that efficiently computes the maximally smooth function from the sampled integrals using essentially all the information contained in the data. We perform extensive tests with different classes of functions and levels of data quality, including Monte Carlo data suffering from a severe sign problem and physical data for the Green's function of the Fröhlich polaron.

  15. Energy Smart Schools--Applied Research, Field Testing, and Technology Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nebiat Solomon; Robin Vieira; William L. Manz

    2004-12-01

    The National Association of State Energy Officials (NASEO) in conjunction with the California Energy Commission, the Energy Center of Wisconsin, the Florida Solar Energy Center, the New York State Energy Research and Development Authority, and the Ohio Department of Development's Office of Energy Efficiency conducted a four-year, cost-share project with the U.S. Department of Energy (USDOE), Office of Energy Efficiency and Renewable Energy to focus on energy efficiency and high-performance technologies in our nation's schools. NASEO was the program lead for the MOU-State Schools Working Group, established in conjunction with the USDOE Memorandum of Understanding process for collaboration among state and federal energy research and demonstration offices and organizations. The MOU-State Schools Working Group included State Energy Offices and other state energy research organizations from all regions of the country. Through surveys and analyses, the Working Group determined the school-related energy priorities of the states and established a set of tasks to be accomplished, including the installation and evaluation of microturbines, advanced daylighting research, testing of schools and classrooms, and integrated school building technologies. The Energy Smart Schools project resulted in the adoption of advanced energy efficiency technologies in both the renovation of existing schools and the building of new ones; the education of school administrators, architects, engineers, and manufacturers nationwide about the energy-saving, economic, and environmental benefits of energy efficiency technologies; and an improved learning environment for the nation's students through use of better temperature controls, improvements in air quality, and increased daylighting in classrooms. It also provided an opportunity for states to share and replicate successful projects to increase their energy efficiency while at the same time driving down their energy costs.

  16. A simple approach for the modeling of an ODS steel mechanical behavior in pilgering conditions

    NASA Astrophysics Data System (ADS)

    Vanegas-Márquez, E.; Mocellin, K.; Toualbi, L.; de Carlan, Y.; Logé, R. E.

    2012-01-01

    The optimization of the forming of ODS tubes is linked to the choice of an appropriate constitutive model for modeling the metal forming process. In the framework of a unified plastic constitutive theory, the strain-controlled cyclic characteristics of a ferritic ODS steel were analyzed and modeled with two different tests. The first test is a classical tension-compression test, and leads to cyclic softening at low to intermediate strain amplitudes. The second test consists of alternating uniaxial compressions along two perpendicular axes, and is selected based on the similarities with the loading path induced by the Fe-14Cr-1W-Ti ODS cladding tube pilgering process. This second test exhibits cyclic hardening at all tested strain amplitudes. Since variable strain amplitudes prevail in pilgering conditions, the parameters of the considered constitutive law were identified based on a loading sequence including strain amplitude changes. A proposed semi-automated inverse analysis methodology is shown to efficiently provide optimal sets of parameters for the considered loading sequences. When compared to classical approaches, the model involves a reduced number of parameters, while keeping a good ability to capture stress changes induced by strain amplitude changes. Furthermore, the methodology only requires one test, which is an advantage when the amount of available material is limited. As two distinct sets of parameters were identified for the two considered tests, it is recommended to consider the loading path when modeling cold forming of the ODS steel.

  17. Level-set simulations of soluble surfactant driven flows

    NASA Astrophysics Data System (ADS)

    Cleret de Langavant, Charles; Guittet, Arthur; Theillard, Maxime; Temprano-Coleto, Fernando; Gibou, Frédéric

    2017-11-01

    We present an approach to simulate the diffusion, advection and adsorption-desorption of a material quantity defined on an interface in two and three spatial dimensions. We use a level-set approach to capture the interface motion and a Quad/Octree data structure to efficiently solve the equations describing the underlying physics. Coupling with a Navier-Stokes solver enables the study of the effect of soluble surfactants that locally modify the parameters of surface tension on different types of flows. The method is tested on several benchmarks and applied to three typical examples of flows in the presence of surfactant: a bubble in a shear flow, the well-known phenomenon of tears of wine, and the Landau-Levich coating problem.

  18. Concurrent approach for evolving compact decision rule sets

    NASA Astrophysics Data System (ADS)

    Marmelstein, Robert E.; Hammack, Lonnie P.; Lamont, Gary B.

    1999-02-01

    The induction of decision rules from data is important to many disciplines, including artificial intelligence and pattern recognition. To improve the state of the art in this area, we introduced the genetic rule and classifier construction environment (GRaCCE). It was previously shown that GRaCCE consistently evolved decision rule sets from data, which were significantly more compact than those produced by other methods (such as decision tree algorithms). The primary disadvantage of GRaCCE, however, is its relatively poor run-time execution performance. In this paper, a concurrent version of the GRaCCE architecture is introduced, which improves the efficiency of the original algorithm. A prototype of the algorithm is tested on an in-house parallel processor configuration and the results are discussed.

  19. Efficient bulk-loading of gridfiles

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Nicol, David M.

    1994-01-01

    This paper considers the problem of bulk-loading large data sets for the gridfile multiattribute indexing technique. We propose a rectilinear partitioning algorithm that heuristically seeks to minimize the size of the gridfile needed to ensure no bucket overflows. Empirical studies on both synthetic data sets and on data sets drawn from computational fluid dynamics applications demonstrate that our algorithm is very efficient, and is able to handle large data sets. In addition, we present an algorithm for bulk-loading data sets too large to fit in main memory. Utilizing a sort of the entire data set, it creates a gridfile without incurring any overflows.
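
    The abstract does not spell out the partitioning heuristic itself; as a rough illustration only, a quantile-based rectilinear split that targets a given bucket capacity might look like the sketch below. The function names and the quantile rule are assumptions for illustration, not the authors' overflow-minimizing algorithm.

        import numpy as np

        def rectilinear_partition(points, bucket_capacity):
            """Choose per-attribute split boundaries for a gridfile bulk-load.

            points: (n, d) array of multiattribute keys.
            bucket_capacity: maximum number of records a bucket may hold.
            Returns one array of interior boundaries per attribute
            (quantile-based sketch, not the paper's heuristic).
            """
            n, d = points.shape
            # Target enough grid cells that average occupancy stays below capacity,
            # spreading the splits evenly across the d attributes.
            cells_needed = max(1, int(np.ceil(n / bucket_capacity)))
            splits_per_dim = int(np.ceil(cells_needed ** (1.0 / d)))
            boundaries = []
            for j in range(d):
                qs = np.linspace(0, 1, splits_per_dim + 1)[1:-1]  # interior quantiles
                boundaries.append(np.quantile(points[:, j], qs))
            return boundaries

        def assign_cells(points, boundaries):
            """Map each point to its grid cell index tuple via the scale boundaries."""
            return np.stack([np.searchsorted(b, points[:, j])
                             for j, b in enumerate(boundaries)], axis=1)

        # Tiny usage example with synthetic 2-D keys.
        rng = np.random.default_rng(0)
        pts = rng.random((10_000, 2))
        bounds = rectilinear_partition(pts, bucket_capacity=50)
        cells = assign_cells(pts, bounds)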

  20. Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models

    PubMed Central

    Dai, Jin; Liu, Xin

    2014-01-01

    The similarity between objects is a core research area of data mining. In order to reduce the interference from the uncertainty of natural language, a similarity measurement between normal cloud models is adopted for text classification research. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed. It can efficiently accomplish the conversion between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the VSM model, the qualitative text concepts extracted from the same category are jumped up into a whole category concept. According to the cloud similarity between the test text and each category concept, the test text is assigned to the most similar category. Comparisons among different text classifiers over different feature selection sets show that CCJU-TC not only adapts well to different text features but also outperforms traditional classifiers. PMID:24711737

  1. My46: a web-based tool for self-guided management of genomic test results in research and clinical settings

    PubMed Central

    Tabor, Holly K.; Jamal, Seema M.; Yu, Joon-Ho; Crouch, Julia M.; Shankar, Aditi G.; Dent, Karin M.; Anderson, Nick; Miller, Damon A.; Futral, Brett T.; Bamshad, Michael J.

    2016-01-01

    A major challenge to implementing precision medicine is the need for an efficient and cost-effective strategy for returning individual genomic test results that is easily scalable and can be incorporated into multiple models of clinical practice. My46 is a web-based tool for managing the return of genetic results that was designed and developed to support a wide range of approaches to results disclosure, ranging from traditional face-to-face disclosure to self-guided models. My46 has five key functions: set and modify results return preferences, return results, educate, manage return of results, and assess return of results. These key functions are supported by six distinct modules and a suite of features that enhance the user experience, ease site navigation, facilitate knowledge sharing, and enable results return tracking. My46 is a potentially effective solution for returning results and supports current trends toward shared decision-making between patient and provider and patient-driven health management. PMID:27632689

  2. Parameter regionalization of a monthly water balance model for the conterminous United States

    USGS Publications Warehouse

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2016-01-01

    A parameter regionalization scheme to transfer parameter values from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash–Sutcliffe efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.

  3. Parameter regionalization of a monthly water balance model for the conterminous United States

    NASA Astrophysics Data System (ADS)

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2016-07-01

    A parameter regionalization scheme to transfer parameter values from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash-Sutcliffe efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.
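
    For reference, the Nash-Sutcliffe efficiency used in both records above to score the calibrations is a standard goodness-of-fit measure; a minimal sketch of its computation (with made-up runoff numbers, not the study's streamgage data) is:

        import numpy as np

        def nash_sutcliffe(observed, simulated):
            """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).

            1.0 is a perfect fit; 0.0 means the model does no better than
            predicting the observed mean; negative values are worse than that.
            """
            observed = np.asarray(observed, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
                (observed - observed.mean()) ** 2)

        # Example: monthly runoff at one streamgage (synthetic numbers).
        obs = np.array([10.2, 8.1, 14.5, 30.0, 22.3, 12.8])
        sim = np.array([9.8, 8.9, 13.0, 28.5, 24.0, 13.5])
        print(round(nash_sutcliffe(obs, sim), 2))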

  4. Rolling resistance and propulsion efficiency of manual and power-assisted wheelchairs.

    PubMed

    Pavlidou, Efthymia; Kloosterman, Marieke G M; Buurke, Jaap H; Rietman, Johan S; Janssen, Thomas W J

    2015-11-01

    Rolling resistance is one of the main forces resisting wheelchair propulsion and thus affecting stress exerted on the upper limbs. The present study investigates the differences in rolling resistance, propulsion efficiency and energy expenditure required by the user during power-assisted and manual propulsion. Different tire pressures (50%, 75%, 100%) and two different levels of motor assistance were tested. Drag force, energy expenditure and propulsion efficiency were measured in 10 able-bodied individuals under different experimental settings on a treadmill. Results showed that drag force levels were significantly higher in the 50% condition compared to the 75% and 100% inflation conditions. In terms of wheelchair type, the manual wheelchair displayed significantly lower drag force values than the power-assisted one. The use of the extra-power-assisted wheelchair appeared to be significantly superior to conventional power-assisted and manual wheelchairs concerning both propulsion efficiency and energy expenditure required by the user. Overall, the results of the study suggest that the use of the power-assisted wheelchair was more efficient and required less energy input by the user, depending on the motor assistance provided. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  5. Synchronized navigation of current and prior studies using image registration improves radiologist's efficiency.

    PubMed

    Forsberg, Daniel; Gupta, Amit; Mills, Christopher; MacAdam, Brett; Rosipko, Beverly; Bangert, Barbara A; Coffey, Michael D; Kosmas, Christos; Sunshine, Jeffrey L

    2017-03-01

    The purpose of this study was to investigate how the use of multi-modal rigid image registration integrated within a standard picture archiving and communication system affects the efficiency of a radiologist while performing routine interpretations of cases including prior examinations. Six radiologists were recruited to read a set of cases (either 16 neuroradiology or 14 musculoskeletal cases) during two crossover reading sessions. Each radiologist read each case twice, one time with synchronized navigation, which enables spatial synchronization across examinations from different study dates, and one time without. Efficiency was evaluated based upon time to read a case and amount of scrolling while browsing a case, using the Wilcoxon signed-rank test. Significant improvements in efficiency were found when considering all radiologists together, the two sections separately, and the majority of individual radiologists, both for time to read and for amount of scrolling. The relative improvement for each individual radiologist ranged from 4 to 32% for time to read and from 14 to 38% for amount of scrolling. Image registration providing synchronized navigation across examinations from different study dates provides a tool that enables radiologists to work more efficiently while reading cases with one or more prior examinations.

  6. The efficiency frontier approach to economic evaluation of health-care interventions.

    PubMed

    Caro, J Jaime; Nord, Erik; Siebert, Uwe; McGuire, Alistair; McGregor, Maurice; Henry, David; de Pouvourville, Gérard; Atella, Vincenzo; Kolominsky-Rabas, Peter

    2010-10-01

    IQWiG commissioned an international panel of experts to develop methods for the assessment of the relation of benefits to costs in the German statutory health-care system. The panel recommended that IQWiG inform German decision makers of the net costs and value of additional benefits of an intervention in the context of relevant other interventions in that indication. To facilitate guidance regarding maximum reimbursement, this information is presented in an efficiency plot with costs on the horizontal axis and value of benefits on the vertical. The efficiency frontier links the interventions that are not dominated and provides guidance. A technology that falls on the frontier or to its left is reasonably efficient, while one falling to the right requires further justification for reimbursement at that price. This information does not automatically give the maximum reimbursement, as other considerations may be relevant. Given that the estimates are for a specific indication, they do not address priority setting across the health-care system. This approach informs decision makers about efficiency of interventions, conforms to the mandate and is consistent with basic economic principles. Empirical testing of its feasibility and usefulness is required.
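
    As a rough illustration of the frontier construction described here, the sketch below filters a hypothetical set of interventions down to the non-dominated ones and orders them by net cost; it handles simple dominance only and ignores extended dominance and the other considerations the panel discusses. The intervention names and numbers are invented.

        def efficiency_frontier(interventions):
            """Return the non-dominated interventions sorted by net cost.

            interventions: list of (name, cost, benefit) tuples.
            An intervention is dominated if another option costs no more and
            delivers at least as much benefit, being strictly better in one of
            the two dimensions.
            """
            frontier = []
            for name, cost, benefit in interventions:
                dominated = any(c <= cost and b >= benefit and (c < cost or b > benefit)
                                for _, c, b in interventions)
                if not dominated:
                    frontier.append((name, cost, benefit))
            return sorted(frontier, key=lambda item: item[1])

        # Hypothetical interventions within one indication.
        options = [("A", 100.0, 2.0), ("B", 250.0, 3.5), ("C", 300.0, 3.0), ("D", 400.0, 5.0)]
        print(efficiency_frontier(options))  # C is dominated by B and drops out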

  7. Genetic relationships between feed efficiency in growing males and beef cow performance.

    PubMed

    Crowley, J J; Evans, R D; Mc Hugh, N; Kenny, D A; McGee, M; Crews, D H; Berry, D P

    2011-11-01

    Most studies on feed efficiency in beef cattle have focused on performance in young animals despite the contribution of the cow herd to overall profitability of beef production systems. The objective of this study was to quantify, using a large data set, the genetic covariances between feed efficiency in growing animals measured in a performance-test station, and beef cow performance including fertility, survival, calving traits, BW, maternal weaning weight, cow price, and cull cow carcass characteristics in commercial herds. Feed efficiency data were available on 2,605 purebred bulls from 1 test station. Records on cow performance were available on up to 94,936 crossbred beef cows. Genetic covariances were estimated using animal and animal-dam linear mixed models. Results showed that selection for feed efficiency, defined as feed conversion ratio (FCR) or residual BW gain (RG), improved maternal weaning weight as evidenced by the respective genetic correlations of -0.61 and 0.57. Despite residual feed intake (RFI) being phenotypically independent of BW, a negative genetic correlation existed between RFI and cow BW (-0.23; although the SE of 0.31 was large). None of the feed efficiency traits were correlated with fertility, calving difficulty, or perinatal mortality. However, genetic correlations estimated between age at first calving and FCR (-0.55 ± 0.14), Kleiber ratio (0.33 ± 0.15), RFI (-0.29 ± 0.14), residual BW gain (0.36 ± 0.15), and relative growth rate (0.37 ± 0.15) all suggest that selection for improved efficiency may delay the age at first calving, and we speculate, using information from other studies, that this may be due to a delay in the onset of puberty. Results from this study, based on the estimated genetic correlations, suggest that selection for improved feed efficiency will have no deleterious effect on cow performance traits with the exception of delaying the age at first calving.

  8. Comparing and Validating Machine Learning Models for Mycobacterium tuberculosis Drug Discovery.

    PubMed

    Lane, Thomas; Russo, Daniel P; Zorn, Kimberley M; Clark, Alex M; Korotcov, Alexandru; Tkachenko, Valery; Reynolds, Robert C; Perryman, Alexander L; Freundlich, Joel S; Ekins, Sean

    2018-04-26

    Tuberculosis is a global health dilemma. In 2016, the WHO reported 10.4 million incidences and 1.7 million deaths. The need to develop new treatments for those infected with Mycobacterium tuberculosis (Mtb) has led to many large-scale phenotypic screens and many thousands of new active compounds identified in vitro. However, with limited funding, efforts to discover new active molecules against Mtb need to be more efficient. Several computational machine learning approaches have been shown to have good enrichment and hit rates. We have curated small molecule Mtb data and developed new models with a total of 18,886 molecules with activity cutoffs of 10 μM, 1 μM, and 100 nM. These data sets were used to evaluate different machine learning methods (including deep learning) and metrics and to generate predictions for additional molecules published in 2017. One Mtb model, a combined in vitro and in vivo data Bayesian model at a 100 nM activity cutoff, yielded the following metrics for 5-fold cross validation: accuracy = 0.88, precision = 0.22, recall = 0.91, specificity = 0.88, kappa = 0.31, and MCC = 0.41. We have also curated an evaluation set (n = 153 compounds) published in 2017, and when used to test our model, it showed comparable statistics (accuracy = 0.83, precision = 0.27, recall = 1.00, specificity = 0.81, kappa = 0.36, and MCC = 0.47). We have also compared these models with additional machine learning algorithms, showing that Bayesian machine learning models constructed with literature Mtb data generated by different laboratories generally were equivalent to or outperformed deep neural networks with external test sets. Finally, we have also compared our training and test sets to show they were suitably diverse and different in order to represent useful evaluation sets. Such Mtb machine learning models could help prioritize compounds for testing in vitro and in vivo.
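
    The metric panel reported above (accuracy, precision, recall, specificity, kappa, MCC) can be reproduced for any binary classifier's predictions; the sketch below uses scikit-learn on synthetic labels and is not the paper's Bayesian or deep learning models.

        import numpy as np
        from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                     cohen_kappa_score, matthews_corrcoef,
                                     confusion_matrix)

        def summarize(y_true, y_pred):
            """Compute the metric panel reported in the abstract for 0/1 labels."""
            tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
            return {
                "accuracy": accuracy_score(y_true, y_pred),
                "precision": precision_score(y_true, y_pred),
                "recall": recall_score(y_true, y_pred),
                "specificity": tn / (tn + fp),
                "kappa": cohen_kappa_score(y_true, y_pred),
                "mcc": matthews_corrcoef(y_true, y_pred),
            }

        # Synthetic example: actives (1) are the minority class, predictions are noisy.
        rng = np.random.default_rng(1)
        y_true = (rng.random(200) < 0.3).astype(int)
        flip = rng.random(200) < 0.15                  # 15% of predictions disagree
        y_pred = np.where(flip, 1 - y_true, y_true)
        print(summarize(y_true, y_pred))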

  9. High-throughput heterogeneous catalyst research

    NASA Astrophysics Data System (ADS)

    Turner, Howard W.; Volpe, Anthony F., Jr.; Weinberg, W. H.

    2009-06-01

    With the discovery of abundant and low cost crude oil in the early 1900's came the need to create efficient conversion processes to produce low cost fuels and basic chemicals. Enormous investment over the last century has led to the development of a set of highly efficient catalytic processes which define the modern oil refinery and which produce most of the raw materials and fuels used in modern society. Process evolution and development has led to a refining infrastructure that is both dominated and enabled by modern heterogeneous catalyst technologies. Refineries and chemical manufacturers are currently under intense pressure to improve efficiency, adapt to increasingly disadvantaged feedstocks including biomass, lower their environmental footprint, and continue to deliver their products at low cost. This pressure creates a demand for new and more robust catalyst systems and processes that can accommodate them. Traditional methods of catalyst synthesis and testing are slow and inefficient, particularly in heterogeneous systems where the structure of the active sites is typically complex and the reaction mechanism is at best ill-defined. While theoretical modeling and a growing understanding of fundamental surface science help guide the chemist in designing and synthesizing targets, even in the most well understood areas of catalysis, the parameter space that one needs to explore experimentally is vast. The result is that the chemist using traditional methods must navigate a complex and unpredictable diversity space with a limited data set to make discoveries or to optimize known systems. We describe here a mature set of synthesis and screening technologies that together form a workflow that breaks this traditional paradigm and allows for rapid and efficient heterogeneous catalyst discovery and optimization. We exemplify the power of these new technologies by describing their use in the development and commercialization of a novel catalyst for the hydrodesulfurization of gasoline distillates having 50% more selectivity and 30% more activity for sulfur removal than the state-of-the-art commercial reference.

  10. Status of the Perpendicular Biased 2nd Harmonic Cavity for the Fermilab Booster

    DOE PAGES

    Tan, C. Y.; Dey, J. E.; Duel, K. L.; ...

    2017-05-01

    This is a status report on the 2nd harmonic cavity for the Fermilab Booster as part of the Proton Improvement Plan (PIP) for increasing beam transmission efficiency, and thus reducing losses. A set of tuner rings has been procured and is undergoing quality control tests. The Y567 tube for driving the cavity has been successfully tested at both injection and extraction frequencies. A cooling scheme for the tuner and cavity has been developed after a thorough thermal analysis of the system. RF windows have been procured and substantial progress has been made on the mechanical designs of the cavity and the bias solenoid. Finally, the goal is to have a prototype cavity ready for testing by the end of 2017.

  11. The Effect of Reduction Gearing on Propeller-body Interference as Shown by Full-Scale Wind-Tunnel Tests

    NASA Technical Reports Server (NTRS)

    Weick, Fred E

    1931-01-01

    This report presents the results of full-scale tests made on a 10-foot 5-inch propeller on a geared J-5 engine and also on a similar 8-foot 11-inch propeller on a direct-drive J-5 engine. Each propeller was tested at two different pitch settings, and with a large and a small fuselage. The investigation was made in such a manner that the propeller-body interference factors were isolated, and it was found that, considering this interference only, the geared propellers had an appreciable advantage in propulsive efficiency, partially due to the larger diameter of the propellers with respect to the bodies, and partially because the geared propellers were located farther ahead of the engines and bodies.

  12. Status of the Perpendicular Biased 2nd Harmonic Cavity for the Fermilab Booster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, C. Y.; Dey, J. E.; Duel, K. L.

    This is a status report on the 2nd harmonic cavity for the Fermilab Booster as part of the Proton Improvement Plan (PIP) for increasing beam transmission efficiency, and thus reducing losses. A set of tuner rings has been procured and is undergoing quality control tests. The Y567 tube for driving the cavity has been successfully tested at both injection and extraction frequencies. A cooling scheme for the tuner and cavity has been developed after a thorough thermal analysis of the system. RF windows have been procured and substantial progress has been made on the mechanical designs of the cavity and the bias solenoid. Finally, the goal is to have a prototype cavity ready for testing by the end of 2017.

  13. Analyzing Real-World Light Duty Vehicle Efficiency Benefits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonder, Jeffrey; Wood, Eric; Chaney, Larry

    Off-cycle technologies represent an important pathway to achieve real-world fuel savings, through which OEMs can potentially receive credit toward CAFE compliance. DOE national labs such as NREL are well positioned to provide objective input on these technologies using large, national data sets in conjunction with OEM- and technology-specific testing. This project demonstrates an approach that combines vehicle testing (dynamometer and on-road) with powertrain modeling and simulation over large, representative datasets to quantify real-world fuel economy. The approach can be applied to specific off-cycle technologies (engine encapsulation, start/stop, connected vehicle, etc.) in A/B comparisons to support calculation of realistic real-world impacts. Future work will focus on testing-based A/B technology comparisons that demonstrate the significance of this approach.

  14. Minimizing makespan in a two-stage flow shop with parallel batch-processing machines and re-entrant jobs

    NASA Astrophysics Data System (ADS)

    Huang, J. D.; Liu, J. J.; Chen, Q. X.; Mao, N.

    2017-06-01

    Against a background of heat-treatment operations in mould manufacturing, a two-stage flow-shop scheduling problem is described for minimizing makespan with parallel batch-processing machines and re-entrant jobs. The weights and release dates of jobs are non-identical, but job processing times are equal. A mixed-integer linear programming model is developed and tested with small-scale scenarios. Given that the problem is NP hard, three heuristic construction methods with polynomial complexity are proposed. The worst case of the new constructive heuristic is analysed in detail. A method for computing lower bounds is proposed to test heuristic performance. Heuristic efficiency is tested with sets of scenarios. Compared with the two improved heuristics, the performance of the new constructive heuristic is superior.

  15. Learning moment-based fast local binary descriptor

    NASA Astrophysics Data System (ADS)

    Bellarbi, Abdelkader; Zenati, Nadia; Otmane, Samir; Belghit, Hayet

    2017-03-01

    Recently, binary descriptors have attracted significant attention due to their speed and low memory consumption; however, using intensity differences to calculate the binary descriptive vector is not efficient enough. We propose an approach to binary description called POLAR_MOBIL, in which we perform binary tests between geometrical and statistical information using moments in the patch instead of the classical intensity binary test. In addition, we introduce a learning technique used to select an optimized set of binary tests with low correlation and high variance. This approach offers high distinctiveness against affine transformations and appearance changes. An extensive evaluation on well-known benchmark datasets reveals the robustness and the effectiveness of the proposed descriptor, as well as its good performance in terms of low computation complexity when compared with state-of-the-art real-time local descriptors.

  16. The Associate Principal Astronomer for AI Management of Automatic Telescopes

    NASA Technical Reports Server (NTRS)

    Henry, Gregory W.

    1998-01-01

    This research program in scheduling and management of automatic telescopes had the following objectives: 1. To field test the 1993 Automatic Telescope Instruction Set (ATIS93) programming language, which was specifically developed to allow real-time control of an automatic telescope via an artificial intelligence scheduler running on a remote computer. 2. To develop and test the procedures for two-way communication between a telescope controller and remote scheduler via the Internet. 3. To test various concepts in Al scheduling being developed at NASA Ames Research Center on an automatic telescope operated by Tennessee State University at the Fairborn Observatory site in southern Arizona. and 4. To develop a prototype software package, dubbed the Associate Principal Astronomer, for the efficient scheduling and management of automatic telescopes.

  17. MALBEC: a new CUDA-C ray-tracer in general relativity

    NASA Astrophysics Data System (ADS)

    Quiroga, G. D.

    2018-06-01

    A new CUDA-C code for tracing orbits around non-charged black holes is presented. This code, named MALBEC, takes advantage of graphics processing units and the CUDA platform for tracking null and timelike test particles in the Schwarzschild and Kerr metrics. Also, a new general set of equations that describe the closed circular orbits of any timelike test particle in the equatorial plane is derived. These equations are extremely important in order to compare the analytical behavior of the orbits with the numerical results and verify the correct implementation of the Runge-Kutta algorithm in MALBEC. Finally, other numerical tests are performed, demonstrating that MALBEC is able to reproduce some well-known results in these metrics in a faster and more efficient way than a conventional CPU implementation.

  18. Set-size procedures for controlling variations in speech-reception performance with a fluctuating masker

    PubMed Central

    Bernstein, Joshua G. W.; Summers, Van; Iyer, Nandini; Brungart, Douglas S.

    2012-01-01

    Adaptive signal-to-noise ratio (SNR) tracking is often used to measure speech reception in noise. Because SNR varies with performance using this method, data interpretation can be confounded when measuring an SNR-dependent effect such as the fluctuating-masker benefit (FMB) (the intelligibility improvement afforded by brief dips in the masker level). One way to overcome this confound, and allow FMB comparisons across listener groups with different stationary-noise performance, is to adjust the response set size to equalize performance across groups at a fixed SNR. However, this technique is only valid under the assumption that changes in set size have the same effect on percentage-correct performance for different masker types. This assumption was tested by measuring nonsense-syllable identification for normal-hearing listeners as a function of SNR, set size and masker (stationary noise, 4- and 32-Hz modulated noise and an interfering talker). Set-size adjustment had the same impact on performance scores for all maskers, confirming the independence of FMB (at matched SNRs) and set size. These results, along with those of a second experiment evaluating an adaptive set-size algorithm to adjust performance levels, establish set size as an efficient and effective tool to adjust baseline performance when comparing effects of masker fluctuations between listener groups. PMID:23039460

  19. Computationally efficient algorithm for Gaussian Process regression in case of structured samples

    NASA Astrophysics Data System (ADS)

    Belyaev, M.; Burnaev, E.; Kapushev, Y.

    2016-04-01

    Surrogate modeling is widely used in many engineering problems. Data sets often have Cartesian product structure (for instance, factorial design of experiments with missing points). In such cases the size of the data set can be very large. Therefore, one of the most popular approximation algorithms, Gaussian Process regression, can hardly be applied due to its computational complexity. In this paper a computationally efficient approach for constructing Gaussian Process regression in the case of data sets with Cartesian product structure is presented. Efficiency is achieved by using the special structure of the data set and operations with tensors. The proposed algorithm has low computational as well as memory complexity compared to existing algorithms. In this work we also introduce a regularization procedure allowing us to take into account anisotropy of the data set and avoid degeneracy of the regression model.
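
    The efficiency claim rests on the Kronecker structure of kernel matrices over Cartesian-product designs. The sketch below shows only that basic trick on a complete two-factor grid; the paper's handling of missing grid points and its anisotropy regularization are not reproduced, and the kernel and grid sizes are arbitrary choices for illustration.

        import numpy as np

        def rbf(x, y, length=0.3):
            """Squared-exponential kernel matrix between two 1-D point sets."""
            return np.exp(-0.5 * ((x[:, None] - y[None, :]) / length) ** 2)

        def kron_mv(A, B, v):
            """Compute (A kron B) @ v without forming the Kronecker product.

            Assumes v is ordered so the second factor's index varies fastest,
            i.e. v[i1 * n2 + i2] corresponds to grid point (x1[i1], x2[i2]).
            """
            V = v.reshape(A.shape[1], B.shape[1])
            return (A @ V @ B.T).ravel()

        # Full 2-D factorial design: n1 * n2 training points, but only the small
        # n1 x n1 and n2 x n2 factor kernels are ever factorized.
        rng = np.random.default_rng(0)
        x1, x2 = np.linspace(0, 1, 40), np.linspace(0, 1, 60)
        X1, X2 = np.meshgrid(x1, x2, indexing="ij")
        y = (np.sin(3 * X1.ravel()) * np.cos(2 * X2.ravel())
             + 0.05 * rng.standard_normal(x1.size * x2.size))

        K1, K2, noise = rbf(x1, x1), rbf(x2, x2), 1e-2
        l1, Q1 = np.linalg.eigh(K1)
        l2, Q2 = np.linalg.eigh(K2)

        # Solve (K1 kron K2 + noise * I) alpha = y via the factor eigendecompositions.
        z = kron_mv(Q1.T, Q2.T, y)
        z /= np.kron(l1, l2) + noise
        alpha = kron_mv(Q1, Q2, z)

        # In-sample posterior mean; new grids would use cross-kernels instead.
        mean = kron_mv(K1, K2, alpha)
        print(float(np.max(np.abs(mean - y))))  # on the order of the noise level

    For this example the solve touches only 40x40 and 60x60 matrices rather than the full 2400x2400 kernel, which is the source of the reduced computational and memory cost.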

  20. Efficiency and formalism of quantum games

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.F.; Johnson, Neil F.

    We show that quantum games are more efficient than classical games and provide a saturated upper bound for this efficiency. We also demonstrate that the set of finite classical games is a strict subset of the set of finite quantum games. Our analysis is based on a rigorous formulation of quantum games, from which quantum versions of the minimax theorem and the Nash equilibrium theorem can be deduced.

  1. Real time groove characterization combining partial least squares and SVR strategies: application to eddy current testing

    NASA Astrophysics Data System (ADS)

    Ahmed, S.; Salucci, M.; Miorelli, R.; Anselmi, N.; Oliveri, G.; Calmon, P.; Reboud, C.; Massa, A.

    2017-10-01

    A quasi real-time inversion strategy is presented for groove characterization of a conductive non-ferromagnetic tube structure by exploiting eddy current testing (ECT) signals. The inversion problem is formulated within a non-iterative Learning-by-Examples (LBE) strategy. Within the framework of LBE, an efficient training strategy combining feature extraction with a customized version of output space filling (OSF) adaptive sampling is adopted in order to obtain an optimal training set during the offline phase. Partial Least Squares (PLS) and Support Vector Regression (SVR) are exploited for feature extraction and prediction, respectively, to achieve robust and accurate real-time inversion during the online phase.
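
    As a rough sketch of the offline/online pattern described (PLS for feature extraction, SVR for prediction), the following uses scikit-learn with synthetic stand-ins for the ECT signals and the groove parameter; the forward model and the OSF adaptive sampling of the training set are not reproduced, and all dimensions and hyperparameters are illustrative.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)

        # Offline phase: synthetic stand-ins for simulated ECT signals (X) and the
        # groove parameter to invert (y).
        X_train = rng.standard_normal((300, 120))          # 300 examples, 120 signal samples
        y_train = X_train[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(300)

        pls = PLSRegression(n_components=5).fit(X_train, y_train)
        T_train = pls.transform(X_train)                   # low-dimensional PLS features
        svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(T_train, y_train)

        # Online phase: project a new measured signal and predict the groove parameter.
        x_new = rng.standard_normal((1, 120))
        t_new = pls.transform(x_new)
        print(float(svr.predict(t_new)[0]))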

  2. Statistical inference for time course RNA-Seq data using a negative binomial mixed-effect model.

    PubMed

    Sun, Xiaoxiao; Dalpiaz, David; Wu, Di; S Liu, Jun; Zhong, Wenxuan; Ma, Ping

    2016-08-26

    Accurate identification of differentially expressed (DE) genes in time course RNA-Seq data is crucial for understanding the dynamics of transcriptional regulatory network. However, most of the available methods treat gene expressions at different time points as replicates and test the significance of the mean expression difference between treatments or conditions irrespective of time. They thus fail to identify many DE genes with different profiles across time. In this article, we propose a negative binomial mixed-effect model (NBMM) to identify DE genes in time course RNA-Seq data. In the NBMM, mean gene expression is characterized by a fixed effect, and time dependency is described by random effects. The NBMM is very flexible and can be fitted to both unreplicated and replicated time course RNA-Seq data via a penalized likelihood method. By comparing gene expression profiles over time, we further classify the DE genes into two subtypes to enhance the understanding of expression dynamics. A significance test for detecting DE genes is derived using a Kullback-Leibler distance ratio. Additionally, a significance test for gene sets is developed using a gene set score. Simulation analysis shows that the NBMM outperforms currently available methods for detecting DE genes and gene sets. Moreover, our real data analysis of fruit fly developmental time course RNA-Seq data demonstrates the NBMM identifies biologically relevant genes which are well justified by gene ontology analysis. The proposed method is powerful and efficient to detect biologically relevant DE genes and gene sets in time course RNA-Seq data.

  3. Improving labeling efficiency in automatic quality control of MRSI data.

    PubMed

    Pedrosa de Barros, Nuno; McKinley, Richard; Wiest, Roland; Slotboom, Johannes

    2017-12-01

    To improve the efficiency of the labeling task in automatic quality control of MR spectroscopy imaging data. 28'432 short and long echo time (TE) spectra (1.5 tesla; point resolved spectroscopy (PRESS); repetition time (TR)= 1,500 ms) from 18 different brain tumor patients were labeled by two experts as either accept or reject, depending on their quality. For each spectrum, 47 signal features were extracted. The data was then used to run several simulations and test an active learning approach using uncertainty sampling. The performance of the classifiers was evaluated as a function of the number of patients in the training set, number of spectra in the training set, and a parameter α used to control the level of classification uncertainty required for a new spectrum to be selected for labeling. The results showed that the proposed strategy allows reductions of up to 72.97% for short TE and 62.09% for long TE in the amount of data that needs to be labeled, without significant impact in classification accuracy. Further reductions are possible with significant but minimal impact in performance. Active learning using uncertainty sampling is an effective way to increase the labeling efficiency for training automatic quality control classifiers. Magn Reson Med 78:2399-2405, 2017. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
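
    A minimal sketch of pool-based uncertainty sampling of the kind described, using a generic classifier on synthetic features, is shown below; the 47 spectral features, the expert labels, and the exact role of the parameter α in the paper are replaced by illustrative stand-ins.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)

        # Synthetic stand-in for spectra: 47 features each, accept (1) / reject (0) labels.
        X_pool = rng.standard_normal((5000, 47))
        y_pool = (X_pool[:, 0] + 0.5 * rng.standard_normal(5000) > 0).astype(int)

        labeled = list(range(200))            # spectra an expert has already labeled
        unlabeled = list(range(200, 5000))
        alpha = 0.15                          # hypothetical uncertainty band around p = 0.5

        for round_ in range(5):
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            clf.fit(X_pool[labeled], y_pool[labeled])
            proba = clf.predict_proba(X_pool[unlabeled])[:, 1]
            # Ask the expert to label only the spectra the current model is unsure about.
            selected = [idx for idx, p in zip(unlabeled, proba) if abs(p - 0.5) < alpha][:200]
            labeled += selected               # expert labels are simulated by y_pool here
            selected_set = set(selected)
            unlabeled = [i for i in unlabeled if i not in selected_set]
            print(f"round {round_}: labeled={len(labeled)}, pool={len(unlabeled)}")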

  4. paraGSEA: a scalable approach for large-scale gene expression profiling

    PubMed Central

    Peng, Shaoliang; Yang, Shunyun

    2017-01-01

    More studies have been conducted using gene expression similarity to identify functional connections among genes, diseases and drugs. Gene Set Enrichment Analysis (GSEA) is a powerful analytical method for interpreting gene expression data. However, due to its enormous computational overhead in the estimation of significance level step and the multiple hypothesis testing step, its computational scalability and efficiency are poor on large-scale datasets. We proposed paraGSEA for efficient large-scale transcriptome data analysis. By optimization, the overall time complexity of paraGSEA is reduced from O(mn) to O(m+n), where m is the length of the gene sets and n is the length of the gene expression profiles, which contributes a more than 100-fold increase in performance compared with other popular GSEA implementations such as GSEA-P, SAM-GS and GSEA2. By further parallelization, a near-linear speed-up is gained on both workstations and clusters in an efficient manner with high scalability and performance on large-scale datasets. The analysis time of the whole LINCS phase I dataset (GSE92742) was reduced to nearly half an hour on a 1000-node cluster of Tianhe-2, or to within 120 hours on a 96-core workstation. The source code of paraGSEA is licensed under the GPLv3 and available at http://github.com/ysycloud/paraGSEA. PMID:28973463
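
    For orientation, the quantity at the heart of any GSEA-style tool is the running-sum enrichment score; the sketch below computes the classic weighted version for one gene set and does not reproduce paraGSEA's optimized significance estimation or its parallelization. The gene names and ranking metric are synthetic.

        import numpy as np

        def enrichment_score(ranked_genes, scores, gene_set, p=1.0):
            """Classic GSEA running-sum enrichment score.

            ranked_genes: gene names sorted by the ranking metric (descending).
            scores: matching ranking-metric values.
            gene_set: set of gene names to test.
            """
            in_set = np.array([g in gene_set for g in ranked_genes])
            weights = np.abs(np.asarray(scores, dtype=float)) ** p
            hit = np.where(in_set, weights, 0.0)
            hit = hit / hit.sum()                               # P_hit increments
            miss = np.where(in_set, 0.0, 1.0) / (~in_set).sum() # P_miss increments
            running = np.cumsum(hit - miss)
            return running[np.argmax(np.abs(running))]          # max deviation from zero

        genes = [f"g{i}" for i in range(1000)]
        metric = np.linspace(3.0, -3.0, 1000)                   # already sorted, descending
        print(enrichment_score(genes, metric, {"g1", "g2", "g5", "g40", "g700"}))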

  5. Explaining efficient search for conjunctions of motion and form: evidence from negative color effects.

    PubMed

    Dent, Kevin

    2014-05-01

    Dent, Humphreys, and Braithwaite (2011) showed substantial costs to search when a moving target shared its color with a group of ignored static distractors. The present study further explored the conditions under which such costs to performance occur. Experiment 1 tested whether the negative color-sharing effect was specific to cases in which search showed a highly serial pattern. The results showed that the negative color-sharing effect persisted in the case of a target defined as a conjunction of movement and form, even when search was highly efficient. In Experiment 2, the ease with which participants could find an odd-colored target amongst a moving group was examined. Participants searched for a moving target amongst moving and stationary distractors. In Experiment 2A, participants performed a highly serial search through a group of similarly shaped moving letters. Performance was much slower when the target shared its color with a set of ignored static distractors. The exact same displays were used in Experiment 2B; however, participants now responded "present" for targets that shared the color of the static distractors. The same targets that had previously been difficult to find were now found efficiently. The results are interpreted in a flexible framework for attentional control. Targets that are linked with irrelevant distractors by color tend to be ignored. However, this cost can be overridden by top-down control settings.

  6. Efficiency and quality of care in nursing homes: an Italian case study.

    PubMed

    Garavaglia, Giulia; Lettieri, Emanuele; Agasisti, Tommaso; Lopez, Silvano

    2011-03-01

    This study investigates efficiency and quality of care in nursing homes. By means of Data Envelopment Analysis (DEA), the efficiency of 40 nursing homes that deliver their services in the north-western area of the Lombardy Region was assessed over a 3-year period (2005-2007). Lombardy is a very peculiar setting, since it is the only Region in Italy where the healthcare industry is organised as a quasi-market, in which the public authority buys health and nursing services from independent providers, establishing a reimbursement system for this purpose. The analysis is conducted by generating bootstrapped DEA efficiency scores for each nursing home (stage one), then regressing those scores on explanatory variables (stage two). Our DEA model employed two input (i.e. costs for health and nursing services and costs for residential services) and three output variables (case mix, extra nursing hours and residential charges). In the second-stage analysis, Tobit regressions and Kruskal-Wallis tests were applied to the efficiency scores to identify the factors that affect efficiency: (a) the ownership (private nursing homes outperform their public counterparts); and (b) the capability to implement strategies for labour cost and nursing cost containment, since efficiency heavily depends upon the alignment of costs to the public reimbursement system. Lastly, even though the public institutions are less efficient than the private ones, the results suggest that public nursing homes are moving towards their private counterparts, and thus competition is benefiting efficiency.
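
    As a rough illustration of the first-stage method, the sketch below solves an input-oriented, constant-returns-to-scale (CCR) DEA linear program for each unit with SciPy; the study's actual model specification, the bootstrapping of the scores, and the Tobit second stage are not reproduced, and the nursing-home numbers are hypothetical.

        import numpy as np
        from scipy.optimize import linprog

        def dea_ccr_input(inputs, outputs):
            """Input-oriented CCR efficiency score for every decision-making unit.

            inputs: (n_dmu, n_in) array, outputs: (n_dmu, n_out) array.
            Returns scores in (0, 1]; a score of 1 means the unit is on the frontier.
            """
            n, m = inputs.shape
            s = outputs.shape[1]
            scores = np.empty(n)
            for k in range(n):
                # Variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
                c = np.r_[1.0, np.zeros(n)]
                A_in = np.c_[-inputs[k], inputs.T]        # sum_j lam_j x_ij <= theta * x_ik
                A_out = np.c_[np.zeros(s), -outputs.T]    # sum_j lam_j y_rj >= y_rk
                A_ub = np.vstack([A_in, A_out])
                b_ub = np.r_[np.zeros(m), -outputs[k]]
                bounds = [(0, None)] * (n + 1)
                res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
                scores[k] = res.x[0]
            return scores

        # Hypothetical nursing homes: inputs = (health/nursing costs, residential costs),
        # outputs = (case mix, extra nursing hours, residential charges).
        x = np.array([[5.0, 3.0], [6.0, 2.5], [8.0, 4.0], [4.5, 3.5]])
        y = np.array([[1.2, 30.0, 2.0], [1.0, 25.0, 2.2], [1.1, 20.0, 1.8], [1.3, 28.0, 2.1]])
        print(np.round(dea_ccr_input(x, y), 3))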

  7. International Review of the Development and Implementation of Energy Efficiency Standards and Labeling Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Nan; Zheng, Nina; Fridley, David

    2012-02-28

    Appliance energy efficiency standards and labeling (S&L) programs have been important policy tools for regulating the efficiency of energy-using products for over 40 years and continue to expand in terms of geographic and product coverage. The most common S&L programs include mandatory minimum energy performance standards (MEPS) that seek to push the market for efficient products, and energy information and endorsement labels that seek to pull the market. This study seeks to review and compare some of the earliest and most well-developed S&L programs in three countries and one region: the U.S. MEPS and ENERGY STAR, Australia MEPS and Energy Label, European Union MEPS and Ecodesign requirements and Energy Label and Japanese Top Runner programs. For each program, key elements of S&L programs are evaluated and comparative analyses across the programs undertaken to identify best practice examples of individual elements as well as cross-cutting factors for success and lessons learned in international S&L program development and implementation. The international review and comparative analysis identified several overarching themes and highlighted some common factors behind successful program elements. First, standard-setting and programmatic implementation can benefit significantly from a legal framework that stipulates a specific timeline or schedule for standard-setting and revision, product coverage and legal sanctions for non-compliance. Second, the different MEPS programs revealed similarities in targeting efficiency gains that are technically feasible and economically justified as the principle for choosing a standard level, in many cases at a level that no product on the current market could reach. Third, detailed survey data such as the U.S. Residential Energy Consumption Survey (RECS) and rigorous analyses provide a strong foundation for standard-setting, while incorporating the participation of different groups of stakeholders further strengthens the process. Fourth, sufficient program resources for program implementation and evaluation are critical to the effectiveness of standards and labeling programs, and cost-sharing between national and local governments can help ensure adequate resources and uniform implementation. Lastly, check-testing and punitive measures are important forms of enforcement, while the cancellation of registration or product sales-based fines have also proven effective in reducing non-compliance. The international comparative analysis also revealed the differing degree to which the level of government decentralization has influenced S&L programs, and while no single country has best practices in all elements of standards and labeling development and implementation, national examples of best practices for specific elements do exist. For example, the U.S. has exemplified the use of rigorous analyses for standard-setting and robust data sources such as the RECS database, while Japan's Top Runner standard-setting principle has motivated manufacturers to exceed targets. In terms of standards implementation and enforcement, Australia has demonstrated success with enforcement given its long history of check-testing and enforcement initiatives, while mandatory information-sharing between EU jurisdictions on compliance results is another important enforcement mechanism. 
These examples show that it is important to evaluate not only the drivers of different paths of standards and labeling development, but also the country-specific context for best practice examples in order to understand how and why certain elements of specific S&L programs have been effective.

  8. Effects of early rearing conditions on problem-solving skill in captive male chimpanzees (Pan troglodytes).

    PubMed

    Morimura, Naruki; Mori, Yusuke

    2010-06-01

    Early rearing conditions of captive chimpanzees characterize behavioral differences in tool use, response to novelty, and sexual and maternal competence later in life. Restricted rearing conditions during early life hinder the acquisition and execution of such behaviors, which characterize the daily life of animals. This study examined whether rearing conditions affect adult male chimpanzees' behavior skills used for solving a problem with acquired locomotion behavior. Subjects were 13 male residents of the Chimpanzee Sanctuary Uto: 5 wild-born and 8 captive-born. A pretest assessed bed building and tool use abilities to verify behavioral differences between wild- and captive-born subjects, as earlier reports have described. Second, a banana-access test was conducted to investigate the problem-solving ability of climbing a bamboo pillar for accessing a banana, which might be the most efficient food access strategy for this setting. The test was repeated in a social setting. Results show that wild-born subjects were better able than captive-born subjects to use the provided materials for bed building and tool use. Results of the banana-access test show that wild-born subjects more frequently used a bamboo pillar for obtaining a banana with an efficient strategy than captive-born subjects did. Of the eight captive-born subjects, six avoided the bamboo pillars to get a banana and instead used, sometimes in a roundabout way, an iron pillar or fence. Results consistently underscored the adaptive and sophisticated skills of wild-born male chimpanzees in problem-solving tasks. The rearing conditions affected both the behavior acquisition and the execution of behaviors that had already been acquired. (c) 2010 Wiley-Liss, Inc.

  9. Potential Use of BEST® Sediment Trap in Splash - Saltation Transport Process by Simultaneous Wind and Rain Tests

    PubMed Central

    Basaran, Mustafa; Uzun, Oguzhan; Cornelis, Wim; Gabriels, Donald; Erpul, Gunay

    2016-01-01

    Research on the wind-driven rain (WDR) transport process of splash-saltation has increased over the last twenty years as wind tunnel experimental studies provide new insights into the mechanisms of simultaneous wind and rain transport. The present study was conducted to investigate the efficiency of the BEST® sediment traps in catching the sand particles transported through the splash-saltation process under WDR conditions. Experiments were conducted in a wind tunnel rainfall simulator facility with water sprayed through sprinkler nozzles and free-flowing wind at different velocities to simulate WDR conditions. In addition to the vertical sediment distribution, a series of experimental tests on the horizontal distribution of sediments was performed using BEST® collectors to obtain the actual total sediment mass flow by splash-saltation in the center of the wind tunnel test section. Total mass transport (kg m-2) was estimated by analytically integrating the exponential functional relationship fitted to the sediment amounts measured at the set trap heights for every run. Results revealed that the integrated efficiencies of the BEST® traps at 6, 9, 12 and 15 m s-1 wind velocities under 55.8, 50.5, 55.0 and 50.5 mm h-1 rain intensities were, respectively, 83, 106, 105, and 102%. Results also showed that the efficiencies of BEST® did not change much compared with those under rainless wind conditions. PMID:27898716
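
    The total transport figure above comes from fitting an exponential decay of sediment flux with trap height and integrating the fitted curve. The following is a minimal sketch of that step, assuming an illustrative profile q(z) = a·exp(-b·z) and made-up trap heights and catches rather than the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical trap heights (m) and sediment fluxes measured at those heights;
# the numbers are illustrative stand-ins, not the study's measurements.
heights = np.array([0.05, 0.15, 0.30, 0.50, 0.75])
flux = np.array([4.10, 2.60, 1.30, 0.55, 0.18])

def exp_profile(z, a, b):
    """Assumed exponential decay of horizontal mass flux with height z."""
    return a * np.exp(-b * z)

(a, b), _ = curve_fit(exp_profile, heights, flux, p0=(5.0, 5.0))

# Closed-form integral of a*exp(-b*z) from z = 0 to infinity; how the result
# is normalized (per unit width, per unit area, per run) depends on how the
# trap catches were converted to flux in the first place.
total_transport = a / b
print(f"a = {a:.2f}, b = {b:.2f}, integrated transport = {total_transport:.2f}")
```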

  10. Chroma sampling and modulation techniques in high dynamic range video coding

    NASA Astrophysics Data System (ADS)

    Dai, Wei; Krishnan, Madhu; Topiwala, Pankaj

    2015-09-01

    High Dynamic Range and Wide Color Gamut (HDR/WCG) Video Coding is an area of intense research interest in the engineering community, for potential near-term deployment in the marketplace. HDR greatly enhances the dynamic range of video content (up to 10,000 nits), as well as broadens the chroma representation (BT.2020). The resulting content offers new challenges in its coding and transmission. The Moving Picture Experts Group (MPEG) of the International Standards Organization (ISO) is currently exploring coding efficiency and/or the functionality enhancements of the recently developed HEVC video standard for HDR and WCG content. FastVDO has developed an advanced approach to coding HDR video, based on splitting the HDR signal into a smoothed luminance (SL) signal, and an associated base signal (B). Both signals are then chroma downsampled to YFbFr 4:2:0 signals, using advanced resampling filters, and coded using the Main10 High Efficiency Video Coding (HEVC) standard, which has been developed jointly by ISO/IEC MPEG and ITU-T WP3/16 (VCEG). Our proposal offers both efficient coding, and backwards compatibility with the existing HEVC Main10 Profile. That is, an existing Main10 decoder can produce a viewable standard dynamic range video, suitable for existing screens. Subjective tests show visible improvement over the anchors. Objective tests show a sizable gain of over 25% in PSNR (RGB domain) on average, for a key set of test clips selected by the ISO/MPEG committee.
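
    As a concrete illustration of the objective metric quoted above, the sketch below computes PSNR over the RGB samples of a 10-bit frame pair. The frames are synthetic stand-ins, and the function is a generic PSNR definition rather than the exact MPEG test procedure.

```python
import numpy as np

def psnr_rgb(reference, decoded, peak):
    """PSNR over all RGB samples; peak is the maximum code value
    (e.g. 1023 for 10-bit content)."""
    reference = reference.astype(np.float64)
    decoded = decoded.astype(np.float64)
    mse = np.mean((reference - decoded) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Synthetic 10-bit "frames": random data standing in for real HDR content.
rng = np.random.default_rng(3)
ref = rng.integers(0, 1024, size=(64, 64, 3))
dec = np.clip(ref + rng.integers(-4, 5, size=ref.shape), 0, 1023)
print(f"PSNR: {psnr_rgb(ref, dec, peak=1023):.2f} dB")
```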

  11. Using cost and health impacts to prioritize the targeted testing of tuberculosis in the United States.

    PubMed

    Miller, Thaddeus L; Hilsenrath, Peter; Lykens, Kristine; McNabb, Scott J N; Moonan, Patrick K; Weis, Stephen E

    2006-04-01

    Evaluation improves efficiency and effectiveness. Current U.S. tuberculosis (TB) control policies emphasize the treatment of latent TB infection (LTBI). However, this policy, if not targeted, may be inefficient. We determined the efficiency of a state-law mandated TB screening program and a non state-law mandated one in terms of cost, morbidity, treatment, and disease averted. We evaluated two publicly funded metropolitan TB prevention and control programs through retrospective analyses and modeling. Main outcomes measured were TB incidence and prevalence, TB cases averted, and cost. A non state-law mandated TB program for homeless persons in Tarrant County screened 4.5 persons to identify one with LTBI and 82 persons to identify one with TB. A state-law mandated TB program for jail inmates screened 109 persons to identify one with LTBI and 3274 persons to identify one with TB. The number of patients with LTBI treated to prevent one TB case was 12.1 and 15.3 for the homeless and jail inmate TB programs, respectively. Treatment of LTBI by the homeless and jail inmate TB screening programs will avert 11.9 and 7.9 TB cases at a cost of 14,350 US dollars and 34,761 US dollars per TB case, respectively. Mandated TB screening programs should be risk-based, not population-based. Non mandated targeted testing for TB in congregate settings for the homeless was more efficient than state-law mandated targeted testing for TB among jailed inmates.

  12. Privacy-preserving genome-wide association studies on cloud environment using fully homomorphic encryption.

    PubMed

    Lu, Wen-Jie; Yamada, Yoshiji; Sakuma, Jun

    2015-01-01

    Modern sequencing techniques are yielding large-scale genomic data at low cost. A genome-wide association study (GWAS) targeting genetic variations that are significantly associated with a particular disease offers great potential for medical improvement. However, subjects who volunteer their genomic data expose themselves to the risk of privacy invasion; these privacy concerns prevent efficient genomic data sharing. Our goal is to present a cryptographic solution to this problem. To maintain the privacy of subjects, we propose encryption of all genotype and phenotype data. To allow the cloud to perform meaningful computation on the encrypted data, we use a fully homomorphic encryption scheme. Noting that typical statistics for GWAS can be evaluated from a frequency table, our solution evaluates frequency tables with encrypted genomic and clinical data as input. We propose a packing technique for efficient evaluation of these frequency tables. Our solution supports evaluation of the D' measure of linkage disequilibrium, the Hardy-Weinberg equilibrium, the χ2 test, etc. In this paper, we take the χ2 test and linkage disequilibrium as examples and demonstrate how these algorithms can be conducted securely and efficiently in an outsourcing setting. We demonstrate experimentally that secure outsourced computation of one χ2 test with 10,000 subjects requires about 35 ms and evaluation of one linkage disequilibrium measure with 10,000 subjects requires about 80 ms. With appropriate encoding and packing techniques, cryptographic solutions based on fully homomorphic encryption for secure computation of GWAS can be practical.
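
    The statistics in question reduce to arithmetic on frequency-table counts, which is what makes them amenable to homomorphic evaluation. For orientation, here is a plaintext sketch of the allelic χ2 test on a 2x2 case/control table; the counts are invented and nothing here involves the encryption layer.

```python
import numpy as np
from scipy.stats import chi2

# Illustrative 2x2 allele-count table: rows = case/control, columns = allele A/a.
table = np.array([[120.0, 80.0],
                  [90.0, 110.0]])

row = table.sum(axis=1, keepdims=True)
col = table.sum(axis=0, keepdims=True)
n = table.sum()

expected = row @ col / n                       # expected counts under independence
stat = ((table - expected) ** 2 / expected).sum()
p_value = chi2.sf(stat, df=1)                  # one degree of freedom for a 2x2 table
print(f"chi2 = {stat:.3f}, p = {p_value:.4f}")
```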

  13. Design of efficient molecular organic light-emitting diodes by a high-throughput virtual screening and experimental approach.

    PubMed

    Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D; Duvenaud, David; Maclaurin, Dougal; Blood-Forsythe, Martin A; Chae, Hyun Sik; Einzinger, Markus; Ha, Dong-Gwang; Wu, Tony; Markopoulos, Georgios; Jeon, Soonok; Kang, Hosuk; Miyazaki, Hiroshi; Numata, Masaki; Kim, Sunghan; Huang, Wenliang; Hong, Seong Ik; Baldo, Marc; Adams, Ryan P; Aspuru-Guzik, Alán

    2016-10-01

    Virtual screening is becoming a ground-breaking tool for molecular discovery due to the exponential growth of available computer time and constant improvement of simulation and machine learning techniques. We report an integrated organic functional material design process that incorporates theoretical insight, quantum chemistry, cheminformatics, machine learning, industrial expertise, organic synthesis, molecular characterization, device fabrication and optoelectronic testing. After exploring a search space of 1.6 million molecules and screening over 400,000 of them using time-dependent density functional theory, we identified thousands of promising novel organic light-emitting diode molecules across the visible spectrum. Our team collaboratively selected the best candidates from this set. The experimentally determined external quantum efficiencies for these synthesized candidates were as large as 22%.

  14. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions

    NASA Astrophysics Data System (ADS)

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-01

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
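
    The efficiency claim rests on never forming the full direct-product design matrix: with a separable basis the least-squares problem factorizes over modes. Below is a minimal two-mode numpy sketch of this idea, using a synthetic surface and polynomial bases; it is not the potfit code or the authors' implementation.

```python
import numpy as np

# Two-mode potential on a direct-product grid (synthetic test surface).
q1 = np.linspace(-1.0, 1.0, 15)
q2 = np.linspace(-1.0, 1.0, 17)
Q1, Q2 = np.meshgrid(q1, q2, indexing="ij")
V = 0.5 * Q1**2 + 0.3 * Q2**2 + 0.1 * Q1 * Q2**2

# One-mode polynomial bases (degree 3). The full design matrix of the fit
# would be their Kronecker product, but it never has to be formed explicitly.
B1 = np.vander(q1, 4, increasing=True)   # shape (15, 4)
B2 = np.vander(q2, 4, increasing=True)   # shape (17, 4)

# Least-squares coefficients of the sum-of-products form V ~ B1 @ C @ B2.T,
# obtained mode by mode instead of via the (15*17) x (4*4) Kronecker matrix.
C = np.linalg.pinv(B1) @ V @ np.linalg.pinv(B2).T

V_fit = B1 @ C @ B2.T
print("max abs fit error:", np.abs(V_fit - V).max())
```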

  15. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions.

    PubMed

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-21

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.

  16. Design and implementation of a health data interoperability mediator.

    PubMed

    Kuo, Mu-Hsing; Kushniruk, Andre William; Borycki, Elizabeth Marie

    2010-01-01

    The objective of this study is to design and implement a common-gateway oriented mediator to solve the health data interoperability problems that exist among heterogeneous health information systems. The proposed mediator has three main components: (1) a Synonym Dictionary (SD) that stores a set of global metadata and terminologies to serve as the mapping intermediary, (2) a Semantic Mapping Engine (SME) that can be used to map metadata and instance semantics, and (3) a DB-to-XML module that translates source health data stored in a database into XML format and back. A routine admission notification data exchange scenario is used to test the efficiency and feasibility of the proposed mediator. The study results show that the proposed mediator can make health information exchange more efficient.
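
    To make the three components concrete, the toy sketch below maps a source record's local field names through a small synonym dictionary and emits an XML admission-notification message, mimicking the SME and DB-to-XML steps. The field names and vocabulary are invented for illustration and are not the mediator's actual metadata.

```python
import xml.etree.ElementTree as ET

# Toy synonym dictionary mapping local field names to a shared global vocabulary.
SYNONYMS = {"pt_name": "PatientName",
            "adm_dt": "AdmissionDateTime",
            "ward": "CareUnit"}

def to_global_xml(record):
    """Map a source record's local field names through the synonym dictionary
    and emit an XML message, mimicking the SME and DB-to-XML steps."""
    root = ET.Element("AdmissionNotification")
    for local_field, value in record.items():
        ET.SubElement(root, SYNONYMS.get(local_field, local_field)).text = str(value)
    return ET.tostring(root, encoding="unicode")

print(to_global_xml({"pt_name": "Doe, Jane", "adm_dt": "2010-03-01T09:30", "ward": "ICU"}))
```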

  17. Contribution of heat transfer to turbine blades and vanes for high temperature industrial gas turbines. Part 2: Heat transfer on serpentine flow passage.

    PubMed

    Takeishi, K; Aoki, S

    2001-05-01

    The improvement of the heat transfer coefficient of the 1st row blades in high temperature industrial gas turbines is one of the most important issues to ensure reliable performance of these components and to attain high thermal efficiency of the facility. This paper deals with the contribution of heat transfer to increase the turbine inlet temperature of such gas turbines in order to attain efficient and environmentally benign engines. Following the experiments described in Part 1, a set of trials was conducted to clarify the influence of the blade's rotating motion on the heat transfer coefficient for internal serpentine flow passages with turbulence promoters. Test results are shown and discussed in this second part of the contribution.

  18. Design of efficient molecular organic light-emitting diodes by a high-throughput virtual screening and experimental approach

    NASA Astrophysics Data System (ADS)

    Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D.; Duvenaud, David; MacLaurin, Dougal; Blood-Forsythe, Martin A.; Chae, Hyun Sik; Einzinger, Markus; Ha, Dong-Gwang; Wu, Tony; Markopoulos, Georgios; Jeon, Soonok; Kang, Hosuk; Miyazaki, Hiroshi; Numata, Masaki; Kim, Sunghan; Huang, Wenliang; Hong, Seong Ik; Baldo, Marc; Adams, Ryan P.; Aspuru-Guzik, Alán

    2016-10-01

    Virtual screening is becoming a ground-breaking tool for molecular discovery due to the exponential growth of available computer time and constant improvement of simulation and machine learning techniques. We report an integrated organic functional material design process that incorporates theoretical insight, quantum chemistry, cheminformatics, machine learning, industrial expertise, organic synthesis, molecular characterization, device fabrication and optoelectronic testing. After exploring a search space of 1.6 million molecules and screening over 400,000 of them using time-dependent density functional theory, we identified thousands of promising novel organic light-emitting diode molecules across the visible spectrum. Our team collaboratively selected the best candidates from this set. The experimentally determined external quantum efficiencies for these synthesized candidates were as large as 22%.

  19. Experimental investigation and modeling of a direct-coupled PV/T air collector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shahsavar, A.; Ameri, M.; Energy and Environmental Engineering Research Center, Shahid Bahonar University, Kerman

    2010-11-15

    Photovoltaic/thermal (PV/T) systems refer to the integration of photovoltaic and solar thermal technologies into one single system, in which both useful heat energy and electricity are produced. The aim of this paper is to model a direct-coupled PV/T air collector which was designed, built, and tested at a geographic location of Kerman, Iran. In this system, a thin aluminum sheet suspended at the middle of the air channel is used to increase the heat exchange surface and consequently improve heat extraction from the PV panels. This PV/T system is tested in natural convection and forced convection (with two, four and eight fans operating) and its unsteady results are presented for cases with and without a glass cover. A theoretical model is developed and validated against experimental data, where good agreement between the measured values and those calculated by the simulation model was achieved. Comparisons are made between the electrical performance of the different modes of operation, and it is concluded that there is an optimum number of fans for achieving maximum electrical efficiency. Also, results show that placing a glass cover on the photovoltaic panels leads to an increase in thermal efficiency and a decrease in electrical efficiency of the system. (author)
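
    The thermal and electrical efficiencies compared above follow from a simple energy balance over the collector. A minimal sketch with hypothetical operating numbers, not measurements from the Kerman prototype:

```python
def pvt_efficiencies(m_dot, cp, t_out, t_in, p_electric, irradiance, area):
    """Instantaneous thermal and electrical efficiencies of a PV/T air collector
    from a simple energy-balance definition."""
    q_thermal = m_dot * cp * (t_out - t_in)        # useful heat gain, W
    eta_thermal = q_thermal / (irradiance * area)
    eta_electrical = p_electric / (irradiance * area)
    return eta_thermal, eta_electrical

# Hypothetical operating point (air mass flow in kg/s, temperatures in deg C,
# electrical output in W, irradiance in W/m^2, collector area in m^2).
eta_th, eta_el = pvt_efficiencies(m_dot=0.05, cp=1005.0, t_out=38.0, t_in=25.0,
                                  p_electric=120.0, irradiance=900.0, area=2.0)
print(f"thermal: {eta_th:.1%}, electrical: {eta_el:.1%}")
```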

  20. vitisFlower®: Development and Testing of a Novel Android-Smartphone Application for Assessing the Number of Grapevine Flowers per Inflorescence Using Artificial Vision Techniques

    PubMed Central

    Aquino, Arturo; Millan, Borja; Gaston, Daniel; Diago, María-Paz; Tardaguila, Javier

    2015-01-01

    Grapevine flowering and fruit set greatly determine crop yield. This paper presents a new smartphone application for automatically counting, non-invasively and directly in the vineyard, the flower number in grapevine inflorescence photos by implementing artificial vision techniques. The application, called vitisFlower®, firstly guides the user to appropriately take an inflorescence photo using the smartphone’s camera. Then, by means of image analysis, the flowers in the image are detected and counted. vitisFlower® has been developed for Android devices and uses the OpenCV libraries to maximize computational efficiency. The application was tested on 140 inflorescence images of 11 grapevine varieties taken with two different devices. On average, more than 84% of flowers in the captures were found, with a precision exceeding 94%. Additionally, the application’s efficiency on four different devices covering a wide range of the market’s spectrum was also studied. The results of this benchmarking study showed significant differences among devices, although indicating that the application is efficiently usable even with low-range devices. vitisFlower is one of the first applications for viticulture that is currently freely available on Google Play. PMID:26343664
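
    The counting step in such an application comes down to segmenting flower-like bright regions and counting connected components. The sketch below is a very rough OpenCV stand-in for that kind of pipeline on a synthetic image; it is not the vitisFlower® algorithm.

```python
import cv2
import numpy as np

def count_bright_blobs(image_bgr, min_area=20):
    """Very rough stand-in for flower detection: threshold bright regions and
    count connected components above a minimum area."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    areas = stats[1:, cv2.CC_STAT_AREA]            # label 0 is the background
    return int((areas >= min_area).sum())

# Synthetic image with a few bright spots standing in for flowers.
img = np.zeros((200, 200, 3), dtype=np.uint8)
for cx, cy in [(40, 40), (120, 60), (80, 150)]:
    cv2.circle(img, (cx, cy), 8, (255, 255, 255), -1)
print("detected blobs:", count_bright_blobs(img))
```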

  1. Evaluation of a Urine Pooling Strategy for the Rapid and Cost-Efficient Prevalence Classification of Schistosomiasis.

    PubMed

    Lo, Nathan C; Coulibaly, Jean T; Bendavid, Eran; N'Goran, Eliézer K; Utzinger, Jürg; Keiser, Jennifer; Bogoch, Isaac I; Andrews, Jason R

    2016-08-01

    A key epidemiologic feature of schistosomiasis is its focal distribution, which has important implications for the spatial targeting of preventive chemotherapy programs. We evaluated the diagnostic accuracy of a urine pooling strategy using a point-of-care circulating cathodic antigen (POC-CCA) cassette test for detection of Schistosoma mansoni, and employed simulation modeling to test the classification accuracy and efficiency of this strategy in determining where preventive chemotherapy is needed in low-endemicity settings. We performed a cross-sectional study involving 114 children aged 6-15 years in six neighborhoods in Azaguié Ahoua, south Côte d'Ivoire to characterize the sensitivity and specificity of the POC-CCA cassette test with urine samples that were tested individually and in pools of 4, 8, and 12. We used a Bayesian latent class model to estimate test characteristics for individual POC-CCA and quadruplicate Kato-Katz thick smears on stool samples. We then developed a microsimulation model and used lot quality assurance sampling to test the performance, number of tests, and total cost per school for each pooled testing strategy to predict the binary need for school-based preventive chemotherapy using a 10% prevalence threshold for treatment. The sensitivity of the urine pooling strategy for S. mansoni diagnosis using pool sizes of 4, 8, and 12 was 85.9%, 79.5%, and 65.4%, respectively, when POC-CCA trace results were considered positive, and 61.5%, 47.4%, and 30.8% when POC-CCA trace results were considered negative. The modeled specificity ranged from 94.0-97.7% for the urine pooling strategies (when POC-CCA trace results were considered negative). The urine pooling strategy, regardless of the pool size, gave comparable and often superior classification performance to stool microscopy for the same number of tests. The urine pooling strategy with a pool size of 4 reduced the number of tests and total cost compared to classical stool microscopy. This study introduces a method for rapid and efficient S. mansoni prevalence estimation through examining pooled urine samples with POC-CCA as an alternative to widely used stool microscopy.
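
    The efficiency argument is that one pool-level POC-CCA result stands in for several individual tests when the goal is only to classify a school against a prevalence threshold. The toy microsimulation below illustrates that logic with made-up parameter values and a simplified stand-in for a lot-quality-style decision rule; it is not the authors' calibrated model.

```python
import numpy as np

rng = np.random.default_rng(4)

def school_flagged(prev, n_children=60, pool_size=4,
                   sens=0.86, spec=0.96, min_positive_pools=2):
    """One simulated school: pool urine samples, apply an imperfect pool-level
    test, and flag the school if enough pools test positive. All parameter
    values are illustrative, not the study's calibrated estimates."""
    infected = rng.random(n_children) < prev
    pools = infected.reshape(-1, pool_size).any(axis=1)
    positive = np.where(pools,
                        rng.random(pools.size) < sens,    # true-positive pools
                        rng.random(pools.size) > spec)    # false-positive pools
    return positive.sum() >= min_positive_pools

runs = [school_flagged(prev=0.15) for _ in range(2000)]
print("fraction of schools flagged for preventive chemotherapy:", np.mean(runs))
```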

  2. Evaluation of a Urine Pooling Strategy for the Rapid and Cost-Efficient Prevalence Classification of Schistosomiasis

    PubMed Central

    Coulibaly, Jean T.; Bendavid, Eran; N’Goran, Eliézer K.; Utzinger, Jürg; Keiser, Jennifer; Bogoch, Isaac I.; Andrews, Jason R.

    2016-01-01

    Background A key epidemiologic feature of schistosomiasis is its focal distribution, which has important implications for the spatial targeting of preventive chemotherapy programs. We evaluated the diagnostic accuracy of a urine pooling strategy using a point-of-care circulating cathodic antigen (POC-CCA) cassette test for detection of Schistosoma mansoni, and employed simulation modeling to test the classification accuracy and efficiency of this strategy in determining where preventive chemotherapy is needed in low-endemicity settings. Methodology We performed a cross-sectional study involving 114 children aged 6–15 years in six neighborhoods in Azaguié Ahoua, south Côte d’Ivoire to characterize the sensitivity and specificity of the POC-CCA cassette test with urine samples that were tested individually and in pools of 4, 8, and 12. We used a Bayesian latent class model to estimate test characteristics for individual POC-CCA and quadruplicate Kato-Katz thick smears on stool samples. We then developed a microsimulation model and used lot quality assurance sampling to test the performance, number of tests, and total cost per school for each pooled testing strategy to predict the binary need for school-based preventive chemotherapy using a 10% prevalence threshold for treatment. Principal Findings The sensitivity of the urine pooling strategy for S. mansoni diagnosis using pool sizes of 4, 8, and 12 was 85.9%, 79.5%, and 65.4%, respectively, when POC-CCA trace results were considered positive, and 61.5%, 47.4%, and 30.8% when POC-CCA trace results were considered negative. The modeled specificity ranged from 94.0–97.7% for the urine pooling strategies (when POC-CCA trace results were considered negative). The urine pooling strategy, regardless of the pool size, gave comparable and often superior classification performance to stool microscopy for the same number of tests. The urine pooling strategy with a pool size of 4 reduced the number of tests and total cost compared to classical stool microscopy. Conclusions/Significance This study introduces a method for rapid and efficient S. mansoni prevalence estimation through examining pooled urine samples with POC-CCA as an alternative to widely used stool microscopy. PMID:27504954

  3. Calibration and use of filter test facility orifice plates

    NASA Astrophysics Data System (ADS)

    Fain, D. E.; Selby, T. W.

    1984-07-01

    There are three official DOE filter test facilities. These test facilities are used by the DOE, and others, to test nuclear grade HEPA filters to provide Quality Assurance that the filters meet the required specifications. The filters are tested for both filter efficiency and pressure drop. In the test equipment, standard orifice plates are used to set the specified flow rates for the tests. There has existed a need to calibrate the orifice plates from the three facilities with a common calibration source to assure that the facilities have comparable tests. A project has been undertaken to calibrate these orifice plates. In addition to reporting the results of the calibrations of the orifice plates, the means for using the calibration results will be discussed. A comparison of the orifice discharge coefficients for the orifice plates used at the seven facilities will be given. The pros and cons for the use of mass flow or volume flow rates for testing will be discussed. It is recommended that volume flow rates be used as a more practical and comparable means of testing filters. The rationale for this recommendation will be discussed.
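
    Setting a specified test flow with a standard orifice plate follows the usual incompressible orifice equation Q = Cd·A·sqrt(2ΔP/ρ)/sqrt(1 - β⁴), where β is the orifice-to-pipe diameter ratio and Cd is the calibrated discharge coefficient. A small sketch with purely illustrative numbers, not the facilities' calibration data:

```python
import math

def orifice_volume_flow(cd, d_orifice, d_pipe, dp, rho):
    """Volumetric flow (m^3/s) through an orifice plate from the standard
    incompressible orifice equation; cd is the discharge coefficient."""
    beta = d_orifice / d_pipe
    area = math.pi * d_orifice ** 2 / 4.0
    return cd * area / math.sqrt(1.0 - beta ** 4) * math.sqrt(2.0 * dp / rho)

# Purely illustrative numbers (SI units), not calibration data from the facilities.
q = orifice_volume_flow(cd=0.62, d_orifice=0.05, d_pipe=0.10, dp=250.0, rho=1.2)
print(f"{q * 3600:.1f} m^3/h")
```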

  4. Prediction of clinical response to drugs in ovarian cancer using the chemotherapy resistance test (CTR-test).

    PubMed

    Kischkel, Frank Christian; Meyer, Carina; Eich, Julia; Nassir, Mani; Mentze, Monika; Braicu, Ioana; Kopp-Schneider, Annette; Sehouli, Jalid

    2017-10-27

    In order to validate whether the result of the Chemotherapy Resistance Test (CTR-Test) is able to predict the resistances or sensitivities of tumors in ovarian cancer patients to drugs, the CTR-Test result and the corresponding clinical response of individual patients were correlated retrospectively. Results were compared to previously recorded correlations. The CTR-Test was performed on tumor samples from 52 ovarian cancer patients for specific chemotherapeutic drugs. Patients were treated with monotherapies or drug combinations. Resistances were classified as extreme (ER), medium (MR) or slight (SR) resistance in the CTR-Test. Combination treatment resistances were transformed by a scoring system into these classifications. Accurate sensitivity prediction was accomplished in 79% of the cases and accurate prediction of resistance in 100% of the cases in the total data set. The data sets of single-agent treatment and drug-combination treatment were analyzed individually. Single-agent treatment led to accurate sensitivity prediction in 44% of the cases and drug combinations to 95% accuracy. The detection of resistances was 100% correct in both cases. ROC curve analysis indicates that the CTR-Test result correlates with the clinical response, at least for the combination chemotherapy. These values are similar to or better than the values from a publication from 1990. Chemotherapy resistance testing in vitro via the CTR-Test is able to accurately detect resistances in ovarian cancer patients. These numbers confirm and even exceed results published in 1990. Better sensitivity detection might be caused by a higher percentage of drug combinations tested in 2012 compared to 1990. Our study confirms the suitability of the CTR-Test for planning an efficient chemotherapeutic treatment for ovarian cancer patients.

  5. Power Hardware-in-the-Loop Evaluation of PV Inverter Grid Support on Hawaiian Electric Feeders: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Austin; Prabakar, Kumaraguru; Nagarajan, Adarsh

    As more grid-connected photovoltaic (PV) inverters become compliant with evolving interconnection requirements, there is increased interest from utilities in understanding how to best deploy advanced grid-support functions (GSF) in the field. One efficient and cost-effective method to examine such deployment options is to leverage power hardware-in-the-loop (PHIL) testing methods. Two Hawaiian Electric feeder models were converted to real-time models in the OPAL-RT real-time digital testing platform and integrated with models of GSF-capable PV inverters that were modeled from characterization test data. The integrated model was subsequently used in PHIL testing to evaluate the effects of different fixed power factor and volt-watt control settings on voltage regulation of the selected feeders. The results of this study were provided as inputs for field deployment and technical interconnection requirements for grid-connected PV inverters on the Hawaiian Islands.
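
    A volt-watt function of the kind evaluated here curtails active power linearly once the terminal voltage exceeds a start threshold. A minimal sketch with illustrative breakpoints, not the settings tested on the Hawaiian Electric feeders:

```python
import numpy as np

def volt_watt_limit(v_pu, v_start=1.06, v_full=1.10):
    """Fraction of available PV power allowed by a piecewise-linear volt-watt
    curve: full output below v_start, linear ramp to zero output at v_full."""
    return float(np.clip((v_full - v_pu) / (v_full - v_start), 0.0, 1.0))

for v in (1.00, 1.07, 1.09, 1.12):
    print(f"V = {v:.2f} pu -> allowed output = {volt_watt_limit(v):.0%}")
```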

  6. Software Verification of Orion Cockpit Displays

    NASA Technical Reports Server (NTRS)

    Biswas, M. A. Rafe; Garcia, Samuel; Prado, Matthew; Hossain, Sadad; Souris, Matthew; Morin, Lee

    2017-01-01

    NASA's latest spacecraft Orion is in the development process of taking humans deeper into space. Orion is equipped with three main displays to monitor and control the spacecraft. To ensure the software behind the glass displays operates without faults, rigorous testing is needed. To conduct such testing, the Rapid Prototyping Lab at NASA's Johnson Space Center along with the University of Texas at Tyler employed a software verification tool, EggPlant Functional by TestPlant. It is an image based test automation tool that allows users to create scripts to verify the functionality within a program. A set of edge key framework and Common EggPlant Functions were developed to enable creation of scripts in an efficient fashion. This framework standardized the way to code and to simulate user inputs in the verification process. Moreover, the Common EggPlant Functions can be used repeatedly in verification of different displays.

  7. Conception of a test bench to generate known and controlled conditions of refrigerant mass flow.

    PubMed

    Martins, Erick F; Flesch, Carlos A; Flesch, Rodolfo C C; Borges, Maikon R

    2011-07-01

    Refrigerant compressor performance tests play an important role in the evaluation of the energy characteristics of the compressor, enabling an increase in the quality, reliability, and efficiency of these products. Due to the nonexistence of a refrigerating capacity standard, it is common to use previously conditioned compressors for the intercomparison and evaluation of the temporal drift of compressor performance test panels. However, there are some limitations regarding the use of these specific compressors as standards. This study proposes the development of a refrigerating capacity standard which consists of a mass flow meter and a variable-capacity compressor, whose speed is set based on the mass flow rate measured by the meter. From the results obtained in the tests carried out on a bench specifically developed for this purpose, it was possible to validate the concept of a capacity standard. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Design, Project Execution, and Commissioning of the 1.8 K Superfluid Helium Refrigeration System for SRF Cryomodule Testing

    DOE PAGES

    Treite, P.; Nuesslein, U.; Jia, Yi; ...

    2015-07-15

    The Fermilab Cryomodule Test Facility (CMTF) provides a test bed to measure the performance of superconducting radiofrequency (SRF) cryomodules (CM). These SRF components form the basic building blocks of future high intensity accelerators such as the International Linear Collider (ILC) and a Muon Collider. Linde Kryotechnik AG and Linde Cryogenics have designed, constructed and commissioned the superfluid helium refrigerator needed to support SRF component testing at the CMTF facility. The hybrid refrigerator is designed to operate in a variety of modes and under a wide range of boundary conditions down to 1.8 Kelvin, as set by CM design. Special features of the refrigerator include the use of warm and cold compression and high-efficiency turbo expanders. This paper gives an overview of the wide range of challenging cooling requirements and of the design, fabrication and commissioning of the installed cryogenic system.

  9. Testing for detailed balance in a financial market

    NASA Astrophysics Data System (ADS)

    Fiebig, H. R.; Musgrove, D. P.

    2015-06-01

    We test a historical price-time series in a financial market (the NASDAQ 100 index) for a statistical property known as detailed balance. The presence of detailed balance would imply that the market can be modeled by a stochastic process based on a Markov chain, thus leading to equilibrium. In economic terms, a positive outcome of the test would support the efficient market hypothesis, a cornerstone of neo-classical economic theory. In contrast to the usage in prevalent economic theory the term equilibrium here is tied to the returns, rather than the price-time series. The test is based on an action functional S constructed from the elements of the detailed balance condition and the historical data set, and then analyzing S by means of simulated annealing. Checks are performed to verify the validity of the analysis method. We discuss the outcome of this analysis.
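
    Detailed balance for a Markov chain means the stationary flows satisfy pi_i P_ij = pi_j P_ji, so for a stationarily sampled series the matrix of observed transition counts should be symmetric up to noise. The sketch below checks that count symmetry on a discretized returns series; it uses placeholder Gaussian returns and is a much cruder diagnostic than the action-functional and simulated-annealing analysis described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(size=5000)            # placeholder for index returns

# Discretize the returns into a few states and count observed transitions.
bins = np.quantile(returns, [0.25, 0.5, 0.75])
states = np.digitize(returns, bins)        # state labels 0..3
counts = np.zeros((4, 4))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1

# Under detailed balance pi_i * P_ij = pi_j * P_ji, so for a stationary sample
# the transition-count matrix should be symmetric up to sampling noise.
asymmetry = np.abs(counts - counts.T).sum() / counts.sum()
print(f"relative asymmetry of transition counts: {asymmetry:.3f}")
```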

  10. Evaluation of Penicillin Allergy in the Hospitalized Patient: Opportunities for Antimicrobial Stewardship.

    PubMed

    Chen, Justin R; Khan, David A

    2017-06-01

    Penicillin allergy is often misdiagnosed and is associated with adverse consequences, but testing is infrequently done in the hospital setting. This article reviews historical and contemporary innovations in inpatient penicillin allergy testing and its impact on antimicrobial stewardship. Adoption of the electronic medical record allows rapid identification of admitted patients carrying a penicillin allergy diagnosis. Collaboration with clinical pharmacists and the development of computerized clinical guidelines facilitates increased testing and appropriate use of penicillin and related β-lactams. Education of patients and their outpatient providers is the key to retaining the benefits of penicillin allergy de-labeling. Penicillin allergy testing is feasible in the hospital and offers tangible benefits towards antimicrobial stewardship. Allergists should take the lead in this endeavor and work towards overcoming personnel limitations by partnering with other health care providers and incorporating technology that improves the efficiency of allergy evaluation.

  11. The cobas® 6800/8800 System: a new era of automation in molecular diagnostics.

    PubMed

    Cobb, Bryan; Simon, Christian O; Stramer, Susan L; Body, Barbara; Mitchell, P Shawn; Reisch, Natasa; Stevens, Wendy; Carmona, Sergio; Katz, Louis; Will, Stephen; Liesenfeld, Oliver

    2017-02-01

    Molecular diagnostics is a key component of laboratory medicine. Here, the authors review key triggers of ever-increasing automation in nucleic acid amplification testing (NAAT) with a focus on specific automated Polymerase Chain Reaction (PCR) testing and platforms such as the recently launched cobas® 6800 and cobas® 8800 Systems. The benefits of such automation for different stakeholders including patients, clinicians, laboratory personnel, hospital administrators, payers, and manufacturers are described. Areas Covered: The authors describe how molecular diagnostics has achieved total laboratory automation over time, rivaling clinical chemistry to significantly improve testing efficiency. Finally, the authors discuss how advances in automation decrease the development time for new tests enabling clinicians to more readily provide test results. Expert Commentary: The advancements described enable complete diagnostic solutions whereby specific test results can be combined with relevant patient data sets to allow healthcare providers to deliver comprehensive clinical recommendations in multiple fields ranging from infectious disease to outbreak management and blood safety solutions.

  12. A Tale of Two Chambers: Iterative Approaches and Lessons Learned from Life Support Systems Testing in Altitude Chambers

    NASA Technical Reports Server (NTRS)

    Callini, Gianluca

    2016-01-01

    With a brand new fire set ablaze by a serendipitous convergence of events ranging from a science fiction novel and movie ("The Martian"), to ground-breaking recent discoveries of flowing water on its surface, the drive for the journey to Mars seems to be in a higher gear than ever before. We are developing new spacecraft and support systems to take humans to the Red Planet, while scientists on Earth continue using the International Space Station as a laboratory to evaluate the effects of long duration space flight on the human body. Written from the perspective of a facility test director rather than a researcher, and using past and current life support systems tests as examples, this paper seeks to provide an overview on how facility teams approach testing, the kind of information they need to ensure efficient collaborations and successful tests, and how, together with researchers and principal investigators, we can collectively apply what we learn to execute future tests.

  13. A Static Burst Test for Composite Flywheel Rotors

    NASA Astrophysics Data System (ADS)

    Hartl, Stefan; Schulz, Alexander; Sima, Harald; Koch, Thomas; Kaltenbacher, Manfred

    2016-06-01

    Highly efficient and safe flywheels are an interesting technology for decentralized energy storage. To ensure all safety aspects, a static test method for the controlled initiation of a burst event in composite flywheel rotors is presented, producing nearly the same stress distribution as in the dynamic case of rotation at maximum speed. In addition to failure prediction using different maximum stress criteria and a safety factor, a set of tensile and compressive tests is carried out to identify the parameters of the carbon fiber reinforced plastic (CFRP) material used. The static finite element (FE) simulation results of the flywheel static burst test (FSBT) compare well to the quasistatic FE simulation results of the flywheel rotor under inertia loads. Furthermore, it is demonstrated that the presented method is a well controllable and observable way to test a high-speed flywheel energy storage system (FESS) rotor statically. Thereby, a much more expensive and dangerous dynamic spin-up test with possible uncertainties can be substituted.

  14. Process-based interpretation of conceptual hydrological model performance using a multinational catchment set

    NASA Astrophysics Data System (ADS)

    Poncelet, Carine; Merz, Ralf; Merz, Bruno; Parajka, Juraj; Oudin, Ludovic; Andréassian, Vazken; Perrin, Charles

    2017-08-01

    Most previous assessments of hydrologic model performance are fragmented, based on small numbers of catchments, different methods or time periods, and do not link the results to landscape or climate characteristics. This study uses large-sample hydrology to identify major catchment controls on daily runoff simulations. It is based on a conceptual lumped hydrological model (GR6J), a collection of 29 catchment characteristics, a multinational set of 1103 catchments located in Austria, France, and Germany, and four runoff model efficiency criteria. Two analyses are conducted to assess how features and criteria are linked: (i) a one-dimensional analysis based on the Kruskal-Wallis test and (ii) a multidimensional analysis based on regression trees and investigating the interplay between features. The catchment features most affecting model performance are the flashiness of precipitation and streamflow (computed as the ratio of the summed absolute day-to-day fluctuations to the total amount in a year), the seasonality of evaporation, the catchment area, and the catchment aridity. Nonflashy, nonseasonal, large, and nonarid catchments show the best performance for all the tested criteria. We argue that this higher performance is due to fewer nonlinear responses (higher correlation between precipitation and streamflow) and lower input and output variability for such catchments. Finally, we show that, compared to national sets, multinational sets increase the transferability of results because they explore a wider range of hydroclimatic conditions.
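
    The flashiness measure used here is simply the sum of absolute day-to-day changes divided by the annual total, applied to daily precipitation or streamflow. A minimal sketch on synthetic daily streamflow (illustrative data only, not a study catchment):

```python
import numpy as np

def flashiness(daily_series):
    """Sum of absolute day-to-day changes divided by the annual total, as the
    flashiness measure described for precipitation and streamflow."""
    daily_series = np.asarray(daily_series, dtype=float)
    return np.abs(np.diff(daily_series)).sum() / daily_series.sum()

# Illustrative daily streamflow for one year (mm/day).
rng = np.random.default_rng(1)
q = rng.gamma(shape=2.0, scale=1.5, size=365)
print(f"flashiness index: {flashiness(q):.2f}")
```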

  15. The use of Optical Character Recognition (OCR) in the digitisation of herbarium specimen labels

    PubMed Central

    Drinkwater, Robyn E.; Cubey, Robert W. N.; Haston, Elspeth M.

    2014-01-01

    Abstract At the Royal Botanic Garden Edinburgh (RBGE) the use of Optical Character Recognition (OCR) to aid the digitisation process has been investigated. This was tested using a herbarium specimen digitisation process with two stages of data entry. Records were initially batch-processed to add data extracted from the OCR text prior to being sorted based on Collector and/or Country. Using images of the specimens, a team of six digitisers then added data to the specimen records. To investigate whether the data from OCR aid the digitisation process, they completed a series of trials which compared the efficiency of data entry between sorted and unsorted batches of specimens. A survey was carried out to explore the opinion of the digitisation staff to the different sorting options. In total 7,200 specimens were processed. When compared to an unsorted, random set of specimens, those which were sorted based on data added from the OCR were quicker to digitise. Of the methods tested here, the most successful in terms of efficiency used a protocol which required entering data into a limited set of fields and where the records were filtered by Collector and Country. The survey and subsequent discussions with the digitisation staff highlighted their preference for working with sorted specimens, in which label layout, locations and handwriting are likely to be similar, and so a familiarity with the Collector or Country is rapidly established. PMID:25009435

  16. Applications of Derandomization Theory in Coding

    NASA Astrophysics Data System (ADS)

    Cheraghchi, Mahdi

    2011-07-01

    Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for construction of explicit capacity achieving codes. [This is a shortened version of the actual abstract in the thesis.]

  17. Evaluations and Comparisons of Treatment Effects Based on Best Combinations of Biomarkers with Applications to Biomedical Studies

    PubMed Central

    Chen, Xiwei; Yu, Jihnhee

    2014-01-01

    Abstract Many clinical and biomedical studies evaluate treatment effects based on multiple biomarkers that commonly consist of pre- and post-treatment measurements. Some biomarkers can show significant positive treatment effects, while other biomarkers can reflect no effects or even negative effects of the treatments, giving rise to a necessity to develop methodologies that may correctly and efficiently evaluate the treatment effects based on multiple biomarkers as a whole. In the setting of pre- and post-treatment measurements of multiple biomarkers, we propose to apply a receiver operating characteristic (ROC) curve methodology based on the best combination of biomarkers maximizing the area under the receiver operating characteristic curve (AUC)-type criterion among all possible linear combinations. In the particular case with independent pre- and post-treatment measurements, we show that the proposed method represents the well-known Su and Liu's (1993) result. Further, proceeding from derived best combinations of biomarkers' measurements, we propose an efficient technique via likelihood ratio tests to compare treatment effects. We show an extensive Monte Carlo study that confirms the superiority of the proposed test in comparison with treatment effects based on multiple biomarkers in a paired data setting. For practical applications, the proposed method is illustrated with a randomized trial of chlorhexidine gluconate on oral bacterial pathogens in mechanically ventilated patients as well as a treatment study for children with attention deficit-hyperactivity disorder and severe mood dysregulation. PMID:25019920
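
    Under multivariate normality, the combination in the Su and Liu (1993) sense that maximizes the AUC has weights proportional to (S0 + S1)^{-1}(mu1 - mu0), with a closed-form AUC. The sketch below applies this to simulated two-marker data; the data and group labels are illustrative, not the trial measurements.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
# Illustrative bivariate "biomarker" data for group 0 and group 1.
x0 = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=200)
x1 = rng.multivariate_normal([0.8, 0.5], [[1.2, 0.2], [0.2, 0.9]], size=200)

mu0, mu1 = x0.mean(axis=0), x1.mean(axis=0)
s0 = np.cov(x0, rowvar=False)
s1 = np.cov(x1, rowvar=False)

# Su and Liu (1993): under multivariate normality the linear combination that
# maximizes the AUC has weights proportional to (S0 + S1)^{-1} (mu1 - mu0).
delta = mu1 - mu0
w = np.linalg.solve(s0 + s1, delta)
auc = norm.cdf(np.sqrt(delta @ w))
print("combination weights:", np.round(w, 3), " estimated AUC:", round(float(auc), 3))
```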

  18. Towards automation of palynology 1: analysis of pollen shape and ornamentation using simple geometric measures, derived from scanning electron microscope images

    NASA Astrophysics Data System (ADS)

    Treloar, W. J.; Taylor, G. E.; Flenley, J. R.

    2004-12-01

    This is the first of a series of papers on the theme of automated pollen analysis. The automation of pollen analysis could result in numerous advantages for the reconstruction of past environments, making larger data sets practical and offering objectivity and fine-resolution sampling. There are also applications in apiculture and medicine. Previous work on the classification of pollen using texture measures has been successful with small numbers of pollen taxa. However, as the number of pollen taxa to be identified increases, more features may be required to achieve a successful classification. This paper describes the use of simple geometric measures to augment the texture measures. The feasibility of this new approach is tested using scanning electron microscope (SEM) images of 12 taxa of fresh pollen taken from reference material collected on Henderson Island, Polynesia. Pollen images were captured directly from an SEM connected to a PC. A threshold grey-level was set and binary images were then generated. Pollen edges were then located and the boundaries were traced using a chain coding system. A number of simple geometric variables were calculated directly from the chain code of the pollen and a variable selection procedure was used to choose the optimal subset to be used for classification. The efficiency of these variables was tested using a leave-one-out classification procedure. The system successfully split the original 12 taxa sample into five sub-samples containing no more than six pollen taxa each. The further subdivision of echinate pollen types was then attempted with a subset of four pollen taxa. A set of difference codes was constructed for a range of displacements along the chain code. From these difference codes probability variables were calculated. A variable selection procedure was again used to choose the optimal subset of probabilities that may be used for classification. The efficiency of these variables was again tested using a leave-one-out classification procedure. The proportion of correctly classified pollen ranged from 81% to 100% depending on the subset of variables used. The best set of variables had an overall classification rate averaging about 95%. This is comparable with the classification rates from the earlier texture analysis work for other types of pollen.
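
    Among the simplest geometric variables obtainable directly from a Freeman chain code are perimeter, enclosed area and circularity. The sketch below computes these for a closed 8-connected chain code; the code and the tiny square example are illustrative and are not the feature set used in the paper.

```python
import math

# 8-connected Freeman chain code directions (dx, dy), indexed 0..7.
MOVES = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_code_measures(code):
    """Perimeter, enclosed area and circularity (4*pi*area / perimeter^2) of a
    closed boundary given as a Freeman chain code."""
    perimeter, twice_area = 0.0, 0.0
    x, y = 0, 0
    for c in code:
        dx, dy = MOVES[c]
        perimeter += math.hypot(dx, dy)
        twice_area += x * dy - y * dx          # shoelace contribution of one step
        x, y = x + dx, y + dy
    area = abs(twice_area) / 2.0
    return perimeter, area, 4.0 * math.pi * area / perimeter ** 2

# Illustrative closed boundary (a 2x2 square), not a real pollen outline.
print(chain_code_measures([0, 0, 2, 2, 4, 4, 6, 6]))
```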

  19. Spatio-Temporal Features of Visual Exploration in Unilaterally Brain-Damaged Subjects with or without Neglect: Results from a Touchscreen Test

    PubMed Central

    Rabuffetti, Marco; Farina, Elisabetta; Alberoni, Margherita; Pellegatta, Daniele; Appollonio, Ildebrando; Affanni, Paola; Forni, Marco; Ferrarin, Maurizio

    2012-01-01

    Cognitive assessment in a clinical setting is generally made by pencil-and-paper tests, while computer-based tests enable the measurement and the extraction of additional performance indexes. Previous studies have demonstrated that in a research context exploration deficits occur also in patients without evidence of unilateral neglect at pencil-and-paper tests. The objective of this study is to apply a touchscreen-based cancellation test, feasible also in a clinical context, to large groups of control subjects and unilaterally brain-damaged patients, with and without unilateral spatial neglect (USN), in order to assess disturbances of the exploratory skills. A computerized cancellation test on a touchscreen interface was used for assessing the performance of 119 neurologically unimpaired control subjects and 193 patients with unilateral right or left hemispheric brain damage, either with or without USN. A set of performance indexes were defined including Latency, Proximity, Crossings and their spatial lateral gradients, and Preferred Search Direction. Classic outcome scores were computed as well. Results show statistically significant differences among groups (assumed p<0.05). Right-brain-damaged patients with USN were significantly slower (median latency per detected item was 1.18 s) and less efficient (about 13 search-path crossings) in the search than controls (median latency 0.64 s; about 3 crossings). Their preferred search direction (53.6% downward, 36.7% leftward) was different from the one in control patients (88.2% downward, 2.1% leftward). Right-brain-damaged patients without USN showed a significantly abnormal behavior (median latency 0.84 s, about 5 crossings, 83.3% downward and 9.1% leftward direction) situated half way between controls and right-brain-damaged patients with USN. Left-brain-damaged patients without USN were significantly slower and less efficient than controls (latency 1.19 s, about 7 crossings), preserving a normal preferred search direction (93.7% downward). Therefore, the proposed touchscreen-based assessment had evidenced disorders in spatial exploration also in patients without clinically diagnosed USN. PMID:22347489

  20. 49 CFR 239.301 - Operational (efficiency) tests.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false Operational (efficiency) tests. 239.301 Section... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PASSENGER TRAIN EMERGENCY PREPAREDNESS Operational (Efficiency) Tests; Inspection of Records and Recordkeeping § 239.301 Operational (efficiency) tests. (a) Each...
