Sample records for separate test set

  1. Locating hardware faults in a parallel computer

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-04-13

    Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.

  2. A Preliminary Assessment of Phase Separator Ground-Based and Reduced-Gravity Testing for ALS Systems

    NASA Technical Reports Server (NTRS)

    Hall, Nancy Rabel

    2006-01-01

    A viewgraph presentation of phase separator ground-based and reduced-gravity testing for Advanced Life Support (ALS) systems is shown. The topics include: 1) Multiphase Flow Technology Program; 2) Types of Separators; 3) MOBI Phase Separators; 4) Experiment set-up; and 5) Preliminary comparison/results.

  3. Analysis of Duplicated Multiple-Samples Rank Data Using the Mack-Skillings Test.

    PubMed

    Carabante, Kennet Mariano; Alonso-Marenco, Jose Ramon; Chokumnoyporn, Napapan; Sriwattana, Sujinda; Prinyawiwatkul, Witoon

    2016-07-01

    Appropriate analysis for duplicated multiple-samples rank data is needed. This study compared analysis of duplicated rank preference data using the Friedman versus Mack-Skillings tests. Panelists (n = 125) twice ranked 2 orange juice sets: a different-samples set (100%, 70%, vs. 40% juice) and a similar-samples set (100%, 95%, vs. 90%). These 2 sample sets were designed to give contrasting differences in preference. For each sample set, rank sum data were obtained from (1) averaged rank data of each panelist from the 2 replications (n = 125), (2) rank data of all panelists from each of the 2 separate replications (n = 125 each), (3) joint rank data of all panelists from the 2 replications (n = 125), and (4) rank data of all panelists pooled from the 2 replications (n = 250); rank data (1), (2), and (4) were separately analyzed by the Friedman test, while those from (3) were analyzed by the Mack-Skillings test. The effect of sample sizes (n = 10 to 125) was evaluated. For the similar-samples set, higher variations in rank data from the 2 replications were observed; therefore, results of the main effects were more inconsistent among methods and sample sizes. Regardless of analysis method, the larger the sample size, the higher the χ² value and the lower the P-value (testing H₀: all samples are not different). Analyzing rank data (2) separately by replication yielded inconsistent conclusions across sample sizes; hence this method is not recommended. The Mack-Skillings test was more sensitive than the Friedman test. Furthermore, it takes into account within-panelist variations and is more appropriate for analyzing duplicated rank data. © 2016 Institute of Food Technologists®
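    A minimal sketch of the pooled-replication analysis (rank data (4)) is possible with SciPy's Friedman test; the rank data below are simulated stand-ins, not the study's data, and the imposed preference trend is an assumption for illustration.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(42)

# Hypothetical rank data: each row is one panelist's ranking of three
# juice samples (1 = most preferred, 3 = least preferred).
n_panelists = 125
ranks = np.array([rng.permutation([1, 2, 3]) for _ in range(n_panelists)])
# Impose a preference trend for half the panelists to mimic the
# "different-samples" set, which was designed to show clear differences.
ranks[: n_panelists // 2] = [1, 2, 3]

# Friedman test across the three related samples (H0: no difference).
stat, p = friedmanchisquare(ranks[:, 0], ranks[:, 1], ranks[:, 2])
```

    The Mack-Skillings test is not available in SciPy; when the two replications are ranked jointly (rank data (3)), it would replace the Friedman call above.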

  4. Development of the Orion Crew-Service Module Umbilical Retention and Release Mechanism

    NASA Technical Reports Server (NTRS)

    Delap, Damon C.; Glidden, Joel Micah; Lamoreaux, Christopher

    2013-01-01

    The Orion CSM umbilical retention and release mechanism supports and protects all of the cross-module commodities between the spacecraft's crew and service modules. These commodities include explosive transfer lines, wiring for power and data, and flexible hoses for ground purge and life support systems. The mechanism employs a single separation interface which is retained with pyrotechnically actuated separation bolts and supports roughly two dozen electrical and fluid connectors. When module separation is commanded, either for nominal on-orbit CONOPS or in the event of an abort, the mechanism must release the separation interface and sever all commodity connections within milliseconds of command receipt. There are a number of unique and novel aspects of the design solution developed by the Orion mechanisms team. The design is highly modular and can easily be adapted to other vehicles/modules and alternate commodity sets. It will be flight tested during Orion's Exploration Flight Test 1 (EFT-1) in 2014, and the Orion team anticipates reuse of the design for all future missions. The design packages fluid, electrical, and ordnance disconnects in a single separation interface. It supports abort separations even in cases where aerodynamic loading prevents the deployment of the umbilical arm. Unlike the Apollo CSM umbilical, which was a destructive separation device, the Orion design is resettable, and flight units can be tested for separation performance prior to flight. Initial development testing of the mechanism's separation interface resulted in binding failures due to connector misalignments. The separation interface was redesigned with a robust linear guide system, and the connector separation and boom deployment were separated into two discretely sequenced events. These changes addressed the root cause of the binding failure by providing better control of connector alignment. The new design was tuned and validated analytically via Monte Carlo simulation.
    The analytical validation was followed by a repeat of the initial test suite, plus test cases at thermal extremes and test cases with imposed mechanical failures demonstrating fault tolerance. The mechanism was then exposed to the qualification vibration environment. Finally, separation testing was performed at full speed with live ordnance. All tests of the redesigned mechanism resulted in successful separation of the umbilical interface with adequate force margins and timing. The test data showed good agreement with the predictions of the Monte Carlo simulation. The simulation proved invaluable due to the number of variables affecting the separation and the uncertainty associated with each. The simulation allowed for rapid assessment of numerous trades and contingency scenarios, and can be easily reconfigured for varying commodity sets and connector layouts.

  5. ωB97X-V: A 10-parameter, range-separated hybrid, generalized gradient approximation density functional with nonlocal correlation, designed by a survival-of-the-fittest strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mardirossian, Narbe; Head-Gordon, Martin

    2013-12-18

    A 10-parameter, range-separated hybrid (RSH), generalized gradient approximation (GGA) density functional with nonlocal correlation (VV10) is presented in this paper. Instead of truncating the B97-type power series inhomogeneity correction factors (ICF) for the exchange, same-spin correlation, and opposite-spin correlation functionals uniformly, all 16 383 combinations of the linear parameters up to fourth order (m = 4) are considered. These functionals are individually fit to a training set and the resulting parameters are validated on a primary test set in order to identify the 3 optimal ICF expansions. Through this procedure, it is discovered that the functional that performs best on the training and primary test sets has 7 linear parameters, with 3 additional nonlinear parameters from range-separation and nonlocal correlation. The resulting density functional, ωB97X-V, is further assessed on a secondary test set, the parallel-displaced coronene dimer, as well as several geometry datasets. Finally, the basis set dependence and integration grid sensitivity of ωB97X-V are analyzed and documented in order to facilitate the use of the functional.
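    The survival-of-the-fittest selection can be sketched generically: enumerate every combination of candidate linear parameters, fit each by least squares on a training set, and keep the combination with the lowest test-set error. The toy polynomial basis and data below are assumptions for illustration, not the actual B97-type ICF expansions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: 6 candidate basis functions (stand-ins for power-series
# terms); the true model uses only 3 of them.
x = rng.uniform(-1, 1, size=200)
basis = np.vstack([x**k for k in range(6)]).T          # columns = candidates
y = 1.0 + 0.5 * x - 2.0 * x**3 + rng.normal(0, 0.05, x.size)

train, test = np.arange(0, 120), np.arange(120, 200)

best = (np.inf, None)
# Enumerate all 2^6 - 1 nonempty parameter combinations (cf. the 16 383
# combinations considered in the paper for 14 candidate parameters).
for r in range(1, 7):
    for combo in itertools.combinations(range(6), r):
        A = basis[np.ix_(train, combo)]
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        resid = basis[np.ix_(test, combo)] @ coef - y[test]
        rmse = float(np.sqrt(np.mean(resid**2)))
        if rmse < best[0]:
            best = (rmse, combo)
```

    Ranking by test-set rather than training-set error is what guards against the overfitted fits that a brute-force enumeration would otherwise favor.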

  6. Exploring the limit of accuracy for density functionals based on the generalized gradient approximation: Local, global hybrid, and range-separated hybrid functionals with and without dispersion corrections

    DOE PAGES

    Mardirossian, Narbe; Head-Gordon, Martin

    2014-03-25

    The limit of accuracy for semi-empirical generalized gradient approximation (GGA) density functionals is explored in this paper by parameterizing a variety of local, global hybrid, and range-separated hybrid functionals. The training methodology employed differs from conventional approaches in 2 main ways: (1) Instead of uniformly truncating the exchange, same-spin correlation, and opposite-spin correlation functional inhomogeneity correction factors, all possible fits up to fourth order are considered, and (2) Instead of selecting the optimal functionals based solely on their training set performance, the fits are validated on an independent test set and ranked based on their overall performance on the training and test sets. The 3 different methods of accounting for exchange are trained both with and without dispersion corrections (DFT-D2 and VV10), resulting in a total of 491 508 candidate functionals. For each of the 9 functional classes considered, the results illustrate the trade-off between improved training set performance and diminished transferability. Since all 491 508 functionals are uniformly trained and tested, this methodology allows the relative strengths of each type of functional to be consistently compared and contrasted. Finally, the range-separated hybrid GGA functional paired with the VV10 nonlocal correlation functional emerges as the most accurate form for the present training and test sets, which span thermochemical energy differences, reaction barriers, and intermolecular interactions involving lighter main group elements.

  7. ωB97M-V: A combinatorially optimized, range-separated hybrid, meta-GGA density functional with VV10 nonlocal correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mardirossian, Narbe; Head-Gordon, Martin

    2016-06-07

    A combinatorially optimized, range-separated hybrid, meta-GGA density functional with VV10 nonlocal correlation is presented in this paper. The final 12-parameter functional form is selected from approximately 10 × 10⁹ candidate fits that are trained on a training set of 870 data points and tested on a primary test set of 2964 data points. The resulting density functional, ωB97M-V, is further tested for transferability on a secondary test set of 1152 data points. For comparison, ωB97M-V is benchmarked against 11 leading density functionals including M06-2X, ωB97X-D, M08-HX, M11, ωM05-D, ωB97X-V, and MN15. Encouragingly, the overall performance of ωB97M-V on nearly 5000 data points clearly surpasses that of all of the tested density functionals. Finally, in order to facilitate the use of ωB97M-V, its basis set dependence and integration grid sensitivity are thoroughly assessed, and recommendations that take into account both efficiency and accuracy are provided.

  8. Experimental Data from the Benchmark SuperCritical Wing Wind Tunnel Test on an Oscillating Turntable

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Piatak, David J.

    2013-01-01

    The Benchmark SuperCritical Wing (BSCW) wind tunnel model served as a semi-blind test case for the 2012 AIAA Aeroelastic Prediction Workshop (AePW). The BSCW was chosen as a test case due to its geometric simplicity and flow physics complexity. The data sets examined include unforced system information and forced pitching oscillations. The aerodynamic challenges presented by this AePW test case include a strong shock that was observed to be unsteady even for the unforced system cases, shock-induced separation, and trailing edge separation. The current paper quantifies these characteristics at the AePW test condition and at a suggested benchmarking test condition. General characteristics of the model's behavior are examined for the entire available data set.

  9. Reliability, precision, and measurement in the context of data from ability tests, surveys, and assessments

    NASA Astrophysics Data System (ADS)

    Fisher, W. P., Jr.; Elbaum, B.; Coulter, A.

    2010-07-01

    Reliability coefficients indicate the proportion of total variance attributable to differences among measures separated along a quantitative continuum by a testing, survey, or assessment instrument. Reliability is usually considered to be influenced by both the internal consistency of a data set and the number of items, though textbooks and research papers rarely evaluate the extent to which these factors independently affect the data in question. Probabilistic formulations of the requirements for unidimensional measurement separate consistency from error by modelling individual response processes instead of group-level variation. The utility of this separation is illustrated via analyses of small sets of simulated data, and of subsets of data from a 78-item survey of over 2,500 parents of children with disabilities. Measurement reliability ultimately concerns the structural invariance specified in models requiring sufficient statistics, parameter separation, unidimensionality, and other qualities that historically have made quantification simple, practical, and convenient for end users. The paper concludes with suggestions for a research program aimed at focusing measurement research more on the calibration and wide dissemination of tools applicable to individuals, and less on the statistical study of inter-variable relations in large data sets.
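    The interplay the abstract describes between internal consistency and item count can be demonstrated with Cronbach's alpha on simulated data; the latent-trait model and numbers below are illustrative assumptions, not the survey data analyzed in the paper.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(0.0, 1.0, size=(500, 1))   # latent trait per person

def simulate(n_items, noise=1.0):
    # Each item score = shared trait + independent item noise; the noise
    # level fixes the internal consistency per item.
    return ability + rng.normal(0.0, noise, size=(500, n_items))

# Same per-item consistency, different instrument lengths: alpha rises
# with item count alone, which is why the two factors need separating.
alpha_short = cronbach_alpha(simulate(5))
alpha_long = cronbach_alpha(simulate(40))
```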

  10. SELDI-TOF-MS proteomic profiling of serum, urine, and amniotic fluid in neural tube defects.

    PubMed

    Liu, Zhenjiang; Yuan, Zhengwei; Zhao, Qun

    2014-01-01

    Neural tube defects (NTDs) are common birth defects for which specific biomarkers are needed. The purpose of this pilot study was to determine whether protein profiles in NTD-mothers differ from those of normal controls using SELDI-TOF-MS. The ProteinChip Biomarker System was used to evaluate 82 maternal serum samples, 78 urine samples, and 76 amniotic fluid samples. The validity of the classification tree was then challenged with a blind test set including another 20 NTD-mothers and 18 controls for serum samples, another 19 NTD-mothers and 17 controls for urine samples, and another 20 NTD-mothers and 17 controls for amniotic fluid samples. Eight proteins detected in serum samples were up-regulated and four proteins were down-regulated in the NTD group. Four proteins detected in urine samples were up-regulated and one protein was down-regulated in the NTD group. Six proteins detected in amniotic fluid samples were up-regulated and one protein was down-regulated in the NTD group. The classification tree for serum samples separated NTDs from healthy individuals, achieving a sensitivity of 91% and a specificity of 97% in the training set, and a sensitivity of 90%, a specificity of 97%, and a positive predictive value of 95% in the test set. The classification tree for urine samples separated NTDs from controls, achieving a sensitivity of 95% and a specificity of 94% in the training set, and a sensitivity of 89%, a specificity of 82%, and a positive predictive value of 85% in the test set. The classification tree for amniotic fluid samples separated NTDs from controls, achieving a sensitivity of 93% and a specificity of 89% in the training set, and a sensitivity of 90%, a specificity of 88%, and a positive predictive value of 90% in the test set. These results suggest that SELDI-TOF-MS provides an additional method for the detection of NTD pregnancies.

  11. Development of the Orion Crew-Service Module Umbilical Retention and Release Mechanism

    NASA Technical Reports Server (NTRS)

    Delap, Damon; Glidden, Joel; Lamoreaux, Christopher

    2013-01-01

    The Orion Crew-Service Module umbilical retention and release mechanism supports, protects and disconnects all of the cross-module commodities between the spacecraft's crew and service modules. These commodities include explosive transfer lines, wiring for power and data, and flexible hoses for ground purge and life support systems. Initial development testing of the mechanism's separation interface resulted in binding failures due to connector misalignments. The separation interface was redesigned with a robust linear guide system, and the connector separation and boom deployment were separated into two discretely sequenced events. Subsequent analysis and testing verified that the design changes corrected the binding. This umbilical separation design will be used on Exploration Flight Test 1 (EFT-1) as well as all future Orion flights. The design is highly modular and can easily be adapted to other vehicles/modules and alternate commodity sets.

  12. Separation-Individuation Difficulties and Cognitive-Behavioral Indicators of Eating Disorders among College Women.

    ERIC Educational Resources Information Center

    Friedlander, Myrna L.; Siegel, Sheri M.

    1990-01-01

    Tested theoretical link between difficulties with separation-individuation and cognitive-behavioral indicators characteristic of anorexia nervosa and bulimia. Assessed 124 college women using three self-report measures. Results suggest strong relation between 2 sets of variables and support theoretical assertions about factors that contribute to…

  13. A support vector machine based test for incongruence between sets of trees in tree space

    PubMed Central

    2012-01-01

    Background The increased use of multi-locus data sets for phylogenetic reconstruction has increased the need to determine whether a set of gene trees significantly deviate from the phylogenetic patterns of other genes. Such unusual gene trees may have been influenced by other evolutionary processes such as selection, gene duplication, or horizontal gene transfer. Results Motivated by this problem we propose a nonparametric goodness-of-fit test for two empirical distributions of gene trees, and we developed the software GeneOut to estimate a p-value for the test. Our approach maps trees into a multi-dimensional vector space and then applies support vector machines (SVMs) to measure the separation between two sets of pre-defined trees. We use a permutation test to assess the significance of the SVM separation. To demonstrate the performance of GeneOut, we applied it to the comparison of gene trees simulated within different species trees across a range of species tree depths. Applied directly to sets of simulated gene trees with large sample sizes, GeneOut was able to detect very small differences between two sets of gene trees generated under different species trees. Our statistical test can also include tree reconstruction into its test framework through a variety of phylogenetic optimality criteria. When applied to DNA sequence data simulated from different sets of gene trees, results in the form of receiver operating characteristic (ROC) curves indicated that GeneOut performed well in the detection of differences between sets of trees with different distributions in a multi-dimensional space. Furthermore, it controlled false positive and false negative rates very well, indicating a high degree of accuracy. Conclusions The non-parametric nature of our statistical test provides fast and efficient analyses, and makes it an applicable test for any scenario where evolutionary or other factors can lead to trees with different multi-dimensional distributions.
The software GeneOut is freely available under the GNU public license. PMID:22909268
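    The core of the GeneOut approach, measuring SVM separation between two sets of vectorized trees and assessing it with a permutation test, can be sketched with scikit-learn; the vectors below are random stand-ins for trees mapped into a multi-dimensional space, and the shift between the two sets is an assumption for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for two sets of trees mapped into a 10-dimensional vector
# space; set B is shifted, mimicking gene trees from a different species tree.
set_a = rng.normal(0.0, 1.0, size=(60, 10))
set_b = rng.normal(0.6, 1.0, size=(60, 10))
X = np.vstack([set_a, set_b])
y = np.array([0] * 60 + [1] * 60)

def separation(X, y):
    # Training accuracy of a linear SVM as a simple separation score.
    return SVC(kernel="linear").fit(X, y).score(X, y)

observed = separation(X, y)
# Permutation test: how often does random label shuffling separate as well?
perms = [separation(X, rng.permutation(y)) for _ in range(100)]
p_value = (1 + sum(s >= observed for s in perms)) / (1 + len(perms))
```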

  14. Test fixture design for boron-aluminum and beryllium test panels

    NASA Technical Reports Server (NTRS)

    Breaux, C. G.

    1973-01-01

    A detailed description of the test fixture design and the backup analysis of the fixture assembly and its components are presented. The test fixture is required for the separate testing of two boron-aluminum and two beryllium compression panels. This report is presented in conjunction with a complete set of design drawings on the test fixture system.

  15. Automobile driver on-road performance test. Volume 2, Administrator's manual

    DOT National Transportation Integrated Search

    1981-09-30

    This report provides procedures for setting up, administering, and scoring the Automobile Driver On-Road Performance Test (ADOPT). The ADOPT checks 21 separate driving performances. Performances are checked at pre-determined locations along a 10-minu...

  16. Automatic sleep stage classification using two facial electrodes.

    PubMed

    Virkkala, Jussi; Velin, Riitta; Himanen, Sari-Leena; Värri, Alpo; Müller, Kiti; Hasan, Joel

    2008-01-01

    Standard sleep stage classification is based on visual analysis of central EEG, EOG and EMG signals. Automatic analysis with a reduced number of sensors has been studied as an easy alternative to the standard. In this study, a single-channel electro-oculography (EOG) algorithm was developed for separation of wakefulness, SREM, light sleep (S1, S2) and slow wave sleep (S3, S4). The algorithm was developed and tested with 296 subjects. Additional validation was performed on 16 subjects using a low weight single-channel Alive Monitor. In the validation study, subjects attached the disposable EOG electrodes themselves at home. In separating the four stages total agreement (and Cohen's Kappa) in the training data set was 74% (0.59), in the testing data set 73% (0.59) and in the validation data set 74% (0.59). Self-applicable electro-oculography with only two facial electrodes was found to provide reasonable sleep stage information.
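    Figures such as "74% (0.59)" pair raw agreement with Cohen's kappa, which discounts agreement expected by chance; a quick illustration with scikit-learn on made-up epoch-by-epoch labels (the disagreement rate is an assumption chosen to roughly mimic the reported agreement):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

stages = ["wake", "SREM", "light", "SWS"]
rng = np.random.default_rng(3)

# Hypothetical scoring of 1000 epochs: visual reference vs. EOG algorithm.
reference = rng.choice(stages, size=1000)
automatic = reference.copy()
flip = rng.random(1000) < 0.26          # corrupt ~26% of epochs
automatic[flip] = rng.choice(stages, size=flip.sum())

agreement = float(np.mean(reference == automatic))
kappa = cohen_kappa_score(reference, automatic)
```

    Kappa is always below raw agreement whenever chance agreement is nonzero, which is why both numbers are reported together.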

  17. Computer-aided detection of bladder wall thickening in CT urography (CTU)

    NASA Astrophysics Data System (ADS)

    Cha, Kenny H.; Hadjiiski, Lubomir M.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.; Weizer, Alon Z.; Gordon, Marshall N.; Samala, Ravi K.

    2018-02-01

    We are developing a computer-aided detection system for bladder cancer in CT urography (CTU). Bladder wall thickening is a manifestation of bladder cancer and its detection is more challenging than the detection of bladder masses. We first segmented the inner and outer bladder walls using our method that combined deep-learning convolutional neural network with level sets. The non-contrast-enhanced region was separated from the contrast-enhanced region with a maximum-intensity-projection-based method. The non-contrast region was smoothed and gray level threshold was applied to the contrast and non-contrast regions separately to extract the bladder wall and potential lesions. The bladder wall was transformed into a straightened thickness profile, which was analyzed to identify regions of wall thickening candidates. Volume-based features of the wall thickening candidates were analyzed with linear discriminant analysis (LDA) to differentiate bladder wall thickenings from false positives. A data set of 112 patients, 87 with wall thickening and 25 with normal bladders, was collected retrospectively with IRB approval, and split into independent training and test sets. Of the 57 training cases, 44 had bladder wall thickening and 13 were normal. Of the 55 test cases, 43 had wall thickening and 12 were normal. The LDA classifier was trained with the training set and evaluated with the test set. FROC analysis showed that the system achieved sensitivities of 93.2% and 88.4% for the training and test sets, respectively, at 0.5 FPs/case.
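    The final classification step, an LDA classifier trained on candidate features and evaluated on an independent test set, can be sketched as follows; the synthetic "volume-based features", their statistics, and the candidate counts are assumptions, not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)

def make_set(n_pos, n_neg):
    # Hypothetical 4-dimensional volume-based features: thickening
    # candidates (label 1) are shifted relative to false positives (label 0).
    pos = rng.normal(1.5, 1.0, size=(n_pos, 4))
    neg = rng.normal(0.0, 1.0, size=(n_neg, 4))
    return np.vstack([pos, neg]), np.array([1] * n_pos + [0] * n_neg)

X_train, y_train = make_set(44, 130)   # training candidates
X_test, y_test = make_set(43, 120)     # independent test candidates

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
pred = lda.predict(X_test)
sensitivity = float(np.mean(pred[y_test == 1] == 1))
```

    Keeping the test cases entirely out of training, as the study does with its patient-level split, is what makes the reported test sensitivity an honest estimate.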

  18. Computer-aided detection of bladder masses in CT urography (CTU)

    NASA Astrophysics Data System (ADS)

    Cha, Kenny H.; Hadjiiski, Lubomir M.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.; Weizer, Alon; Samala, Ravi K.

    2017-03-01

    We are developing a computer-aided detection system for bladder cancer in CT urography (CTU). We have previously developed methods for detection of bladder masses within the contrast-enhanced and the non-contrast-enhanced regions of the bladder individually. In this study, we investigated methods for detection of bladder masses within the entire bladder. The bladder was segmented using our method that combined deep-learning convolutional neural network with level sets. The non-contrast-enhanced region was separated from the contrast-enhanced region with a maximum-intensity-projection-based method. The non-contrast region was smoothed and gray level threshold was applied to the contrast and non-contrast regions separately to extract the bladder wall and potential masses. The bladder wall was transformed into a straightened thickness profile, which was analyzed to identify lesion candidates in a prescreening step. The candidates were mapped back to the 3D CT volume and segmented using our auto-initialized cascaded level set (AI-CALS) segmentation method. Twenty-seven morphological features were extracted for each candidate. A data set of 57 patients with 71 biopsy-proven bladder lesions was used, which was split into independent training and test sets: 42 training cases with 52 lesions, and 15 test cases with 19 lesions. Using the training set, feature selection was performed and a linear discriminant (LDA) classifier was designed to merge the selected features for classification of bladder lesions and false positives. The trained classifier was evaluated with the test set. FROC analysis showed that the system achieved a sensitivity of 86.5% at 3.3 FPs/case for the training set, and 84.2% at 3.7 FPs/case for the test set.

  19. An enhanced trend surface analysis equation for regional-residual separation of gravity data

    NASA Astrophysics Data System (ADS)

    Obasi, A. I.; Onwuemesi, A. G.; Romanus, O. M.

    2016-12-01

    Trend surface analysis is a geological term for a mathematical technique which separates a given map set into a regional component and a local component. This work has extended the steps for the derivation of the constants in the trend surface analysis equation from the familiar matrix and simultaneous-equation form to a more simplified and easily achievable format. To achieve this, matrix inversion was applied to the existing equations, and the outcome was tested for suitability using a large gravity data set acquired from the Anambra Basin, south-eastern Nigeria. Tabulation of the field data set was done using a Microsoft Excel spreadsheet, while gravity maps were generated from the data set using Oasis Montaj software. A comparison of the residual gravity map produced using the new equations with its software-derived counterpart has shown that the former has a higher enhancing capacity than the latter. This equation has shown strong suitability for application in the separation of gravity data sets into their regional and residual components.
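    A first-order trend surface fit reduces to ordinary least squares on the design matrix [1, x, y], which is equivalent to the matrix-inversion step the paper simplifies; a sketch with synthetic gravity stations (the trend coefficients and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic gravity stations: a planar regional trend plus local anomalies.
x, y = rng.uniform(0, 100, 300), rng.uniform(0, 100, 300)
regional = 20.0 + 0.30 * x - 0.15 * y
local = rng.normal(0.0, 0.5, 300)
g = regional + local

# First-order trend surface T(x, y) = a + b*x + c*y, fit by least squares.
A = np.column_stack([np.ones_like(x), x, y])
coeffs, *_ = np.linalg.lstsq(A, g, rcond=None)
trend = A @ coeffs          # regional component
residual = g - trend        # local component for interpretation
```

    Higher-order surfaces follow the same pattern with extra columns (x², xy, y², …) in the design matrix.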

  20. Development of Conceptually Focused Early Numeracy Skill Indicators

    ERIC Educational Resources Information Center

    Methe, Scott A.; Begeny, John C.; Leary, Lemontrel L.

    2011-01-01

    This research was conducted to evaluate the technical properties of a set of early numeracy CBM tests that were designed to operationalize early numeric concepts. Data were collected over the course of a school year from 113 kindergarten and first-grade children using nine separate tests with three alternative forms. In addition, test-retest…

  1. Measuring CAMD technique performance. 2. How "druglike" are drugs? Implications of Random test set selection exemplified using druglikeness classification models.

    PubMed

    Good, Andrew C; Hermsmeier, Mark A

    2007-01-01

    Research into the advancement of computer-aided molecular design (CAMD) has a tendency to focus on the discipline of algorithm development. Such efforts are often wrought to the detriment of the data set selection and analysis used in said algorithm validation. Here we highlight the potential problems this can cause in the context of druglikeness classification. More rigorous efforts are applied to the selection of decoy (nondruglike) molecules from the ACD. Comparisons are made between model performance using the standard technique of random test set creation with test sets derived from explicit ontological separation by drug class. The dangers of viewing druglike space as sufficiently coherent to permit simple classification are highlighted. In addition the issues inherent in applying unfiltered data and random test set selection to (Q)SAR models utilizing large and supposedly heterogeneous databases are discussed.
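    The contrast between random test-set creation and explicit ontological separation corresponds to random versus group-wise splitting; a sketch with scikit-learn, where the molecules and "drug classes" are hypothetical group labels, not the ACD data:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit, train_test_split

rng = np.random.default_rng(0)

# 200 hypothetical molecules, each assigned to one of 8 "drug classes".
X = rng.normal(size=(200, 5))
classes = rng.integers(0, 8, size=200)

# Random split: members of the same drug class land on both sides,
# letting the model "peek" at near-duplicates of its test molecules.
idx_train, idx_test = train_test_split(np.arange(200), test_size=0.25,
                                       random_state=0)
random_overlap = set(classes[idx_train]) & set(classes[idx_test])

# Ontological split: whole classes are held out together.
gss = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
tr, te = next(gss.split(X, groups=classes))
group_overlap = set(classes[tr]) & set(classes[te])
```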

  2. Li-ion Battery Separators, Mechanical Integrity and Failure Mechanisms Leading to Soft and Hard Internal Shorts

    PubMed Central

    Zhang, Xiaowei; Sahraei, Elham; Wang, Kai

    2016-01-01

    Separator integrity is an important factor in preventing internal short circuit in lithium-ion batteries. Local penetration tests (nail or conical punch) often produce presumably sporadic results, where in exactly similar cell and test set-ups one cell goes to thermal runaway while the other shows minimal reactions. We conducted an experimental study of the separators under mechanical loading, and discovered two distinct deformation and failure mechanisms, which could explain the difference in short circuit characteristics of otherwise similar tests. Additionally, by investigation of failure modes, we provided a hypothesis about the process of formation of local “soft short circuits” in cells with undetectable failure. Finally, we proposed a criterion for predicting onset of soft short from experimental data. PMID:27581185

  3. FJ44 Turbofan Engine Test at NASA Glenn Research Center's Aero-Acoustic Propulsion Laboratory

    NASA Technical Reports Server (NTRS)

    Lauer, Joel T.; McAllister, Joseph; Loew, Raymond A.; Sutliff, Daniel L.; Harley, Thomas C.

    2009-01-01

    A Williams International FJ44-3A 3000-lb thrust class turbofan engine was tested in the NASA Glenn Research Center s Aero-Acoustic Propulsion Laboratory. This report presents the test set-up and documents the test conditions. Farfield directivity, in-duct unsteady pressures, duct mode data, and phased-array data were taken and are reported separately.

  4. Scale separation for multi-scale modeling of free-surface and two-phase flows with the conservative sharp interface method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, L.H., E-mail: Luhui.Han@tum.de; Hu, X.Y., E-mail: Xiangyu.Hu@tum.de; Adams, N.A., E-mail: Nikolaus.Adams@tum.de

    In this paper we present a scale separation approach for multi-scale modeling of free-surface and two-phase flows with complex interface evolution. By performing a stimulus-response operation on the level-set function representing the interface, separation of resolvable and non-resolvable interface scales is achieved efficiently. Uniform positive and negative shifts of the level-set function are used to determine non-resolvable interface structures. Non-resolved interface structures are separated from the resolved ones and can be treated by a mixing model or a Lagrangian-particle model in order to preserve mass. Resolved interface structures are treated by the conservative sharp-interface model. Since the proposed scale separation approach does not rely on topological information, unlike in previous work, it can be implemented in a straightforward fashion into a given level set based interface model. A number of two- and three-dimensional numerical tests demonstrate that the proposed method is able to cope with complex interface variations accurately and significantly increases robustness against underresolved interface structures.
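    The shift-based detection can be illustrated in one dimension: a structure is non-resolvable if a uniform positive shift of the level-set function removes it entirely. The geometry and shift magnitude below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from scipy import ndimage

x = np.linspace(0, 10, 1001)
# Signed-distance-like level set (negative inside): one large "droplet"
# of half-width 1.0 and one tiny one of half-width 0.02.
phi = np.minimum(np.abs(x - 3.0) - 1.0,
                 np.abs(x - 7.0) - 0.02)

eps = 0.05  # separation scale (assumed)
labels, n = ndimage.label(phi < 0)
non_resolvable = []
for k in range(1, n + 1):
    # A structure vanishes under the positive shift if phi + eps >= 0
    # everywhere inside it, marking it as below the resolvable scale.
    if np.all(phi[labels == k] + eps >= 0):
        non_resolvable.append(k)
```

    The tiny droplet is flagged for the mixing or Lagrangian-particle treatment, while the large one stays with the sharp-interface model; note that no connectivity or topology information beyond simple labeling is required.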

  5. Separation of left and right lungs using 3-dimensional information of sequential computed tomography images and a guided dynamic programming algorithm.

    PubMed

    Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin

    2011-01-01

    This article presents a new computerized scheme that aims to accurately and robustly separate the left and right lungs on computed tomography (CT) examinations. We developed and tested a method that separates the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, even in cases of especially severe and multiple connections. The scheme successfully identified and separated all 827 connections on the 4034 CT images in an independent testing data set of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming while preventing the separation boundary from cutting into normal lung tissue. The proposed method is able to robustly and accurately disconnect all connections between the left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing.
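    The guided dynamic programming step can be sketched as a pinned minimum-cost seam search (a toy illustration, not the authors' implementation; the synthetic image, the start/end columns, and the cost convention are all assumptions):

```python
import numpy as np

# Toy "CT" cost image: low values = air, and a high-intensity ridge at
# column 5 plays the role of the junction line between the two lungs.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.1, size=(8, 11))
img[:, 5] += 1.0

def guided_seam(cost, start_col, end_col):
    """Minimum-cost top-to-bottom path, moving to one of three neighbouring
    columns per row, pinned to given start and end columns."""
    n_rows, n_cols = cost.shape
    INF = 1e9
    acc = np.full(cost.shape, INF)
    acc[0, start_col] = cost[0, start_col]
    back = np.zeros(cost.shape, dtype=int)
    for r in range(1, n_rows):
        for c in range(n_cols):
            best, arg = INF, c
            for p in (c - 1, c, c + 1):
                if 0 <= p < n_cols and acc[r - 1, p] < best:
                    best, arg = acc[r - 1, p], p
            acc[r, c] = cost[r, c] + best
            back[r, c] = arg
    path = [end_col]                     # backtrack from the pinned end point
    for r in range(n_rows - 1, 0, -1):
        path.append(int(back[r, path[-1]]))
    return path[::-1]

# The separation boundary should follow the *highest*-intensity ridge
# (tissue between the lungs), so minimise the negated image.
path = guided_seam(-img, start_col=5, end_col=5)
print(path)  # [5, 5, 5, 5, 5, 5, 5, 5]
```

    Pinning the start and end points is what makes the search "guided": the path cannot wander into low-cost lung tissue away from the junction.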

  6. Optimal SVM parameter selection for non-separable and unbalanced datasets.

    PubMed

    Jiang, Peng; Missoum, Samy; Chen, Zhao

    2014-10-01

    This article presents a study of three validation metrics used for the selection of optimal parameters of a support vector machine (SVM) classifier in the case of non-separable and unbalanced datasets. This situation is often encountered when the data is obtained experimentally or clinically. The three metrics selected in this work are the area under the ROC curve (AUC), accuracy, and balanced accuracy. These validation metrics are tested using computational data only, which enables the creation of fully separable sets of data. This way, non-separable datasets, representative of a real-world problem, can be created by projection onto a lower dimensional sub-space. The knowledge of the separable dataset, unknown in real-world problems, provides a reference to compare the three validation metrics using a quantity referred to as the "weighted likelihood". As an application example, the study investigates a classification model for hip fracture prediction. The data is obtained from a parameterized finite element model of a femur. The performance of the various validation metrics is studied for several levels of separability, ratios of unbalance, and training set sizes.
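    The three validation metrics can be computed directly (a minimal NumPy sketch; the toy scores and the 0.5 decision threshold are invented to show how plain accuracy can flatter an unbalanced problem while AUC and balanced accuracy react differently):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity.
    (Ties among equal scores are not rank-averaged in this sketch.)"""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def accuracy(pred, labels):
    return float(np.mean(pred == labels))

def balanced_accuracy(pred, labels):
    tpr = np.mean(pred[labels == 1] == 1)   # sensitivity
    tnr = np.mean(pred[labels == 0] == 0)   # specificity
    return float(0.5 * (tpr + tnr))

# Unbalanced toy problem: 8 negatives, 2 positives; the scores themselves
# separate the classes perfectly, but the threshold is mis-set.
labels = np.array([0] * 8 + [1] * 2)
scores = np.array([.1, .2, .2, .3, .3, .4, .6, .7, .8, .9])
pred = (scores >= 0.5).astype(int)

print(accuracy(pred, labels))             # 0.8
print(balanced_accuracy(pred, labels))    # 0.875
print(auc(scores, labels))                # 1.0 -- threshold-free, sees separability

# A useless all-negative classifier gets the same accuracy on this unbalance:
print(accuracy(np.zeros_like(labels), labels))  # 0.8
```

    The last line is the point of the comparison: on unbalanced data, accuracy alone cannot distinguish a mis-thresholded but informative classifier from a degenerate one.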

  7. Description of a Computer Program Written for Approach and Landing Test Post Flight Data Extraction of Proximity Separation Aerodynamic Coefficients and Aerodynamic Data Base Verification

    NASA Technical Reports Server (NTRS)

    Homan, D. J.

    1977-01-01

    A computer program written to calculate the proximity aerodynamic force and moment coefficients of the Orbiter/Shuttle Carrier Aircraft (SCA) vehicles based on flight instrumentation is described. The ground reduced aerodynamic coefficients and instrumentation errors (GRACIE) program was developed as a tool to aid in flight test verification of the Orbiter/SCA separation aerodynamic data base. The program calculates the force and moment coefficients of each vehicle in proximity to the other, using the load measurement system data, flight instrumentation data and the vehicle mass properties. The uncertainty in each coefficient is determined, based on the quoted instrumentation accuracies. A subroutine manipulates the Orbiter/747 Carrier Separation Aerodynamic Data Book to calculate a comparable set of predicted coefficients for comparison to the calculated flight test data.

  8. A broad survey of recombination in animal mitochondria.

    PubMed

    Piganeau, Gwenaël; Gardner, Michael; Eyre-Walker, Adam

    2004-12-01

    Recombination in mitochondrial DNA (mtDNA) remains a controversial topic. Here we present a survey of 279 animal mtDNA data sets, of which 12 were from asexual species. Using four separate tests, we show that there is widespread evidence of recombination; for one test as many as 14.2% of the data sets reject a model of clonal inheritance and in several data sets, including primates, the recombinants can be identified visually. We show that none of the tests give significant results for obligate clonal species (apomictic pathogens) and that the sexual species show significantly greater evidence of recombination than asexual species. For some data sets, such as Macaca nemestrina, additional data sets suggest that the recombinants are not artifacts. For others, it cannot be determined whether the recombinants are real or produced by laboratory error. Either way, the results have important implications for how mtDNA is sequenced and used.
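    One classic recombination detector of this kind is the four-gamete test (shown here purely as an illustration; the abstract does not specify whether it is among the four tests the authors used). Under an infinite-sites model, a pair of biallelic sites exhibiting all four haplotypes cannot arise by mutation alone:

```python
from itertools import combinations

# Toy alignment: rows = mtDNA sequences, columns = polymorphic sites.
seqs = [
    "AATT",
    "AACT",
    "GACT",
    "GATT",
]

def four_gamete(seqs, i, j):
    """True if sites i and j jointly show all four haplotypes; under an
    infinite-sites model this cannot arise by mutation alone and therefore
    signals recombination (or recurrent mutation)."""
    return len({(s[i], s[j]) for s in seqs}) == 4

n_sites = len(seqs[0])
incompatible = [(i, j) for i, j in combinations(range(n_sites), 2)
                if four_gamete(seqs, i, j)]
print(incompatible)  # [(0, 2)] -- sites 0 and 2 carry all four haplotypes
```

    In a clonally inherited genome no site pair should be incompatible, which is why obligate asexual species serve as the negative control in the survey.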

  9. Rhesus Monkeys (Macaca Mulatta) Maintain Learning Set Despite Second-Order Stimulus-Response Spatial Discontiguity

    ERIC Educational Resources Information Center

    Beran, Michael J.; Washburn, David A.; Rumbaugh, Duane M.

    2007-01-01

    In many discrimination-learning tests, spatial separation between stimuli and response loci disrupts performance in rhesus macaques. However, monkeys are unaffected by such stimulus-response spatial discontiguity when responses occur through joystick-based computerized movement of a cursor. To examine this discrepancy, five monkeys were tested on…

  10. TESTING SOLIDS SETTLING APPARATUSES FOR DESIGN AND OPERATION OF WET-WEATHER FLOW SOLIDS-LIQUID SEPARATION PROCESSES

    EPA Science Inventory

    This study was a side-by-side comparison of two settling evaluation methods: one traditional and one new. The project investigated whether these column tests were capable of capturing or representing the rapidly settling particles present in wet-weather flows (WWF). The report r...

  11. Influence of torque control motors and the operator's proficiency on ProTaper failures.

    PubMed

    Yared, Ghassan; Bou Dagher, Fadia; Kulkarni, Kiran

    2003-08-01

    The purpose of this study was to evaluate the influence of 2 electric torque control motors and operator experience with a specific nickel-titanium rotary instrumentation technique on the incidence of deformation and separation of instruments. ProTaper (PT) nickel-titanium rotary instruments were used at 300 rpm. In the first part of the study, electric high torque control (group 1) and low torque control (group 2) motors were compared. In the second part of the study, 3 operators with varying experience (groups 3, 4, and 5) were also compared. Twenty sets of PT instruments and 100 canals of extracted human molars were used in each group. Each set of PT instruments was used in up to 5 canals and sterilized before each case. For irrigation, 2.5% NaOCl was used. The number of deformed and separated instruments among the groups (within each part of the study) was statistically analyzed for significance with pair-wise comparisons by using the Fisher exact test (alpha = .05). In part 1, instrument deformation and separation did not occur in groups 1 and 2. In part 2, 25 and 12 instruments were deformed and separated, respectively, with the least experienced operator. Instrument deformation and separation did not occur with the most experienced operator. The Fisher exact test revealed a significant difference between groups 3 and 4 with respect to instrument deformation (P = .0296). In addition, the Fisher exact test revealed that the incidence of instrument deformation was statistically different between groups 3 and 5 (P < .0001) and groups 4 and 5 (P = .0018). The incidence of instrument separation was significantly higher in group 5 than in groups 3 and 4 (P = .001). Preclinical training in the use of the PT technique at 300 rpm is crucial to prevent instrument separation and reduce the incidence of instrument deformation. The use of an electric high torque control motor is safe with the experienced operator.
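    The Fisher exact test used for these pairwise comparisons can be reproduced with the standard library alone (a sketch; the example table is the classic "lady tasting tea" 2x2, not counts from this study, though per-operator deformation counts could be plugged in the same way):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p(x):  # probability of the table whose top-left cell is x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # small tolerance guards against float round-off in the <= comparison
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-12))

print(round(fisher_exact_two_sided(3, 1, 1, 3), 4))  # 0.4857
```

    Enumerating all tables with fixed margins is exactly what makes the test "exact": no large-sample approximation is involved, which matters for the small per-group counts reported here.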

  12. Revisiting separation properties of convex fuzzy sets

    USDA-ARS?s Scientific Manuscript database

    Separation of convex sets by hyperplanes has been extensively studied on crisp sets. In a seminal paper separability and convexity are investigated, however there is a flaw on the definition of degree of separation. We revisited separation on convex fuzzy sets that have level-wise (crisp) disjointne...

  13. An Evaluation of a Worksite Exercise Intervention Using the Social Cognitive Theory: A Pilot Study

    ERIC Educational Resources Information Center

    Amaya, Megan; Petosa, R. L.

    2012-01-01

    Purpose: To increase exercise adherence among insufficiently active adult employees. Design: A quasi-experimental separate samples pre-test-post-test group design was used to compare treatment and comparison group. Setting: The worksite. Subjects: Employees (n = 127) who did not meet current American College of Sports Medicine (ACSM)…

  14. Facilitating Recognition Memory: The Use of Distinctive Contexts in Study Materials and Tests.

    ERIC Educational Resources Information Center

    Marlin, Carol A.; And Others

    The effects of distinctive background settings on children's recognition memory for subjects and objects of related sentences was examined. As a follow-up to a study by Levin, Ghatala, and Truman (1979), the effects of presenting distinctive background contexts in sentences and multiple-choice tests were separated from the effects of providing…

  15. Multi-Beam Approach for Accelerating Alignment and Calibration of HyspIRI-Like Imaging Spectrometers

    NASA Technical Reports Server (NTRS)

    Eastwood, Michael L.; Green, Robert O.; Mouroulis, Pantazis; Hochberg, Eric B.; Hein, Randall C.; Kroll, Linley A.; Geier, Sven; Coles, James B.; Meehan, Riley

    2012-01-01

    A paper describes an optical stimulus that produces more consistent results, and can be automated for unattended, routine generation of data analysis products needed by the integration and testing team assembling a high-fidelity imaging spectrometer system. One key attribute of the system is an arrangement of pick-off mirrors that provides multiple input beams (five in this implementation) to simultaneously provide stimulus light to several field angles along the field of view of the sensor under test, allowing one data set to contain all the information that previously required five data sets to be separately collected. This stimulus can also be fed by quickly reconfigured sources that ultimately provide three data set types that would previously be collected separately using three different setups: Spectral Response Function (SRF), Cross-track Response Function (CRF), and Along-track Response Function (ARF), respectively. This method also lends itself to expansion of the number of field points if less interpolation across the field of view is desirable. An absolute minimum of three is required at the beginning stages of imaging spectrometer alignment.

  16. Quantitative assessment of the presence of a single leg separation in Björk-Shiley convexoconcave prosthetic heart valves.

    PubMed

    Vrooman, H A; Maliepaard, C; van der Linden, L P; Jessurun, E R; Ludwig, J W; Plokker, H W; Schalij, M J; Weeda, H W; Laufer, J L; Huysmans, H A; Reiber, J H

    1997-09-01

    The authors developed an analytic software package for the objective and reproducible assessment of a single leg separation (SLS) in the outlet strut of Björk-Shiley convexoconcave (BSCC) prosthetic heart valves. The radiographic cinefilm recordings of 18 phantom valves (12 intact and 6 SLS) and of 43 patient valves were acquired. After digitization of regions of interest in a cineframe, several processing steps were carried out to obtain a one-dimensional corrected and averaged density profile along the central axis of each strut leg. To characterize the degree of possible separation, two quantitative measures were introduced: the normalized pit depth (NPD) and the depth-sigma ratio (DSR). The group of 43 patient studies was divided into a learning set (25 patients) and a test set (18 patients). All phantom valves with an SLS were detected (sensitivity, 100%) at a specificity of 100%. The threshold values for the NPD and the DSR to decide whether a fracture was present or not were 3.6 and 2.5, respectively. On the basis of the visual interpretations of the 25 patient studies (learning set) by an expert panel, it was concluded that none of the patients had an SLS. To achieve a 100% specificity by quantitative analysis, the threshold values for the NPD and the DSR were set at 5.8 and 2.5, respectively, for the patient data. Based on these threshold values, the analysis of patient data from the test set resulted in one false-negative detection and three false-positive detections. An analytic software package for the detection of an SLS was developed. Phantom data showed excellent sensitivity (100%) and specificity (100%). Further research and software development is needed to increase the sensitivity and specificity for patient data.
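    The two pit measures can be mimicked on a synthetic density profile (an illustrative proxy only: the abstract does not give the exact definitions of NPD and DSR, so the normalization and the noise estimate below are assumptions; only the 3.6/2.5 phantom thresholds come from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(100)

def profile(pit_depth):
    """Synthetic averaged density profile along a strut leg: a flat baseline,
    Gaussian noise, and an optional local density "pit" at the centre."""
    base = 10.0 + rng.normal(0.0, 0.05, x.size)
    return base - pit_depth * np.exp(-0.5 * ((x - 50) / 3.0) ** 2)

def pit_measures(p):
    baseline = np.median(p)
    sigma = np.median(np.abs(p - baseline)) / 0.6745  # robust noise estimate
    depth = baseline - p.min()
    npd = 100.0 * depth / baseline   # "normalized pit depth" proxy (assumption)
    dsr = depth / sigma              # "depth-sigma ratio" proxy (assumption)
    return npd, dsr

flags = {}
for d, label in ((0.0, "intact"), (1.0, "SLS")):
    npd, dsr = pit_measures(profile(d))
    # 3.6 and 2.5 are the phantom-study thresholds quoted in the abstract.
    flags[label] = bool(npd > 3.6 and dsr > 2.5)

print(flags)  # {'intact': False, 'SLS': True}
```

    A local density drop deep enough relative to both the baseline and the noise floor is flagged as a possible single leg separation; the intact profile never crosses both thresholds.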

  17. Computer-aided diagnosis of lung cancer: the effect of training data sets on classification accuracy of lung nodules.

    PubMed

    Gong, Jing; Liu, Ji-Yu; Sun, Xi-Wen; Zheng, Bin; Nie, Sheng-Dong

    2018-02-05

    This study aims to develop a computer-aided diagnosis (CADx) scheme for classification between malignant and benign lung nodules, and also to assess whether CADx performance changes in detecting nodules associated with early and advanced stage lung cancer. The study involves 243 biopsy-confirmed pulmonary nodules. Among them, 76 are benign, 81 are stage I and 86 are stage III malignant nodules. The cases are separated into three data sets involving: (1) all nodules, (2) benign and stage I malignant nodules, and (3) benign and stage III malignant nodules. A CADx scheme is applied to segment lung nodules depicted on computed tomography images and we initially computed 66 3D image features. Then, three machine learning models, namely a support vector machine, a naïve Bayes classifier and linear discriminant analysis, are separately trained and tested by using the three data sets and a leave-one-case-out cross-validation method embedded with a Relief-F feature selection algorithm. When separately using the three data sets to train and test the three classifiers, the average areas under the receiver operating characteristic curves (AUC) are 0.94, 0.90 and 0.99, respectively. When using the classifiers trained on the data set with all nodules, average AUC values are 0.88 and 0.99 for detecting early and advanced stage nodules, respectively. AUC values computed from the three classifiers trained using the same data set are consistent without statistically significant differences (p > 0.05). This study demonstrates (1) the feasibility of applying a CADx scheme to accurately distinguish between benign and malignant lung nodules, and (2) a positive trend between CADx performance and cancer progression stage. Thus, in order to increase CADx performance in detecting subtle and early cancer, training data sets should include more diverse early stage cancer cases.
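    The leave-one-case-out protocol can be sketched with a stand-in classifier (a nearest-centroid scorer replaces the SVM, naïve Bayes, and LDA models, and the feature matrix is synthetic; only the cross-validation loop mirrors the study's setup):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the nodule feature matrix: 6 features per case.
X = np.vstack([rng.normal(0.0, 1.0, (40, 6)),    # "benign" cases
               rng.normal(1.2, 1.0, (46, 6))])   # "malignant" cases
y = np.array([0] * 40 + [1] * 46)

def nearest_centroid_score(X_tr, y_tr, x):
    """Signed malignancy score: distance to the benign centroid minus the
    distance to the malignant centroid (larger = more malignant-like)."""
    c0 = X_tr[y_tr == 0].mean(axis=0)
    c1 = X_tr[y_tr == 1].mean(axis=0)
    return np.linalg.norm(x - c0) - np.linalg.norm(x - c1)

# Leave-one-case-out cross-validation: every case is scored by a model
# trained on all the other cases, so no case sees itself during training.
scores = np.empty(len(y))
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    scores[i] = nearest_centroid_score(X[mask], y[mask], X[i])

acc = float(np.mean((scores > 0).astype(int) == y))
print(round(acc, 2))
```

    In the study the same held-out loop also wraps the Relief-F feature selection, so that feature choice, like training, never sees the left-out case.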

  18. Computer-aided diagnosis of lung cancer: the effect of training data sets on classification accuracy of lung nodules

    NASA Astrophysics Data System (ADS)

    Gong, Jing; Liu, Ji-Yu; Sun, Xi-Wen; Zheng, Bin; Nie, Sheng-Dong

    2018-02-01

    This study aims to develop a computer-aided diagnosis (CADx) scheme for classification between malignant and benign lung nodules, and also to assess whether CADx performance changes in detecting nodules associated with early and advanced stage lung cancer. The study involves 243 biopsy-confirmed pulmonary nodules. Among them, 76 are benign, 81 are stage I and 86 are stage III malignant nodules. The cases are separated into three data sets involving: (1) all nodules, (2) benign and stage I malignant nodules, and (3) benign and stage III malignant nodules. A CADx scheme is applied to segment lung nodules depicted on computed tomography images and we initially computed 66 3D image features. Then, three machine learning models, namely a support vector machine, a naïve Bayes classifier and linear discriminant analysis, are separately trained and tested by using the three data sets and a leave-one-case-out cross-validation method embedded with a Relief-F feature selection algorithm. When separately using the three data sets to train and test the three classifiers, the average areas under the receiver operating characteristic curves (AUC) are 0.94, 0.90 and 0.99, respectively. When using the classifiers trained on the data set with all nodules, average AUC values are 0.88 and 0.99 for detecting early and advanced stage nodules, respectively. AUC values computed from the three classifiers trained using the same data set are consistent without statistically significant differences (p > 0.05). This study demonstrates (1) the feasibility of applying a CADx scheme to accurately distinguish between benign and malignant lung nodules, and (2) a positive trend between CADx performance and cancer progression stage. Thus, in order to increase CADx performance in detecting subtle and early cancer, training data sets should include more diverse early stage cancer cases.

  19. Signature extension studies

    NASA Technical Reports Server (NTRS)

    Vincent, R. K.; Thomas, G. S.; Nalepka, R. F.

    1974-01-01

    The importance of specific spectral regions to signature extension is explored. In the recent past, the signature extension task was focused on the development of new techniques. Tested techniques are now used to investigate this spectral aspect of the large area survey. Sets of channels were sought which, for a given technique, were the least affected by several sources of variation over four data sets and yet provided good object class separation on each individual data set. Using sets of channels determined as part of this study, signature extension was accomplished between data sets collected over a six-day period and over a range of about 400 kilometers.

  20. Separation of multiphosphorylated peptide isomers by hydrophilic interaction chromatography on an aminopropyl phase.

    PubMed

    Singer, David; Kuhlmann, Julia; Muschket, Matthias; Hoffmann, Ralf

    2010-08-01

    The separation of isomeric phosphorylated peptides is challenging and often impossible for multiphosphorylated isomers using chromatographic and capillary electrophoretic methods. In this study we investigated the separation of a set of single-, double-, and triple-phosphorylated peptides (corresponding to the human tau protein) by ion-pair reversed-phase chromatography (IP-RPC) and hydrophilic interaction chromatography (HILIC). In HILIC both hydroxyl and aminopropyl stationary phases were tested with aqueous acetonitrile in order to assess their separation efficiency. The hydroxyl phase separated the phosphopeptides very well from the unphosphorylated analogue, while on the aminopropyl phase even isomeric phosphopeptides attained baseline separation. Thus, up to seven phosphorylated versions of a given tau domain were separated. Furthermore, the low concentration of an acidic ammonium formate buffer allowed an online analysis with electrospray ionization tandem mass spectrometry (ESI-MS/MS) to be conducted, enabling peptide sequencing and identification of phosphorylation sites.

  1. Criterion Referenced Assessment Bank. Grade 6 Skill Clusters, Objectives, and Illustrations.

    ERIC Educational Resources Information Center

    Montgomery County Public Schools, Rockville, MD.

    Part of a series of competency-based test materials for grades six through ten, this set of nine test booklets for sixth graders contains multiple-choice questions designed to aid in the evaluation of the pupils' library skills. Accompanied by a separate, tenth booklet of illustrations which are to be used in conjunction with the questions, the…

  2. 49 CFR 173.121 - Class 3-Assignment of packing group.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... packing group must be determined by applying the following criteria: Flash point (closed-cup) Initial... (73.4 °F) using the ISO standard cup with a 4 mm (0.16 inch) jet as set forth in ISO 2431 (IBR, see... using the ISO standard cup with a 6 mm (0.24 inch) jet. (ii) Solvent Separation Test. This test is...

  3. The direct assignment option as a modular design component: an example for the setting of two predefined subgroups.

    PubMed

    An, Ming-Wen; Lu, Xin; Sargent, Daniel J; Mandrekar, Sumithra J

    2015-01-01

    A phase II design with an option for direct assignment (stop randomization and assign all patients to experimental treatment based on interim analysis, IA) for a predefined subgroup was previously proposed. Here, we illustrate the modularity of the direct assignment option by applying it to the setting of two predefined subgroups and testing for separate subgroup main effects. We power the 2-subgroup direct assignment option design with 1 IA (DAD-1) to test for separate subgroup main effects, with assessment of power to detect an interaction in a post-hoc test. Simulations assessed the statistical properties of this design compared to the 2-subgroup balanced randomized design with 1 IA, BRD-1. Different response rates for treatment/control in subgroup 1 (0.4/0.2) and in subgroup 2 (0.1/0.2, 0.4/0.2) were considered. The 2-subgroup DAD-1 preserves power and type I error rate compared to the 2-subgroup BRD-1, while exhibiting reasonable power in a post-hoc test for interaction. The direct assignment option is a flexible design component that can be incorporated into broader design frameworks, while maintaining desirable statistical properties, clinical appeal, and logistical simplicity.

  4. Does linear separability really matter? Complex visual search is explained by simple search

    PubMed Central

    Vighneshvel, T.; Arun, S. P.

    2013-01-01

    Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search. PMID:24029822
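    Whether a target can be split from its distracters by a single linear boundary is mechanically checkable with a perceptron (a sketch; by the perceptron convergence theorem it reaches zero mistakes exactly when the data are separable, so the epoch budget only bounds the "not separable" answer):

```python
import numpy as np

def linearly_separable(X, y, epochs=1000, lr=0.1):
    """Perceptron check: on linearly separable data the perceptron provably
    converges to zero training mistakes; otherwise we stop after a fixed
    budget (so False means "not separated within the budget")."""
    Xb = np.hstack([X, np.ones((len(X), 1))])    # append a bias feature
    w = np.zeros(Xb.shape[1])
    t = np.where(y == 1, 1.0, -1.0)
    for _ in range(epochs):
        mistakes = 0
        for xi, ti in zip(Xb, t):
            if ti * (xi @ w) <= 0:               # misclassified (or on boundary)
                w += lr * ti * xi
                mistakes += 1
        if mistakes == 0:
            return True
    return False

X = np.array([[0.0], [1.0], [2.0]])
between = np.array([0, 1, 0])    # target lies between its distracters
extreme = np.array([0, 0, 1])    # target sits at one extreme

sep_between = linearly_separable(X, between)
sep_extreme = linearly_separable(X, extreme)
print(sep_between, sep_extreme)  # False True
```

    The caveat raised in the abstract is that such a check runs in an external parametric space, which need not match the perceptual representation in which separability would actually matter.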

  5. Automotive Lubricant Specification and Testing

    NASA Astrophysics Data System (ADS)

    Fox, M. F.

    This chapter concerns commercial lubricant specification and testing, drawing together the many themes of previous chapters. Military lubricant standards were a very strong initial influence during World War II and led to the separate historical development of the North American and European specification systems. The wide range of functions that a successful lubricant must satisfy is discussed, together with issues of balancing special or universal applications, single or multiple engine tests, the philosophy of accelerated testing and the question of 'who sets the standards?' The role of engine tests and testing organisations is examined.

  6. Magnetic separation of coal fly ash from Bulgarian power plants.

    PubMed

    Shoumkova, Annie S

    2011-10-01

    Fly ash from three coal-burning power plants in Bulgaria: 'Maritza 3', 'Republika' and 'Rousse East' were subjected to wet low-intensity magnetic separation. The tests were performed at different combinations of magnetic field intensity, flow velocity and diameter of matrix elements. It was found that all parameters investigated affected the separation efficiency, but their influence was interlinked and was determined by the properties of the material and the combination of other conditions. Among the fly ash characteristics, the most important parameters, determining the magnetic separation applicability, were mineralogical composition and distribution of minerals in particles. The main factors limiting the process were the presence of paramagnetic Fe-containing mineral and amorphous matter, and the existence of poly-mineral particles and aggregates of magnetic and non-magnetic particles. It was demonstrated that the negative effect of both factors could be considerably limited by the selection of a proper set of separation conditions. The dependences between concentration of ferromagnetic iron in the ash, their magnetic properties and magnetic fraction yields were studied. It was experimentally proved that, for a certain set of separation conditions, the yields of magnetic fractions were directly proportional to the saturation magnetization of the ferromagnetic components of the ash. The main properties of typical magnetic and non-magnetic fractions were studied.

  7. Evaluation of supercapacitors for space applications under thermal vacuum conditions

    NASA Astrophysics Data System (ADS)

    Chin, Keith C.; Green, Nelson W.; Brandon, Erik J.

    2018-03-01

    Commercially available supercapacitor cells from three separate vendors were evaluated for use in a space environment using thermal vacuum (Tvac) testing. Standard commercial cells are not hermetically sealed, but feature crimp or double seam seals between the header and the can, which may not maintain an adequate seal under vacuum. Cells were placed in a small vacuum chamber, and cycled between three separate temperature set points. Charging and discharging of cells was executed following each temperature soak, to confirm there was no significant impact on performance. A final electrical performance check, visual inspection and mass check following testing were also performed, to confirm the integrity of the cells had not been compromised during exposure to thermal cycling under vacuum. All cells tested were found to survive this testing protocol and exhibited no significant impact on electrical performance.

  8. Statistical Analysis of a Large Sample Size Pyroshock Test Data Set Including Post Flight Data Assessment. Revision 1

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; McNelis, Anne M.

    2010-01-01

    The Earth Observing System (EOS) Terra spacecraft was launched on an Atlas IIAS launch vehicle on its mission to observe planet Earth in late 1999. Prior to launch, the new design of the spacecraft's pyroshock separation system was characterized by a series of 13 separation ground tests. The analysis methods used to evaluate this unusually large amount of shock data will be discussed in this paper, with particular emphasis on population distributions and finding statistically significant families of data, leading to an overall shock separation interface level. The wealth of ground test data also allowed a derivation of a Mission Assurance level for the flight. All of the flight shock measurements were below the EOS Terra Mission Assurance level thus contributing to the overall success of the EOS Terra mission. The effectiveness of the statistical methodology for characterizing the shock interface level and for developing a flight Mission Assurance level from a large sample size of shock data is demonstrated in this paper.
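    A common way to turn a family of shock measurements into an interface level is a normal tolerance bound computed in dB (an illustrative sketch with made-up levels; the paper's actual data and statistical methodology are not reproduced here):

```python
import numpy as np

# Illustrative SRS peak levels (g) at one frequency from 13 ground tests
# (invented values, not the EOS Terra measurements).
levels_g = np.array([950, 1200, 870, 1300, 1100, 990, 1450, 1020,
                     1150, 880, 1250, 1080, 1330], dtype=float)

# Pyroshock levels are commonly treated as lognormal, so work in dB.
db = 20.0 * np.log10(levels_g)
mean_db, std_db = db.mean(), db.std(ddof=1)

# P95/50 estimate (95th percentile, 50% confidence): mean + 1.645*sigma in dB.
# 1.645 is the large-sample normal factor; the exact tolerance factor for
# n = 13 differs slightly.
p95_50_db = mean_db + 1.645 * std_db
p95_50_g = 10.0 ** (p95_50_db / 20.0)

print(round(p95_50_g))  # a level expected to bound ~95% of the population
```

    Flight measurements falling below such a bound, as all of the EOS Terra flight shocks did, is the sense in which the derived level provides mission assurance.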

  9. Guidance on Nanomaterial Hazards and Risks

    DTIC Science & Technology

    2015-05-21

    ...and at room temperature and 37 °C – solid separation by centrifugation, filtration, or chemical techniques (more experimental techniques combining...members in this potency sequence using bolus in vivo testing, verify the bolus results with selective inhalation testing. The potency of members of...measures in in vitro and limited in vivo experimental systems would facilitate the characterization of dose-response relationships across a set of ENMs

  10. Separating Contributions of Hearing, Lexical Knowledge, and Speech Production to Speech-Perception Scores in Children with Hearing Impairments.

    ERIC Educational Resources Information Center

    Paatsch, Louise E.; Blamey, Peter J.; Sarant, Julia Z.; Martin, Lois F.A.; Bow, Catherine P.

    2004-01-01

    Open-set word and sentence speech-perception test scores are commonly used as a measure of hearing abilities in children and adults using cochlear implants and/or hearing aids. These tests are usually presented auditorily with a verbal response. In the case of children, scores are typically lower and more variable than for adults with hearing…

  11. Development of on line automatic separation device for apple and sleeve

    NASA Astrophysics Data System (ADS)

    Xin, Dengke; Ning, Duo; Wang, Kangle; Han, Yuhang

    2018-04-01

    Based on an STM32F407 single-chip microcomputer as the control core, an automatic separation device for fruit sleeves is designed. The design consists of hardware and software. The hardware includes a mechanical tooth separator and a three-degree-of-freedom manipulator, as well as an industrial control computer, an image data acquisition card, an end effector, and other structures. The software system is built in the Visual C++ development environment and uses image processing and machine vision to locate and recognize the fruit sleeve, driving the manipulator to capture the foam net sleeve, transfer it, and place it at the designated position. Tests show that the automatic separation device has a quick response speed and a high separation success rate; it can separate the apple from its plastic foam sleeve, laying the foundation for further study and for application on enterprise production lines.

  12. Electron in higher-dimensional weakly charged rotating black hole spacetimes

    NASA Astrophysics Data System (ADS)

    Cariglia, Marco; Frolov, Valeri P.; Krtouš, Pavel; Kubizňák, David

    2013-03-01

    We demonstrate separability of the Dirac equation in weakly charged rotating black hole spacetimes in all dimensions. The electromagnetic field of the black hole is described by a test field approximation, with the vector potential proportional to the primary Killing vector field. It is shown that the demonstrated separability can be intrinsically characterized by the existence of a complete set of mutually commuting first-order symmetry operators generated from the principal Killing-Yano tensor. The presented results generalize the results on integrability of charged particle motion and separability of charged scalar field studied in V. P. Frolov and P. Krtous [Phys. Rev. D 83, 024016 (2011)].

  13. Evaluation of the aspartate aminotransferase/platelet ratio index and enhanced liver fibrosis tests to detect significant fibrosis due to chronic hepatitis C.

    PubMed

    Petersen, John R; Stevenson, Heather L; Kasturi, Krishna S; Naniwadekar, Ashutosh; Parkes, Julie; Cross, Richard; Rosenberg, William M; Xiao, Shu-Yuan; Snyder, Ned

    2014-04-01

    The assessment of liver fibrosis in chronic hepatitis C patients is important for prognosis and making decisions regarding antiviral treatment. Although liver biopsy is considered the reference standard for assessing hepatic fibrosis in patients with chronic hepatitis C, it is invasive and associated with sampling and interobserver variability. Serum fibrosis markers have been utilized as surrogates for a liver biopsy. We completed a prospective study of 191 patients in which blood draws and liver biopsies were performed on the same visit. Using liver biopsies the sensitivity, specificity, and negative and positive predictive values for both aspartate aminotransferase/platelet ratio index (APRI) and enhanced liver fibrosis (ELF) were determined. The patients were divided into training and validation patient sets to develop and validate a clinically useful algorithm for differentiating mild and significant fibrosis. The area under the ROC curve for the APRI and ELF tests for the training set was 0.865 and 0.880, respectively. The clinical sensitivity in separating mild (F0-F1) from significant fibrosis (F2-F4) was 80% and 86.0% with a clinical specificity of 86.7% and 77.8%, respectively. For the validation sets the area under the ROC curve for the APRI and ELF tests was, 0.855 and 0.780, respectively. The clinical sensitivity of the APRI and ELF tests in separating mild (F0-F1) from significant (F2-F4) fibrosis for the validation set was 90.0% and 70.0% with a clinical specificity of 73.3% and 86.7%, respectively. There were no differences between the APRI and ELF tests in distinguishing mild from significant fibrosis for either the training or validation sets (P=0.61 and 0.20, respectively). Using APRI as the primary test followed by ELF for patients in the intermediate zone, would have decreased the number of liver biopsies needed by 40% for the validation set. 
Overall, use of our algorithm would have decreased the number of patients who needed a liver biopsy from 95 to 24, a 74.7% reduction. This study has shown that the APRI and ELF tests are equally accurate in distinguishing mild from significant liver fibrosis, and that combining them into a validated algorithm improves their performance.
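The two-stage triage described above (APRI first, ELF only for the intermediate zone) can be sketched as follows; the cutoff values and names here are hypothetical placeholders, not the thresholds derived in the study:

```python
# Hypothetical illustration of a two-stage APRI -> ELF triage; the cutoffs
# below are placeholders, not the thresholds validated in the study.
APRI_LOW, APRI_HIGH = 0.5, 1.5   # assumed indeterminate zone for APRI
ELF_CUTOFF = 9.8                 # assumed ELF decision point

def triage(apri, elf=None):
    """Return 'mild', 'significant', or 'needs ELF' for an APRI-first workflow."""
    if apri < APRI_LOW:
        return "mild"
    if apri > APRI_HIGH:
        return "significant"
    if elf is None:
        return "needs ELF"       # only intermediate-zone patients get the second test
    return "significant" if elf >= ELF_CUTOFF else "mild"
```

Only patients whose APRI falls in the indeterminate zone go on to ELF testing, which is how the combined algorithm reduces the number of biopsies needed.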

  14. Detection of Sickle Cell Hemoglobin in Haiti by Genotyping and Hemoglobin Solubility Tests

    PubMed Central

    Carter, Tamar E.; von Fricken, Michael; Romain, Jean R.; Memnon, Gladys; St. Victor, Yves; Schick, Laura; Okech, Bernard A.; Mulligan, Connie J.

    2014-01-01

Sickle cell disease is a growing global health concern because infants born with the disorder in developing countries are now surviving longer with little access to diagnostic and management options. In Haiti, the current state of sickle cell disease/trait in the population is unclear. To inform future screening efforts in Haiti, we assayed sickle hemoglobin mutations using traditional hemoglobin solubility tests (HST) and add-on techniques that incorporated spectrophotometry and insoluble hemoglobin separation. We also generated genotype data as a metric for HST performance. We found that 19 of 202 individuals screened with HST were positive for sickle hemoglobin, five of whom did not carry the HbS allele. We show that the spectrophotometry and insoluble hemoglobin separation add-on techniques could resolve false positives associated with the traditional HST approach, with some limitations. We also discuss incorporating insoluble hemoglobin separation observations into HST in suboptimal screening settings like Haiti. PMID:24957539

  15. Accreditation - ISO/IEC 17025

    NASA Astrophysics Data System (ADS)

    Kaus, Rüdiger

This chapter gives the background on the accreditation of testing and calibration laboratories according to ISO/IEC 17025 and sets out the requirements of this international standard. ISO 15189 describes similar requirements especially tailored for medical laboratories; because of these similarities, ISO 15189 is not treated separately in this chapter.

  16. Element fracture technique for hypervelocity impact simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-tian; Li, Xiao-gang; Liu, Tao; Jia, Guang-hui

    2015-05-01

Hypervelocity impact dynamics is the theoretical support of spacecraft shielding against space debris. Numerical simulation has become an important approach for obtaining the ballistic limits of spacecraft shields. Currently, the most widely used algorithm for hypervelocity impact is smoothed particle hydrodynamics (SPH). Although the finite element method (FEM) is widely used in fracture mechanics and low-velocity impacts, the standard FEM can hardly simulate the debris cloud generated by hypervelocity impact. This paper presents a successful application of the node-separation technique for hypervelocity impact debris cloud simulation. The node-separation technique assigns individual/coincident nodes to the adjacent elements and applies constraints to the coincident node sets in the modeling step. In the explicit iteration, cracks are generated by releasing the constrained node sets that meet the fracture criterion. Additionally, distorted elements are identified from two aspects - self-piercing and phase change - and are deleted so that the constitutive computation can continue. FEM with the node-separation technique is used for thin-wall hypervelocity impact simulations. The internal structures of the debris cloud in the simulation output are compared with those in the test X-ray images under different material fracture criteria. It shows that the pressure criterion is more appropriate for hypervelocity impact. The internal structures of the debris cloud are also simulated and compared under different thickness-to-diameter ratios (t/D). The simulation outputs show the same spall pattern as the tests. Finally, the triple-plate impact case is simulated with node-separation FEM.
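The constraint-release step at the core of the node-separation technique can be sketched as simple bookkeeping over coincident node pairs; the data layout and the pressure threshold below are illustrative assumptions, not the paper's implementation:

```python
def release_ties(ties, pressures, p_crit):
    """Split coincident-node constraints into those kept and those released.

    ties      : list of (node_a, node_b) coincident node pairs tied together
    pressures : interface pressure evaluated at each tie
    p_crit    : fracture criterion; a tie at or above this pressure opens a crack
    """
    kept, released = [], []
    for tie, p in zip(ties, pressures):
        (released if p >= p_crit else kept).append(tie)
    return kept, released

# One explicit-iteration step: the second tie meets the criterion and is
# released, turning its two coincident nodes into free crack faces.
kept, released = release_ties([(1, 2), (3, 4), (5, 6)], [0.5, 2.4, 1.1], p_crit=2.0)
```

In a full explicit solver this check runs every iteration, alongside the deletion of distorted (self-piercing or phase-changed) elements the abstract mentions.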

  17. DANTi: Detect and Avoid iN The Cockpit

    NASA Technical Reports Server (NTRS)

    Chamberlain, James; Consiglio, Maria; Munoz, Cesar

    2017-01-01

Mid-air collision risk continues to be a concern for manned aircraft operations, especially near busy non-towered airports. The use of Detect and Avoid (DAA) technologies and draft standards developed for unmanned aircraft systems (UAS), either alone or in combination with other collision avoidance technologies, may be useful in mitigating this collision risk for manned aircraft. This paper describes a NASA research effort known as DANTi (DAA iN The Cockpit), including the initial development of the concept of use, a software prototype, and results from initial flight tests conducted with this prototype. The prototype used a single Automatic Dependent Surveillance - Broadcast (ADS-B) traffic sensor and ownship position, track, heading, and air data, along with NASA-developed DAA software, to display traffic alerts and maneuver guidance to manned aircraft pilots on a portable tablet device. Initial flight tests with the prototype showed a successful DANTi proof-of-concept, but also demonstrated that the traffic separation parameter set specified in the RTCA SC-228 Phase I DAA MOPS may generate excessive false alerts during traffic pattern operations. Several parameter sets with smaller separation values were also tested in flight, one of which yielded more timely alerts for the maneuvers tested. Results from this study may further inform future DANTi efforts as well as Phase II DAA MOPS development.

  18. Finding Risk Groups by Optimizing Artificial Neural Networks on the Area under the Survival Curve Using Genetic Algorithms.

    PubMed

    Kalderstam, Jonas; Edén, Patrik; Ohlsson, Mattias

    2015-01-01

We investigate a new method to place patients into risk groups in censored survival data. Properties such as median survival time and end survival rate are implicitly improved by optimizing the area under the survival curve. Artificial neural networks (ANN) are trained to either maximize or minimize this area using a genetic algorithm, and combined into an ensemble to predict one of low, intermediate, or high risk groups. Estimated patient risk can influence treatment choices, and is important for study stratification. A common approach is to sort the patients according to a prognostic index and then group them along the quartile limits. The Cox proportional hazards model (Cox) is one example of this approach. Another method of risk grouping is recursive partitioning (Rpart), which constructs a decision tree in which each branch point maximizes the statistical separation between the groups. ANN, Cox, and Rpart are compared on five publicly available data sets with varying properties. Cross-validation, as well as separate test sets, are used to validate the models. Results on the test sets show comparable performance, except for the smallest data set, where Rpart's predicted risk groups turn out to be inverted, an example of crossing survival curves. Cross-validation shows that all three models exhibit crossing of some survival curves on this small data set, but that the ANN model manages the best separation of groups in terms of median survival time before such crossings. The conclusion is that optimizing the area under the survival curve is a viable approach to identify risk groups. Training ANNs to optimize this area combines two key strengths of prognostic indices and Rpart: first, a desired minimum group size can be specified, as for a prognostic index; second, non-linear effects among the covariates can be exploited, as in Rpart.
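The optimization target, the area under a group's Kaplan-Meier survival curve, can be computed directly from censored data; this is a minimal sketch with simple tie handling, restricted to a horizon tau, not the authors' code:

```python
def km_auc(times, events, tau):
    """Area under the Kaplan-Meier survival curve up to horizon tau.

    times  : observation times
    events : 1 for an observed event, 0 for censoring
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    s, auc, prev_t, at_risk = 1.0, 0.0, 0.0, len(times)
    for i in order:
        t = times[i]
        if t > tau:
            break
        auc += s * (t - prev_t)       # survival is constant between observation times
        if events[i]:
            s *= 1.0 - 1.0 / at_risk  # Kaplan-Meier step at an event time
        at_risk -= 1                  # censored subjects also leave the risk set
        prev_t = t
    auc += s * (tau - prev_t)         # flat tail out to the horizon
    return auc
```

For three patients with events at t = 1, 2, 3 and no censoring, the area up to t = 3 (the restricted mean survival time) is 2.0.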

  19. Case-based statistical learning applied to SPECT image classification

    NASA Astrophysics Data System (ADS)

    Górriz, Juan M.; Ramírez, Javier; Illán, I. A.; Martínez-Murcia, Francisco J.; Segovia, Fermín.; Salas-Gonzalez, Diego; Ortiz, A.

    2017-03-01

Statistical learning and decision theory play a key role in many areas of science and engineering. Some examples include time series regression and prediction, optical character recognition, signal detection in communications, and biomedical applications for diagnosis and prognosis. This paper deals with learning from biomedical image data in the classification problem. In a typical scenario we have a training set that is employed to fit a prediction model or learner, and a testing set to which the learner is applied in order to predict the outcome for new unseen patterns. Both processes are usually kept completely separate to avoid over-fitting and because, in practice, the unseen new objects (testing set) have unknown outcomes. However, the outcome takes one of a discrete set of values, e.g. in the binary diagnosis problem. Thus, assumptions on these outcome values can be exploited to obtain the most likely prediction model at the training stage, which could improve the overall classification accuracy on the testing set, or at least keep its performance at the level of the selected statistical classifier. In this sense, a novel case-based learning (c-learning) procedure is proposed which combines hypothesis testing from a discrete set of expected outcomes with a cross-validated classification stage.
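The strict train/test separation described here can be illustrated with a toy nearest-centroid classifier on synthetic two-class data (all names and values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)),   # class 0 cluster near (0, 0)
               rng.normal(5.0, 0.5, (20, 2))])  # class 1 cluster near (5, 5)
y = np.repeat([0, 1], 20)

idx = rng.permutation(len(y))                   # shuffle once
train, test = idx[:30], idx[30:]                # disjoint training and testing sets

# Fit (here, class centroids) on the training rows only.
centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])

# Apply the fitted model to the held-out testing set.
dists = ((X[test][:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y[test]).mean()
```

The held-out labels `y[test]` are consulted only after prediction, to score accuracy; the fit itself never sees the testing rows.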

  20. QSAR models for thiophene and imidazopyridine derivatives inhibitors of the Polo-Like Kinase 1.

    PubMed

    Comelli, Nieves C; Duchowicz, Pablo R; Castro, Eduardo A

    2014-10-01

The inhibitory activity of 103 thiophene and 33 imidazopyridine derivatives against Polo-Like Kinase 1 (PLK1), expressed as pIC50 (-log IC50), was predicted by QSAR modeling. Multivariate linear regression (MLR) was employed to model the relationship between 0D and 3D molecular descriptors and the biological activities of the molecules, using the replacement method (MR) as the variable selection tool. The 136 compounds were separated into several training and test sets. Two splitting approaches, distribution of biological data and structural diversity, and the statistical experimental design procedure D-optimal distance were applied to the dataset. The significance of the training set models was confirmed by statistically higher values of the internal leave-one-out cross-validated coefficient of determination (Q2) and the external predictive coefficient of determination for the test set (Rtest2). The model developed from a training set obtained with the D-optimal distance protocol, using the 3D descriptor space along with activity values, separated chemical features in a way that distinguished high and low pIC50 values reasonably well. We then verified that such a model was sufficient to reliably and accurately predict the activity of external diverse structures. The model robustness was characterized by means of standard procedures, and the applicability domain (AD) was analyzed by the leverage method. Copyright © 2014 Elsevier B.V. All rights reserved.
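The internal leave-one-out Q2 reported for the training sets can be computed with a simple loop over held-out compounds; this sketch uses ordinary least squares on a toy one-descriptor dataset, not the study's descriptor matrix:

```python
import numpy as np

def q2_loo(x, y):
    """Leave-one-out cross-validated Q2 = 1 - PRESS / SS_tot for OLS with intercept."""
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i                       # hold out compound i
        A = np.column_stack([np.ones(n - 1), x[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        pred = coef[0] + coef[1] * x[i]                # predict the held-out activity
        press += (y[i] - pred) ** 2
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - press / ss_tot

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0          # noise-free toy relationship, so Q2 is exactly 1
q2 = q2_loo(x, y)
```

The external Rtest2 is computed the same way, except that the predictions come from a model fitted once on the full training set and applied to a disjoint test set.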

  1. 40 CFR 86.1442 - Information required.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... CST. Elements of this general data may be located separately from the CST emission data, as long as the general data can easily be presented together with the CST emission data when a complete data set... Trucks; Certification Short Test Procedures § 86.1442 Information required. (a) General data. The...

  2. 40 CFR 86.1442 - Information required.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... CST. Elements of this general data may be located separately from the CST emission data, as long as the general data can easily be presented together with the CST emission data when a complete data set... Trucks; Certification Short Test Procedures § 86.1442 Information required. (a) General data. The...

  3. 40 CFR 86.1442 - Information required.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... CST. Elements of this general data may be located separately from the CST emission data, as long as the general data can easily be presented together with the CST emission data when a complete data set... Trucks; Certification Short Test Procedures § 86.1442 Information required. (a) General data. The...

  4. 40 CFR 86.1442 - Information required.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... CST. Elements of this general data may be located separately from the CST emission data, as long as the general data can easily be presented together with the CST emission data when a complete data set... Trucks; Certification Short Test Procedures § 86.1442 Information required. (a) General data. The...

  5. Total control: a critical analysis of mandatory HIV testing in U.S. prisons.

    PubMed

    Gagnon, Marilou; Jacob, Jean Daniel; Cormier, Luc

    2013-01-01

The aim of this paper is to explore the relationship between mandatory HIV testing and the institutional management of inmates in U.S. prisons. Mandatory HIV testing has been largely overlooked by the nursing community even though it has important human rights and ethical implications. Drawing on the work of Goffman (1990) on the inner workings of total institutions, the present article critically examines the deployment of mandatory HIV testing in U.S. prisons. To set the stage, we define mandatory HIV testing and describe the methods of HIV testing currently used in U.S. prison settings. Then, we provide a brief overview of the concept of the total institution and the mortification process. We go on to expand on the relationship between mandatory HIV testing and much larger institutional objectives of total control, total structuring, total isolation, and separation of inmates from society (as summarized by Farrington, 1992). Finally, we provide a brief discussion of the implications of mandatory HIV testing (as a method of HIV testing) from a nursing perspective.

  6. Application of two tests of multivariate discordancy to fisheries data sets

    USGS Publications Warehouse

    Stapanian, M.A.; Kocovsky, P.M.; Garner, F.C.

    2008-01-01

    The generalized (Mahalanobis) distance and multivariate kurtosis are two powerful tests of multivariate discordancies (outliers). Unlike the generalized distance test, the multivariate kurtosis test has not been applied as a test of discordancy to fisheries data heretofore. We applied both tests, along with published algorithms for identifying suspected causal variable(s) of discordant observations, to two fisheries data sets from Lake Erie: total length, mass, and age from 1,234 burbot, Lota lota; and 22 combinations of unique subsets of 10 morphometrics taken from 119 yellow perch, Perca flavescens. For the burbot data set, the generalized distance test identified six discordant observations and the multivariate kurtosis test identified 24 discordant observations. In contrast with the multivariate tests, the univariate generalized distance test identified no discordancies when applied separately to each variable. Removing discordancies had a substantial effect on length-versus-mass regression equations. For 500-mm burbot, the percent difference in estimated mass after removing discordancies in our study was greater than the percent difference in masses estimated for burbot of the same length in lakes that differed substantially in productivity. The number of discordant yellow perch detected ranged from 0 to 2 with the multivariate generalized distance test and from 6 to 11 with the multivariate kurtosis test. With the kurtosis test, 108 yellow perch (90.7%) were identified as discordant in zero to two combinations, and five (4.2%) were identified as discordant in either all or 21 of the 22 combinations. The relationship among the variables included in each combination determined which variables were identified as causal. The generalized distance test identified between zero and six discordancies when applied separately to each variable. 
Removing the discordancies found in at least one-half of the combinations (k=5) had a marked effect on a principal components analysis. In particular, the percent of the total variation explained by the second and third principal components, which explain shape, increased by 52% and 44%, respectively, when the discordancies were removed. Multivariate applications of the tests have numerous ecological advantages over univariate applications, including improved management of fish stocks and interpretation of multivariate morphometric data. © 2007 Springer Science+Business Media B.V.
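The multivariate generalized-distance screen used above can be sketched in a few lines; the chi-square cutoff shown is a commonly tabulated convention, and the data are synthetic, with one planted discordant observation:

```python
import numpy as np

def mahalanobis_sq(X):
    """Squared generalized (Mahalanobis) distance of each row from the sample mean."""
    diff = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    return np.einsum('ij,jk,ik->i', diff, inv_cov, diff)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(20, 3)),        # 20 well-behaved observations
               [[10.0, 10.0, 10.0]]])           # one planted discordant observation
d2 = mahalanobis_sq(X)
# chi-square(3) 99.9% point (~16.27) is one common cutoff for flagging discordancies
suspects = np.where(d2 > 16.27)[0]
```

In practice the cutoff, and any sequential re-testing after removing a flagged observation, follows the published discordancy procedures the authors cite.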

  7. Accuracy, repeatability, and reproducibility of Artemis very high-frequency digital ultrasound arc-scan lateral dimension measurements

    PubMed Central

    Reinstein, Dan Z.; Archer, Timothy J.; Silverman, Ronald H.; Coleman, D. Jackson

    2008-01-01

    Purpose To determine the accuracy, repeatability, and reproducibility of measurement of lateral dimensions using the Artemis (Ultralink LLC) very high-frequency (VHF) digital ultrasound (US) arc scanner. Setting London Vision Clinic, London, United Kingdom. Methods A test object was measured first with a micrometer and then with the Artemis arc scanner. Five sets of 10 consecutive B-scans of the test object were performed with the scanner. The test object was removed from the system between each scan set. One expert observer and one newly trained observer separately measured the lateral dimension of the test object. Two-factor analysis of variance was performed. The accuracy was calculated as the average bias of the scan set averages. The repeatability and reproducibility coefficients were calculated. The coefficient of variation (CV) was calculated for repeatability and reproducibility. Results The test object was measured to be 10.80 mm wide. The mean lateral dimension bias was 0.00 mm. The repeatability coefficient was 0.114 mm. The reproducibility coefficient was 0.026 mm. The repeatability CV was 0.38%, and the reproducibility CV was 0.09%. There was no statistically significant variation between observers (P = .0965). There was a statistically significant variation between scan sets (P = .0036) attributed to minor vertical changes in the alignment of the test object between consecutive scan sets. Conclusion The Artemis VHF digital US arc scanner obtained accurate, repeatable, and reproducible measurements of lateral dimensions of the size commonly found in the anterior segment. PMID:17081860
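The repeatability statistics quoted here follow standard conventions; below is a sketch of the pooled within-set computation, where the 2.77 factor is the usual 1.96*sqrt(2) repeatability-coefficient convention and the data are illustrative, not the study's scans:

```python
import numpy as np

def repeatability_stats(data):
    """data: (n_sets, n_repeats) array of repeated measurements.

    Returns (sr, coefficient, cv_percent) using the pooled within-set SD
    and the conventional 2.77 (~1.96 * sqrt(2)) repeatability coefficient.
    """
    within_var = data.var(axis=1, ddof=1).mean()   # pool variance across scan sets
    sr = np.sqrt(within_var)
    coefficient = 2.77 * sr
    cv_percent = 100.0 * sr / data.mean()
    return sr, coefficient, cv_percent

# Three illustrative scan sets of two repeats each, all with within-set SD = sqrt(2)
data = np.array([[9.0, 11.0], [9.0, 11.0], [9.0, 11.0]])
sr, coefficient, cv_percent = repeatability_stats(data)
```

Reproducibility is computed analogously but from the between-observer (or between-set) variance component of the two-factor ANOVA.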

  8. Influence of rotational speed, torque and operator proficiency on failure of Greater Taper files.

    PubMed

    Yared, G M; Dagher, F E Bou; Machtou, P; Kulkarni, G K

    2002-01-01

The purpose of this study was to evaluate the influence of rotational speed, torque, and operator experience on the incidence of locking, deformation, and separation of instruments when using a specific Ni-Ti rotary instrumentation technique in extracted human teeth. Greater Taper Ni-Ti rotary instruments (GT) were used in a crown-down technique. In one group (rotational speed evaluation) of canals (n = 300), speeds of 150, 250, and 350 r.p.m. (subgroups 1, 2, and 3) were used; each subgroup included 100 canals. In a second group (torque evaluation) (n = 300), torque was set at 20, 30, and 55 Ncm (subgroups 4, 5, and 6). In a third group (operator proficiency evaluation) (n = 300), three operators with varying experience (subgroups 7, 8, and 9) were compared. Each subgroup included the use of 10 sets of GT rotary instruments and 100 canals of extracted human molars. Each set of instruments was used in up to 10 canals and sterilized before each case. NaOCl 2.5% was used as an irrigant. The number of locked, deformed, and separated instruments was recorded for each group. Statistical analysis was carried out with pairwise comparisons using Fisher's exact tests for each failure type. When the influence of rotational speed was evaluated, instrument deformation and separation did not occur in subgroups 1 (150 r.p.m.), 2 (250 r.p.m.), and 3 (350 r.p.m.). Instrument locking occurred in subgroup 3 only. Statistical analysis demonstrated a significant difference between the 150 and 350 r.p.m. groups and between the 250 and 350 r.p.m. groups with respect to instrument locking. In the torque evaluation, no separation, deformation, or locking occurred during the use of the instruments at 150 r.p.m. at any of the torque values. When the operators were compared, although two instruments separated in canals prepared by the least experienced operator, Fisher's exact tests did not demonstrate a significant difference between the three subgroups. Instrument locking, deformation, and separation did not occur with the most experienced operator, and none of the instruments separated with the trained operator. Preclinical training in the use of the GT rotary instruments with a crown-down technique at 150 r.p.m. was crucial in avoiding instrument separation and reducing the incidence of instrument locking and deformation.
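The pairwise Fisher's exact comparisons used for the failure counts can be reproduced from the 2x2 tables; this is a self-contained sketch that computes the two-sided p-value by summing the hypergeometric probabilities no larger than the observed table's:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    r1, c1 = a + b, a + c                      # fixed row/column margins
    denom = comb(n, c1)
    def hyper(k):                              # P(k of column 1 fall in row 1)
        return comb(r1, k) * comb(n - r1, c1 - k) / denom
    p_obs = hyper(a)
    k_min, k_max = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(hyper(k) for k in range(k_min, k_max + 1)
               if hyper(k) <= p_obs + 1e-12)

# Counts like those reported (2 separations in 100 canals vs. 0 in 100)
# give p ~ 0.5, consistent with the reported lack of significance.
p = fisher_exact_two_sided(2, 98, 0, 100)
```

With such sparse failure counts, the exact test is the appropriate choice over a chi-square approximation.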

  9. End-to-End Commitment

    NASA Technical Reports Server (NTRS)

    Newcomb, John

    2004-01-01

The end-to-end test would verify the complex sequence of events from lander separation to landing. Due to the large distances involved and the significant delay in sending a command and receiving verification, the lander needed to operate autonomously after it separated from the orbiter. It had to sense conditions, make decisions, and act accordingly. We were flying into a relatively unknown set of conditions: a Martian atmosphere of unknown pressure, density, and consistency, to land on a surface of unknown altitude and unknown bearing strength.

  10. Visualization of Surface Flow on a Prolate Spheroid Model Suspended by Magnetic Suspension and Balance System

    NASA Astrophysics Data System (ADS)

    Ambo, Takumi; Nakamura, Yuki; Ochiai, Taku; Nonomura, Taku; Asai, Keisuke

    2017-11-01

In this study, the surface flow on a 6:1 prolate spheroid model was visualized by the oil-flow method in the magnetic suspension and balance system (MSBS). The MSBS is a support-free system for wind-tunnel testing in which the model is levitated by magnetic forces. In this experiment, the 0.3-m MSBS was installed in the low-speed wind tunnel. The Reynolds number was 0.5 million and the angle of attack was set at 0 and 5 degrees. In addition to free-levitation tests, a thin rod simulating the disturbance of a support system was placed on the model surface and the influence of support interference was evaluated. The obtained results indicate that complicated separation patterns are present even at zero angle of attack. At α = 5°, the separation pattern becomes more complicated than that at α = 0° and the streamlines form a highly three-dimensional structure. A characteristic pattern of open separation is observed and a focal point is formed at the end of the separation line. In the evaluation of support interference, separation is delayed downstream of the rod, suggesting that the change of separation pattern is caused by transition of the laminar boundary layer behind the rod. These results indicate that one must take particular care with support interference when studying three-dimensional separation on a prolate spheroid.

  11. Ares I and Ares I-X Stage Separation Aerodynamic Testing

    NASA Technical Reports Server (NTRS)

    Pinier, Jeremy T.; Niskey, Charles J.

    2011-01-01

The aerodynamics of the Ares I crew launch vehicle (CLV) and Ares I-X flight test vehicle (FTV) during stage separation was characterized by testing 1%-scale models at the Arnold Engineering Development Center's (AEDC) von Karman Gas Dynamics Facility (VKF) Tunnel A at Mach numbers of 4.5 and 5.5. To fill a large matrix of data points in an efficient manner, an injection system supported the upper stage, and a captive trajectory system (CTS) was utilized as a support system for the first stage located downstream of the upper stage. In an overall extremely successful test, this complex experimental setup, combined with advanced post-processing of the wind tunnel data, enabled the construction of a multi-dimensional aerodynamic database for the analysis and simulation of the critical phase of stage separation at high supersonic Mach numbers. Additionally, an extensive set of data from repeated wind tunnel runs was gathered purposefully to ensure that the experimental uncertainty would be accurately quantified in this type of flow, where little historical data is available for comparison on this type of vehicle and where Reynolds-averaged Navier-Stokes (RANS) computational simulations remain far from being a reliable source of static aerodynamic data.

  12. Dynamic changes in ear temperature in relation to separation distress in dogs.

    PubMed

    Riemer, Stefanie; Assis, Luciana; Pike, Thomas W; Mills, Daniel S

    2016-12-01

Infrared thermography can visualize changes in body surface temperature that result from stress-induced physiological changes and alterations of blood flow patterns. Here we explored its use for remote stress monitoring (i.e. removing the need for human presence) in a sample of six pet dogs. Dogs were tested in a brief separation test involving contact with their owner, a stranger, and social isolation for two one-minute periods. Tests were filmed using a thermographic camera set up in a corner of the room, around 7 m from where the subjects spent most of the time. Temperature was measured from selected regions of both ear pinnae simultaneously. Temperatures of both ear pinnae showed a pattern of decrease during separation and increase when a person (either the owner or a stranger) was present, with no lateralized temperature differences between the two ears. Long-distance thermographic measurement is a promising technique for non-invasive remote stress assessment, although there are some limitations related to dogs' hair structure over the ears, making it unsuitable for some subjects. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Study of limitations and attributes of microprocessor testing techniques

    NASA Technical Reports Server (NTRS)

    Mccaskill, R.; Sohl, W. E.

    1977-01-01

All microprocessor units have a similar architecture, from which a basic test philosophy can be adopted: test each module separately, verifying its functionality through the device's input/output pins and instruction set; test for destructive interaction between functional modules; and verify all timing, status information, and interrupt operations of the device. Block and test flow diagrams are given for the 8080, 8008, 2901, 6800, and 1802 microprocessors. Manufacturers are listed and problems encountered in testing the modules are discussed. Test equipment and methods are described.

  14. Potential misdiagnosis of von Willebrand disease and haemophilia caused by ineffective mixing of thawed plasma.

    PubMed

    Favaloro, E J; Oliver, S; Mohammed, S; Ahuja, M; Grzechnik, E; Azimulla, S; McDonald, J; Lima-Oliveira, G; Lippi, G

    2017-09-01

von Willebrand disease (VWD) reflects a loss or dysfunction of von Willebrand factor (VWF), while haemophilia represents a loss or dysfunction of clotting factors such as factor VIII (FVIII) or FIX. Their diagnosis requires laboratory testing, which can be compromised by preanalytical events, including poor sample quality. This study assessed the effect of inadequate mixing as a potential cause of VWD and haemophilia misdiagnosis. After completion of requested testing, 48 consecutive patient samples comprising separate aliquots from single collections were individually pooled, appropriately mixed, then frozen in separate aliquots at either -20°C or -80°C for 2-7 days. Each sample set was then thawed, and the separate aliquots were subjected to different mixing protocols (several inversions, blood roller, vortex) vs a non-mixed sample; all aliquots were then tested with various VWF and factor assays. Non-mixing led to a substantial reduction in VWF and factor levels in about 25% of samples, which in some cases could lead to misdiagnosis of VWD or haemophilia. Interestingly, some differences were also observed with respect to the different mixing protocols. Our study identified ineffective or variable mixing of thawed plasma samples as a potential cause of misdiagnosis of VWD or haemophilia. Further education regarding the importance of appropriate mixing appears warranted. © 2017 John Wiley & Sons Ltd.

  15. Evaluation of a multi-site program designed to strengthen relational bonds for siblings separated by foster care.

    PubMed

    Waid, Jeffrey; Wojciak, Armeda Stevenson

    2017-10-01

Sibling relationships in foster care settings have received increased attention in recent years. Despite growing evidence regarding the protective potential of sibling relationships for youth in care, some sibling groups continue to experience foster-care-related separation, and few programs exist to address the needs of these youth. This study describes and evaluates Camp To Belong, a multi-site program designed to provide short-term reunification to separated sibling groups through a week-long summer camp experience. Using a pre-test post-test survey design, this paper examines changes in youth ratings of sibling conflict and sibling support across camps located in six geographically distinct regions of the United States. The effects of youth age, number of prior camp exposures, and camp location were tested using multilevel modeling procedures. Findings suggest that participation in Camp To Belong may reduce sibling conflict, and improvements in sibling support are noted for youth who have had prior exposure to the camp's programming. Camp-level variance in the sibling support outcome highlights the complex nature of relationships for siblings separated by foster care and suggests the need for additional research. Lessons learned from this multi-site evaluation and future directions are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Sample Acquisition and Analytical Chemistry Challenges to Verifying Compliance to Aviators Breathing Oxygen (ABO) Purity Specification

    NASA Technical Reports Server (NTRS)

    Graf, John

    2015-01-01

NASA has been developing and testing two different types of oxygen separation systems. One type uses pressure-swing technology; the other uses a solid electrolyte electrochemical oxygen separation cell. Both development systems have been subjected to long-term testing, and to performance testing under a variety of environmental and operational conditions. Testing these two systems revealed that measuring the product purity of oxygen, and determining whether an oxygen separation device meets Aviator's Breathing Oxygen (ABO) specifications, is a subtle and sometimes difficult analytical chemistry job. Verifying the product purity of cryogenically produced oxygen presents a different set of analytical chemistry challenges. This presentation will describe some of the sample acquisition and analytical chemistry challenges of verifying oxygen produced by an oxygen separator, and of verifying oxygen produced by cryogenic separation processes. The primary contaminant that causes gas samples to fail ABO requirements is water; the maximum amount of water vapor allowed is 7 ppmv. The principal challenge of verifying oxygen produced by an oxygen separator is that it is produced relatively slowly, and at comparatively low temperatures. A short-term failure lasting just a few minutes in the course of a 1-week run could cause an entire tank to be rejected. Continuous monitoring of oxygen purity and water vapor could identify problems as soon as they occur. Long-term oxygen separator tests were instrumented with an oxygen analyzer and with a hygrometer: a GE Moisture Monitor Series 35. This hygrometer uses an aluminum oxide sensor. The user's manual does not report this, but long-term exposure to pure oxygen causes the aluminum oxide sensor head to bias dry. Oxygen product that exceeded the 7 ppmv specification was improperly accepted because the sensor had biased.
The bias is permanent - exposure to air does not cause the sensor to return to its original response - but the bias can be accounted for by recalibrating the sensor. After this issue was found, continuous measurements of water vapor in the oxygen product were made using an FTIR. The FTIR cell is relatively large, so response time is slow - but moisture measurements were repeatable and accurate. Verifying ABO compliance for oxygen produced by commercial cryogenic processes has a different set of sample acquisition and analytical chemistry challenges. Customers want analytical chemists to conserve as much as possible. Hygrometers are not exposed to hours of continuous flow of oxygen, so they don't bias, but small amounts of contamination in valves can cause a "fail". K-bottles are periodically cleaned and recertified - after cleaning, residual moisture can cause a "fail". Operators let bottle pressure drop to room pressure, introduce outside air into the bottle, and the subsequent fill will "fail". Outside storage of K-bottles has allowed enough in-leakage that contents will "fail".

  17. Bench test evaluation of adaptive servoventilation devices for sleep apnea treatment.

    PubMed

    Zhu, Kaixian; Kharboutly, Haissam; Ma, Jianting; Bouzit, Mourad; Escourrou, Pierre

    2013-09-15

    Adaptive servoventilation devices are marketed to overcome sleep disordered breathing with apneas and hypopneas of both central and obstructive mechanisms often experienced by patients with chronic heart failure. The clinical efficacy of these devices is still questioned. This study challenged the detection and treatment capabilities of the three commercially available adaptive servoventilation devices in response to sleep disordered breathing events reproduced on an innovative bench test. The bench test consisted of a computer-controlled piston and a Starling resistor. The three devices were subjected to a flow sequence composed of central and obstructive apneas and hypopneas including Cheyne-Stokes respiration derived from a patient. The responses of the devices were separately evaluated with the maximum and the clinical settings (titrated expiratory positive airway pressure), and the detected events were compared to the bench-scored values. The three devices responded similarly to central events, by increasing pressure support to raise airflow. All central apneas were eliminated, whereas hypopneas remained. The three devices responded differently to the obstructive events with the maximum settings. These obstructive events could be normalized with clinical settings. The residual events of all the devices were scored lower than bench test values with the maximum settings, but were in agreement with the clinical settings. However, their mechanisms were misclassified. The tested devices reacted as expected to the disordered breathing events, but not sufficiently to normalize the breathing flow. The device-scored results should be used with caution to judge efficacy, as their validity depends upon the initial settings.

  18. Spatial variability in airborne pollen concentrations.

    PubMed

    Raynor, G S; Ogden, E C; Hayes, J V

    1975-03-01

Tests were conducted to determine the relationship between airborne pollen concentrations and distance. Simultaneous samples were taken in 171 tests with sets of eight rotoslide samplers spaced from one to 486 M. apart in straight lines. Use of all possible pairs gave 28 separation distances. Tests were conducted over a 2-year period in urban and rural locations distant from major pollen sources during both tree and ragweed pollen seasons. Samples were taken at a height of 1.5 M. during 5- to 20-minute periods. Tests were grouped by pollen type, location, year, and direction of the wind relative to the line. Data were analyzed to evaluate variability without regard to sampler spacing and variability as a function of separation distance. The mean, standard deviation, coefficient of variation, ratio of maximum to the mean, and ratio of minimum to the mean were calculated for each test, each group of tests, and all cases. The average coefficient of variation is 0.21; the maximum over the mean, 1.39; and the minimum over the mean, 0.69. No relationship was found with experimental conditions. Samples taken at the minimum separation distance had a mean difference of 18 per cent. Differences between pairs of samples increased with distance in 10 of 13 groups. These results suggest that airborne pollens are not always well mixed in the lower atmosphere and that a sample becomes less representative with increasing distance from the sampling location.
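The per-test spread statistics named above (coefficient of variation, ratio of maximum to the mean, ratio of minimum to the mean) are straightforward to compute. A minimal Python sketch follows; the eight counts and the use of the sample standard deviation are assumptions for illustration, not the paper's data.

```python
import numpy as np

def spread_statistics(counts):
    """Per-test spread statistics for one set of simultaneous sampler counts:
    mean, standard deviation, coefficient of variation, and the ratios of the
    maximum and minimum counts to the mean."""
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean()
    std = counts.std(ddof=1)  # sample standard deviation (an assumption)
    return {
        "mean": mean,
        "std": std,
        "cv": std / mean,
        "max_over_mean": counts.max() / mean,
        "min_over_mean": counts.min() / mean,
    }

# Hypothetical counts from the eight rotoslide samplers of one test
stats = spread_statistics([52, 48, 55, 61, 47, 50, 58, 49])
print({k: round(float(v), 3) for k, v in stats.items()})
```

Computing these for each test, each group of tests, and all cases pooled reproduces the kind of summary reported in the abstract.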

  19. Institute for Clean Energy Technology Mississippi State University NSR&D Aged HEPA Filter Study Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacks, Robert; Stormo, Julie; Rose, Coralie

Data have demonstrated that filter media lose tensile strength and the ability to resist the effects of moisture as a function of age. Testing of new and aged filters needs to be conducted to correlate reduction of physical strength of HEPA media to the ability of filters to withstand upset conditions. Appendix C of the Nuclear Air Cleaning Handbook provides the basis for DOE’s HEPA filter service life guidance. However, this appendix also points out the variability of data, and it does not correlate performance of aged filters to degradation of media due to age. Funding awarded by NSR&D to initiate full-scale testing of aged HEPA filters addresses the issue of correlating media degradation due to age with testing of new and aged HEPA filters under a generic design basis event set of conditions. This funding has accelerated the process of describing this study via: (1) establishment of a Technical Working Group of all stakeholders, (2) development and approval of a test plan, (3) development of testing and autopsy procedures, (4) acquiring an initial set of aged filters, (5) testing the initial set of aged filters, and (6) developing the filter test report content for each filter tested. This funding was very timely and has moved the project forward by at least three years. Activities have been correlated with testing conducted under DOE-EM funding for evaluating performance envelopes for AG-1 Section FC Separator and Separatorless filters. This coordination allows correlation of results from the NSR&D Aged Filter Study with results from testing new filters of the Separator and Separatorless Filter Study. DOE-EM efforts have identified approximately 100 more filters of various ages that have been stored under Level B conditions. NSR&D funded work allows a time for rigorous review among subject matter experts before moving forward with development of the testing matrix that will be used for additional filters.
The NSR&D data sets are extremely valuable in establishing a self-improving, NQA-1 program capable of advancing the service lifetime study of HEPA filters. The data and reports are available for careful and critical review by subject matter experts before the next set of filters is tested and can be found in the appendices of this final report. NSR&D funds have not only initiated the Aged HEPA Filter Study alluded to in Appendix C of the NACH, but have also enhanced the technical integrity and effectiveness of all of the follow-on testing for this long-term study.

  20. Fast comprehensive two-dimensional gas chromatography method for fatty acid methyl ester separation and quantification using dual ionic liquid columns.

    PubMed

    Nosheen, Asia; Mitrevski, Blagoj; Bano, Asghari; Marriott, Philip J

    2013-10-18

Safflower oil is a complex mixture of C18 saturated and unsaturated fatty acids amongst other fatty acids, and achieving separation between these similar structure components using one dimensional gas chromatography (GC) may be difficult. This investigation aims to obtain improved separation of fatty acid methyl esters in safflower oil, and their quantification using comprehensive two-dimensional GC (GC×GC). Here, GC×GC separation is accomplished by the coupling of two ionic liquid (IL) column phases: the combination of SLB-IL111 with IL59 column phases was finally selected since it provided excellent separation of a FAME standard mixture, as well as fatty acids in safflower and linseed oil, compared to other tested column sets. Safflower oil FAME were well separated in a short run of 16 min. FAME validation was demonstrated by method reproducibility, linearity over a range up to 500 mg L(-1), and limits of detection which ranged from 1.9 mg L(-1) to 5.2 mg L(-1) at a split ratio of 20:1. Quantification was carried out using two dilution levels of 200-fold for major components and 20-fold for trace components. The fatty acids C15:0 and C17:0 were not reported previously in safflower oil. The SLB-IL111/IL59 column set proved to be an effective and novel configuration for separation and quantification of vegetable and animal oil fatty acids. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Results of an experimental investigation to determine separation characteristics for the Orbiter/747 using a 0.0125-scale model (48-0 AX1318I-1 747) in the Ames Research center 14-foot wind tunnel (CA23B), volume 1

    NASA Technical Reports Server (NTRS)

    Esparza, V.

    1976-01-01

    Separation data were obtained at a Mach number of 0.6 and three incidence angles of 4 deg, 6 deg, and 9 deg. The orbiter angle of attack was varied from 0 to 14 degrees. Longitudinal, lateral and normal separation increments were obtained for fixed 747 angles of attack of 0 deg, 2 deg, and 4 deg while varying orbiter angle of attack. Control surface settings on the 747 carrier included rudder deflections of 0 deg and 10 deg and horizontal stabilizer deflections of -1 deg and +5 deg. Photographs of tested configurations are shown.

  2. Comprehensive Modeling of Temperature-Dependent Degradation Mechanisms in Lithium Iron Phosphate Batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kandler A; Schimpe, Michael; von Kuepach, Markus Edler

For reliable lifetime predictions of lithium-ion batteries, models for cell degradation are required. A comprehensive semi-empirical model based on a reduced set of internal cell parameters and physically justified degradation functions for the capacity loss is developed and presented for a commercial lithium iron phosphate/graphite cell. One calendar and several cycle aging effects are modeled separately. Emphasis is placed on the varying degradation at different temperatures. Degradation mechanisms for cycle aging at high and low temperatures as well as the increased cycling degradation at high state of charge are calculated separately. For parameterization, a lifetime test study is conducted including storage and cycle tests. Additionally, the model is validated through a dynamic current profile based on real-world application in a stationary energy storage system, revealing its accuracy. At the end of testing, the model error for the cell capacity loss in the application-based tests is below 1% of the original cell capacity.

  3. Effect of Red Bull energy drink on repeated Wingate cycle performance and bench-press muscle endurance.

    PubMed

    Forbes, Scott C; Candow, Darren G; Little, Jonathan P; Magnus, Charlene; Chilibeck, Philip D

    2007-10-01

The purpose of this study was to determine the effects of Red Bull energy drink on Wingate cycle performance and muscle endurance. Healthy young adults (N = 15, 11 men, 4 women, 21 +/- 5 y old) participated in a crossover study in which they were randomized to supplement with Red Bull (2 mg/kg body mass of caffeine) or isoenergetic, isovolumetric, noncaffeinated placebo, separated by 7 d. Muscle endurance (bench press) was assessed by the maximum number of repetitions over 3 sets (separated by 1-min rest intervals) at an intensity corresponding to 70% of baseline 1-repetition maximum. Three 30-s Wingate cycling tests (load = 0.075 kp/kg body mass), with 2 min recovery between tests, were used to assess peak and average power output. Red Bull energy drink significantly increased total bench-press repetitions over 3 sets (Red Bull = 34 +/- 9 vs. placebo = 32 +/- 8, P < 0.05) but had no effect on Wingate peak or average power (Red Bull = 701 +/- 124 W vs. placebo = 700 +/- 132 W, Red Bull = 479 +/- 74 W vs. placebo = 471 +/- 74 W, respectively). Red Bull energy drink significantly increased upper body muscle endurance but had no effect on anaerobic peak or average power during repeated Wingate cycling tests in young healthy adults.

  4. Effect of slice thickness on brain magnetic resonance image texture analysis

    PubMed Central

    2010-01-01

    Background The accuracy of texture analysis in clinical evaluation of magnetic resonance images depends considerably on imaging arrangements and various image quality parameters. In this paper, we study the effect of slice thickness on brain tissue texture analysis using a statistical approach and classification of T1-weighted images of clinically confirmed multiple sclerosis patients. Methods We averaged the intensities of three consecutive 1-mm slices to simulate 3-mm slices. Two hundred sixty-four texture parameters were calculated for both the original and the averaged slices. Wilcoxon's signed ranks test was used to find differences between the regions of interest representing white matter and multiple sclerosis plaques. Linear and nonlinear discriminant analyses were applied with several separate training and test sets to determine the actual classification accuracy. Results Only moderate differences in distributions of the texture parameter value for 1-mm and simulated 3-mm-thick slices were found. Our study also showed that white matter areas are well separable from multiple sclerosis plaques even if the slice thickness differs between training and test sets. Conclusions Three-millimeter-thick magnetic resonance image slices acquired with a 1.5 T clinical magnetic resonance scanner seem to be sufficient for texture analysis of multiple sclerosis plaques and white matter tissue. PMID:20955567
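The slice-averaging and paired nonparametric comparison described above can be sketched as follows. The synthetic intensities, the number of regions, and the simple intensity-spread parameter standing in for the paper's 264 texture parameters are all assumptions for illustration.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Synthetic stand-in for T1-weighted data: 20 regions of interest, each
# 3 consecutive 1-mm slices of 8x8 voxels.  A simple intensity-spread
# parameter stands in for the paper's texture parameters.
rois = rng.normal(100.0, 10.0, size=(20, 3, 8, 8))

# Simulate 3-mm slices by averaging three consecutive 1-mm slices.
rois_3mm = rois.mean(axis=1)

param_1mm = rois.std(axis=(1, 2, 3))   # parameter on the original slices
param_3mm = rois_3mm.std(axis=(1, 2))  # parameter on the averaged slices

# Paired nonparametric comparison of the two parameter distributions,
# analogous to the Wilcoxon signed-rank tests used in the paper.
stat, p = wilcoxon(param_1mm, param_3mm)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4g}")
```

On pure noise, averaging necessarily shrinks the spread, so this toy comparison flags a difference; the paper's point is that on real tissue texture the differences between 1-mm and simulated 3-mm slices were only moderate.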

  5. A computational analysis of lower bounds for the economic lot sizing problem in remanufacturing with separate setups

    NASA Astrophysics Data System (ADS)

    Aishah Syed Ali, Sharifah

    2017-09-01

This paper considers the economic lot sizing problem in remanufacturing with separate setups (ELSRs), where remanufactured and new products are produced on dedicated production lines. Since this problem is NP-hard in general, often leading to computationally inefficient, low-quality solutions, we present (a) a multicommodity formulation and (b) a strengthened formulation based on a priori addition of valid inequalities in the space of original variables, which are then compared with the Wagner-Whitin based formulation available in the literature. Computational experiments on a large number of test data sets are performed to evaluate the different approaches. The numerical results show that our strengthened formulation outperforms all the other tested approaches in terms of linear relaxation bounds. Finally, we conclude with future research directions.
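For context, the Wagner-Whitin formulation mentioned above builds on the classic uncapacitated single-item lot-sizing recursion. A minimal sketch of that dynamic program follows, with made-up demands and costs; the remanufacturing lines and separate setups of ELSRs are not modeled here.

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """Minimum total cost for the uncapacitated single-item lot-sizing
    problem via the Wagner-Whitin dynamic program.

    best[t] = cheapest way to satisfy demand for periods 1..t, where the
    last production run starts in some period j <= t and carries stock
    forward at `holding_cost` per unit per period."""
    T = len(demand)
    best = [0.0] * (T + 1)
    for t in range(1, T + 1):
        best[t] = min(
            best[j - 1]
            + setup_cost
            + holding_cost * sum((i - j) * demand[i - 1] for i in range(j, t + 1))
            for j in range(1, t + 1)
        )
    return best[T]

# Hypothetical 3-period instance: setup cost 50, holding cost 1 per unit-period
print(wagner_whitin([10, 20, 30], setup_cost=50, holding_cost=1))  # -> 120.0
```

Here the optimum covers periods 1-2 with one setup (cost 50 + 20 holding) and period 3 with a second setup, beating either producing everything up front (130) or producing each period (150).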

  6. Comparison of mine waste assessment methods at the Rattler mine site, Virginia Canyon, Colorado

    USGS Publications Warehouse

    Hageman, Phil L.; Smith, Kathleen S.; Wildeman, Thomas R.; Ranville, James F.

    2005-01-01

    In a joint project, the mine waste-piles at the Rattler Mine near Idaho Springs, Colorado, were sampled and analyzed by scientists from the U.S. Geological Survey (USGS) and the Colorado School of Mines (CSM). Separate sample collection, sample leaching, and leachate analyses were performed by both groups and the results were compared. For the study, both groups used the USGS sampling procedure and the USGS Field Leach Test (FLT). The leachates generated from these tests were analyzed for a suite of elements using ICP-AES (CSM) and ICP-MS (USGS). Leachate geochemical fingerprints produced by the two groups for composites collected from the same mine waste showed good agreement. In another set of tests, CSM collected another set of Rattler mine waste composite samples using the USGS sampling procedure. This set of composite samples was leached using the Colorado Division of Minerals and Geology (CDMG) leach test, and a modified Toxicity Characteristic Leaching Procedure (TCLP) leach test. Leachate geochemical fingerprints produced using these tests showed a variation of more than a factor of two from the geochemical fingerprints produced using the USGS FLT leach test. We have concluded that the variation in the results is due to the different parameters of the leaching tests and not due to the sampling or analytical methods.

  7. Incorporation of physical constraints in optimal surface search for renal cortex segmentation

    NASA Astrophysics Data System (ADS)

    Li, Xiuli; Chen, Xinjian; Yao, Jianhua; Zhang, Xing; Tian, Jie

    2012-02-01

    In this paper, we propose a novel approach for multiple surfaces segmentation based on the incorporation of physical constraints in optimal surface searching. We apply our new approach to solve the renal cortex segmentation problem, an important but not sufficiently researched issue. In this study, in order to better restrain the intensity proximity of the renal cortex and renal column, we extend the optimal surface search approach to allow for varying sampling distance and physical separation constraints, instead of the traditional fixed sampling distance and numerical separation constraints. The sampling distance of each vertex-column is computed according to the sparsity of the local triangular mesh. Then the physical constraint learned from a priori renal cortex thickness is applied to the inter-surface arcs as the separation constraints. Appropriate varying sampling distance and separation constraints were learnt from 6 clinical CT images. After training, the proposed approach was tested on a test set of 10 images. The manual segmentation of renal cortex was used as the reference standard. Quantitative analysis of the segmented renal cortex indicates that overall segmentation accuracy was increased after introducing the varying sampling distance and physical separation constraints (the average true positive volume fraction (TPVF) and false positive volume fraction (FPVF) were 83.96% and 2.80%, respectively, by using varying sampling distance and physical separation constraints compared to 74.10% and 0.18%, respectively, by using fixed sampling distance and numerical separation constraints). The experimental results demonstrated the effectiveness of the proposed approach.

  8. The lead time tradeoff: the case of health states better than dead.

    PubMed

    Pinto-Prades, José Luis; Rodríguez-Míguez, Eva

    2015-04-01

Lead time tradeoff (L-TTO) is a variant of the time tradeoff (TTO). L-TTO introduces a lead period in full health before illness onset, avoiding the need to use 2 different procedures for states better and worse than dead. To estimate utilities, additive separability is assumed. We tested to what extent violations of this assumption can bias utilities estimated with L-TTO. A sample of 500 members of the Spanish general population evaluated 24 health states, using face-to-face interviews. A total of 188 subjects were interviewed with L-TTO and the rest with TTO. Both samples evaluated the same set of 24 health states, divided into 4 groups with 6 health states per set. Each subject evaluated 1 of the sets. A random effects regression model was fitted to our data. Only health states better than dead were included in the regression since it is in this subset where additive separability can be tested clearly. Utilities were higher in L-TTO in relation to TTO (on average L-TTO adds about 0.2 points to the utility of health states), suggesting that additive separability is violated. The difference between methods increased with the severity of the health state. Thus, L-TTO adds about 0.14 points to the average utility of the less severe states, 0.23 to the intermediate states, and 0.28 points to the more severe states. L-TTO produced higher utilities than TTO. Health problems are perceived as less severe if a lead period in full health is added upfront, implying that there are interactions between disjointed time periods. The advantages of this method have to be compared with the cost of modeling the interaction between periods. © The Author(s) 2014.

  9. Paid Sick Leave and Job Stability

    PubMed Central

    Hill, Heather D.

    2013-01-01

    A compelling, but unsubstantiated, argument for paid sick leave legislation is that workers with leave are better able to address own and family member health needs without risking a voluntary or involuntary job separation. This study tests that claim using the Medical Expenditure Panel Survey and regression models controlling for a large set of worker and job characteristics, as well as with propensity score techniques. Results suggest that paid sick leave decreases the probability of job separation by at least 2.5 percentage points, or 25%. The association is strongest for workers without paid vacation leave and for mothers. PMID:24235780

  10. Tri-county pilot study. [Texas

    NASA Technical Reports Server (NTRS)

    Reeves, C. A. (Principal Investigator); Austin, T. W.; Kerber, A. G.

    1976-01-01

    The author has identified the following significant results. An area inventory was performed for three southeast Texas counties (Montgomery, Walker, and San Jacinto) totaling 0.65 million hectares. The inventory was performed using a two level hierarchy. Level 1 was divided into forestland, rangeland, and other land. Forestland was separated into Level 2 categories: pine, hardwood, and mixed; rangeland was not separated further. Results consisted of area statistics for each county and for the entire study site for pine, hardwood, mixed, rangeland, and other land. Color coded county classification maps were produced for the May data set, and procedures were developed and tested.

  11. Influence of rotational speed, torque and operator's proficiency on ProFile failures.

    PubMed

    Yared, G M; Bou Dagher, F E; Machtou, P

    2001-01-01

The purpose of this study was to evaluate the influence of rotational speed, torque, and operator experience with a specific Ni-Ti rotary instrumentation technique on the incidence of locking, deformation and separation of instruments. ProFile Ni-Ti rotary instruments (PRI) sizes 40-15 with a 6% taper were used in a crown-down technique. In one group of canals (n = 300) speeds of 150, 250 and 350 rpm (subgroups 1, 2 and 3) were used. Each one of the subgroups included 100 canals. In a second group (n = 300) torque was set at 20, 30 and 55 Ncm (subgroups 4, 5 and 6). In the third group (n = 300) three operators with varying experience (subgroups 7, 8 and 9) were also compared. Each subgroup included the use of 10 sets of PRI and 100 canals of extracted human molars. Each set of PRI was used in up to 10 canals and then sterilized before each case. NaOCl 2.5% was used as an irrigant. The number of locked, deformed, and separated instruments for the different groups, and within each part of the study was analysed statistically for significance with chi-squared tests. In group 1 only one instrument was deformed in the 150-rpm group and no instruments separated or locked. In the 250-rpm group instrument separation did not occur; however, a high incidence of locking, deformation and separation was noted in the 350-rpm group. In general, instrument sizes 30-15 locked, deformed and separated. Chi-squared statistics showed a significant difference between the 150 and 350 rpm groups but no difference between the 150 and 250 rpm groups with regard to instrument separation. Overall, there was a trend toward a higher incidence of instrument deformation and separation in smaller instruments. Locking and separation occurred during the final passage of the instruments, in the last (tenth) canal in each subgroup. In the second group, neither separation nor deformation and locking occurred during the use of the ProFile instruments, at 150 rpm, and at the different torque values.
In the third group, chi-squared analysis demonstrated that significantly more instruments separated with the least experienced operator. Instrument locking, deformation, and separation did not occur with the most experienced operator. Preclinical training in the use of the PRI technique with crown-down at 150 rpm was crucial in avoiding instrument separation and reducing the incidence of instrument locking and deformation.
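Group comparisons like the study's chi-squared tests on separation counts can be sketched with a 2x2 contingency table; the counts below are illustrative assumptions, not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: separated vs intact instruments per 100 canals
# at 150 rpm and 350 rpm (illustrative counts only).
table = [[0, 100],   # 150 rpm: 0 separated, 100 intact
         [10, 90]]   # 350 rpm: 10 separated, 90 intact

chi2, p, dof, expected = chi2_contingency(table)  # Yates-corrected for 2x2
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

With these assumed counts the difference between speeds is significant at the 5% level, mirroring the paper's 150 vs 350 rpm comparison; a 0 vs, say, 2 split out of 100 would not be.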

  12. Understanding Evolutionary Potential in Virtual CPU Instruction Set Architectures

    PubMed Central

    Bryson, David M.; Ofria, Charles

    2013-01-01

    We investigate fundamental decisions in the design of instruction set architectures for linear genetic programs that are used as both model systems in evolutionary biology and underlying solution representations in evolutionary computation. We subjected digital organisms with each tested architecture to seven different computational environments designed to present a range of evolutionary challenges. Our goal was to engineer a general purpose architecture that would be effective under a broad range of evolutionary conditions. We evaluated six different types of architectural features for the virtual CPUs: (1) genetic flexibility: we allowed digital organisms to more precisely modify the function of genetic instructions, (2) memory: we provided an increased number of registers in the virtual CPUs, (3) decoupled sensors and actuators: we separated input and output operations to enable greater control over data flow. We also tested a variety of methods to regulate expression: (4) explicit labels that allow programs to dynamically refer to specific genome positions, (5) position-relative search instructions, and (6) multiple new flow control instructions, including conditionals and jumps. Each of these features also adds complication to the instruction set and risks slowing evolution due to epistatic interactions. Two features (multiple argument specification and separated I/O) demonstrated substantial improvements in the majority of test environments, along with versions of each of the remaining architecture modifications that show significant improvements in multiple environments. However, some tested modifications were detrimental, though most exhibit no systematic effects on evolutionary potential, highlighting the robustness of digital evolution. 
Combined, these observations enhance our understanding of how instruction architecture impacts evolutionary potential, enabling the creation of architectures that support more rapid evolution of complex solutions to a broad range of challenges. PMID:24376669

  13. Results of an experimental investigation to determine separation characteristics for the Orbiter/747 using a 0.0125-scale model (48-0 AX1318I-1 747) in the Ames Research Center 14-foot wind tunnel (CA23B)

    NASA Technical Reports Server (NTRS)

    Esparza, V.

    1976-01-01

Aerodynamic separation data obtained from a wind tunnel test of a 0.0125-scale SSV Orbiter model of a VC70-000002 Configuration and a 0.0125-scale 747 model was presented. Separation data was obtained at a Mach number of 0.6 and three incidence angles of 4, 6, and 8 degrees. The orbiter angle of attack was varied from 0 to 14 degrees. Longitudinal, lateral and normal separation increments were obtained for fixed 747 angles of attack of 0, 2, and 4 degrees while varying the orbiter angle of attack. Control surface settings on the 747 carrier included rudder deflections of 0 and 10 degrees and horizontal stabilizer deflections of -1 and +5 degrees.

  14. The dynamic relationships between union dissolution and women's employment: a life-history analysis of 16 countries.

    PubMed

    van Damme, Maike; Kalmijn, Matthijs

    2014-11-01

    The specialization theory from Gary Becker is often used to explain the effect of women's work on the risk of divorce. The main argument is that women with little work experience have higher economic costs to exit marriage. Using the Fertility and Family Surveys, we test for 16 countries to what extent women's employment increases the risk of separation. We also more directly examine the role of economic exit costs in separation by investigating the effect of separated women's work history during the union on women's post-separation employment. The results imply that Becker was right to some extent, especially in contexts with little female employment support. However, in settings where women's employment opportunities are more ample, sociological or psychological theories have probably more explanatory power to explain the causes and consequences of union dissolution. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. StarBooster Demonstrator Cluster Configuration Analysis/Verification Program

    NASA Technical Reports Server (NTRS)

    DeTurris, Dianne J.

    2003-01-01

In order to study the flight dynamics of the cluster configuration of two first stage boosters and upper-stage, flight-testing of subsonic sub-scale models has been undertaken using two glideback boosters launched on a center upper-stage. Three high power rockets clustered together were built and flown to demonstrate vertical launch, separation and horizontal recovery of the boosters. Although the boosters fly to a conventional aircraft landing, the centerstage comes down separately under its own parachute. The goal of the project has been to collect data during separation and flight for comparison with a six degree of freedom simulation. The configuration for the delta wing canard boosters comes from a design by Starcraft Boosters, Inc. The subscale rockets were constructed of foam covered in carbon or fiberglass and were launched with commercially available solid rocket motors. The first set of boosters built were 3-ft tall with a 4-ft tall centerstage, and two additional sets of boosters were made that were each over 5-ft tall with a 7.5 ft centerstage. The rocket cluster is launched vertically, then after motor burn-out the boosters are separated and flown to a horizontal landing under radio-control. An on-board data acquisition system recorded data during both the launch and glide phases of flight.

  16. An efficient sequential strategy for realizing cross-gradient joint inversion: method and its application to 2-D cross borehole seismic traveltime and DC resistivity tomography

    NASA Astrophysics Data System (ADS)

    Gao, Ji; Zhang, Haijiang

    2018-05-01

    Cross-gradient joint inversion that enforces structural similarity between different models has been widely utilized in jointly inverting different geophysical data types. However, it is a challenge to combine different geophysical inversion systems with the cross-gradient structural constraint into one joint inversion system because they may differ greatly in the model representation, forward modelling and inversion algorithm. Here we propose a new joint inversion strategy that can avoid this issue. Different models are separately inverted using the existing inversion packages and model structure similarity is only enforced through cross-gradient minimization between two models after each iteration. Although the data fitting and structural similarity enforcing processes are decoupled, our proposed strategy is still able to choose appropriate models to balance the trade-off between geophysical data fitting and structural similarity. This is realized by using model perturbations from separate data inversions to constrain the cross-gradient minimization process. We have tested this new strategy on 2-D cross borehole synthetic seismic traveltime and DC resistivity data sets. Compared to separate geophysical inversions, our proposed joint inversion strategy fits the separate data sets at comparable levels while at the same time resulting in a higher structural similarity between the velocity and resistivity models.
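The structural constraint minimized between iterations is the cross-gradient function t = ∇m1 × ∇m2, which vanishes wherever the two models' gradients are parallel. A minimal 2-D sketch follows, with an assumed layered velocity/resistivity pair standing in for inverted models.

```python
import numpy as np

def cross_gradient(m1, m2):
    """2-D cross-gradient t = (dm1/dx)(dm2/dz) - (dm1/dz)(dm2/dx); it is
    zero wherever the two models' gradients are parallel, i.e. where the
    models share structural boundaries."""
    g1z, g1x = np.gradient(m1)  # axis 0 = depth (z), axis 1 = distance (x)
    g2z, g2x = np.gradient(m2)
    return g1x * g2z - g1z * g2x

# Two assumed models with identical layering (perfectly aligned structure)
z, x = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 60), indexing="ij")
velocity = 1500.0 + 800.0 * (z > 0.5)     # m/s, layered velocity model
resistivity = 10.0 + 90.0 * (z > 0.5)     # ohm-m, same layering
t = cross_gradient(velocity, resistivity)
print(np.abs(t).max())  # identical structure -> cross-gradient is 0 everywhere
```

In the proposed strategy, this quantity is what the post-iteration minimization drives toward zero, subject to the model-perturbation constraints from the separate data inversions.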

  17. Comparisons of false negative rates from a trend test alone and from a trend test jointly with a control-high groups pairwise test in the determination of the carcinogenicity of new drugs.

    PubMed

    Lin, Karl K; Rahman, Mohammad A

    2018-05-21

    Interest has been expressed in a joint test procedure that requires the results of both a trend test and a pairwise comparison test between the control and high groups to be statistically significant simultaneously, at the levels of significance recommended for the separate tests in the FDA 2001 draft guidance for industry, before the drug effect on the development of an individual tumor type is considered statistically significant. Results of our simulation studies show that using this joint test procedure with the levels of significance recommended for the separate tests has a serious consequence in the final interpretation of the carcinogenicity potential of a new drug: large inflations of the false negative rate, through large decreases in the false positive rate. The inflation can be as high as 204.5% of the false negative rate obtained when the trend test alone is required to be statistically significant. To correct the problem, new sets of levels of significance have also been developed for those who want to use the joint test in reviews of carcinogenicity studies.
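
    The trade-off the authors quantify can be illustrated with a toy simulation: because the joint rule demands significance of both tests, its rejections are a subset of the trend test's, so its false negative rate can only be higher. The dose-response probabilities, group sizes, and significance levels below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
doses = np.array([0.0, 1.0, 2.0, 3.0])       # control, low, mid, high
n = 50                                        # animals per group
p_true = np.array([0.05, 0.08, 0.12, 0.18])   # hypothetical dose effect

def trend_p(x, n_per, d):
    """One-sided Cochran-Armitage trend test (normal approximation)."""
    N = n_per * len(d)
    pbar = x.sum() / N
    num = np.sum(d * (x - n_per * pbar))
    var = pbar * (1 - pbar) * (n_per * np.sum(d**2) - (n_per * np.sum(d))**2 / N)
    return norm.sf(num / np.sqrt(var))

def pairwise_p(x0, x3, n_per):
    """One-sided two-proportion z-test, control vs. high group
    (0.5 added to the pooled count to avoid division by zero)."""
    p = (x0 + x3 + 0.5) / (2 * n_per + 1)
    z = (x3 - x0) / n_per / np.sqrt(2 * p * (1 - p) / n_per)
    return norm.sf(z)

trials = 2000
miss_trend = miss_joint = 0
for _ in range(trials):
    x = rng.binomial(n, p_true)
    pt = trend_p(x, n, doses)
    pp = pairwise_p(x[0], x[3], n)
    miss_trend += pt >= 0.005
    miss_joint += (pt >= 0.005) or (pp >= 0.025)  # joint rule: both must pass

print(miss_trend / trials, miss_joint / trials)  # joint misses at least as often
```

    The joint rule's false negative count dominates the trend-alone count by construction; the simulations in the paper quantify how large that gap can become at the recommended significance levels.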

  18. Hyperspectral data discrimination methods

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Chen, Xuewen

    2000-12-01

    Hyperspectral data provide spectral response information that gives a detailed chemical, moisture, and other description of the constituent parts of an item. These new sensor data are useful in USDA product inspection. However, such data introduce problems such as the curse of dimensionality, the need to reduce the number of features used to accommodate realistic small training set sizes, and the need to employ discriminatory features while still achieving good generalization (comparable training and test set performance). Several two-step methods are compared to a new and preferable single-step spectral decomposition algorithm. Initial results on hyperspectral data for good/bad almonds and for good/bad (aflatoxin-infested) corn kernels are presented. The hyperspectral application addressed differs greatly from prior USDA work (PLS), in which the level of a specific chemical constituent in food was estimated. A validation set (separate from the test set) is used in selecting algorithm parameters. Threshold parameters are varied to select the best Pc operating point. Initial results show that nonlinear features yield improved performance.

  19. Cosmic Bell Test: Measurement Settings from Milky Way Stars

    NASA Astrophysics Data System (ADS)

    Handsteiner, Johannes; Friedman, Andrew S.; Rauch, Dominik; Gallicchio, Jason; Liu, Bo; Hosp, Hannes; Kofler, Johannes; Bricher, David; Fink, Matthias; Leung, Calvin; Mark, Anthony; Nguyen, Hien T.; Sanders, Isabella; Steinlechner, Fabian; Ursin, Rupert; Wengerowsky, Sören; Guth, Alan H.; Kaiser, David I.; Scheidl, Thomas; Zeilinger, Anton

    2017-02-01

    Bell's theorem states that some predictions of quantum mechanics cannot be reproduced by a local-realist theory. That conflict is expressed by Bell's inequality, which is usually derived under the assumption that there are no statistical correlations between the choices of measurement settings and anything else that can causally affect the measurement outcomes. In previous experiments, this "freedom of choice" was addressed by ensuring that selection of measurement settings via conventional "quantum random number generators" was spacelike separated from the entangled particle creation. This, however, left open the possibility that an unknown cause affected both the setting choices and measurement outcomes as recently as mere microseconds before each experimental trial. Here we report on a new experimental test of Bell's inequality that, for the first time, uses distant astronomical sources as "cosmic setting generators." In our tests with polarization-entangled photons, measurement settings were chosen using real-time observations of Milky Way stars while simultaneously ensuring locality. Assuming fair sampling for all detected photons, and that each stellar photon's color was set at emission, we observe statistically significant ≳7.31σ and ≳11.93σ violations of Bell's inequality with estimated p values of ≲1.8×10^{-13} and ≲4.0×10^{-33}, respectively, thereby pushing back by ∼600 years the most recent time by which any local-realist influences could have engineered the observed Bell violation.

  20. Cosmic Bell Test: Measurement Settings from Milky Way Stars.

    PubMed

    Handsteiner, Johannes; Friedman, Andrew S; Rauch, Dominik; Gallicchio, Jason; Liu, Bo; Hosp, Hannes; Kofler, Johannes; Bricher, David; Fink, Matthias; Leung, Calvin; Mark, Anthony; Nguyen, Hien T; Sanders, Isabella; Steinlechner, Fabian; Ursin, Rupert; Wengerowsky, Sören; Guth, Alan H; Kaiser, David I; Scheidl, Thomas; Zeilinger, Anton

    2017-02-10

    Bell's theorem states that some predictions of quantum mechanics cannot be reproduced by a local-realist theory. That conflict is expressed by Bell's inequality, which is usually derived under the assumption that there are no statistical correlations between the choices of measurement settings and anything else that can causally affect the measurement outcomes. In previous experiments, this "freedom of choice" was addressed by ensuring that selection of measurement settings via conventional "quantum random number generators" was spacelike separated from the entangled particle creation. This, however, left open the possibility that an unknown cause affected both the setting choices and measurement outcomes as recently as mere microseconds before each experimental trial. Here we report on a new experimental test of Bell's inequality that, for the first time, uses distant astronomical sources as "cosmic setting generators." In our tests with polarization-entangled photons, measurement settings were chosen using real-time observations of Milky Way stars while simultaneously ensuring locality. Assuming fair sampling for all detected photons, and that each stellar photon's color was set at emission, we observe statistically significant ≳7.31σ and ≳11.93σ violations of Bell's inequality with estimated p values of ≲1.8×10^{-13} and ≲4.0×10^{-33}, respectively, thereby pushing back by ∼600  years the most recent time by which any local-realist influences could have engineered the observed Bell violation.
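
    For polarization-entangled photons, the CHSH form of Bell's inequality bounds the combination S of correlations at |S| ≤ 2 for any local-realist model, while quantum mechanics predicts up to 2√2. A quick check of the quantum prediction at the standard analyzer angles (this is the textbook singlet-state calculation, not the experiment's data analysis):

```python
import numpy as np

def E(a, b):
    """Singlet-state correlation for polarization analyzers at angles a, b."""
    return -np.cos(2.0 * (a - b))

# Standard CHSH angle choices (radians)
a1, a2 = 0.0, np.pi / 4
b1, b2 = np.pi / 8, 3 * np.pi / 8

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)  # 2*sqrt(2) ~ 2.828, above the local-realist bound of 2
```

    The experiment's reported ≳7.31σ and ≳11.93σ violations are statistical statements about measured correlations exceeding the bound of 2, with the novelty lying in how the angles were chosen, not in the CHSH algebra itself.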

  1. Expanding syphilis testing: a scoping review of syphilis testing interventions among key populations.

    PubMed

    Ong, Jason J; Fu, Hongyun; Smith, M Kumi; Tucker, Joseph D

    2018-05-01

    Syphilis is an important sexually transmitted infection (STI). Despite inexpensive and effective treatment, few key populations receive syphilis testing. Innovative strategies are needed to increase syphilis testing among key populations. Areas covered: This scoping review focused on strategies to increase syphilis testing in key populations (men who have sex with men (MSM), sex workers, people who use drugs, transgender people, and incarcerated individuals). Expert commentary: We identified many promising syphilis testing strategies, particularly among MSM. These innovations are separated into diagnostic, clinic-based, and non-clinic based strategies. In terms of diagnostics, self-testing, dried blood spots, and point-of-care testing can decentralize syphilis testing. Effective syphilis self-testing pilots suggest the need for further attention and research. In terms of clinic-based strategies, modifying default clinical procedures can nudge physicians to more frequently recommend syphilis testing. In terms of non-clinic based strategies, venue-based screening (e.g. in correctional facilities, drug rehabilitation centres) and mobile testing units have been successfully implemented in a variety of settings. Integration of syphilis with HIV testing may facilitate implementation in settings where individuals have increased sexual risk. There is a strong need for further syphilis testing research and programs.

  2. A new CT collimator for producing two simultaneous overlapping slices from one scan. [for biomedical applications

    NASA Technical Reports Server (NTRS)

    Kwoh, Y. S.; Glenn, W. V., Jr.; Reed, I. S.; Truong, T. K.

    1981-01-01

    A new CT collimator is developed which is capable of producing two simultaneous successive overlapping images from a single scan. The collimator represents a modification of the standard EMI 5005 collimator achieved by alternately masking one end or portions of both ends of the X-ray detectors at a 13-mm beamwidth so that a set of 540 filtered projections is obtained for each scan which can be separated into two sets of interleaved projections corresponding to views 3 mm apart. Tests have demonstrated that the quality of the images produced from these two projections almost equals the quality of those produced by the standard collimator from two separate scans. The new collimator may thus be used to achieve a speed improvement in the generation of overlapping sections as well as a reduction in X-ray dosage.
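
    The key processing step is separating the 540 filtered projections into two interleaved view sets. Assuming a simple even/odd alternation (the actual masking pattern of the modified collimator is more involved), the split is just array slicing:

```python
import numpy as np

# Hypothetical sinogram: 540 interleaved filtered projections,
# 256 detector samples per view
projections = np.arange(540 * 256, dtype=float).reshape(540, 256)

# Split into the two 270-view sets corresponding to sections 3 mm apart
section_a = projections[0::2]  # even-indexed views -> first section
section_b = projections[1::2]  # odd-indexed views  -> second section

print(section_a.shape, section_b.shape)  # (270, 256) (270, 256)
```

    Each half is then reconstructed independently, which is how one scan yields two overlapping images.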

  3. Combination of large and small basis sets in electronic structure calculations on large systems

    NASA Astrophysics Data System (ADS)

    Røeggen, Inge; Gao, Bin

    2018-04-01

    Two basis sets—a large and a small one—are associated with each nucleus of the system. Each atom has its own separate one-electron basis comprising the large basis set of the atom in question and the small basis sets for the partner atoms in the complex. The perturbed atoms in molecules and solids model is at the core of the approach, since it allows for the definition of perturbed atoms in a system. It is argued that this basis set approach should be particularly useful for periodic systems. Test calculations are performed on one-dimensional arrays of H and Li atoms. The ground-state energy per atom in the linear H array is determined versus bond length.

  4. Consistency of QSAR models: Correct split of training and test sets, ranking of models and performance parameters.

    PubMed

    Rácz, A; Bajusz, D; Héberger, K

    2015-01-01

    Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
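
    Sum of ranking differences compares the ranking induced by each performance parameter against a reference ranking (commonly the row average, as a consensus); smaller SRD means closer agreement with the consensus. A minimal sketch with hypothetical model scores (the actual SRD method also includes a randomization test for significance, omitted here):

```python
import numpy as np

def srd(values, reference):
    """Sum of ranking differences between the ranking induced by one
    performance parameter and the reference (consensus) ranking."""
    ranks = values.argsort().argsort()
    ref_ranks = reference.argsort().argsort()
    return int(np.abs(ranks - ref_ranks).sum())

# Rows = models, columns = performance parameters (hypothetical values)
scores = np.array([[0.91, 0.88, 0.93],
                   [0.85, 0.80, 0.84],
                   [0.78, 0.81, 0.75],
                   [0.70, 0.69, 0.72]])
reference = scores.mean(axis=1)  # consensus: row average

for j in range(scores.shape[1]):
    print(f"parameter {j}: SRD = {srd(scores[:, j], reference)}")
```

    Parameters with small SRD track the consensus ordering of the models; a parameter whose SRD is substantially larger (as external-validation merits were in the study) ranks the models differently from the rest.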

  5. Continuity and Separation in Symmetric Topologies

    ERIC Educational Resources Information Center

    Harris, J.; Lynch, M.

    2007-01-01

    In this note, it is shown that in a symmetric topological space, the pairs of sets separated by the topology determine the topology itself. It is then shown that when the codomain is symmetric, functions which separate only those pairs of sets that are already separated are continuous, generalizing a result found by M. Lynch.

  6. Optical chromatographic sample separation of hydrodynamically focused mixtures

    PubMed Central

    Terray, A.; Hebert, C. G.; Hart, S. J.

    2014-01-01

    Optical chromatography relies on the balance between the opposing optical and fluid drag forces acting on a particle. A typical configuration involves a loosely focused laser directly counter to the flow of particle-laden fluid passing through a microfluidic device. This equilibrium depends on the intrinsic properties of the particle, including size, shape, and refractive index. As such, uniquely fine separations are possible using this technique. Here, we demonstrate how matching the diameter of a microfluidic flow channel to that of the focusing laser in concert with a unique microfluidic platform can be used as a method to fractionate closely related particles in a mixed sample. This microfluidic network allows for a monodisperse sample of both polystyrene and poly(methyl methacrylate) spheres to be injected, hydrodynamically focused, and completely separated. To test the limit of separation, a mixed polystyrene sample containing two particles varying in diameter by less than 0.5 μm was run in the system. The analysis of the resulting separation sets the framework for continued work to perform ultra-fine separations. PMID:25553179

  7. Performance of an Optimally Tuned Range-Separated Hybrid Functional for 0-0 Electronic Excitation Energies.

    PubMed

    Jacquemin, Denis; Moore, Barry; Planchat, Aurélien; Adamo, Carlo; Autschbach, Jochen

    2014-04-08

    Using a set of 40 conjugated molecules, we assess the performance of an "optimally tuned" range-separated hybrid functional in reproducing the experimental 0-0 energies. The selected protocol accounts for the impact of solvation using a corrected linear-response continuum approach and for vibrational corrections through calculations of the zero-point energies of both ground and excited states, and provides basis-set-converged data thanks to the systematic use of diffuse-containing atomic basis sets at all computational steps. It turns out that an optimally tuned long-range corrected hybrid form of the Perdew-Burke-Ernzerhof functional, LC-PBE*, delivers both the smallest mean absolute error (0.20 eV) and standard deviation (0.15 eV) of all tested approaches, while the obtained correlation (0.93) is large but remains slightly smaller than its M06-2X counterpart (0.95). In addition, the efficiency of two other recently developed exchange-correlation functionals, namely SOGGA11-X and ωB97X-D, has been determined in order to allow more complete comparisons with previously published data.

  8. Bench Test Evaluation of Adaptive Servoventilation Devices for Sleep Apnea Treatment

    PubMed Central

    Zhu, Kaixian; Kharboutly, Haissam; Ma, Jianting; Bouzit, Mourad; Escourrou, Pierre

    2013-01-01

    Rationale: Adaptive servoventilation devices are marketed to overcome sleep disordered breathing with apneas and hypopneas of both central and obstructive mechanisms often experienced by patients with chronic heart failure. The clinical efficacy of these devices is still questioned. Study Objectives: This study challenged the detection and treatment capabilities of the three commercially available adaptive servoventilation devices in response to sleep disordered breathing events reproduced on an innovative bench test. Methods: The bench test consisted of a computer-controlled piston and a Starling resistor. The three devices were subjected to a flow sequence composed of central and obstructive apneas and hypopneas including Cheyne-Stokes respiration derived from a patient. The responses of the devices were separately evaluated with the maximum and the clinical settings (titrated expiratory positive airway pressure), and the detected events were compared to the bench-scored values. Results: The three devices responded similarly to central events, by increasing pressure support to raise airflow. All central apneas were eliminated, whereas hypopneas remained. The three devices responded differently to the obstructive events with the maximum settings. These obstructive events could be normalized with clinical settings. The residual events of all the devices were scored lower than bench test values with the maximum settings, but were in agreement with the clinical settings. However, their mechanisms were misclassified. Conclusion: The tested devices reacted as expected to the disordered breathing events, but not sufficiently to normalize the breathing flow. The device-scored results should be used with caution to judge efficacy, as their validity depends upon the initial settings. Citation: Zhu K; Kharboutly H; Ma J; Bouzit M; Escourrou P. Bench test evaluation of adaptive servoventilation devices for sleep apnea treatment. J Clin Sleep Med 2013;9(9):861-871. 
PMID:23997698

  9. A design method for entrance sections of transonic wind tunnels with rectangular cross sections

    NASA Technical Reports Server (NTRS)

    Lionel, L.; Mcdevitt, J. B.

    1975-01-01

    A mathematical technique developed to design entrance sections for transonic or high-speed subsonic wind tunnels with rectangular cross sections is described. The transition from a circular cross-section settling chamber to a rectangular test section is accomplished smoothly so as not to introduce secondary flows (vortices or boundary-layer separation) into a uniform test stream. The results of static-pressure measurements in the transition region and of static- and total-pressure surveys in the test section of a pilot model for a new facility at the Ames Research Center are presented.

  10. Beam Test Studies of 3D Pixel Sensors Irradiated Non-Uniformly for the ATLAS Forward Physics Detector

    DTIC Science & Technology

    2013-02-21

    telescope consists of six Mimosa tracking planes, the readout data acquisition system and the trigger hardware, and provides a ≈ 3 µm track pointing ... is larger than the Mimosa sensors of the telescope, separate sets of data were taken to cover the irradiated and non-irradiated regions of the sensors

  11. Non-Hierarchical Clustering as a Method to Analyse an Open-Ended Questionnaire on Algebraic Thinking

    ERIC Educational Resources Information Center

    Di Paola, Benedetto; Battaglia, Onofrio Rosario; Fazio, Claudio

    2016-01-01

    The problem of taking a data set and separating it into subgroups, where the members of each subgroup are more similar to each other than they are to members outside the subgroup, has been extensively studied in science and mathematics education research. Student responses to written questions and multiple-choice tests have been characterised and…
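
    A non-hierarchical method of the kind described, such as k-means, partitions responses so that each one lies closest to its own subgroup's centroid. A minimal Lloyd's-algorithm sketch on hypothetical two-item questionnaire scores (the paper's actual clustering method and data are not reproduced here):

```python
import numpy as np

def kmeans(X, k=2, iters=20):
    """Minimal Lloyd's algorithm with farthest-point initialization:
    members of each subgroup end up closer to their own centroid
    than to any other centroid."""
    centers = [X[0]]
    while len(centers) < k:
        dist = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(dist)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        centers = np.array([X[labels == c].mean(axis=0) for c in range(k)])
    return labels

# Hypothetical scores on two questionnaire items, two well-separated subgroups
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([1.0, 1.0], 0.15, size=(20, 2)),
               rng.normal([3.0, 3.0], 0.15, size=(20, 2))])

labels = kmeans(X, k=2)
print(labels[:20], labels[20:])  # one consistent label per subgroup
```

    Within-subgroup similarity and between-subgroup dissimilarity, the defining property described above, is exactly what the centroid update enforces at each iteration.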

  12. Multivariate modelling and personality organization: a comparative study of the Defense Mechanism Test and linguistic expressions.

    PubMed

    Sundbom, E; Jeanneau, M

    1996-03-01

    The main aim of the study is to establish an empirical connection between perceptual defences as measured by the Defense Mechanism Test (DMT)--a projective percept-genetic method--and manifest linguistic expressions based on word pattern analyses. The subjects were 25 psychiatric patients with the diagnoses neurotic personality organization (NPO), borderline personality organization (BPO) and psychotic personality organization (PPO) in accordance with Kernberg's theory. A set of 130 DMT variables and 40 linguistic variables were analyzed by means of partial least squares (PLS) discriminant analysis separately and then pooled together. The overall hypothesis was that it would be possible to define the personality organization of the patients in terms of an amalgam of perceptual defences and word patterns, and that these two kinds of data would confirm each other. The result of the combined PLS analysis revealed a very good separation between the diagnostic groups as measured by the pooled variable sets. Among other things, it was shown that NPO patients are principally characterized by linguistic variables, whereas BPO and PPO patients are better defined by perceptual defences as measured by the DMT method.

  13. Enhanced oil recovery system

    DOEpatents

    Goldsberry, Fred L.

    1989-01-01

    All energy resources available from a geopressured geothermal reservoir are used for the production of pipeline quality gas using a high pressure separator/heat exchanger and a membrane separator, and recovering waste gas from both the membrane separator and a low pressure separator in tandem with the high pressure separator for use in enhanced oil recovery, or in powering a gas engine and turbine set. Liquid hydrocarbons are skimmed off the top of geothermal brine in the low pressure separator. High pressure brine from the geothermal well is used to drive a turbine/generator set before recovering waste gas in the first separator. Another turbine/generator set is provided in a supercritical binary power plant that uses propane as a working fluid in a closed cycle, and uses exhaust heat from the combustion engine and geothermal energy of the brine in the separator/heat exchanger to heat the propane.

  14. Estimation of time-variable fast flow path chemical concentrations for application in tracer-based hydrograph separation analyses

    USGS Publications Warehouse

    Kronholm, Scott C.; Capel, Paul D.

    2016-01-01

    Mixing models are a commonly used method for hydrograph separation, but can be hindered by the subjective choice of the end-member tracer concentrations. This work tests a new variant of mixing model that uses high-frequency measures of two tracers and streamflow to separate total streamflow into water from slowflow and fastflow sources. The ratio between the concentrations of the two tracers is used to create a time-variable estimate of the concentration of each tracer in the fastflow end-member. Multiple synthetic data sets, and data from two hydrologically diverse streams, are used to test the performance and limitations of the new model (two-tracer ratio-based mixing model: TRaMM). When applied to the synthetic streams under many different scenarios, the TRaMM produces results that were reasonable approximations of the actual values of fastflow discharge (±0.1% of maximum fastflow) and fastflow tracer concentrations (±9.5% and ±16% of maximum fastflow nitrate concentration and specific conductance, respectively). With real stream data, the TRaMM produces high-frequency estimates of slowflow and fastflow discharge that align with expectations for each stream based on their respective hydrologic settings. The use of two tracers with the TRaMM provides an innovative and objective approach for estimating high-frequency fastflow concentrations and contributions of fastflow water to the stream. This provides useful information for tracking chemical movement to streams and allows for better selection and implementation of water quality management strategies.
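
    The underlying machinery is the classic two-component tracer mass balance, Q·C = Q_slow·C_slow + Q_fast·C_fast; TRaMM's contribution is making the fastflow end-member concentration time-variable via the ratio of two tracers, which this sketch does not reproduce. The function and numbers below are purely illustrative:

```python
def separate(q_total, c_stream, c_slow, c_fast):
    """Two-component tracer mass balance:
    q_total * c_stream = q_slow * c_slow + q_fast * c_fast."""
    frac_fast = (c_stream - c_slow) / (c_fast - c_slow)
    frac_fast = min(max(frac_fast, 0.0), 1.0)  # keep the fraction physical
    q_fast = frac_fast * q_total
    return q_total - q_fast, q_fast

# Hypothetical values, with specific conductance (uS/cm) as the tracer
q_slow, q_fast = separate(q_total=10.0, c_stream=350.0,
                          c_slow=500.0, c_fast=100.0)
print(q_slow, q_fast)  # 6.25 3.75
```

    The sensitivity of this split to the chosen end-member concentrations is precisely the subjectivity problem that motivates estimating the fastflow end-member from data rather than fixing it.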

  15. Optimization of capillary zone electrophoresis for charge heterogeneity testing of biopharmaceuticals using enhanced method development principles.

    PubMed

    Moritz, Bernd; Locatelli, Valentina; Niess, Michele; Bathke, Andrea; Kiessig, Steffen; Entler, Barbara; Finkler, Christof; Wegele, Harald; Stracke, Jan

    2017-12-01

    CZE is a well-established technique for charge heterogeneity testing of biopharmaceuticals. It is based on differences in the ratio of net charge to hydrodynamic radius. In an extensive intercompany study, it was recently shown that CZE is very robust and can be easily implemented in labs that did not perform it before. However, individual characteristics of some examined proteins resulted in suboptimal resolution. Therefore, enhanced method development principles were applied here to investigate possibilities for further method optimization. For this purpose, a high number of different method parameters was evaluated with the aim of improving CZE separation. For the relevant parameters, design of experiments (DoE) models were generated and optimized in several ways for different sets of responses like resolution, peak width and number of peaks. In spite of product-specific DoE optimization, it was found that the resulting combination of optimized parameters gave a significant improvement of separation for 13 out of 16 different antibodies and other molecule formats. These results clearly demonstrate generic applicability of the optimized CZE method. Adaptation to individual molecular properties may sometimes still be required in order to achieve optimal separation, but the adjustable parameters discussed in this study [mainly pH, the identity of the polymer additive (HPC versus HPMC) and the concentrations of additives like acetonitrile, butanolamine and TETA] are expected to significantly reduce the effort for specific optimization. © 2017 The Authors. Electrophoresis published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Comprehensive Modeling of Temperature-Dependent Degradation Mechanisms in Lithium Iron Phosphate Batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schimpe, Michael; von Kuepach, M. E.; Naumann, M.

    For reliable lifetime predictions of lithium-ion batteries, models for cell degradation are required. A comprehensive semi-empirical model based on a reduced set of internal cell parameters and physically justified degradation functions for the capacity loss is developed and presented for a commercial lithium iron phosphate/graphite cell. One calendar and several cycle aging effects are modeled separately. Emphasis is placed on the varying degradation at different temperatures. Degradation mechanisms for cycle aging at high and low temperatures as well as the increased cycling degradation at high state of charge are calculated separately. For parameterization, a lifetime test study is conducted including storage and cycle tests. Additionally, the model is validated through a dynamic current profile based on real-world application in a stationary energy storage system, revealing its accuracy. Tests for validation are continued for up to 114 days after the longest parametrization tests. In conclusion, the model error for the cell capacity loss in the application-based tests is below 1% of the original cell capacity at the end of testing, and the maximum relative model error is below 21%.

  17. Comprehensive Modeling of Temperature-Dependent Degradation Mechanisms in Lithium Iron Phosphate Batteries

    DOE PAGES

    Schimpe, Michael; von Kuepach, M. E.; Naumann, M.; ...

    2018-01-12

    For reliable lifetime predictions of lithium-ion batteries, models for cell degradation are required. A comprehensive semi-empirical model based on a reduced set of internal cell parameters and physically justified degradation functions for the capacity loss is developed and presented for a commercial lithium iron phosphate/graphite cell. One calendar and several cycle aging effects are modeled separately. Emphasis is placed on the varying degradation at different temperatures. Degradation mechanisms for cycle aging at high and low temperatures as well as the increased cycling degradation at high state of charge are calculated separately. For parameterization, a lifetime test study is conducted including storage and cycle tests. Additionally, the model is validated through a dynamic current profile based on real-world application in a stationary energy storage system, revealing its accuracy. Tests for validation are continued for up to 114 days after the longest parametrization tests. In conclusion, the model error for the cell capacity loss in the application-based tests is below 1% of the original cell capacity at the end of testing, and the maximum relative model error is below 21%.
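
    Semi-empirical models of this family commonly write the separate calendar and cycle terms as square-root-of-time and square-root-of-throughput laws with Arrhenius temperature scaling. The sketch below follows that generic structure only; all coefficients and activation energies are hypothetical placeholders, not the published parameterization:

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def arrhenius(k_ref, e_act, temp, temp_ref=298.15):
    """Scale a degradation rate coefficient with absolute temperature."""
    return k_ref * np.exp(-e_act / R_GAS * (1.0 / temp - 1.0 / temp_ref))

def capacity_loss(t_days, fec, temp,
                  k_cal=1.2e-3, e_cal=5.0e4,   # hypothetical calendar terms
                  k_cyc=2.0e-4, e_cyc=3.0e4):  # hypothetical cycle terms
    """Additive semi-empirical loss: calendar aging ~ sqrt(time),
    cycle aging ~ sqrt(full equivalent cycles), each Arrhenius-scaled."""
    calendar = arrhenius(k_cal, e_cal, temp) * np.sqrt(t_days)
    cycle = arrhenius(k_cyc, e_cyc, temp) * np.sqrt(fec)
    return calendar + cycle

# Relative capacity loss after one year and 250 full equivalent cycles
print(capacity_loss(365, 250, 298.15))  # at 25 C
print(capacity_loss(365, 250, 313.15))  # at 40 C: noticeably faster
```

    Modeling each aging mechanism as a separate additive term is what allows the temperature and state-of-charge dependencies described above to be parameterized from separate storage and cycle tests.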

  18. Exact relativistic Toda chain eigenfunctions from Separation of Variables and gauge theory

    NASA Astrophysics Data System (ADS)

    Sciarappa, Antonio

    2017-10-01

    We provide a proposal, motivated by Separation of Variables and gauge theory arguments, for constructing exact solutions to the quantum Baxter equation associated to the N-particle relativistic Toda chain and test our proposal against numerical results. Quantum Mechanical non-perturbative corrections, essential in order to obtain a sensible solution, are taken into account in our gauge theory approach by considering codimension two defects on curved backgrounds (squashed S 5 and degenerate limits) rather than flat space; this setting also naturally incorporates exact quantization conditions and energy spectrum of the relativistic Toda chain as well as its modular dual structure.

  19. Separate class true discovery rate degree of association sets for biomarker identification.

    PubMed

    Crager, Michael R; Ahmed, Murat

    2014-01-01

    In 2008, Efron showed that biological features in a high-dimensional study can be divided into classes and a separate false discovery rate (FDR) analysis can be conducted in each class using information from the entire set of features to assess the FDR within each class. We apply this separate class approach to true discovery rate degree of association (TDRDA) set analysis, which is used in clinical-genomic studies to identify sets of biomarkers having strong association with clinical outcome or state while controlling the FDR. Careful choice of classes based on prior information can increase the identification power of the separate class analysis relative to the overall analysis.

  20. Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2016-01-01

    A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A highly unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature-recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
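
    The linear-independence check described here amounts to computing the variance inflation factor of each column: regress it on the remaining columns and take 1/(1 − R²), flagging any value above the threshold of five. A sketch with synthetic data (the column dimensions and values are illustrative, not balance calibration data):

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of X: regress the column
    on the remaining ones and return 1 / (1 - R^2)."""
    factors = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1.0 - (y - Z @ beta).var() / y.var()
        factors.append(1.0 / (1.0 - r2))
    return np.array(factors)

rng = np.random.default_rng(1)
loads = rng.normal(size=(100, 3))  # three independently applied components

# A third column that is almost a linear combination of the first two
collinear = np.column_stack([loads[:, 0], loads[:, 1],
                             loads[:, 0] + loads[:, 1]
                             + 0.01 * rng.normal(size=100)])

print(vif(loads).max())      # near 1: safely below the threshold of 5
print(vif(collinear).max())  # far above 5: not linearly independent
```

    Running the same check on both the load columns and the bridge output columns is what the paper's uniqueness test requires, since independence of one set does not guarantee independence of the other.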

  1. Spectral dependence of texture features integrated with hyperspectral data for area target classification improvement

    NASA Astrophysics Data System (ADS)

    Bangs, Corey F.; Kruse, Fred A.; Olsen, Chris R.

    2013-05-01

    Hyperspectral data were assessed to determine the effect of integrating spectral data and extracted texture feature data on classification accuracy. Four separate spectral ranges (hundreds of spectral bands total) were used from the Visible and Near Infrared (VNIR) and Shortwave Infrared (SWIR) portions of the electromagnetic spectrum. Haralick texture features (contrast, entropy, and correlation) were extracted from the average gray-level image for each of the four spectral ranges studied. A maximum likelihood classifier was trained using a set of ground truth regions of interest (ROIs) and applied separately to the spectral data, texture data, and a fused dataset containing both. Classification accuracy was measured by comparison of results to a separate verification set of test ROIs. Analysis indicates that the spectral range (source of the gray-level image) used to extract the texture feature data has a significant effect on the classification accuracy. This result applies to texture-only classifications as well as the classification of integrated spectral data and texture feature data sets. Overall classification improvement for the integrated data sets was near 1%. Individual improvement for integrated spectral and texture classification of the "Urban" class showed approximately 9% accuracy increase over spectral-only classification. Texture-only classification accuracy was highest for the "Dirt Path" class at approximately 92% for the spectral range from 947 to 1343 nm. This research demonstrates the effectiveness of texture feature data for more accurate analysis of hyperspectral data and the importance of selecting the correct spectral range to be used for the gray-level image source to extract these features.
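
    The three Haralick features named above are all derived from a gray-level co-occurrence matrix (GLCM). A minimal sketch for a single pixel offset (the quantization depth and test images are illustrative; production code would average several offsets and directions):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Symmetric, normalized gray-level co-occurrence matrix for one offset,
    for an image with values in [0, 1)."""
    q = np.minimum((img * levels).astype(int), levels - 1)
    P = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[q[i, j], q[i + dy, j + dx]] += 1
    P = P + P.T  # symmetrize
    return P / P.sum()

def haralick(P):
    """Contrast, entropy and correlation of a normalized symmetric GLCM."""
    idx = np.arange(P.shape[0])
    i, j = np.meshgrid(idx, idx, indexing="ij")
    contrast = np.sum(P * (i - j) ** 2)
    entropy = -np.sum(P[P > 0] * np.log(P[P > 0]))
    mu = np.sum(idx * P.sum(axis=1))
    sigma2 = np.sum((idx - mu) ** 2 * P.sum(axis=1))
    correlation = np.sum(P * (i - mu) * (j - mu)) / sigma2
    return contrast, entropy, correlation

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 1, 32, endpoint=False), (32, 1))  # gradient
noisy = rng.random((32, 32))                                      # no texture

print(haralick(glcm(smooth)))  # low contrast, correlation near 1
print(haralick(glcm(noisy)))   # high contrast, correlation near 0
```

    Because the GLCM is built from a single gray-level image, the spectral range averaged to produce that image directly shapes these features, which is the sensitivity the study measures.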

  2. The use of FDTD in establishing in vitro experimentation conditions representative of lifelike cell phone radiation on the spermatozoa.

    PubMed

    Mouradi, Rand; Desai, Nisarg; Erdemir, Ahmet; Agarwal, Ashok

    2012-01-01

    Recent studies have shown that exposing human semen samples to cell phone radiation leads to a significant decline in sperm parameters. In daily living, a cell phone is usually kept in proximity to the groin, such as in a trouser pocket, separated from the testes by multiple layers of tissue. The aim of this study was to calculate the distance between a cell phone and a semen sample needed to set up an in vitro experiment that can mimic real life conditions (cell phone in trouser pocket separated by multiple tissue layers). For this reason, a computational model of the scrotal tissues, including these separating layers, was designed and used in a series of simulations based on the Finite Difference Time Domain (FDTD) method. These results showed that, to provide an effect equivalent to that of the multiple tissue layers, the distance between the cell phone and the semen sample should be 0.8 cm to 1.8 cm greater than the anticipated distance between a cell phone and the testes.

  3. Experimentation and evaluation of advanced integrated system concepts

    NASA Astrophysics Data System (ADS)

    Ross, M.; Garrigus, K.; Gottschalck, J.; Rinearson, L.; Longee, E.

    1980-09-01

    This final report examines the implementation of a time-phased test bed for experimentation and evaluation of advanced system concepts relative to the future Defense Switched Network (DSN). After identifying issues pertinent to the DSN, a set of experiments which address these issues is developed. Experiments are ordered based on their immediacy and relative importance to DSN development. The set of experiments thus defined allows requirements for a time-phased implementation of a test bed to be identified, and several generic test bed architectures which meet these requirements are examined. Specific architecture implementations are costed and cost/schedule profiles are generated as a function of experimental capability. The final recommended system consists of two separate test beds: a circuit switch test bed, configured around an off-the-shelf commercial switch, and directed toward the examination of nearer term and transitional issues raised by the evolving DSN; and a packet/hybrid test bed, featuring a discrete buildup of new hardware and software modules, and directed toward examination of the more advanced integrated voice and data telecommunications issues and concepts.

  4. Segmenting data sets for RIP.

    PubMed

    de Sanctis, Daniele; Nanao, Max H

    2012-09-01

    Specific radiation damage can be used for the phasing of macromolecular crystal structures. In practice, however, the optimization of the X-ray dose used to `burn' the crystal to induce specific damage can be difficult. Here, a method is presented in which a single large data set that has not been optimized in any way for radiation-damage-induced phasing (RIP) is segmented into multiple sub-data sets, which can then be used for RIP. The efficacy of this method is demonstrated using two model systems and two test systems. A method to improve the success of this type of phasing experiment by varying the composition of the two sub-data sets with respect to their separation by image number, and hence by absorbed dose, as well as their individual completeness is illustrated.

  5. MAVRIC Flutter Model Transonic Limit Cycle Oscillation Test

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Schuster, David M.; Spain, Charles V.; Keller, Donald F.; Moses, Robert W.

    2001-01-01

    The Models for Aeroelastic Validation Research Involving Computation semi-span wind-tunnel model (MAVRIC-I), a business jet wing-fuselage flutter model, was tested in NASA Langley's Transonic Dynamics Tunnel with the goal of obtaining experimental data suitable for Computational Aeroelasticity code validation at transonic separation onset conditions. This research model is notable for its inexpensive construction and instrumentation installation procedures. Unsteady pressures and wing responses were obtained for three wingtip configurations of clean, tipstore, and winglet. Traditional flutter boundaries were measured over the range of M = 0.6 to 0.9 and maps of Limit Cycle Oscillation (LCO) behavior were made in the range of M = 0.85 to 0.95. Effects of dynamic pressure and angle-of-attack were measured. Testing in both R134a heavy gas and air provided unique data on Reynolds number, transition effects, and the effect of speed of sound on LCO behavior. The data set provides excellent code validation test cases for the important class of flow conditions involving shock-induced transonic flow separation onset at low wing angles, including LCO behavior.

  7. Runway Scheduling Using Generalized Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Montoya, Justin; Wood, Zachary; Rathinam, Sivakumar

    2011-01-01

    A generalized dynamic programming method for finding a set of Pareto-optimal solutions for a runway scheduling problem is introduced. The algorithm generates a set of runway flight sequences that are optimal for both runway throughput and delay. Realistic time-based operational constraints are considered, including miles-in-trail separation, runway crossings, and wake vortex separation. The authors also model divergent runway takeoff operations to allow for reduced wake vortex separation. A modeled Dallas/Fort Worth International Airport and three baseline heuristics are used to illustrate preliminary benefits of using the generalized dynamic programming method. Simulated traffic levels ranged from 10 aircraft to 30 aircraft with each test case spanning 15 minutes. The optimal solution shows a 40-70 percent decrease in the expected delay per aircraft over the baseline schedulers. Computational results suggest that the algorithm is promising for real-time application with an average computation time of 4.5 seconds. For even faster computation times, two heuristics are developed. As compared to the optimal, the heuristics are within 5% of the expected delay per aircraft and 1% of the expected number of runway operations per hour and can be 100x faster.
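
    Not the paper's Pareto formulation, but the flavor of subset dynamic programming it builds on can be sketched for a single runway: the state is (set of aircraft served, last aircraft), and the value is the earliest time the last operation can occur given release times and pairwise separation requirements. A toy illustration (`min_makespan` and its inputs are hypothetical; it tracks only completion time, not the throughput/delay trade-off):

```python
from itertools import combinations

def min_makespan(release, sep):
    """Earliest completion time for sequencing all aircraft on one runway.
    release[i]: earliest time aircraft i can use the runway.
    sep[p][q]:  required separation when q follows p."""
    n = len(release)
    # base case: a single aircraft can go at its release time
    best = {(frozenset([i]), i): release[i] for i in range(n)}
    for size in range(2, n + 1):
        for subset in combinations(range(n), size):
            fs = frozenset(subset)
            for last in subset:
                prev_fs = fs - {last}
                # earliest slot for `last` over all possible predecessors
                best[(fs, last)] = min(
                    max(release[last], best[(prev_fs, p)] + sep[p][last])
                    for p in prev_fs
                )
    full = frozenset(range(n))
    return min(best[(full, i)] for i in range(n))
```

    The full method in the paper keeps a set of non-dominated (throughput, delay) values per state instead of a single scalar, yielding the Pareto front.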

  8. Improved Flow Modulator Construction for GC × GC with Quadrupole Mass Spectrometry.

    PubMed

    Ston, Martin; Cabala, Radomir; Bierhanzl, Vaclav Matej; Krajicek, Jan; Bosakova, Zuzana

    2016-08-18

    Improvement and testing of a flow modulator for application in comprehensive two-dimensional gas chromatography separations is the subject of the present paper. The improved setup, constructed from two independent capillary branches each consisting of a pressure regulator, a pressure sensor, a two-way solenoid valve and a microfluidic T-connector, allows independent and easy setting of the pressures and flow velocities in the modulator and provides system flexibility in operation without the need to exchange any components. The estimated flow rates were 0.4 mL/min in the first column and 3.2 mL/min in the second column. This setup was compared with the commercial Zoex cryogenic modulator for the separation of 17 selected solvents under isothermal conditions. The modulator working conditions were optimized and its separation power was demonstrated on the analysis of a lavender extract using two orthogonal capillary column sets (nonpolar-polar vs. polar-nonpolar) and a temperature program. The results were evaluated by two commercial software packages and discussed with respect to agreement of the compound identifications. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Biostatistics Series Module 3: Comparing Groups: Numerical Variables.

    PubMed

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Numerical data that are normally distributed can be analyzed with parametric tests, that is, tests based on the parameters that define a normal distribution curve. If the distribution is uncertain, the data can be plotted as a normal probability plot and visually inspected, or tested for normality using one of a number of goodness-of-fit tests, such as the Kolmogorov-Smirnov test. The widely used Student's t-test has three variants. The one-sample t-test is used to assess if a sample mean (as an estimate of the population mean) differs significantly from a given population mean. The means of two independent samples may be compared for a statistically significant difference by the unpaired or independent samples t-test. If the data sets are related in some way, their means may be compared by the paired or dependent samples t-test. The t-test should not be used to compare the means of more than two groups. Although it is possible to compare groups in pairs, when there are more than two groups this will increase the probability of a Type I error. The one-way analysis of variance (ANOVA) is employed to compare the means of three or more independent data sets that are normally distributed. Multiple measurements from the same set of subjects cannot be treated as separate, unrelated data sets; comparison of means in such a situation requires repeated measures ANOVA. Note that while a multiple group comparison test such as ANOVA can point to a significant difference, it does not identify exactly between which two groups the difference lies. To do this, multiple group comparison needs to be followed up by an appropriate post hoc test, such as Tukey's honestly significant difference test following ANOVA. If the assumptions for parametric tests are not met, there are nonparametric alternatives for comparing data sets. These include the Mann-Whitney U-test as the nonparametric counterpart of the unpaired Student's t-test, the Wilcoxon signed-rank test as the counterpart of the paired Student's t-test, the Kruskal-Wallis test as the nonparametric equivalent of ANOVA, and the Friedman test as the counterpart of repeated measures ANOVA.
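
    The Friedman test mentioned above can be sketched in a few lines of pure Python. A minimal illustration of the chi-square statistic, without tie correction (`ranks` and `friedman_stat` are hypothetical helper names, not from any statistics package):

```python
def ranks(row):
    """Average 1-based ranks of a row; tied values share the mean rank."""
    order = sorted(range(len(row)), key=lambda i: row[i])
    r = [0.0] * len(row)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and row[order[j + 1]] == row[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for m in range(i, j + 1):
            r[order[m]] = avg
        i = j + 1
    return r

def friedman_stat(data):
    """Friedman chi-square for `data`: one row per block (e.g. subject),
    one column per treatment.  Q = 12/(n k (k+1)) * sum(R_j^2) - 3 n (k+1)."""
    n, k = len(data), len(data[0])
    R = [0.0] * k  # column rank sums
    for row in data:
        for j, rv in enumerate(ranks(row)):
            R[j] += rv
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in R) - 3.0 * n * (k + 1)
```

    Under the null hypothesis Q is approximately chi-square distributed with k - 1 degrees of freedom; perfect agreement across blocks gives the maximum Q = n(k - 1).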

  10. Control of stacking loads in final waste disposal according to the borehole technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feuser, W.; Barnert, E.; Vijgen, H.

    1996-12-01

    The semihydrostatic model has been developed in order to assess the mechanical loads acting on heat-generating ILW(Q) and HTGR fuel element waste packages to be emplaced in vertical boreholes according to the borehole technique in underground rock salt formations. For the experimental validation of the theory, laboratory test stands reduced in scale are set up to simulate the bottom section of a repository borehole. A comparison of the measurement results with the data computed by the model, a correlation between the test stand results, and a systematic determination of material-typical crushed salt parameters in a separate research project will serve to derive a set of characteristic equations enabling a description of real conditions in a future repository.

  11. Cosmic Bell Test: Measurement Settings from Milky Way Stars

    DOE PAGES

    Handsteiner, Johannes; Friedman, Andrew S.; Rauch, Dominik; ...

    2017-02-07

    Bell’s theorem states that some predictions of quantum mechanics cannot be reproduced by a local-realist theory. That conflict is expressed by Bell’s inequality, which is usually derived under the assumption that there are no statistical correlations between the choices of measurement settings and anything else that can causally affect the measurement outcomes. In previous experiments, this “freedom of choice” was addressed by ensuring that selection of measurement settings via conventional “quantum random number generators” was spacelike separated from the entangled particle creation. This, however, left open the possibility that an unknown cause affected both the setting choices and measurement outcomes as recently as mere microseconds before each experimental trial. Here we report on a new experimental test of Bell’s inequality that, for the first time, uses distant astronomical sources as “cosmic setting generators.” In our tests with polarization-entangled photons, measurement settings were chosen using real-time observations of Milky Way stars while simultaneously ensuring locality. Assuming fair sampling for all detected photons, and that each stellar photon’s color was set at emission, we observe statistically significant ≳7.31σ and ≳11.93σ violations of Bell’s inequality with estimated p values of ≲1.8 × 10^-13 and ≲4.0 × 10^-33, respectively, thereby pushing back by ~600 years the most recent time by which any local-realist influences could have engineered the observed Bell violation.
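
    Such experiments typically use the CHSH form of Bell's inequality, for which any local-realist theory obeys |S| ≤ 2 while quantum mechanics allows up to 2√2 ≈ 2.83. A minimal numeric sketch using the singlet-state correlation E(x, y) = -cos(x - y) (an illustration of the bound, not the experiment's analysis code; `chsh` is a hypothetical name):

```python
import math

def chsh(a, a2, b, b2):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
    for singlet-state correlations E(x, y) = -cos(x - y),
    where a, a' and b, b' are the two parties' analyzer angles."""
    E = lambda x, y: -math.cos(x - y)
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
```

    At the standard optimal angles (0, π/2 for one side and π/4, 3π/4 for the other) the magnitude of S reaches 2√2, violating the local-realist bound of 2.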

  12. Novel microfluidic device for the continuous separation of cancer cells using dielectrophoresis.

    PubMed

    Alazzam, Anas; Mathew, Bobby; Alhammadi, Falah

    2017-03-01

    We describe the design, microfabrication, and testing of a microfluidic device for the separation of cancer cells based on dielectrophoresis. Cancer cells, specifically green fluorescent protein-labeled MDA-MB-231, are successfully separated from a heterogeneous mixture of the same and normal blood cells. MDA-MB-231 cancer cells are separated with an accuracy that enables precise detection and counting of circulating tumor cells present among normal blood cells. The separation is performed using a set of planar interdigitated transducer electrodes that are deposited on the surface of a glass wafer and slightly protrude into the separation microchannel at one side. The device includes two parts, namely, a glass wafer and polydimethylsiloxane element. The device is fabricated using standard microfabrication techniques. All experiments are conducted with low conductivity sucrose-dextrose isotonic medium. The variation in response between MDA-MB-231 cancer cells and normal cells to a certain band of alternating-current frequencies is used for continuous separation of cells. The fabrication of the microfluidic device, preparation of cells and medium, and flow conditions are detailed. The proposed microdevice can be used to detect and separate malignant cells from heterogeneous mixture of cells for the purpose of early screening for cancer. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. LBA-ECO TG-07 Soil Trace Gas Flux and Root Mortality, Tapajos National Forest

    Treesearch

    R.K. Varner; M.M. Keller

    2009-01-01

    This data set reports the results of an experiment that tested the short-term effects of root mortality on the soil-atmosphere fluxes of nitrous oxide, nitric oxide, methane, and carbon dioxide in a tropical evergreen forest. Weekly trace gas fluxes are provided for treatment and control plots on sand and clay tropical forest soils in two comma-separated ASCII files....

  14. AC-impedance measurements during thermal runaway process in several lithium/polymer batteries

    NASA Astrophysics Data System (ADS)

    Uchida, I.; Ishikawa, H.; Mohamedi, M.; Umeda, M.

    In this work, we present a set of thermal characterization experiments on a charged prismatic polymer lithium-ion battery (PLB), compared with those of a lithium-ion battery (LIB). Cells at different states of charge (SOC) were tested inside an accelerating rate calorimeter (ARC) to determine the onset-of-thermal-runaway (OTR) temperatures. In addition, the thermally activated components of these cells were followed by monitoring the impedance (at 1 kHz) and the open-circuit voltage (OCV) as a function of temperature. An increase in the impedance was observed at around 133 °C, corresponding to the polyethylene separator shutdown. Above 140 °C, the OCV dropped to zero, indicating an internal short-circuit due to separator meltdown and suggesting that the pinholes created in the separator at meltdown are large enough to create an internal short-circuit.

  15. Genetic Architecture of the Delis-Kaplan Executive Function System Trail Making Test: Evidence for Distinct Genetic Influences on Executive Function

    PubMed Central

    Vasilopoulos, Terrie; Franz, Carol E.; Panizzon, Matthew S.; Xian, Hong; Grant, Michael D.; Lyons, Michael J; Toomey, Rosemary; Jacobson, Kristen C.; Kremen, William S.

    2012-01-01

    Objective To examine how genes and environments contribute to relationships among Trail Making test conditions and the extent to which these conditions have unique genetic and environmental influences. Method Participants included 1237 middle-aged male twins from the Vietnam Era Twin Study of Aging (VETSA). The Delis-Kaplan Executive Function System Trail Making test included visual searching, number and letter sequencing, and set-shifting components. Results Phenotypic correlations among Trails conditions ranged from 0.29 to 0.60, and genes accounted for the majority (58-84%) of each correlation. Overall heritability ranged from 0.34 to 0.62 across conditions. Phenotypic factor analysis suggested a single factor. In contrast, genetic models revealed a single common genetic factor but also unique genetic influences separate from the common factor. Genetic variance (i.e., heritability) of number and letter sequencing was completely explained by the common genetic factor, while unique genetic influences separate from the common factor accounted for 57% and 21% of the heritabilities of visual search and set-shifting, respectively. After accounting for general cognitive ability, unique genetic influences accounted for 64% and 31% of those heritabilities. Conclusions A common genetic factor, most likely representing a combination of speed and sequencing, accounted for most of the correlation among Trails 1-4. Distinct genetic factors, however, accounted for a portion of variance in visual scanning and set-shifting. Thus, although traditional phenotypic shared variance analysis techniques suggest only one general factor underlying different neuropsychological functions in non-patient populations, examining the genetic underpinnings of cognitive processes with twin analysis can uncover more complex etiological processes. PMID:22201299

  16. Separable Roles for Attentional Control Sub-Systems in Reading Tasks: A Combined Behavioral and fMRI Study

    PubMed Central

    Ihnen, S.K.Z.; Petersen, Steven E.; Schlaggar, Bradley L.

    2015-01-01

    Attentional control is important both for learning to read and for performing difficult reading tasks. A previous study invoked 2 mechanisms to explain reaction time (RT) differences between reading tasks with variable attentional demands. The present study combined behavioral and neuroimaging measures to test the hypotheses that there are 2 mechanisms of interaction between attentional control and reading; that these mechanisms are dissociable both behaviorally and neuro-anatomically; and that the 2 mechanisms involve functionally separable control systems. First, RT evidence was found in support of the 2-mechanism model, corroborating the previous study. Next, 2 sets of brain regions were identified as showing functional magnetic resonance imaging blood oxygen level-dependent activity that maps onto the 2-mechanism distinction. One set included bilateral Cingulo-opercular regions and mostly right-lateralized Dorsal Attention regions (CO/DA+). This CO/DA+ region set showed response properties consistent with a role in reporting which processing pathway (phonological or lexical) was biased for a particular trial. A second set was composed primarily of left-lateralized Frontal-parietal (FP) regions. Its signal properties were consistent with a role in response checking. These results demonstrate how the subcomponents of attentional control interact with subcomponents of reading processes in healthy young adults. PMID:24275830

  17. CT and MRI slice separation evaluation by LabView developed software.

    PubMed

    Acri, Giuseppe; Testagrossa, Barbara; Sestito, Angela; Bonanno, Lilla; Vermiglio, Giuseppe

    2018-02-01

    The efficient use of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) equipment necessitates establishing adequate quality-control (QC) procedures. In particular, the accuracy of slice separation, during multislice acquisition, requires scan exploration of phantoms containing test objects. To simplify such procedures, a novel phantom and a computerised LabView-based procedure have been devised, enabling determination of the midpoint of the full width at half maximum (FWHM) in real time while the distance between the profile midpoints of two successive images is evaluated and measured. The results were compared with those obtained by processing the same phantom images with commercial software. To validate the proposed methodology, the Fisher test was conducted on the resulting data sets. In all cases, there was no statistically significant variation between the commercial procedure and the LabView one, which can be used on any CT or MRI diagnostic device. Copyright © 2017. Published by Elsevier GmbH.
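
    The FWHM-midpoint evaluation described above can be sketched for a 1-D image profile. A minimal pure-Python illustration with linear interpolation at the half-maximum crossings (`fwhm_midpoint` is a hypothetical name, not taken from the LabView software):

```python
def fwhm_midpoint(profile):
    """Return (FWHM, midpoint) in sample units for a single-peaked
    1-D profile, interpolating linearly at the half-maximum crossings."""
    peak = max(profile)
    half = peak / 2.0
    ip = profile.index(peak)
    # walk left from the peak until the profile drops below half maximum
    i = ip
    while i > 0 and profile[i - 1] >= half:
        i -= 1
    left = 0.0 if i == 0 else \
        (i - 1) + (half - profile[i - 1]) / (profile[i] - profile[i - 1])
    # walk right from the peak
    j = ip
    while j < len(profile) - 1 and profile[j + 1] >= half:
        j += 1
    right = float(len(profile) - 1) if j == len(profile) - 1 else \
        j + (profile[j] - half) / (profile[j] - profile[j + 1])
    return right - left, (left + right) / 2.0
```

    The slice separation would then be the difference between the midpoints computed for two successive slice profiles.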

  18. [Comparison of techniques for coliform bacteria extraction from sediment of Xochimilco Lake, Mexico].

    PubMed

    Fernández-Rendón, Carlos L; Barrera-Escorcia, Guadalupe

    2013-01-01

    The need to separate bacteria from sediment in order to appropriately count them has led to test the efficacy of different techniques. In this research, traditional techniques such as manual shaking, homogenization, ultrasonication, and surfactant are compared. Moreover, the possibility of using a set of enzymes (pancreatine) and an antibiotic (ampicillin) for sediment coliform extraction is proposed. Samples were obtained from Xochimilco Lake in Mexico City. The most probable number of coliform bacteria was determined after applying the appropriate separation procedure. Most of the techniques tested led to numbers similar to those of the control (manual shaking). Only with the use of ampicillin, a greater total coliform concentration was observed (Mann-Whitney, z = 2.09; p = 0.03). It is possible to propose the use of ampicillin as a technique for total coliform extraction; however, it is necessary to consider sensitivity of bacteria to the antibiotic.

  19. Mach 10 Stage Separation Analysis for the X-43A

    NASA Technical Reports Server (NTRS)

    Tartabini, Paul V.; Bose, David M.; Thornblom, Mark N.; Lien, J. P.; Martin, John G.

    2007-01-01

    This paper describes the pre-flight stage separation analysis that was conducted in support of the final flight of the X-43A. In that flight, which occurred less than eight months after the successful Mach 7 flight, the X-43A Research Vehicle attained a peak speed of Mach 9.6. Details are provided on how the lessons learned from the Mach 7 flight affected separation modeling and how adjustments were made to account for the increased flight Mach number. Also, the procedure for defining the feedback loop closure and feed-forward parameters employed in the separation control logic is described, and the effect of these parameters on separation performance is explained. In addition, the range and nominal values of these parameters, which were included in the Mission Data Load, are presented. Once updates were made, the nominal pre-flight trajectory and Monte Carlo statistical results were determined and stress tests were performed to ensure system robustness. During flight the vehicle performed within the uncertainty bounds predicted in the pre-flight analysis and ultimately set the world record for airbreathing powered flight.

  20. Development of an intensive care unit resource assessment survey for the care of critically ill patients in resource-limited settings.

    PubMed

    Leligdowicz, Aleksandra; Bhagwanjee, Satish; Diaz, Janet V; Xiong, Wei; Marshall, John C; Fowler, Robert A; Adhikari, Neill Kj

    2017-04-01

    Capacity to provide critical care in resource-limited settings is poorly understood because of a lack of data about resources available to manage critically ill patients. Our objective was to develop a survey to address this issue. We developed and piloted a cross-sectional self-administered survey in 9 resource-limited countries. The survey consisted of 8 domains; specific items within domains were modified from previously developed survey tools. We distributed the survey by e-mail to a convenience sample of health care providers responsible for providing care to critically ill patients. We assessed clinical sensibility and test-retest reliability. Nine of 15 health care providers responded to the survey on 2 separate occasions, separated by 2 to 4 weeks. Clinical sensibility was high (3.9-4.9/5 on assessment tool). Test-retest reliability for questions related to resource availability was acceptable (intraclass correlation coefficient, 0.94; 95% confidence interval, 0.75-0.99; mean (SD) of weighted κ values = 0.67 [0.19]). The mean (SD) time for survey completion was 21 (16) minutes. A reliable cross-sectional survey of available resources to manage critically ill patients can be feasibly administered to health care providers in resource-limited settings. The survey will inform future research focusing on access to critical care where it is poorly described but urgently needed. Copyright © 2016 Elsevier Inc. All rights reserved.
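
    The weighted kappa used above for test-retest agreement can be sketched in pure Python. A minimal illustration of Cohen's linearly weighted kappa for ordinal ratings (a hypothetical helper assuming at least two categories, not the authors' analysis code):

```python
def weighted_kappa(r1, r2, levels):
    """Cohen's linearly weighted kappa for two sets of ordinal ratings
    (e.g. test vs. retest), with categories coded 0 .. levels-1."""
    n = len(r1)
    # observed joint proportions
    O = [[0.0] * levels for _ in range(levels)]
    for a, b in zip(r1, r2):
        O[a][b] += 1.0 / n
    row = [sum(O[i]) for i in range(levels)]
    col = [sum(O[i][j] for i in range(levels)) for j in range(levels)]
    num = den = 0.0
    for i in range(levels):
        for j in range(levels):
            w = abs(i - j) / (levels - 1)  # linear disagreement weight
            num += w * O[i][j]             # observed weighted disagreement
            den += w * row[i] * col[j]     # chance-expected disagreement
    return 1.0 - num / den
```

    Kappa is 1 for perfect agreement, 0 for chance-level agreement, and negative when agreement is worse than chance.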

  1. Feasibility of Home-Based Functional Status Assessment of Chronic Obstructive Pulmonary Disease Patients Recovering From an Exacerbation.

    PubMed

    Valeiro, Beatriz; Hernández, Carme; Barberán-Garcia, Anael; Rodríguez, Diego A; Aibar, Jesús; Llop, Lourdes; Vilaró, Jordi

    2016-05-01

    The Glittre Activities of Daily Living Test (ADL-Test) is a reliable functional status measurement for stable chronic obstructive pulmonary disease (COPD) patients in a laboratory setting. We aimed to adapt the test to the home setting (mADL-Test) and to follow-up the functional status recovery of post-exacerbation COPD patients included in a home hospitalization (HH) program. We assessed 17 exacerbated moderate-to-very-severe COPD patients in 3 home visits: at discharge to HH (V0), 10 days (V10post) and 1 month after discharge (V30post). Patients completed the mADL-Test (laps, VO2 and VE), COPD assessment test (CAT), London Chest ADL Test (LCADL), modified Medical Research Council (mMRC) and upper limb strength (handgrip). The number of laps of the mADL-Test (4, 5 and 5, P<.05), CAT (19, 12 and 12, P<.01), mMRC (2, 1.5 and 1, P<.01) and the self-care domain of the LCADL (6, 5 and 5, P<.01) improved during follow-up (V0, V10post and V30post, respectively). No significant changes were evidenced in VO2, VE or handgrip. Our results suggest that the mADL-test can be performed in the home setting after a COPD exacerbation, and that functional status continues to improve 10 days after discharge to HH. Copyright © 2015 SEPAR. Published by Elsevier Espana. All rights reserved.

  2. The General Mission Analysis Tool (GMAT) System Test Plan

    NASA Technical Reports Server (NTRS)

    Conway, Darrel J.; Hughes, Steven P.

    2007-01-01

    This document serves as the System Test Approach for the GMAT Project. Preparation for system testing consists of three major stages: 1) The Test Approach sets the scope of system testing, the overall strategy to be adopted, the activities to be completed, the general resources required and the methods and processes to be used to test the release. It also details the activities, dependencies and effort required to conduct the System Test. 2) Test Planning details the activities, dependencies and effort required to conduct the System Test. 3) Test Cases documents the tests to be applied, the data to be processed, the automated testing coverage and the expected results. This document covers the first two of these items and establishes the framework used for GMAT test case development. The test cases themselves exist as separate components, and are managed outside of and concurrently with this System Test Plan.

  3. The role of service areas in the optimization of FSS orbital and frequency assignments

    NASA Technical Reports Server (NTRS)

    Levis, C. A.; Wang, C.-W.; Yamamura, Y.; Reilly, C. H.; Gonsalvez, D. J.

    1986-01-01

    An implicit relationship is derived which relates the topocentric separation of two satellites required for a given level of single-entry protection to the separation and orientation of their service areas. The results are presented explicitly for circular beams and topocentric angles. A computational approach is given for elliptical beams and for use with longitude and latitude variables. It is found that the geocentric separation depends primarily on the service area separation, secondarily on a parameter which characterizes the electrical design, and only slightly on the mean orbital position of the satellites. Both linear programming and mixed integer programming algorithms are implemented. Possible objective function choices are discussed, and explicit formulations are presented for the choice of the sum of the absolute deviations of the orbital locations from some prescribed 'ideal' location set. A test problem involving six service areas is examined with results that are encouraging with respect to applying the linear programming procedure to larger scenarios.

  4. Patient Education and Informed Consent for Preimplantation Genetic Diagnosis: Health Literacy for Genetics and Assisted Reproductive Technology

    PubMed Central

    McGowan, Michelle L.; Burant, Chris; Moran, Rocio; Farrell, Ruth

    2013-01-01

    Introduction Innovative applications of genetic testing have emerged within the field of assisted reproductive technology through preimplantation genetic diagnosis (PGD). As in all forms of genetic testing, adequate genetic counseling and informed consent are critical. Despite the growing recognition of the role of informed consent in genetic testing, there are little data available about how this process occurs in the setting of PGD. Methods A cross-sectional study of IVF clinics offering PGD in the U.S. was conducted to assess patient education and informed consent practices. Descriptive data were collected with a self-administered survey instrument. Results More than half of the clinics offering PGD required genetic counseling prior to PGD (56%). Genetic counseling was typically performed by certified genetic counselors (84%). Less than half (37%) of the clinics required a separate informed consent process for genetic testing of embryonic cells. At a majority of those clinics requiring a separate informed consent for genetic testing (54%), informed consent for PGD and genetic testing took place as a single event before beginning IVF procedures. Conclusions The results suggest that patient education and informed consent practices for PGD have yet to be standardized. These findings warrant the establishment of professional guidelines for patient education and informed consent specific to embryonic genetic testing. PMID:19652605

  5. Corona And Ultraviolet Equipment For Testing Materials

    NASA Technical Reports Server (NTRS)

    Laue, Eric G.

    1993-01-01

    Two assemblies of laboratory equipment developed for use in testing abilities of polymers, paints, and other materials to withstand ultraviolet radiation and charged particles. One is vacuum ultraviolet source built around commercial deuterium lamp. Other exposes specimen in partial vacuum to both ultraviolet radiation and brush corona discharge. Either or both assemblies used separately or together to simulate approximately combination of solar radiation and charged particles encountered by materials aboard spacecraft in orbit around Earth. Also used to provide rigorous environmental tests of materials exposed to artificial ultraviolet radiation and charged particles in industrial and scientific settings or to natural ultraviolet radiation and charged particles aboard aircraft at high altitudes.

  6. Husimi coordinates of multipartite separable states

    NASA Astrophysics Data System (ADS)

    Parfionov, Georges; Zapatrin, Romàn R.

    2010-12-01

    A parametrization of multipartite separable states in a finite-dimensional Hilbert space is suggested. It is proved to be a diffeomorphism between the set of zero-trace operators and the interior of the set of separable density operators. The result is applicable to any tensor product decomposition of the state space. An analytical criterion for separability of density operators is established in terms of the boundedness of a sequence of operators.

  7. Digital implementation of a neural network for imaging

    NASA Astrophysics Data System (ADS)

    Wood, Richard; McGlashan, Alex; Yatulis, Jay; Mascher, Peter; Bruce, Ian

    2012-10-01

    This paper outlines the design and testing of a digital imaging system that utilizes an artificial neural network with unsupervised and supervised learning to convert streaming input (real time) image space into parameter space. The primary objective of this work is to investigate the effectiveness of using a neural network to significantly reduce the information density of streaming images so that objects can be readily identified by a limited set of primary parameters and act as an enhanced human machine interface (HMI). Many applications are envisioned including use in biomedical imaging, anomaly detection and as an assistive device for the visually impaired. A digital circuit was designed and tested using a Field Programmable Gate Array (FPGA) and an off the shelf digital camera. Our results indicate that the networks can be readily trained when subject to limited sets of objects such as the alphabet. We can also separate limited object sets with rotational and positional invariance. The results also show that limited visual fields form with only local connectivity.

  8. Evaluation of differences in quality of experience features for test stimuli of good-only and bad-only overall audiovisual quality

    NASA Astrophysics Data System (ADS)

    Strohmeier, Dominik; Kunze, Kristina; Göbel, Klemens; Liebetrau, Judith

    2013-01-01

    Assessing audiovisual Quality of Experience (QoE) is a key element to ensure quality acceptance of today's multimedia products. The use of descriptive evaluation methods allows evaluating QoE preferences and the underlying QoE features jointly. From our previous evaluations on QoE for mobile 3D video we found that mainly one dimension, video quality, dominates the descriptive models. Large variations of the visual video quality in the tests may be the reason for these findings. A new study was conducted to investigate whether test sets of low QoE are described differently than those of high audiovisual QoE. Reanalysis of previous data sets seems to confirm this hypothesis. Our new study consists of a pre-test and a main test, using the Descriptive Sorted Napping method. Data sets of good-only and bad-only video quality were evaluated separately. The results show that the perception of bad QoE is mainly determined one-dimensionally by visual artifacts, whereas the perception of good quality shows multiple dimensions. Here, mainly semantic-related features of the content and affective descriptors are used by the naïve test participants. The results show that, with increasing QoE of audiovisual systems, content semantics and users' affective involvement will become important for assessing QoE differences.

  9. Exploring the Diagnostic Potential of Immune Biomarker Co-expression in Gulf War Illness.

    PubMed

    Broderick, Gordon; Fletcher, Mary Ann; Gallagher, Michael; Barnes, Zachary; Vernon, Suzanne D; Klimas, Nancy G

    2018-01-01

    Complex disorders like Gulf War illness (GWI) often defy diagnosis on the basis of a single biomarker and may only be distinguishable by considering the co-expression of multiple markers measured in response to a challenge. We demonstrate the practical application of such an approach using an example where blood was collected from 26 GWI, 13 healthy control subjects, and 9 unhealthy controls with chronic fatigue at three points during a graded exercise challenge. A 3-way multivariate projection model based on 12 markers of endocrine and immune function was constructed using a training set of n = 10 GWI and n = 11 healthy controls. These groups were separated almost completely on the basis of two co-expression patterns. In a separate test set these same features allowed for discrimination of new GWI subjects (n = 16) from unhealthy (n = 9) and healthy control subjects with a sensitivity of 70% and a specificity of 90%.
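    The 70% sensitivity and 90% specificity quoted for the separate test set reduce to simple ratios over confusion-matrix counts. A minimal sketch in Python, using hypothetical counts (not the study's data):

    ```python
    # Hedged sketch: how test-set sensitivity and specificity are computed
    # from confusion counts. The counts below are illustrative only.

    def sensitivity_specificity(tp, fn, tn, fp):
        """Return (sensitivity, specificity) from confusion-matrix counts."""
        sensitivity = tp / (tp + fn)  # fraction of true cases correctly flagged
        specificity = tn / (tn + fp)  # fraction of controls correctly cleared
        return sensitivity, specificity

    # Illustrative counts for a test set of 16 cases and 10 controls.
    sens, spec = sensitivity_specificity(tp=11, fn=5, tn=9, fp=1)
    ```
    
    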

  10. Analysis and Application of European Genetic Substructure Using 300 K SNP Information

    PubMed Central

    Tian, Chao; Plenge, Robert M; Ransom, Michael; Lee, Annette; Villoslada, Pablo; Selmi, Carlo; Klareskog, Lars; Pulver, Ann E; Qi, Lihong; Gregersen, Peter K; Seldin, Michael F

    2008-01-01

    European population genetic substructure was examined in a diverse set of >1,000 individuals of European descent, each genotyped with >300 K SNPs. Both STRUCTURE and principal component analyses (PCA) showed the largest division/principal component (PC) differentiated northern from southern European ancestry. A second PC further separated Italian, Spanish, and Greek individuals from those of Ashkenazi Jewish ancestry as well as distinguishing among northern European populations. In separate analyses of northern European participants other substructure relationships were discerned showing a west to east gradient. Application of this substructure information was critical in examining a real dataset in whole genome association (WGA) analyses for rheumatoid arthritis in European Americans to reduce false positive signals. In addition, two sets of European substructure ancestry informative markers (ESAIMs) were identified that provide substantial substructure information. The results provide further insight into European population genetic substructure and show that this information can be used for improving error rates in association testing of candidate genes and in replication studies of WGA scans. PMID:18208329

  11. Estimating error rates for firearm evidence identifications in forensic science

    PubMed Central

    Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan

    2018-01-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
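    The error-rate bookkeeping described above, declaring a match when the CMC count reaches a cutoff and tallying misclassifications against known ground truth, can be sketched as follows; the scores and threshold are synthetic stand-ins, not the paper's data:

    ```python
    # Sketch of error-rate estimation from CMC score distributions.
    # Scores and the declared-match threshold below are hypothetical.
    known_match = [18, 22, 25, 19, 30, 27]  # CMC counts, known matching pairs
    known_nonmatch = [0, 1, 0, 2, 1, 0]     # CMC counts, known non-matching pairs
    threshold = 6                           # illustrative declared-match cutoff

    # A matching pair scoring below threshold is a false negative;
    # a non-matching pair at or above threshold is a false positive.
    false_negative_rate = sum(s < threshold for s in known_match) / len(known_match)
    false_positive_rate = sum(s >= threshold for s in known_nonmatch) / len(known_nonmatch)
    ```

    With well-separated distributions, as reported for the 40- and 95-cartridge tests, both observed rates are zero, which is why the paper fits statistical models to the score distributions to extrapolate non-zero tail probabilities.
    
    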

  12. Estimating error rates for firearm evidence identifications in forensic science.

    PubMed

    Song, John; Vorburger, Theodore V; Chu, Wei; Yen, James; Soons, Johannes A; Ott, Daniel B; Zhang, Nien Fan

    2018-03-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. Published by Elsevier B.V.

  13. VORSTAB: A computer program for calculating lateral-directional stability derivatives with vortex flow effect

    NASA Technical Reports Server (NTRS)

    Lan, C. Edward

    1985-01-01

    A computer program based on the Quasi-Vortex-Lattice Method of Lan is presented for calculating longitudinal and lateral-directional aerodynamic characteristics of nonplanar wing-body combinations. The method is based on the assumption of inviscid subsonic flow. Both attached and vortex-separated flows are treated. For the vortex-separated flow, the calculation is based on the method of suction analogy. The effect of vortex breakdown is accounted for by an empirical method. A summary of the theoretical method, program capabilities, input format, output variables, and program job control set-up is given. Three test cases are presented as guides for potential users of the code.

  14. Dealing with contaminated datasets: An approach to classifier training

    NASA Astrophysics Data System (ADS)

    Homenda, Wladyslaw; Jastrzebska, Agnieszka; Rybnik, Mariusz

    2016-06-01

    The paper presents a novel approach to classification reinforced with a rejection mechanism. The method is based on a two-tier set of classifiers: the first layer classifies elements, and the second layer separates native elements from foreign ones within each distinguished class. The key novelty presented here is the rejection mechanism's training scheme, which follows the philosophy "one-against-all-other-classes". The proposed method was tested in an empirical study of handwritten digit recognition.
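    The two-tier idea can be sketched in a few lines: tier 1 assigns a class, and tier 2 holds a per-class native-vs-foreign check. The classifiers below are toy stand-ins, not the paper's trained models:

    ```python
    # Toy sketch of two-tier classification with rejection. Tier 1 is an
    # ordinary classifier; tier 2 maps each class label to a detector that
    # accepts native elements and rejects foreign ones.

    def classify_with_rejection(x, tier1, tier2):
        label = tier1(x)          # first layer: assign a class
        if tier2[label](x):       # second layer: is x native to this class?
            return label
        return "rejected"         # foreign element

    # Stand-in models: classify the sign of a number, reject large magnitudes.
    tier1 = lambda x: "pos" if x >= 0 else "neg"
    tier2 = {"pos": lambda x: x < 10, "neg": lambda x: x > -10}
    ```
    
    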

  15. Trial type mixing substantially reduces the response set effect in the Stroop task.

    PubMed

    Hasshim, Nabil; Parris, Benjamin A

    2017-03-20

    The response set effect refers to the finding that an irrelevant incongruent colour-word produces greater interference when it is one of the response options (referred to as a response set trial), compared to when it is not (a non-response set trial). Despite being a key effect for models of selective attention, the magnitude of the effect varies considerably across studies. We report two within-subjects experiments that tested the hypothesis that presentation format modulates the magnitude of the response set effect. Trial types (e.g. response set, non-response set, neutral) were either presented in separate blocks (pure) or in blocks containing trials from all conditions presented randomly (mixed). In the first experiment we show that the response set effect is substantially reduced in the mixed block context as a result of a decrease in RTs to response set trials. By demonstrating the modulation of the response set effect under conditions of trial type mixing we present evidence that is difficult for models of the effect based on strategic, top-down biasing of attention to explain. In a second experiment we tested a stimulus-driven account of the response set effect by manipulating the number of colour-words that make up the non-response set of distractors. The results show that the greater the number of non-response set colour concepts, the smaller the response set effect. Alternative accounts of the data and its implications for research debating the automaticity of reading are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Baryonic effects in cosmic shear tomography: PCA parametrization and importance of extreme baryonic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammed, Irshad; Gnedin, Nickolay Y.

    Baryonic effects are amongst the most severe systematics in the tomographic analysis of weak lensing data, which is the principal probe in many future generations of cosmological surveys such as LSST and Euclid. Modeling or parameterizing these effects is essential in order to extract valuable constraints on cosmological parameters. In a recent paper, Eifler et al. (2015) suggested a reduction technique for baryonic effects by conducting a principal component analysis (PCA) and removing the largest baryonic eigenmodes from the data. In this article, we carried the investigation further and addressed two critical aspects. First, we performed the analysis by separating the simulations into training and test sets, computing a minimal set of principal components from the training set and examining the fits on the test set. We found that using only four parameters, corresponding to the four largest eigenmodes of the training set, the test sets can be fitted thoroughly with an RMS of ~0.0011. Second, we explored the significance of outliers, the most exotic/extreme baryonic scenarios, in this method. We found that excluding the outliers from the training set results in a relatively bad fit and degrades the RMS by nearly a factor of 3. Therefore, for a direct employment of this method in the tomographic analysis of weak lensing data, the principal components should be derived from a training set that comprises adequately exotic but reasonable models, such that reality is included inside the parameter domain sampled by the training set. The baryonic effects can then be parameterized as the coefficients of these principal components and marginalized over the cosmological parameter space.
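    The train/test PCA procedure described above, deriving a few leading eigenmodes from the training set and fitting held-out models with them, can be sketched with NumPy on synthetic data (not the paper's hydrodynamic simulations):

    ```python
    import numpy as np

    # Synthetic stand-ins: rows are models, columns are data bins.
    rng = np.random.default_rng(0)
    train = rng.normal(size=(20, 50))   # training-set models
    test = rng.normal(size=(5, 50))     # held-out test models

    # Principal components of the training set via SVD of the centered matrix.
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    basis = vt[:4]                      # four largest eigenmodes

    # Fit each test model with the 4-component basis; projection onto an
    # orthonormal basis is the least-squares solution.
    coeffs = (test - mean) @ basis.T
    recon = coeffs @ basis + mean
    rms = float(np.sqrt(np.mean((test - recon) ** 2)))
    ```

    The four rows of `coeffs` per model play the role of the baryonic nuisance parameters to be marginalized over.
    
    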

  17. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable.

    PubMed

    Korjus, Kristjan; Hebart, Martin N; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier's generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term "Cross-validation and cross-testing" improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do.
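    The standard scheme the paper improves upon, holding out a separate test set and running k-fold cross-validation on the remainder for parameter selection, can be sketched as follows (sizes are illustrative, and the classifier step is left as a comment):

    ```python
    import numpy as np

    # Standard split: hold out a test set, cross-validate on the rest.
    rng = np.random.default_rng(1)
    n = 100
    idx = rng.permutation(n)
    test_idx, dev_idx = idx[:20], idx[20:]   # 20% held out for final testing

    k = 5
    folds = np.array_split(dev_idx, k)
    for i in range(k):
        val = folds[i]
        fit = np.concatenate([folds[j] for j in range(k) if j != i])
        # ...train the classifier on `fit`, score each parameter setting on `val`
    # Finally: retrain with the best parameters on all of dev_idx and
    # evaluate exactly once on test_idx.
    ```

    The trade-off the paper targets is visible here: every sample reserved in `test_idx` strengthens the generalization estimate but is unavailable for fitting and parameter selection.
    
    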

  18. Driving mechanism of unsteady separation shock motion in hypersonic interactive flow

    NASA Technical Reports Server (NTRS)

    Dolling, D. S.; Narlo, J. C., II

    1987-01-01

    Wall pressure fluctuations were measured under the steady separation shock waves in Mach 5 turbulent interactions induced by unswept circular cylinders on a flat plate. The wall temperature was adiabatic. A conditional sampling algorithm was developed to examine the statistics of the shock wave motion. The same algorithm was used to examine data taken in earlier studies in the Princeton University Mach 3 blowdown tunnel. In these earlier studies, hemicylindrically blunted fins of different leading-edge diameters were tested in boundary layers which developed on the tunnel floor and on a flat plate. A description of the algorithm, the reasons why it was developed and the sensitivity of the results to the threshold settings, are discussed. The results from the algorithm, together with cross correlations and power spectral density estimates suggests that the shock motion is driven by the low-frequency unsteadiness of the downstream separated, vortical flow.

  19. Classical-processing and quantum-processing signal separation methods for qubit uncoupling

    NASA Astrophysics Data System (ADS)

    Deville, Yannick; Deville, Alain

    2012-12-01

    The Blind Source Separation problem consists in estimating a set of unknown source signals from their measured combinations. It was only investigated in a non-quantum framework up to now. We propose its first quantum extensions. We thus introduce the Quantum Source Separation field, investigating both its blind and non-blind configurations. More precisely, we show how to retrieve individual quantum bits (qubits) only from the global state resulting from their undesired coupling. We consider cylindrical-symmetry Heisenberg coupling, which e.g. occurs when two electron spins interact through exchange. We first propose several qubit uncoupling methods which typically measure repeatedly the coupled quantum states resulting from individual qubits preparations, and which then statistically process the classical data provided by these measurements. Numerical tests prove the effectiveness of these methods. We then derive a combination of quantum gates for performing qubit uncoupling, thus avoiding repeated qubit preparations and irreversible measurements.

  20. Operationally Responsive Space Standard Bus Battery Thermal Balance Testing and Heat Dissipation Analysis

    NASA Technical Reports Server (NTRS)

    Marley, Mike

    2008-01-01

    The focus of this paper will be on the thermal balance testing for the Operationally Responsive Space Standard Bus Battery. The Standard Bus thermal design required that the battery be isolated from the bus itself. This required the battery to have its own thermal control, including heaters and a radiator surface. Since the battery was not ready for testing during the overall bus thermal balance testing, a separate test was conducted to verify the thermal design for the battery. This paper will discuss in detail the test set-up, test procedure, and results from this test. Additionally, this paper will consider the methods taken to determine the heat dissipation of the battery during charge and discharge. It seems that the heat dissipation for lithium-ion batteries is relatively unknown and hard to quantify. The methods used during test and the post-test analysis to estimate the heat dissipation of the battery will be discussed.

  1. 28 CFR 51.20 - Form of submissions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... set. A separate data dictionary file documenting the fields in the data set, the field separators or... data set. Proprietary or commercial software system data files (e.g., SAS, SPSS, dBase, Lotus 1-2-3... General will accept certain machine readable data in the following electronic media: 3.5 inch 1.4 megabyte...

  2. 28 CFR 51.20 - Form of submissions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... set. A separate data dictionary file documenting the fields in the data set, the field separators or... data set. Proprietary or commercial software system data files (e.g., SAS, SPSS, dBase, Lotus 1-2-3... General will accept certain machine readable data in the following electronic media: 3.5 inch 1.4 megabyte...

  3. 28 CFR 51.20 - Form of submissions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... set. A separate data dictionary file documenting the fields in the data set, the field separators or... data set. Proprietary or commercial software system data files (e.g., SAS, SPSS, dBase, Lotus 1-2-3... General will accept certain machine readable data in the following electronic media: 3.5 inch 1.4 megabyte...

  4. Use of Fourier-transform infrared spectroscopy to quantify immunoglobulin G concentrations in alpaca serum.

    PubMed

    Burns, J; Hou, S; Riley, C B; Shaw, R A; Jewett, N; McClure, J T

    2014-01-01

    Rapid, economical, and quantitative assays for measurement of camelid serum immunoglobulin G (IgG) are limited. In camelids, failure of transfer of maternal immunoglobulins has a reported prevalence of up to 20.5%. An accurate method for quantifying serum IgG concentrations is required. To develop an infrared spectroscopy-based assay for measurement of alpaca serum IgG and compare its performance to the reference standard radial immunodiffusion (RID) assay. One hundred and seventy-five privately owned, healthy alpacas. Eighty-two serum samples were collected as convenience samples during routine herd visits whereas 93 samples were recruited from a separate study. Serum IgG concentrations were determined by RID assays and midinfrared spectra were collected for each sample. Fifty samples were set aside as the test set and the remaining 125 training samples were employed to build a calibration model using partial least squares (PLS) regression with Monte Carlo cross validation to determine the optimum number of PLS factors. The predictive performance of the calibration model was evaluated by the test set. Correlation coefficients for the IR-based assay were 0.93 and 0.87, respectively, for the entire data set and test set. Sensitivity in the diagnosis of failure of transfer of passive immunity (FTPI) ([IgG] <1,000 mg/dL) was 71.4% and specificity was 100% for the IR-based method (test set) as gauged relative to the RID reference method assay. This study indicated that infrared spectroscopy, in combination with chemometrics, is an effective method for measurement of IgG in alpaca serum. Copyright © 2014 by the American College of Veterinary Internal Medicine.

  5. Separation of ion types in tandem mass spectrometry data interpretation -- a graph-theoretic approach.

    PubMed

    Yan, Bo; Pan, Chongle; Olman, Victor N; Hettich, Robert L; Xu, Ying

    2004-01-01

    Mass spectrometry is one of the most popular analytical techniques for identification of individual proteins in a protein mixture, one of the basic problems in proteomics. It identifies a protein through identifying its unique mass spectral pattern. While the problem is theoretically solvable, it remains a challenging problem computationally. One of the key challenges comes from the difficulty in distinguishing the N- and C-terminus ions, mostly b- and y-ions respectively. In this paper, we present a graph algorithm for solving the problem of separating b- from y-ions in a set of mass spectra. We represent each spectral peak as a node and consider two types of edges: a type-1 edge connects two peaks possibly of the same ion type and a type-2 edge connects two peaks possibly of different ion types, predicted based on local information. The ion-separation problem is then formulated and solved as a graph partition problem, which is to partition the graph into three subgraphs, namely b-ions, y-ions, and others, respectively, so as to maximize the total weight of type-1 edges while minimizing the total weight of type-2 edges within each subgraph. We have developed a dynamic programming algorithm for rigorously solving this graph partition problem and implemented it as a computer program PRIME. We have tested PRIME on 18 data sets of highly accurate FT-ICR tandem mass spectra and found that it achieved ~90% accuracy for separation of b- and y-ions.

  6. Design parameters for the separation of fat from natural whole milk in an ultrasonic litre-scale vessel.

    PubMed

    Leong, Thomas; Johansson, Linda; Juliano, Pablo; Mawson, Raymond; McArthur, Sally; Manasseh, Richard

    2014-07-01

    The separation of milk fat from natural whole milk has been achieved by applying ultrasonic standing waves (1 MHz and/or 2 MHz) in a litre-scale (5 L capacity) batch system. Various design parameters were tested, such as power input level, process time, specific energy, transducer-reflector distance and the use of single and dual transducer set-ups. It was found that the efficacy of the treatment depended on the specific energy density input into the system. In this case, a plateau in fat concentration of ∼20% w/v was achieved in the creamed top layer after applying a minimum specific energy of 200 kJ/kg. In addition, the fat separation was enhanced by reducing the transducer-reflector distance in the vessel, operating two transducers in a parallel set-up, or by increasing the duration of insonation, resulting in skimmed milk with a fat concentration as low as 1.7% (w/v) using raw milk after 20 min insonation. Dual mode operation with both transducers in parallel as close as 30 mm apart resulted in the fastest creaming and skimming in this study at ∼1.6 g fat/min. Copyright © 2014 Elsevier B.V. All rights reserved.
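    The specific-energy criterion quoted above is simply delivered power times treatment time per unit mass. A back-of-envelope sketch; the power and mass values are hypothetical, chosen only to illustrate reaching the ~200 kJ/kg plateau reported in the abstract:

    ```python
    # Specific energy (kJ/kg) = acoustic power (kW) x time (s) / mass (kg).
    # All three input values below are assumptions, not the authors' settings.
    power_kw = 0.05        # 50 W of delivered ultrasonic power (assumed)
    time_s = 20 * 60       # 20 min of insonation
    mass_kg = 0.3          # mass of milk treated (assumed)

    specific_energy_kj_per_kg = power_kw * time_s / mass_kg
    ```
    
    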

  7. Separation of Migration and Tomography Modes of Full-Waveform Inversion in the Plane Wave Domain

    NASA Astrophysics Data System (ADS)

    Yao, Gang; da Silva, Nuno V.; Warner, Michael; Kalinicheva, Tatiana

    2018-02-01

    Full-waveform inversion (FWI) includes both migration and tomography modes. The migration mode acts like a nonlinear least squares migration to map model interfaces with reflections, while the tomography mode behaves as tomography to build a background velocity model. The migration mode is the main response of inverting reflections, while the tomography mode exists in response to inverting both the reflections and refractions. To emphasize one of the two modes in FWI, especially for inverting reflections, the separation of the two modes in the gradient of FWI is required. Here we present a new method to achieve this separation with an angle-dependent filtering technique in the plane wave domain. We first transform the source and residual wavefields into the plane wave domain with the Fourier transform and then decompose them into the migration and tomography components using the opening angles between the transformed source and residual plane waves. The opening angles close to 180° contribute to the tomography component, while the others correspond to the migration component. We find that this approach is very effective and robust even when the medium is relatively complicated with strong lateral heterogeneities, highly dipping reflectors, and strong anisotropy. This is well demonstrated by theoretical analysis and numerical tests with a synthetic data set and a field data set.

  8. A test of Hořava gravity: the dark energy

    NASA Astrophysics Data System (ADS)

    Park, Mu-In

    2010-01-01

    Recently Hořava proposed a renormalizable gravity theory with higher spatial derivatives in four dimensions which reduces to Einstein gravity with a non-vanishing cosmological constant in IR but with improved UV behaviors. Here, I consider a non-trivial test of the new gravity theory in FRW universe by considering an IR modification which breaks "softly" the detailed balance condition in the original Hořava model. I separate the dark energy parts from the usual Einstein gravity parts in the Friedman equations and obtain the formula for the equation-of-state parameter. The IR modified Hořava gravity seems to be consistent with the current observational data, but we need some more refined data sets to see whether the theory is really consistent with our universe. From the consistency of our theory, I obtain some constraints on the allowed values of w0 and wa in the Chevallier, Polarski, and Linder parametrization, and this may be tested in the near future by sharpening the data sets.
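    The w0 and wa constrained above refer to the standard Chevallier-Polarski-Linder (CPL) parametrization of the dark energy equation of state, w(a) = w0 + wa(1 - a) with scale factor a = 1/(1 + z). A minimal sketch:

    ```python
    # CPL parametrization of the dark energy equation of state:
    #   w(a) = w0 + wa * (1 - a),  a = 1 / (1 + z)
    def w_cpl(z, w0=-1.0, wa=0.0):
        a = 1.0 / (1.0 + z)
        return w0 + wa * (1.0 - a)

    # w0 = -1, wa = 0 recovers a cosmological constant at every redshift.
    ```
    
    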

  9. Development and validation of a Response Bias Scale (RBS) for the MMPI-2.

    PubMed

    Gervais, Roger O; Ben-Porath, Yossef S; Wygant, Dustin B; Green, Paul

    2007-06-01

    This study describes the development of a Minnesota Multiphasic Personality Inventory (MMPI-2) scale designed to detect negative response bias in forensic neuropsychological or disability assessment settings. The Response Bias Scale (RBS) consists of 28 MMPI-2 items that discriminated between persons who passed or failed the Word Memory Test (WMT), Computerized Assessment of Response Bias (CARB), and/or Test of Memory Malingering (TOMM) in a sample of 1,212 nonhead-injury disability claimants. Incremental validity of the RBS was evaluated by comparing its ability to detect poor performance on four separate symptom validity tests with that of the F and F(P) scales and the Fake Bad Scale (FBS). The RBS consistently outperformed F, F(P), and FBS. Study results suggest that the RBS may be a useful addition to existing MMPI-2 validity scales and indices in detecting symptom complaints predominantly associated with cognitive response bias and overreporting in forensic neuropsychological and disability assessment settings.

  10. Need for cognition moderates paranormal beliefs and magical ideation in inconsistent-handers.

    PubMed

    Prichard, Eric C; Christman, Stephen D

    2016-01-01

    A growing literature suggests that degree of handedness predicts gullibility and magical ideation. Inconsistent-handers (people who use their non-dominant hand for at least one common manual activity) report more magical ideation and are more gullible. The current study tested whether this effect is moderated by need for cognition. One hundred eighteen university students completed questionnaires assessing handedness, self-reported paranormal beliefs, and self-reported need for cognition. Handedness (Inconsistent vs. Consistent Right) and Need for Cognition (High vs. Low) were treated as categorical predictors. Both paranormal beliefs and magical ideation served as dependent variables in separate analyses. Neither set of tests yielded main effects for handedness or need for cognition. However, there was a significant handedness by need for cognition interaction. Post-hoc comparisons revealed that low, but not high, need for cognition inconsistent-handers reported relatively elevated levels of paranormal belief and magical ideation. A secondary set of tests treating the predictor variables as continuous instead of categorical obtained the same overall pattern.

  11. Detection of Subsurface Material Separation in Shuttle Orbiter Slip-Side Joggle Region of the Wing Leading Edge using Infrared Imaging Data from Arc Jet Tests

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Walker, Sandra P.

    2009-01-01

    The objective of the present study was to determine whether infrared imaging (IR) surface temperature data obtained during arc-jet tests of the Space Shuttle Orbiter's reinforced carbon-carbon (RCC) wing leading edge panel slip-side joggle region could be used to detect the presence of subsurface material separation, and if so, to determine when separation occurs during the simulated entry profile. Recent thermostructural studies have indicated that thermally induced interlaminar normal stress concentrations at the substrate/coating interface in the curved joggle region can result in local subsurface material separation, with the separation predicted to occur during the approach to peak heating during reentry. The present study was an attempt to determine experimentally when subsurface material separations occur. A simplified thermal model of a flat RCC panel with subsurface material separation was developed and used to infer general surface temperature trends due to the presence of subsurface material separation. IR data from previously conducted arc-jet tests on three test specimens were analyzed: one without subsurface material separation either pre- or post-test, one with pre-test separation, and one with separation developing during the test. The simplified thermal model trend predictions, along with comparison of experimental IR data for the three test specimens, were used to successfully infer material separation from the arc-jet test data. Furthermore, for the test specimen that developed subsurface material separation during the arc-jet tests, the initiation of separation appeared to occur during the ramp up to the peak heating condition, where the test specimen temperature went from 2500 to 2800 F.

  12. Column Selection for Biomedical Analysis Supported by Column Classification Based on Four Test Parameters

    PubMed Central

    Plenis, Alina; Rekowska, Natalia; Bączek, Tomasz

    2016-01-01

    This article focuses on correlating the column classification obtained from the method created at the Katholieke Universiteit Leuven (KUL), with the chromatographic resolution attained in biomedical separation. In the KUL system, each column is described with four parameters, which enables estimation of the FKUL value characterising similarity of those parameters to the selected reference stationary phase. Thus, a ranking list based on the FKUL value can be calculated for the chosen reference column, then correlated with the results of the column performance test. In this study, the column performance test was based on analysis of moclobemide and its two metabolites in human plasma by liquid chromatography (LC), using 18 columns. The comparative study was performed using traditional correlation of the FKUL values with the retention parameters of the analytes describing the column performance test. In order to deepen the comparative assessment of both data sets, factor analysis (FA) was also used. The obtained results indicated that the stationary phase classes, closely related according to the KUL method, yielded comparable separation for the target substances. Therefore, the column ranking system based on the FKUL-values could be considered supportive in the choice of the appropriate column for biomedical analysis. PMID:26805819
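
    The correlation step described above, comparing an FKUL-based column ranking against a performance-test ranking, can be sketched with a rank correlation. A minimal stdlib-only Python sketch; the FKUL values and resolution scores below are illustrative, not the study's data:

```python
def ranks(values):
    """Assign ranks (1 = smallest); ties receive the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # group tied values together
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical F_KUL distances to a reference column and the resolution
# each column achieved in a performance test (both made up for illustration).
f_kul = [0.8, 1.5, 2.3, 3.1, 4.0]
resolution = [2.1, 1.9, 1.6, 1.2, 0.9]
rho = spearman(f_kul, resolution)   # strongly negative: more similar columns resolve better
```

    A strongly monotone relationship between the two rankings (here rho = -1) would support using the FKUL ranking to pre-select candidate columns.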

  14. Post-mortem genetic testing in a family with long-QT syndrome and hypertrophic cardiomyopathy.

    PubMed

    Kane, David A; Triedman, John

    2014-01-01

    Pediatric sudden unexplained deaths are rare and tragic events that should be evaluated with all the tools available to the medical community. The current state of genetic testing is an excellent resource that improves our ability to diagnose cardiovascular disorders that can lead to sudden cardiac arrest. Post-mortem genetic testing is not typically a covered benefit of health insurance and may not be offered to families in the setting of a negative autopsy. This unusual case includes two separate cardiovascular disorders that highlight the use of genetic testing and its role in diagnosis, screening, and risk stratification. The insurance company's decision to cover post-mortem testing demonstrated both compassion as well as an understanding of the long-term cost effectiveness. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Validation of US3D for Capsule Aerodynamics using 05-CA Wind Tunnel Test Data

    NASA Technical Reports Server (NTRS)

    Schwing, Alan

    2012-01-01

    Several comparisons of computational fluid dynamics to wind tunnel test data are shown for the purpose of code validation. The wind tunnel test, 05-CA, uses a 7.66% model of NASA's Multi-Purpose Crew Vehicle in the 11-foot test section of the Ames Unitary Plan Wind Tunnel. A variety of freestream conditions over four Mach numbers and three angles of attack are considered. Test data comparisons include time-averaged integrated forces and moments, time-averaged static pressure ports on the surface, and Strouhal number. The applicability of the US3D code to subsonic and transonic flow over a bluff body is assessed on a comprehensive data set. The close agreement validates US3D for highly separated flows similar to those examined here.

  16. Scale-up protein separation on stainless steel wide bore toroidal columns in the type-J counter-current chromatography.

    PubMed

    Guan, Yue Hugh; Hewitson, Peter; van den Heuvel, Remco N A M; Zhao, Yan; Siebers, Rick P G; Zhuang, Ying-Ping; Sutherland, Ian

    2015-12-11

    Manufacturing high-value-added biotech biopharmaceutical products (e.g. therapeutic proteins) requires quick-to-develop, GMP-compliant, easy-to-scale and cost-effective preparative chromatography technologies. In this work, we describe the construction and testing of a set of 5-mm inner diameter stainless steel toroidal columns for use on commercially available preparative-scale synchronous J-type counter-current chromatography (CCC) machinery. We used a 20.2 m long column with an aqueous two-phase system containing 14% (w/w) PEG1000 and 14% (w/w) potassium phosphate at pH 7, and tested a sample loading of 5% column volume and a mobile phase flow rate of 20 ml/min. We then satisfactorily demonstrated the potential for a weekly protein separation and preparation throughput of ca. 11 g, based on a normal weekly routine for separating a pair of model proteins by making five stacked injections on a single portion of stationary phase with no stripping. Compared to our previous 1.6 mm bore PTFE toroidal column, the present columns enlarged the nominal column processing throughput by nearly 10-fold. For an ideal model protein injection modality, we observed a scale-up factor of at least 21. Both scales of protein separation and purification were realized on the same commercial CCC device. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Development, refinement, and testing of a short term solar flare prediction algorithm

    NASA Technical Reports Server (NTRS)

    Smith, Jesse B., Jr.

    1993-01-01

    During the period covered by this report, the time and effort expended, and progress toward performing the tasks and accomplishing the goals set forth in the two-year research grant proposal, consisted primarily of calibration and analysis of selected data sets. The heliographic limits of 30 degrees from central meridian were continued. As previously reported, all analyses are interactive and are performed by the Principal Investigator. It should also be noted that the analysis time available to the Principal Investigator during this reporting period was limited, partially due to illness and partially due to other uncontrollable factors. The calibration technique (as developed by MSFC solar scientists) incorporates sets of constants which vary according to the wavelength of the observational data set. One input constant is then varied interactively to correct for observing conditions, etc., so as to yield a maximum magnetic field strength (in the calibrated data) based on a separate analysis. There is some uncertainty in the methodology and in the selection of variables that yield the most self-consistent results for variable maximum field strengths and for variable observing/atmospheric conditions. Several data sets were analyzed using differing constant sets, and separate analyses to differing maximum field strengths, toward standardizing methodology and technique for the most self-consistent results over the large number of cases. It may be necessary to recalibrate some of the analyses, but the analyses are retained on the optical disks and can still be used with recalibration where necessary. Only the extracted parameters will be changed.

  18. Moisture content and gas sampling device

    NASA Technical Reports Server (NTRS)

    Krieg, H. C., Jr. (Inventor)

    1985-01-01

    An apparatus is described for measuring minute quantities of moisture and other contaminants within sealed enclosures such as electronic assemblies which may be subject to large external atmospheric pressure variations. An array of vacuum quality valves is arranged to permit cleansing of the test apparatus of residual atmospheric components from a vacuum source. This purging operation evacuates a gas sample bottle, which is then connected by valve settings to provide the drive for withdrawing a gas sample from the sealed enclosure under test into the sample bottle through a colorimetric detector tube (Drager tube) which indicates moisture content. The sample bottle may be disconnected and its contents (drawn from the test enclosure) separately subjected to mass spectrograph analysis.

  19. Retention strength of impression materials to a tray material using different adhesive methods: an in vitro study.

    PubMed

    Marafie, Yousef; Looney, Stephen; Nelson, Steven; Chan, Daniel; Browning, William; Rueggeberg, Frederick

    2008-12-01

    A new self-stick adhesive system is purported to eliminate the need to use chemical adhesives with plastic impression trays; however, no testing has confirmed the claim. The purpose of this study was to compare the in vitro retentive strength of impression materials to plastic substrates having conventional adhesive (CA) or the self-stick adhesive system, with and without mechanical retention. Three types of impression materials (irreversible hydrocolloid (IH), vinyl polysiloxane (VPS), and polyether (PE)) were applied to polystyrene disc-shaped surfaces (33.68 cm²) that were held on the arms of a universal testing machine. The appropriate CA or the self-stick adhesive system (Self-Stick Dots) (SSD) was applied to the plates, which had either no mechanical retention, or equally spaced mechanical perforations (n=4). An in vivo pilot test determined the appropriate rate of plate separation. Plates with impression material were lowered to provide 4 mm of space, the material was allowed to set, and plates were separated using the appropriate speed. Force at first separation was divided by plate area (peak stress). Five replications per test condition were made, and results were analyzed using ANOVA and Bonferroni-adjusted t tests (alpha=.05). Within each impression material/test combination, stress using SSD was significantly lower than CA (P<.05). Mechanical retention did not always provide significantly greater strength. The combination of mechanical retention and CA yielded the highest strength within each material type, except for PE, for which nonmechanical and CA strength did not differ from that of mechanical and CA. Use of the self-stick adhesive system provided significantly lower retentive strength to plastic tray material than chemical adhesives for irreversible hydrocolloid, vinyl polysiloxane, and polyether.
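
    The peak-stress calculation above is a simple quotient: force at first separation divided by plate area. A toy example (the 33.68 cm² plate area is from the study; the force value is made up):

```python
def peak_stress(force_n, area_cm2):
    """Peak stress in N/cm^2: force at first separation divided by plate area."""
    return force_n / area_cm2

# Hypothetical separation force of 50 N on the study's 33.68 cm^2 plate.
stress = peak_stress(50.0, 33.68)   # ~1.485 N/cm^2
```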

  20. Genetic architecture of the Delis-Kaplan Executive Function System Trail Making Test: evidence for distinct genetic influences on executive function.

    PubMed

    Vasilopoulos, Terrie; Franz, Carol E; Panizzon, Matthew S; Xian, Hong; Grant, Michael D; Lyons, Michael J; Toomey, Rosemary; Jacobson, Kristen C; Kremen, William S

    2012-03-01

    To examine how genes and environments contribute to relationships among Trail Making Test (TMT) conditions and the extent to which these conditions have unique genetic and environmental influences. Participants included 1,237 middle-aged male twins from the Vietnam Era Twin Study of Aging. The Delis-Kaplan Executive Function System TMT included visual searching, number and letter sequencing, and set-shifting components. Phenotypic correlations among TMT conditions ranged from 0.29 to 0.60, and genes accounted for the majority (58-84%) of each correlation. Overall heritability ranged from 0.34 to 0.62 across conditions. Phenotypic factor analysis suggested a single factor. In contrast, genetic models revealed a single common genetic factor but also unique genetic influences separate from the common factor. Genetic variance (i.e., heritability) of number and letter sequencing was completely explained by the common genetic factor while unique genetic influences separate from the common factor accounted for 57% and 21% of the heritabilities of visual search and set shifting, respectively. After accounting for general cognitive ability, unique genetic influences accounted for 64% and 31% of those heritabilities. A common genetic factor, most likely representing a combination of speed and sequencing, accounted for most of the correlation among TMT 1-4. Distinct genetic factors, however, accounted for a portion of variance in visual scanning and set shifting. Thus, although traditional phenotypic shared variance analysis techniques suggest only one general factor underlying different neuropsychological functions in nonpatient populations, examining the genetic underpinnings of cognitive processes with twin analysis can uncover more complex etiological processes.

  1. When ecosystem services interact: crop pollination benefits depend on the level of pest control

    PubMed Central

    Lundin, Ola; Smith, Henrik G.; Rundlöf, Maj; Bommarco, Riccardo

    2013-01-01

    Pollination is a key ecosystem service which most often has been studied in isolation although effects of pollination on seed set might depend on, and interact with, other services important for crop production. We tested three competing hypotheses on how insect pollination and pest control might jointly affect seed set: independent, compensatory or synergistic effects. For this, we performed a cage experiment with two levels of insect pollination and simulated pest control in red clover (Trifolium pratense L.) grown for seed. There was a synergistic interaction between the two services: the gain in seed set obtained when simultaneously increasing pollination and pest control outweighed the sum of seed set gains obtained when increasing each service separately. This study shows that interactions can alter the benefits obtained from service-providing organisms, and this needs to be considered to properly manage multiple ecosystem services. PMID:23269852
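
    The synergy criterion tested above reduces to a simple check: the gain from increasing both services together must exceed the sum of the gains from increasing each service separately. A sketch with made-up seed-set numbers (not the study's data):

```python
def interaction_gain(base, poll, pest, both):
    """Gain from both services minus the sum of the single-service gains.

    A positive value indicates synergy; zero indicates independent
    (purely additive) effects; a negative value indicates compensation.
    All arguments are seed-set values relative to the same baseline cage.
    """
    return (both - base) - ((poll - base) + (pest - base))

# Hypothetical seed set per plant under the four treatment combinations.
g = interaction_gain(base=100, poll=130, pest=120, both=180)
# g = 80 - (30 + 20) = 30 > 0, i.e. a synergistic interaction
```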

  2. Contact Prediction for Beta and Alpha-Beta Proteins Using Integer Linear Optimization and its Impact on the First Principles 3D Structure Prediction Method ASTRO-FOLD

    PubMed Central

    Rajgaria, R.; Wei, Y.; Floudas, C. A.

    2010-01-01

    An integer linear optimization model is presented to predict residue contacts in β, α + β, and α/β proteins. The total energy of a protein is expressed as sum of a Cα – Cα distance dependent contact energy contribution and a hydrophobic contribution. The model selects contacts that assign lowest energy to the protein structure while satisfying a set of constraints that are included to enforce certain physically observed topological information. A new method based on hydrophobicity is proposed to find the β-sheet alignments. These β-sheet alignments are used as constraints for contacts between residues of β-sheets. This model was tested on three independent protein test sets and CASP8 test proteins consisting of β, α + β, α/β proteins and was found to perform very well. The average accuracy of the predictions (separated by at least six residues) was approximately 61%. The average true positive and false positive distances were also calculated for each of the test sets and they are 7.58 Å and 15.88 Å, respectively. Residue contact prediction can be directly used to facilitate the protein tertiary structure prediction. This proposed residue contact prediction model is incorporated into the first principles protein tertiary structure prediction approach, ASTRO-FOLD. The effectiveness of the contact prediction model was further demonstrated by the improvement in the quality of the protein structure ensemble generated using the predicted residue contacts for a test set of 10 proteins. PMID:20225257

  3. Interaction of two glancing, crossing shock waves with a turbulent boundary-layer at various Mach numbers

    NASA Technical Reports Server (NTRS)

    Hingst, Warren R.; Williams, Kevin E.

    1991-01-01

    A preliminary experimental investigation was conducted to study two crossing, glancing shock waves of equal strengths, interacting with the boundary-layer developed on a supersonic wind tunnel wall. This study was performed at several Mach numbers between 2.5 and 4.0. The shock waves were created by fins (shock generators), spanning the tunnel test section, that were set at angles varying from 4 to 12 degrees. The data acquired are wall static pressure measurements, and qualitative information in the form of oil flow and schlieren visualizations. The principle aim is two-fold. First, a fundamental understanding of the physics underlying this flow phenomena is desired. Also, a comprehensive data set is needed for computational fluid dynamic code validation. Results indicate that for small shock generator angles, the boundary-layer remains attached throughout the flow field. However, with increasing shock strengths (increasing generator angles), boundary layer separation does occur and becomes progressively more severe as the generator angles are increased further. The location of the separation, which starts well downstream of the shock crossing point, moves upstream as shock strengths are increased. At the highest generator angles, the separation appears to begin coincident with the generator leading edges and engulfs most of the area between the generators. This phenomena occurs very near the 'unstart' limit for the generators. The wall pressures at the lower generator angles are nominally consistent with the flow geometries (i.e. shock patterns) although significantly affected by the boundary-layer upstream influence. As separation occurs, the wall pressures exhibit a gradient that is mainly axial in direction in the vicinity of the separation. At the limiting conditions the wall pressure gradients are primarily in the axial direction throughout.

  4. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable

    PubMed Central

    Korjus, Kristjan; Hebart, Martin N.; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier’s generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term “cross-validation and cross-testing,” which improves this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do. PMID:27564393
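
    The standard partitioning scheme that the proposed method is contrasted with, holding out a test set first and then cross-validating the remainder to choose parameters, can be sketched as follows (stdlib only; the split sizes and fold count are illustrative):

```python
import random

def k_fold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k shuffled, disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

# 1) Hold out a test set up front (illustrative 80/20 split).
n = 100
test = set(range(80, 100))                     # held-out test indices
train_val = [i for i in range(n) if i not in test]

# 2) Cross-validate over the remaining data to pick classifier parameters;
#    each fold serves once as the validation set.
folds = k_fold_indices(len(train_val), k=5)

# 3) The classifier with the chosen parameters is then evaluated once on
#    `test`, which never participated in training or validation.
```

    The trade-off the abstract describes is visible here: every index reserved for `test` is statistical power lost to parameter selection, and vice versa.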

  5. The study of nonlinear almost periodic differential equations without recourse to the H-classes of these equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slyusarchuk, V. E., E-mail: V.E.Slyusarchuk@gmail.com, E-mail: V.Ye.Slyusarchuk@NUWM.rv.ua

    2014-06-01

    The well-known theorems of Favard and Amerio on the existence of almost periodic solutions to linear and nonlinear almost periodic differential equations depend to a large extent on the H-classes and the requirement that the bounded solutions of these equations be separated. The present paper provides different conditions for the existence of almost periodic solutions. These conditions, which do not depend on the H-classes of the equations, are formulated in terms of a special functional on the set of bounded solutions of the equations under consideration. This functional is used, in particular, to test whether solutions are separated. Bibliography: 24 titles.

  6. Assembling Appliances Standards from a Basket of Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siderious, Hans-Paul; Meier, Alan

    2014-08-11

    Rapid innovation in product design challenges the current methodology for setting standards and labels, especially for electronics, software and networking. Major problems include defining the product, measuring its energy consumption, and choosing the appropriate metric and level for the standard. Most governments have tried to solve these problems by defining ever more specific product subcategories, along with their corresponding test methods and metrics. An alternative approach would treat each energy-using product as something that delivers a basket of functions. Then separate standards would be constructed for the individual functions that can be defined, tested, and evaluated. Case studies of thermostats, displays and network equipment are presented to illustrate the problems with the classical approach for setting standards and indicate the merits and drawbacks of the alternative. The functional approach appears best suited to products whose primary purpose is processing information and that have multiple functions.

  7. Are there two processes in reasoning? The dimensionality of inductive and deductive inferences.

    PubMed

    Stephens, Rachel G; Dunn, John C; Hayes, Brett K

    2018-03-01

    Single-process accounts of reasoning propose that the same cognitive mechanisms underlie inductive and deductive inferences. In contrast, dual-process accounts propose that these inferences depend upon 2 qualitatively different mechanisms. To distinguish between these accounts, we derived a set of single-process and dual-process models based on an overarching signal detection framework. We then used signed difference analysis to test each model against data from an argument evaluation task, in which induction and deduction judgments are elicited for sets of valid and invalid arguments. Three data sets were analyzed: data from Singmann and Klauer (2011), a database of argument evaluation studies, and the results of an experiment designed to test model predictions. Of the large set of testable models, we found that almost all could be rejected, including all 2-dimensional models. The only testable model able to account for all 3 data sets was a model with 1 dimension of argument strength and independent decision criteria for induction and deduction judgments. We conclude that despite the popularity of dual-process accounts, current results from the argument evaluation task are best explained by a single-process account that incorporates separate decision thresholds for inductive and deductive inferences. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  8. Improving Arterial Spin Labeling by Using Deep Learning.

    PubMed

    Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong

    2018-05-01

    Purpose To develop a deep learning algorithm that generates arterial spin labeling (ASL) perfusion images with higher accuracy and robustness by using a smaller number of subtraction images. Materials and Methods For ASL image generation from pair-wise subtraction, we used a convolutional neural network (CNN) as a deep learning algorithm. The ground truth perfusion images were generated by averaging six or seven pairwise subtraction images acquired with (a) conventional pseudocontinuous arterial spin labeling from seven healthy subjects or (b) Hadamard-encoded pseudocontinuous ASL from 114 patients with various diseases. CNNs were trained to generate perfusion images from a smaller number (two or three) of subtraction images and evaluated by means of cross-validation. CNNs from the patient data sets were also tested on 26 separate stroke data sets. CNNs were compared with the conventional averaging method in terms of mean square error and radiologic score by using a paired t test and/or Wilcoxon signed-rank test. Results Mean square errors were approximately 40% lower than those of the conventional averaging method for the cross-validation with the healthy subjects and patients and the separate test with the patients who had experienced a stroke (P < .001). Region-of-interest analysis in stroke regions showed that cerebral blood flow maps from CNN (mean ± standard deviation, 19.7 mL per 100 g/min ± 9.7) had smaller mean square errors than those determined with the conventional averaging method (43.2 ± 29.8) (P < .001). Radiologic scoring demonstrated that CNNs suppressed noise and motion and/or segmentation artifacts better than the conventional averaging method did (P < .001). Conclusion CNNs provided superior perfusion image quality and more accurate perfusion measurement compared with those of the conventional averaging method for generation of ASL images from pair-wise subtraction images. © RSNA, 2017.
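
    The conventional averaging baseline against which the CNN is compared (not the CNN itself) reduces to averaging pairwise subtraction images and scoring an estimate against the many-average ground truth by mean squared error. A toy sketch with made-up pixel values, images flattened to 1-D lists:

```python
def mean_image(images):
    """Pixel-wise average of equally sized (flattened) subtraction images."""
    n = len(images)
    return [sum(px) / n for px in zip(*images)]

def mse(a, b):
    """Mean squared error between two flattened images."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Six pairwise subtraction images of two pixels each (illustrative values).
imgs = [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2], [1.1, 1.9], [0.9, 2.1], [1.0, 2.0]]

truth = mean_image(imgs)       # ground truth: average of all six pairs
fast = mean_image(imgs[:2])    # cheaper estimate: average of only two pairs
err = mse(fast, truth)         # error incurred by using fewer subtractions
```

    The paper's contribution is a CNN that maps the two-pair input to something closer to `truth` than `fast` is, i.e. with a lower `err`.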

  9. An externally validated model for predicting long-term survival after exercise treadmill testing in patients with suspected coronary artery disease and a normal electrocardiogram.

    PubMed

    Lauer, Michael S; Pothier, Claire E; Magid, David J; Smith, S Scott; Kattan, Michael W

    2007-12-18

    The exercise treadmill test is recommended for risk stratification among patients with intermediate to high pretest probability of coronary artery disease. Posttest risk stratification is based on the Duke treadmill score, which includes only functional capacity and measures of ischemia. To develop and externally validate a post-treadmill test, multivariable mortality prediction rule for adults with suspected coronary artery disease and normal electrocardiograms. Prospective cohort study conducted from September 1990 to May 2004. Exercise treadmill laboratories in a major medical center (derivation set) and a separate HMO (validation set). 33,268 patients in the derivation set and 5821 in the validation set. All patients had normal electrocardiograms and were referred for evaluation of suspected coronary artery disease. The derivation set patients were followed for a median of 6.2 years. A nomogram-illustrated model was derived on the basis of variables easily obtained in the stress laboratory, including age; sex; history of smoking, hypertension, diabetes, or typical angina; and exercise findings of functional capacity, ST-segment changes, symptoms, heart rate recovery, and frequent ventricular ectopy in recovery. The derivation data set included 1619 deaths. Although both the Duke treadmill score and our nomogram-illustrated model were significantly associated with death (P < 0.001), the nomogram was better at discrimination (concordance index for right-censored data, 0.83 vs. 0.73) and calibration. We reclassified many patients with intermediate- to high-risk Duke treadmill scores as low risk on the basis of the nomogram. The model also predicted 3-year mortality rates well in the validation set: Based on an optimal cut-point for a negative predictive value of 0.97, derivation and validation rates were, respectively, 1.7% and 2.5% below the cut-point and 25% and 29% above the cut-point. 
Blood test-based measures or left ventricular ejection fraction were not included. The nomogram can be applied only to patients with a normal electrocardiogram. Clinical utility remains to be tested. A simple nomogram based on easily obtained pretest and exercise test variables predicted all-cause mortality in adults with suspected coronary artery disease and normal electrocardiograms.
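
    The concordance index used above to compare the nomogram with the Duke treadmill score measures how often a higher predicted risk accompanies a shorter survival time. The study uses a version for right-censored data; the sketch below is the simpler uncensored form, with illustrative inputs:

```python
from itertools import combinations

def concordance_index(risk, time):
    """C-index over uncensored pairs: the fraction of comparable pairs in
    which the subject with the shorter survival time has the higher
    predicted risk. Ties in risk count as half-concordant."""
    concordant = total = 0.0
    for i, j in combinations(range(len(risk)), 2):
        if time[i] == time[j]:
            continue  # tied times are not comparable in this simple form
        total += 1
        shorter, longer = (i, j) if time[i] < time[j] else (j, i)
        if risk[shorter] > risk[longer]:
            concordant += 1
        elif risk[shorter] == risk[longer]:
            concordant += 0.5
    return concordant / total

# Illustrative risk scores and survival times (years); 0.5 = chance, 1.0 = perfect.
c = concordance_index(risk=[0.9, 0.4, 0.7, 0.2], time=[1.0, 5.0, 2.0, 8.0])
```

    A c-index of 0.83 (the nomogram) versus 0.73 (the Duke score) means the nomogram correctly orders a meaningfully larger share of patient pairs.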

  10. Uncovering the hidden risk architecture of the schizophrenias: confirmation in three independent genome-wide association studies.

    PubMed

    Arnedo, Javier; Svrakic, Dragan M; Del Val, Coral; Romero-Zaliz, Rocío; Hernández-Cuervo, Helena; Fanous, Ayman H; Pato, Michele T; Pato, Carlos N; de Erausquin, Gabriel A; Cloninger, C Robert; Zwir, Igor

    2015-02-01

    The authors sought to demonstrate that schizophrenia is a heterogeneous group of heritable disorders arising from different genotypic networks associated with distinct clinical syndromes. In a large genome-wide association study of cases with schizophrenia and controls, the authors first identified sets of interacting single-nucleotide polymorphisms (SNPs) that cluster within particular individuals (SNP sets) regardless of clinical status. Second, they examined the risk of schizophrenia for each SNP set and tested replicability in two independent samples. Third, they identified genotypic networks composed of SNP sets sharing SNPs or subjects. Fourth, they identified sets of distinct clinical features that cluster in particular cases (phenotypic sets or clinical syndromes) without regard for their genetic background. Fifth, they tested whether SNP sets were associated with distinct phenotypic sets in a replicable manner across the three studies. The authors identified 42 SNP sets associated with a 70% or greater risk of schizophrenia, and confirmed 34 of them (81%) with similarly high risk of schizophrenia in two independent samples. Seventeen networks of SNP sets did not share any SNPs or subjects. These disjoint genotypic networks were associated with distinct gene products and clinical syndromes (i.e., the schizophrenias) varying in symptoms and severity. Associations between genotypic networks and clinical syndromes were complex, showing multifinality and equifinality. The interactive networks explained more of the risk of schizophrenia than did the average effects of all SNPs (24%). Schizophrenia is a group of heritable disorders caused by a moderate number of separate genotypic networks associated with several distinct clinical syndromes.

  11. Marbles for the Imagination

    NASA Technical Reports Server (NTRS)

    Shue, Jack

    2004-01-01

    The end-to-end test would verify the complex sequence of events from lander separation to landing. Due to the large distances involved and the significant delay in sending a command and receiving verification, the lander needed to operate autonomously after it separated from the orbiter. It had to sense conditions, make decisions, and act accordingly. We were flying into a relatively unknown set of conditions: a Martian atmosphere of unknown pressure, density, and consistency, to land on a surface of unknown altitude and unknown bearing strength. In order to touch down safely on Mars, the lander had to orient itself for descent and entry, modulate itself to maintain proper lift, pop a parachute, jettison its aeroshell, deploy landing legs and radar, ignite a terminal descent engine, and fly a given trajectory to the surface. Once on the surface, it would determine its orientation, raise the high-gain antenna, perform a sweep to locate Earth, and begin transmitting information. It was this complicated, autonomous sequence that the end-to-end test was to simulate.

  12. Accurately Assessing the Risk of Schizophrenia Conferred by Rare Copy-Number Variation Affecting Genes with Brain Function

    PubMed Central

    Raychaudhuri, Soumya; Korn, Joshua M.; McCarroll, Steven A.; Altshuler, David; Sklar, Pamela; Purcell, Shaun; Daly, Mark J.

    2010-01-01

    Investigators have linked rare copy-number variants (CNVs) to neuropsychiatric diseases such as schizophrenia. One hypothesis is that CNV events cause disease by affecting genes with specific brain functions. Under these circumstances, we expect CNV events in cases to impact brain-function genes more frequently than events in controls. Previous publications have applied “pathway” analyses to genes within neuropsychiatric case CNVs to show enrichment for brain functions. While such analyses have been suggestive, they often have not rigorously compared the rates of CNVs impacting genes with brain function in cases with those in controls, and therefore do not address important confounders such as the large size of brain genes and overall differences in rates and sizes of CNVs. To demonstrate the potential impact of confounders, we genotyped rare CNV events in 2,415 unaffected controls with Affymetrix 6.0; we then applied standard pathway analyses using four sets of brain-function genes and observed an apparently highly significant enrichment for each set. The enrichment was driven simply by the large size of brain-function genes. Instead, we propose a case-control statistical test, cnv-enrichment-test, to compare the rate of CNVs impacting specific gene sets in cases versus controls. With simulations, we demonstrate that cnv-enrichment-test is robust to case-control differences in CNV size, CNV rate, and systematic differences in gene size. Finally, we apply cnv-enrichment-test to rare CNV events published by the International Schizophrenia Consortium (ISC). This approach reveals nominal evidence of case association for the neuronal-activity and learning gene sets, but not for the other two gene sets examined. The neuronal-activity genes have been associated in a separate set of schizophrenia cases and controls; however, testing in independent samples is necessary to definitively confirm this association. Our method is implemented in the PLINK software package. 
PMID:20838587
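    The published cnv-enrichment-test is a regression-style procedure implemented in PLINK; as a simplified illustration of its core idea of comparing the rate at which case versus control CNVs hit a gene set, here is a label-permutation sketch. The function name `enrichment_p` and the 0/1 "hit" vectors are hypothetical, not the paper's method:

```python
import random

def enrichment_p(case_hits, control_hits, n_perm=10000, seed=0):
    """One-sided label-permutation test: do case CNVs hit the gene set
    more often than control CNVs?  Each entry of *_hits is 1 if that
    CNV event overlaps the gene set, else 0."""
    rng = random.Random(seed)
    pooled = list(case_hits) + list(control_hits)
    n_case = len(case_hits)
    observed = sum(case_hits) / n_case - sum(control_hits) / len(control_hits)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                    # permute case/control labels
        perm = (sum(pooled[:n_case]) / n_case
                - sum(pooled[n_case:]) / (len(pooled) - n_case))
        if perm >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)        # add-one smoothing

# strong case excess -> small p; identical rates -> large p
print(enrichment_p([1] * 15 + [0] * 5, [1] * 5 + [0] * 15) < 0.05)  # → True
```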

  13. Test Reliability at the Individual Level

    PubMed Central

    Hu, Yueqin; Nesselroade, John R.; Erbacher, Monica K.; Boker, Steven M.; Burt, S. Alexandra; Keel, Pamela K.; Neale, Michael C.; Sisk, Cheryl L.; Klump, Kelly

    2016-01-01

    Reliability has a long history as one of the key psychometric properties of a test. However, a given test might not measure people equally reliably. Test scores from some individuals may have considerably greater error than others. This study proposed two approaches using intraindividual variation to estimate test reliability for each person. A simulation study suggested that the parallel tests approach and the structural equation modeling approach recovered the simulated reliability coefficients. Then in an empirical study, where forty-five females were measured daily on the Positive and Negative Affect Schedule (PANAS) for 45 consecutive days, separate estimates of reliability were generated for each person. Results showed that reliability estimates of the PANAS varied substantially from person to person. The methods provided in this article apply to tests measuring changeable attributes and require repeated measures across time on each individual. This article also provides a set of parallel forms of PANAS. PMID:28936107
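    The parallel tests approach boils down to correlating one person's scores on two parallel forms across repeated occasions; in classical test theory that correlation estimates the reliability of a single form for that person. A sketch on invented daily scores for one hypothetical individual (not PANAS data):

```python
def pearson(x, y):
    """Pearson correlation -- the classical parallel-forms reliability
    estimate when x and y are one person's scores on two parallel
    forms across repeated occasions."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# hypothetical person: true daily affect plus small form-specific error
true_affect = [3, 5, 4, 6, 2, 7, 5, 4, 6, 3]
err_a = [0.2, -0.1, 0.3, -0.2, 0.1, 0.0, -0.3, 0.2, -0.1, 0.1]
err_b = [-0.1, 0.2, -0.2, 0.1, 0.3, -0.1, 0.2, -0.3, 0.0, 0.1]
form_a = [t + e for t, e in zip(true_affect, err_a)]
form_b = [t + e for t, e in zip(true_affect, err_b)]
print(round(pearson(form_a, form_b), 2))  # near 1: this person measured reliably
```

Repeating the same computation per person, as in the article, yields a separate reliability coefficient for each individual.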

  14. Separator development and testing of nickel-hydrogen cells

    NASA Technical Reports Server (NTRS)

    Gonzalez-Sanabria, O. D.; Manzo, M. A.

    1984-01-01

    The components, design, and operating characteristics of Ni-H2 cells batteries were improved. A separator development program was designed to develop a separator that is resistant to penetration by oxygen and loose active material from then nickel electrode, while retraining the required chemical and thermal stability, reservoir capability, and high ionic conductivity. The performance of the separators in terms of cell operating voltage was to at least match that of state-of-the-art separators while eliminating the separator problems. The separators were submitted to initial screening tests and those which successfully completed the tests were built into Ni-H2 cells for short term testing. The separators with the best performance are tested for long term performance and life.

  15. Nondestructive Detection and Quantification of Blueberry Bruising using Near-infrared (NIR) Hyperspectral Reflectance Imaging

    NASA Astrophysics Data System (ADS)

    Jiang, Yu; Li, Changying; Takeda, Fumiomi

    2016-10-01

    Currently, blueberry bruising is evaluated by either human visual/tactile inspection or firmness measurement instruments. These methods are destructive, time-consuming, and subjective. The goal of this paper was to develop a non-destructive approach for blueberry bruising detection and quantification. Experiments were conducted on 300 samples of southern highbush blueberry (Camellia, Rebel, and Star) and on 1500 samples of northern highbush blueberry (Bluecrop, Jersey, and Liberty) for hyperspectral imaging analysis, firmness measurement, and human evaluation. An algorithm was developed to automatically calculate a bruise ratio index (ratio of bruised to whole fruit area) for bruise quantification. The spectra of bruised and healthy tissues were statistically separated and the separation was independent of cultivars. Support vector machine (SVM) classification of the spectra from the regions of interest (ROIs) achieved over 94%, 92%, and 96% accuracy on the training set, independent testing set, and combined set, respectively. The statistical results showed that the bruise ratio index was equivalent to the measured firmness but better than the predicted firmness in regard to effectiveness of bruise quantification, and the bruise ratio index had a strong correlation with human assessment (R2 = 0.78 - 0.83). Therefore, the proposed approach and the bruise ratio index are effective to non-destructively detect and quantify blueberry bruising.
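    The paper classifies ROI spectra with a support vector machine; as a dependency-free stand-in for that step, a nearest-centroid sketch on synthetic four-band "spectra" illustrates the same bruised-versus-healthy separation. All reflectance values below are invented, and nearest-centroid is a deliberate simplification of the SVM used in the study:

```python
def train_centroids(spectra, labels):
    """Nearest-centroid classifier over reflectance spectra: average
    the training spectra of each class, band by band."""
    groups = {}
    for s, y in zip(spectra, labels):
        groups.setdefault(y, []).append(s)
    return {y: [sum(band) / len(rows) for band in zip(*rows)]
            for y, rows in groups.items()}

def classify(spectrum, centroids):
    """Assign the class whose centroid is nearest in squared distance."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda y: d2(spectrum, centroids[y]))

# synthetic 4-band NIR "spectra": bruised tissue reflects less
healthy = [[0.80, 0.75, 0.70, 0.65], [0.82, 0.74, 0.71, 0.66]]
bruised = [[0.55, 0.50, 0.48, 0.45], [0.53, 0.51, 0.47, 0.44]]
cents = train_centroids(healthy + bruised, ["healthy"] * 2 + ["bruised"] * 2)
print(classify([0.81, 0.73, 0.70, 0.64], cents))  # → healthy
```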

  16. Nondestructive Detection and Quantification of Blueberry Bruising using Near-infrared (NIR) Hyperspectral Reflectance Imaging.

    PubMed

    Jiang, Yu; Li, Changying; Takeda, Fumiomi

    2016-10-21

    Currently, blueberry bruising is evaluated by either human visual/tactile inspection or firmness measurement instruments. These methods are destructive, time-consuming, and subjective. The goal of this paper was to develop a non-destructive approach for blueberry bruising detection and quantification. Experiments were conducted on 300 samples of southern highbush blueberry (Camellia, Rebel, and Star) and on 1500 samples of northern highbush blueberry (Bluecrop, Jersey, and Liberty) for hyperspectral imaging analysis, firmness measurement, and human evaluation. An algorithm was developed to automatically calculate a bruise ratio index (ratio of bruised to whole fruit area) for bruise quantification. The spectra of bruised and healthy tissues were statistically separated and the separation was independent of cultivars. Support vector machine (SVM) classification of the spectra from the regions of interest (ROIs) achieved over 94%, 92%, and 96% accuracy on the training set, independent testing set, and combined set, respectively. The statistical results showed that the bruise ratio index was equivalent to the measured firmness but better than the predicted firmness in regard to effectiveness of bruise quantification, and the bruise ratio index had a strong correlation with human assessment (R2 = 0.78 - 0.83). Therefore, the proposed approach and the bruise ratio index are effective to non-destructively detect and quantify blueberry bruising.

  17. Data layer integration for the national map of the united states

    USGS Publications Warehouse

    Usery, E.L.; Finn, M.P.; Starbuck, M.

    2009-01-01

    The integration of geographic data layers in multiple raster and vector formats, from many different organizations and at a variety of resolutions and scales, is a significant problem for The National Map of the United States being developed by the U.S. Geological Survey. Our research has examined data integration from a layer-based approach for five of The National Map data layers: digital orthoimages, elevation, land cover, hydrography, and transportation. An empirical approach has included visual assessment by a set of respondents with statistical analysis to establish the meaning of various types of integration. A separate theoretical approach with established hypotheses tested against actual data sets has resulted in an automated procedure for integration of specific layers and is being tested. The empirical analysis has established resolution bounds on meanings of integration with raster datasets and distance bounds for vector data. The theoretical approach has used a combination of theories on cartographic transformation and generalization, such as Töpfer's radical law, and additional research concerning optimum viewing scales for digital images to establish a set of guiding principles for integrating data of different resolutions.

  18. Interidentity amnesia in dissociative identity disorder.

    PubMed

    Morton, John

    2017-07-01

    Patients diagnosed with dissociative identity disorder (DID) usually present with alternative personality states (alters) who take separate control of consciousness. Commonly, one alter will claim no awareness of events that took place while another alter was in control. However, some kinds of material are transferred across the alter boundary. Huntjens et al. devised an objective method of demonstrating such transfer. In the main study, following Huntjens et al., two alters in each of three patients were taught different sets of nouns. The following week, one of the alters was given a recognition memory test including both sets plus distractor words. The patients in the Huntjens experiment responded in the same way to words in both sets. In the present experiment, two of the patients tested had pairs of alters for which there was no interference from the material presented to the other alter. In one of these cases, there was breakthrough with one pairing of alters, a pattern matched in a subsidiary experiment. The population of individuals with DID is thus not homogeneous with respect to the depth of the blocking of episodic material from one alter to another.

  19. Prostate-specific antigen velocity is not better than total prostate-specific antigen in predicting prostate biopsy diagnosis.

    PubMed

    Gorday, William; Sadrzadeh, Hossein; de Koning, Lawrence; Naugler, Christopher T

    2015-12-01

    1.) Identify whether prostate-specific antigen velocity improves the ability to predict prostate biopsy diagnosis. 2.) Test whether there is an increase in the predictive capability of models when Gleason 7 prostate cancers are separated into a 3+4 and a 4+3 group. Calgary Laboratory Services' Clinical Laboratory Information System was searched for prostate biopsies reported between January 1, 2009 and December 31, 2013. Total prostate-specific antigen tests were recorded for each patient from January 1, 2007 to the most recent test before their recorded prostate biopsy. The data set was divided into three groups for comparison: benign, all prostate cancer, and Gleason 7-10. The Gleason grade 7-10 group was further divided into 4+3 and 3+4 Gleason 7 prostate cancers. Prostate-specific antigen velocity was calculated using four different methods found in the literature. Receiver operating characteristic curves were used to assess the operational characteristics of the tests. 4622 men aged 40-89 years with a prostate biopsy were included in the analysis. Combining prostate-specific antigen velocity with total prostate-specific antigen (AUC=0.570-0.712) resulted in small, non-statistically significant changes in the area under the curve compared with total prostate-specific antigen alone (AUC=0.572-0.699). There were marked increases in the areas under the curve when 3+4 and 4+3 Gleason 7 cancers were separated. Prostate-specific antigen velocity does not add predictive value for prostate biopsy diagnosis. The clinical significance of the prostate-specific antigen test can be improved by separating Gleason 7 prostate cancers into a 3+4 and a 4+3 group. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
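    One common definition of prostate-specific antigen velocity, and plausibly among the four used in the literature cited here, is the least-squares slope of total PSA against time. A minimal sketch; the dates and PSA values are invented toy numbers:

```python
def psa_velocity(times_years, psa_values):
    """PSA velocity as the least-squares slope (ng/mL per year) of
    total PSA regressed on time -- one of several definitions used in
    the literature."""
    n = len(times_years)
    mt = sum(times_years) / n
    mp = sum(psa_values) / n
    num = sum((t - mt) * (p - mp) for t, p in zip(times_years, psa_values))
    den = sum((t - mt) ** 2 for t in times_years)
    return num / den

# toy series: PSA rising 0.6 ng/mL per year over three annual tests
print(round(psa_velocity([0.0, 1.0, 2.0], [4.0, 4.6, 5.2]), 2))  # → 0.6
```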

  20. Inferring Short-Range Linkage Information from Sequencing Chromatograms

    PubMed Central

    Beggel, Bastian; Neumann-Fraune, Maria; Kaiser, Rolf; Verheyen, Jens; Lengauer, Thomas

    2013-01-01

    Direct Sanger sequencing of viral genome populations yields multiple ambiguous sequence positions. It is not straightforward to derive linkage information from sequencing chromatograms, which in turn hampers the correct interpretation of the sequence data. We present a method for determining the variants existing in a viral quasispecies in the case of two nearby ambiguous sequence positions by exploiting the effect of sequence context-dependent incorporation of dideoxynucleotides. The computational model was trained on data from sequencing chromatograms of clonal variants and was evaluated on two test sets of in vitro mixtures. The approach achieved high accuracies in identifying the mixture components of 97.4% on a test set in which the positions to be analyzed are only one base apart from each other, and of 84.5% on a test set in which the ambiguous positions are separated by three bases. In silico experiments suggest two major limitations of our approach in terms of accuracy. First, due to a basic limitation of Sanger sequencing, it is not possible to reliably detect minor variants with a relative frequency of no more than 10%. Second, the model cannot distinguish between mixtures of two or four clonal variants, if one of two sets of linear constraints is fulfilled. Furthermore, the approach requires repetitive sequencing of all variants that might be present in the mixture to be analyzed. Nevertheless, the effectiveness of our method on the two in vitro test sets shows that short-range linkage information of two ambiguous sequence positions can be inferred from Sanger sequencing chromatograms without any further assumptions on the mixture composition. Additionally, our model provides new insights into the established and widely used Sanger sequencing technology. The source code of our method is made available at http://bioinf.mpi-inf.mpg.de/publications/beggel/linkageinformation.zip. PMID:24376502

  1. Threshold setting by the surround of cat retinal ganglion cells.

    PubMed

    Barlow, H B; Levick, W R

    1976-08-01

    1. The slope of curves relating the log increment threshold to log background luminance in cat retinal ganglion cells is affected by the area and duration of the test stimulus, as it is in human psychophysical experiments. 2. Using large area, long duration stimuli the slopes average 0.82 and approach close to 1 (Weber's Law) in the steepest cases. Small stimuli gave an average of 0.53 for on-centre units using brief stimuli, and 0.56 for off-centre units using long stimuli. Slopes under 0.5 (square root law) were not found over an extended range of luminances. 3. On individual units the slope was generally greater for larger and longer test stimuli, but no unit showed the full extent of change from a slope of 0.5 to a slope of 1. 4. The above differences hold for objective measures of quantum/spike ratio, as well as for thresholds either judged by ear or assessed by calculation. 5. The steeper slope of the curves for large area, long duration test stimuli compared with small, long duration stimuli is associated with the increased effectiveness of antagonism from the surround at high backgrounds. This change may be less pronounced in off-centre units, one of which (probably transient Y-type) showed no difference of slope, and gave parallel area-threshold curves at widely separated background luminances, confirming the importance of differential surround effectiveness in changing the slope of the curves. 6. In on-centre units, the increased relative effectiveness of the surround is associated with the part of the raised background light that falls on the receptive field centre. 7. It is suggested that the variable surround functions as a zero-offset control that sets the threshold excitation required for generating impulses, and that this is separate from gain-setting adaptive mechanisms. This may be how ganglion cells maintain high incremental sensitivity in spite of a strong maintained excitatory drive that would otherwise cause compressive response non-linearities.
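    The slopes discussed in this abstract are least-squares fits of log increment threshold against log background luminance. A sketch of that fit, with synthetic data constructed to follow Weber's Law exactly (slope 1); none of the numbers below come from the study:

```python
import math

def increment_threshold_slope(backgrounds, thresholds):
    """Least-squares slope of log10(threshold) vs log10(background):
    a slope of 1 is Weber's Law, 0.5 the square-root law."""
    xs = [math.log10(b) for b in backgrounds]
    ys = [math.log10(t) for t in thresholds]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# synthetic Weber-like data: threshold proportional to background
print(round(increment_threshold_slope([1, 10, 100, 1000],
                                      [0.02, 0.2, 2.0, 20.0]), 2))  # → 1.0
```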

  2. [Forced spirometry procedure].

    PubMed

    Cortés Aguilera, Antonio Javier

    2008-11-01

    Forced spirometry is a complementary test carried out in an occupational health office to determine the lung capacity of workers exposed to particular occupational risks, or subject to working conditions that could lead to the development of respiratory problems. The test is grounded in health surveillance requirements under Article 22 of the Law for the Prevention of Occupational Risks, and requires that the technician who performs it, an occupational health nurse, have specific knowledge and skills, following the forced spirometry guidelines set by the Spanish Society of Pulmonology and Thoracic Surgery (SEPAR).

  3. An evaluation of open set recognition for FLIR images

    NASA Astrophysics Data System (ADS)

    Scherreik, Matthew; Rigling, Brian

    2015-05-01

    Typical supervised classification algorithms label inputs according to what was learned in a training phase. Thus, test inputs that were not seen in training are always given incorrect labels. Open set recognition algorithms address this issue by accounting for inputs that are not present in training and providing the classifier with an option to "reject" unknown samples. A number of such techniques have been developed in the literature, many of which are based on support vector machines (SVMs). One approach, the 1-vs-set machine, constructs a "slab" in feature space using the SVM hyperplane. Inputs falling on one side of the slab or within the slab belong to a training class, while inputs falling on the far side of the slab are rejected. We note that rejection of unknown inputs can also be achieved by thresholding class posterior probabilities. Another recently developed approach, the Probabilistic Open Set SVM (POS-SVM), empirically determines good probability thresholds. We apply the 1-vs-set machine, POS-SVM, and closed set SVMs to FLIR images taken from the Comanche SIG dataset. Vehicles in the dataset are divided into three general classes: wheeled, armored personnel carrier (APC), and tank. For each class, a coarse pose estimate (front, rear, left, right) is taken. In a closed set sense, we analyze these algorithms for prediction of vehicle class and pose. To test open set performance, one or more vehicle classes are held out from training. By considering closed and open set performance separately, we may closely analyze both inter-class discrimination and threshold effectiveness.
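    Rejection by thresholding class posterior probabilities can be sketched in a few lines. The class names echo the vehicle classes above, but the posterior values and the 0.7 threshold are arbitrary illustration, not the POS-SVM procedure:

```python
def open_set_predict(posteriors, threshold=0.7):
    """Open-set decision by posterior thresholding: return the most
    probable known class, unless even it falls below the confidence
    threshold, in which case the input is rejected as 'unknown'."""
    best = max(posteriors, key=posteriors.get)
    return best if posteriors[best] >= threshold else "unknown"

# a confident known-class input vs. an ambiguous (likely novel) input
print(open_set_predict({"wheeled": 0.85, "APC": 0.10, "tank": 0.05}))  # → wheeled
print(open_set_predict({"wheeled": 0.40, "APC": 0.35, "tank": 0.25}))  # → unknown
```

Raising the threshold trades fewer mislabeled novel inputs against more rejections of genuine known-class inputs, which is the threshold-effectiveness question the paper examines.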

  4. Multiple wavelength X-ray monochromators

    DOEpatents

    Steinmeyer, P.A.

    1992-11-17

    An improved apparatus and method is provided for separating input x-ray radiation containing first and second x-ray wavelengths into spatially separate first and second output radiation which contain the first and second x-ray wavelengths, respectively. The apparatus includes a crystalline diffractor which includes a first set of parallel crystal planes, where each of the planes is spaced a predetermined first distance from one another. The crystalline diffractor also includes a second set of parallel crystal planes inclined at an angle with respect to the first set of crystal planes where each of the planes of the second set of parallel crystal planes is spaced a predetermined second distance from one another. In one embodiment, the crystalline diffractor is comprised of a single crystal. In a second embodiment, the crystalline diffractor is comprised of a stack of two crystals. In a third embodiment, the crystalline diffractor includes a single crystal that is bent for focusing the separate first and second output x-ray radiation wavelengths into separate focal points. 3 figs.

  5. Multiple wavelength X-ray monochromators

    DOEpatents

    Steinmeyer, Peter A.

    1992-11-17

    An improved apparatus and method is provided for separating input x-ray radiation containing first and second x-ray wavelengths into spatially separate first and second output radiation which contain the first and second x-ray wavelengths, respectively. The apparatus includes a crystalline diffractor which includes a first set of parallel crystal planes, where each of the planes is spaced a predetermined first distance from one another. The crystalline diffractor also includes a second set of parallel crystal planes inclined at an angle with respect to the first set of crystal planes where each of the planes of the second set of parallel crystal planes is spaced a predetermined second distance from one another. In one embodiment, the crystalline diffractor is comprised of a single crystal. In a second embodiment, the crystalline diffractor is comprised of a stack of two crystals. In a third embodiment, the crystalline diffractor includes a single crystal that is bent for focussing the separate first and second output x-ray radiation wavelengths into separate focal points.
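    Both patent records rely on the Bragg condition, n·λ = 2d·sin θ: plane sets with different spacings d diffract a given wavelength at different angles, which is what spatially separates the two output wavelengths. A quick numeric sketch; the Cu Kα wavelength and NaCl-like plane spacing are illustrative values, not taken from the patent:

```python
import math

def bragg_angle_deg(wavelength_nm, d_spacing_nm, order=1):
    """Angle satisfying the Bragg condition n*lambda = 2*d*sin(theta)
    for a crystal plane set with spacing d."""
    s = order * wavelength_nm / (2 * d_spacing_nm)
    if s > 1:
        raise ValueError("no diffraction: n*lambda exceeds 2d")
    return math.degrees(math.asin(s))

# two plane spacings send the same wavelength to two different angles,
# i.e. to spatially separate outputs
theta1 = bragg_angle_deg(0.154, 0.282)  # illustrative Cu K-alpha, NaCl-like d
theta2 = bragg_angle_deg(0.154, 0.199)  # a second, more closely spaced plane set
print(round(theta1, 1), round(theta2, 1))
```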

  6. Bio-electrochemical characterization of air-cathode microbial fuel cells with microporous polyethylene/silica membrane as separator.

    PubMed

    Kircheva, Nina; Outin, Jonathan; Perrier, Gérard; Ramousse, Julien; Merlin, Gérard; Lyautey, Emilie

    2015-12-01

    The aim of this work was to study the behavior over time of a separator made of a low-cost, non-selective microporous polyethylene membrane (RhinoHide®) in an air-cathode microbial fuel cell with a reticulated vitreous carbon (RVC) foam bioanode. Performance of the microporous polyethylene membrane was compared with that of Nafion®-117 as a cation exchange membrane. A non-parametric test (Mann-Whitney) applied to the different sets of coulombic and energy efficiency data showed no significant difference between the two tested membrane types at the 0.05 level. Volumetric power densities ranged from 30 to 90 W·m(-3) of RVC foam for both membranes. Similar amounts of biomass were observed on both sides of the polyethylene membrane, illustrating the bacterial permeability of this type of separator. A monospecific denitrifying population was identified on the cathodic side of the RhinoHide® membrane. Electrochemical impedance spectroscopy (EIS) was used under OCV conditions to characterize the electrochemical behavior of the MFCs by an equivalent electrical circuit fitted to both Nyquist and Bode plots. Resistances and pseudo-capacitances from the EIS analyses did not differ in a way that could be attributed to the nature of the membrane. Copyright © 2015 Elsevier B.V. All rights reserved.
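    The Mann-Whitney test used above compares two independent samples through the U statistic. A self-contained sketch with the usual normal approximation to the null distribution; the efficiency values are invented, not from the study:

```python
import math

def mann_whitney_u(x, y):
    """U statistic: the number of (x_i, y_j) pairs with x_i > y_j,
    counting ties as 1/2."""
    u = 0.0
    for a in x:
        for b in y:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

def mann_whitney_z(x, y):
    """Normal approximation to the null distribution of U (adequate
    for moderate sample sizes; no tie correction)."""
    n1, n2 = len(x), len(y)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (mann_whitney_u(x, y) - mu) / sigma

# hypothetical coulombic efficiencies (%) for two membranes
rhino = [62, 58, 65, 60, 63, 59]
nafion = [61, 57, 64, 62, 60, 58]
print(round(mann_whitney_z(rhino, nafion), 2))  # |z| well under 1.96: no difference
```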

  7. Separation and reconstruction of BCG and EEG signals during continuous EEG and fMRI recordings

    PubMed Central

    Xia, Hongjing; Ruan, Dan; Cohen, Mark S.

    2014-01-01

    Despite considerable effort to remove it, the ballistocardiogram (BCG) remains a major artifact in electroencephalographic data (EEG) acquired inside magnetic resonance imaging (MRI) scanners, particularly in continuous (as opposed to event-related) recordings. In this study, we have developed a new Direct Recording Prior Encoding (DRPE) method to extract and separate the BCG and EEG components from contaminated signals, and have demonstrated its performance by comparing it quantitatively to the popular Optimal Basis Set (OBS) method. Our modified recording configuration allows us to obtain representative bases of the BCG- and EEG-only signals. Further, we have developed an optimization-based reconstruction approach to maximally incorporate prior knowledge of the BCG/EEG subspaces, and of the signal characteristics within them. Both OBS and DRPE methods were tested with experimental data, and compared quantitatively using cross-validation. In the challenging continuous EEG studies, DRPE outperforms the OBS method by nearly sevenfold in separating the continuous BCG and EEG signals. PMID:25002836
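    Basis-set artifact removal of the kind DRPE is compared against amounts to projecting the contaminated channel onto an artifact subspace and subtracting the fitted component. A toy sketch reduced to a single basis vector; the signals are invented, and real OBS/DRPE pipelines use many basis vectors per channel:

```python
def remove_artifact(contaminated, artifact_basis):
    """Least-squares projection of the contaminated channel onto one
    artifact basis vector, then subtraction of the fitted component --
    the core idea behind basis-set BCG removal, reduced to one basis."""
    dot = sum(c * b for c, b in zip(contaminated, artifact_basis))
    norm2 = sum(b * b for b in artifact_basis)
    alpha = dot / norm2
    return [c - alpha * b for c, b in zip(contaminated, artifact_basis)]

# toy channel: an EEG-like signal plus half a BCG-like template
eeg = [1.0, 0.0, -1.0, 0.0]
bcg = [2.0, 2.0, 2.0, 2.0]
contaminated = [e + 0.5 * b for e, b in zip(eeg, bcg)]
cleaned = remove_artifact(contaminated, bcg)
print(cleaned)  # recovers the EEG exactly here, since it is orthogonal to the basis
```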

  8. OPTIMIZATION AND VALIDATION OF HPLC METHOD FOR TETRAMETHRIN DETERMINATION IN HUMAN SHAMPOO FORMULATION.

    PubMed

    Zeric Stosic, Marina Z; Jaksic, Sandra M; Stojanov, Igor M; Apic, Jelena B; Ratajac, Radomir D

    2016-11-01

    A high-performance liquid chromatography (HPLC) method with diode array detection (DAD) was optimized and validated for the separation and determination of tetramethrin in an antiparasitic human shampoo. To optimize separation conditions, two different columns, several column oven temperatures, and different mobile phase compositions and ratios were tested. The best separation was achieved on the Supelcosil LC-18-DB column (4.6 x 250 mm, particle size 5 μm) with a methanol : water (78 : 22, v/v) mobile phase at a flow rate of 0.8 mL/min and a temperature of 30 °C. The detection wavelength was set at 220 nm. Under the optimum chromatographic conditions, the standard calibration curve showed good linearity (r² = 0.9997). The accuracy of the method, defined as the mean recovery of tetramethrin from the shampoo matrix, was 100.09%. The advantage of this method is that it can easily be used for routine analysis of tetramethrin in pharmaceutical formulations and in pharmaceutical research involving tetramethrin.
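    The reported linearity (r² = 0.9997) is the coefficient of determination of the calibration line of detector response against analyte concentration. A minimal sketch; the concentration/peak-area pairs below are invented toy values:

```python
def calibration_r2(conc, peak_area):
    """Coefficient of determination for a linear calibration curve
    (detector peak area vs analyte concentration)."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(peak_area) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, peak_area))
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in peak_area)
    return sxy * sxy / (sxx * syy)

# toy calibration points: nearly proportional response
print(round(calibration_r2([5, 10, 20, 40], [51, 99, 201, 399]), 4))
```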

  9. Long-range corrected density functional through the density matrix expansion based semilocal exchange hole.

    PubMed

    Patra, Bikash; Jana, Subrata; Samal, Prasanjit

    2018-03-28

    The exchange hole, which is one of the principal constituents of the density functional formalism, can be used to design accurate range-separated hybrid functionals in association with appropriate correlation. In this regard, the exchange hole derived from the density matrix expansion has gained attention due to its fulfillment of some of the desired exact constraints. The new long-range corrected density functional proposed here therefore combines a meta-generalized gradient approximation (meta-GGA) level exchange functional, designed from the density matrix expansion based exchange hole, with ab initio Hartree-Fock exchange through range separation of the Coulomb interaction operator using the standard error function technique. In association with the Lee-Yang-Parr correlation functional, assessment and benchmarking of the newly constructed range-separated functional on various well-known test sets shows reasonable performance for a broad range of molecular properties, such as thermochemistry, non-covalent interactions, and barrier heights of chemical reactions.
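    The range separation uses the standard error-function partition of the Coulomb operator, 1/r = erfc(μr)/r + erf(μr)/r, with the short-range piece treated by the semilocal exchange and the long-range piece by Hartree-Fock exchange. A numeric check of the partition; the value μ = 0.4 is a typical range-separation parameter assumed here, not the one from the paper:

```python
import math

def coulomb_split(r, mu=0.4):
    """Standard error-function range separation of the Coulomb operator:
    1/r = erfc(mu*r)/r  (short range, semilocal exchange)
        + erf(mu*r)/r   (long range, Hartree-Fock exchange).
    mu is the range-separation parameter (an assumed typical value)."""
    sr = math.erfc(mu * r) / r
    lr = math.erf(mu * r) / r
    return sr, lr

sr, lr = coulomb_split(2.5)
print(abs(sr + lr - 1 / 2.5) < 1e-12)  # the two pieces sum back to 1/r → True
```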

  10. Enzymatic production and in situ separation of natural β-ionone from β-carotene.

    PubMed

    Nacke, Christoph; Hüttmann, Sonja; Etschmann, Maria M W; Schrader, Jens

    2012-12-01

    A biotechnological process concept for generation and in situ separation of natural β-ionone from β-carotene is presented. The process employs carotenoid cleavage dioxygenases (CCDs), a plant-derived iron-containing nonheme enzyme family requiring only dissolved oxygen as cosubstrate and no additional cofactors. Organophilic pervaporation was found to be very well suited for continuous in situ separation of β-ionone. Its application led to a highly pure product despite the complexity of the reaction solution containing cell homogenates. Among three different pervaporation membrane types tested, a polyoctylmethylsiloxane active layer on a porous polyetherimide support led to the best results. A laboratory-scale demonstration plant was set up, and a highly pure aqueous-ethanolic solution of β-ionone was produced from β-carotene. The described process permits generation of high-value flavor and fragrance compounds bearing the desired label "natural" according to US and European food and safety regulations and demonstrates the potential of CCD enzymes for selective oxidative cleavage of carotenoids.

  11. FINAL REPORT INTEGRATED DM1200 MELTER TESTING OF BUBBLER CONFIGURATIONS USING HLW AZ-101 SIMULANTS VSL-04R4800-4 REV 0 10/5/04

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    KRUGER AA; MATLACK KS; GONG W

    2011-12-29

This report documents melter and off-gas performance results obtained on the DM1200 HLW Pilot Melter during processing of AZ-101 HLW simulants. The tests reported herein are a subset of six tests from a larger series of tests described in the Test Plan for the work; results from the other tests have been reported separately. The solids contents of the melter feeds were based on the WTP baseline value for the solids content of the feeds from pretreatment, which changed during these tests from 20% to 15% undissolved solids, resulting in tests conducted at two feed solids contents. Based on the results of earlier tests with single-outlet 'J' bubblers, initial tests were performed with a total bubbling rate of 65 lpm. The first set of tests (Tests 1A-1E) addressed the effects of skewing this total air flow rate back and forth between the two installed bubblers in comparison to a fixed equal division of flow between them. The second set of tests (2A-2D) addressed the effects of bubbler depth. Subsequently, as the location, type and number of bubbling outlets were varied, the optimum bubbling rate for each was determined. A third (3A-3C) and fourth (8A-8C) set of tests evaluated the effects of alternative bubbler designs with two gas outlets per bubbler instead of one by placing four bubblers in positions simulating multiple-outlet bubblers. Data from the simulated multiple-outlet bubblers were used to design bubblers with two outlets for an additional set of tests (9A-9C). Test 9 was also used to determine the effect of small sugar additions to the feed on ruthenium volatility. Another set of tests (10A-10D) evaluated the effects on production rate of spiking the feed with chloride and sulfate. Variables held constant to the extent possible included melt temperature, plenum temperature, cold cap coverage, the waste simulant composition, and the target glass composition.
The feed rate was increased to the point that a constant, essentially complete cold cap was achieved, which was used as an indicator of a maximized feed rate for each test. The first day of each test was used to build the cold cap and decrease the plenum temperature. The remainder of each test was split into two- to six-day segments, each with a different bubbling rate, bubbler orientation, or feed concentration of chloride and sulfur.

  12. Ultra-performance liquid chromatography/tandem mass spectrometric quantification of structurally diverse drug mixtures using an ESI-APCI multimode ionization source.

    PubMed

    Yu, Kate; Di, Li; Kerns, Edward; Li, Susan Q; Alden, Peter; Plumb, Robert S

    2007-01-01

We report in this paper an ultra-performance liquid chromatography/tandem mass spectrometric (UPLC®/MS/MS) method utilizing an ESI-APCI multimode ionization source to quantify structurally diverse analytes. Eight commercial drugs were used as test compounds. Each LC injection was completed in 1 min using a UPLC system coupled with MS/MS multiple reaction monitoring (MRM) detection. Results from three separate sets of experiments are reported. In the first set of experiments, the eight test compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes (ESI+, ESI-, APCI-, and APCI+) during an LC run. Approximately 8-10 data points were collected across each LC peak. This was insufficient for a quantitative analysis. In the second set of experiments, four compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes during an LC run. Approximately 15 data points were obtained for each LC peak. Quantification results were obtained with a limit of detection (LOD) as low as 0.01 ng/mL. For the third set of experiments, the eight test compounds were analyzed as a batch. During each LC injection, a single compound was analyzed. The mass spectrometer was detecting at a particular ionization mode during each LC injection. More than 20 data points were obtained for each LC peak. Quantification results were also obtained. This single-compound analytical method was applied to a microsomal stability test. Compared with a typical HPLC method currently used for the microsomal stability test, the injection-to-injection cycle time was reduced to 1.5 min (UPLC method) from 3.5 min (HPLC method). The microsome stability results were comparable with those obtained by traditional HPLC/MS/MS.

  13. Chaotic attractors with separated scrolls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouallegue, Kais, E-mail: kais-bouallegue@yahoo.fr

    2015-07-15

This paper proposes a new behavior of chaotic attractors with separated scrolls, obtained by combining Julia's process with Chua's attractor and Lorenz's attractor. The main motivation of this work is the ability to generate a set of separated scrolls with different behaviors, which in turn allows us to choose one or many scrolls combined with modulation (amplitude and frequency) for secure communication or synchronization. This set appears to be a new class of hyperchaos because each element of the set looks like a simple chaotic attractor with one positive Lyapunov exponent, so the cardinal of the set is greater than one. This new approach could be used to generate more general higher-dimensional hyperchaotic attractors for more potential applications. Numerical simulations are given to show the effectiveness of the proposed theoretical results.
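The claim that each scroll behaves like a simple chaotic attractor with one positive Lyapunov exponent can be illustrated numerically; a hedged sketch using the standard Lorenz system (not the paper's Julia-process construction) to show the exponential divergence of nearby trajectories that a positive exponent implies:

```python
import numpy as np

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system with standard parameters."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def separation_growth(n_steps=20000, delta0=1e-8):
    """Distance between two trajectories started a tiny perturbation apart."""
    a = np.array([1.0, 1.0, 20.0])
    b = a + np.array([delta0, 0.0, 0.0])
    for _ in range(n_steps):
        a, b = lorenz_step(a), lorenz_step(b)
    return np.linalg.norm(a - b)

# A positive largest Lyapunov exponent means this separation grows by many
# orders of magnitude before saturating at the attractor's diameter.
```

The same divergence test, applied scroll by scroll, is one simple way to check that each separated scroll is individually chaotic.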

  14. Quantitative estimation of global patterns of surface ocean biological productivity and its seasonal variation on timescales from centuries to millennia

    NASA Astrophysics Data System (ADS)

    Loubere, Paul; Fariduddin, Mohammad

    1999-03-01

We present a quantitative method, based on the relative abundances of benthic foraminifera in deep-sea sediments, for estimating surface ocean biological productivity over the timescale of centuries to millennia. We calibrate the method using a global data set composed of 207 samples from the Atlantic, Pacific, and Indian Oceans from a water depth range between 2300 and 3600 m. The sample set was developed so that other, potentially significant, environmental variables would be uncorrelated to overlying surface ocean productivity. A regression of assemblages against productivity yielded an r² = 0.89, demonstrating a strong productivity signal in the faunal data. In addition, we examined assemblage response to annual variability in biological productivity (seasonality). Our data set included a range of seasonalities, which we quantified into a seasonality index using the pigment color bands from the coastal zone color scanner (CZCS). The response of benthic foraminiferal assemblage composition to our seasonality index was tested with regression analysis. We obtained a statistically highly significant r² = 0.75. Further, discriminant function analysis revealed a clear separation among sample groups based on surface ocean productivity and our seasonality index. Finally, we tested the response of benthic foraminiferal assemblages to three different modes of seasonality. We observed a distinct separation of our samples into groups representing low seasonal variability, strong seasonality with a single main productivity event in the year, and strong seasonality with multiple productivity events in the year. Reconstructing surface ocean biological productivity with benthic foraminifera will aid in modeling marine biogeochemical cycles. Also, estimating the mode and range of annual seasonality will provide insight into changing oceanic processes, allowing the examination of the mechanisms causing changes in the marine biotic system over time.
This article contains supplementary material.

  15. Rapid detection of frozen-then-thawed minced beef using multispectral imaging and Fourier transform infrared spectroscopy.

    PubMed

    Ropodi, Athina I; Panagou, Efstathios Z; Nychas, George-John E

    2018-01-01

In recent years, fraud detection has become a major priority for food authorities, as fraudulent practices can have various economic and safety consequences. This work explores ways of identifying frozen-then-thawed minced beef labeled as fresh in a rapid, large-scale and cost-effective way. For this reason, freshly-ground beef was purchased from seven separate shops at different times, divided into fifteen portions and placed in Petri dishes. Multi-spectral images and FTIR spectra of the first five were immediately acquired while the remaining were frozen (-20°C) and stored for 7 and 32 days (5 samples for each time interval). Samples were thawed and subsequently subjected to similar data acquisition. In total, 105 multispectral images and FTIR spectra were collected, which were further analyzed using partial least-squares discriminant analysis and support vector machines. Two meat batches (30 samples) were reserved for independent validation and the remaining five batches were divided into training and test sets (75 samples). Results showed 100% overall correct classification for test and external validation MSI data, while FTIR data yielded 93.3 and 96.7% overall correct classification for the FTIR test set and external validation set, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
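The train/test/external-validation workflow described above can be sketched in outline; a simplified numpy illustration with synthetic "spectra" and a nearest-centroid classifier standing in for the PLS-DA and SVM models (all data, dimensions, and the batch structure are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for spectral data: 2 classes (0 = fresh, 1 = frozen-thawed),
# 18 wavelengths, with a small class-dependent offset plus noise.
def make_batch(label, n=15):
    base = np.sin(np.linspace(0, 3, 18)) + 0.3 * label
    return base + 0.1 * rng.standard_normal((n, 18)), np.full(n, label)

X_list, y_list = [], []
for shop in range(7):                      # seven shops, as in the study design
    for label in (0, 1):
        X, y = make_batch(label)
        X_list.append(X); y_list.append(y)
X, y = np.vstack(X_list), np.concatenate(y_list)

# Hold out the last two "shops" (60 samples) for external validation,
# mirroring the paper's reserved batches.
X_tr, y_tr = X[:-60], y[:-60]
X_val, y_val = X[-60:], y[-60:]

# Simplified stand-in for PLS-DA/SVM: assign the nearest class centroid.
centroids = np.array([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X_val[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y_val).mean()
```

The essential point is the split: model parameters come only from the training shops, and the held-out shops probe generalization to meat the model has never seen.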

  16. Hot-Film and Hot-Wire Anemometry for a Boundary Layer Active Flow Control Test

    NASA Technical Reports Server (NTRS)

    Lenahan, Keven C.; Schatzman, David M.; Wilson, Jacob Samuel

    2013-01-01

Unsteady active flow control (AFC) has been used experimentally for many years to minimize bluff-body drag. This technology could significantly improve performance of rotorcraft by cleaning up flow separation. It is important, then, that new actuator technologies be studied for application to future vehicles. A boundary layer wind tunnel was constructed with a 1 ft x 3 ft test section and unsteady measurement instrumentation to study how AFC manipulates the boundary layer to overcome adverse pressure gradients and flow separation. This unsteady flow control research requires unsteady measurement methods. In order to measure the boundary layer characteristics, both hot-wire and hot-film Constant Temperature Anemometry are used. A hot-wire probe is mounted in the flow to measure velocity while a hot-film array lies on the test surface to measure skin friction. Hot-film sensors are connected to an anemometer, a Wheatstone bridge circuit with an output that corresponds to the dynamic flow response. From this output, the time-varying flow field, turbulence, and flow reversal can be characterized. Tuning the anemometers requires a fan test on the hot-film sensors to adjust each output. This is a delicate process as several variables drastically affect the data, including control resistance, signal input, trim, and gain settings.
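Converting the anemometer bridge voltage to velocity is typically done through a calibration law; a minimal sketch using King's law (the calibration constants here are illustrative assumptions, obtained in practice from the kind of calibration run the text describes):

```python
import numpy as np

# King's law relates bridge voltage E to flow velocity U:
#   E**2 = A + B * U**n   (n ≈ 0.45 is typical; A, B come from calibration)
A_coef, B_coef, n_exp = 1.2, 0.8, 0.45   # illustrative calibration constants

def velocity_from_voltage(E):
    """Invert King's law to recover velocity from the measured bridge voltage."""
    return ((E ** 2 - A_coef) / B_coef) ** (1.0 / n_exp)

# Round trip at a known velocity: synthesize the voltage, then invert it.
U = 10.0
E = np.sqrt(A_coef + B_coef * U ** n_exp)
```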

  17. The QTP family of consistent functionals and potentials in Kohn-Sham density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Yifan; Bartlett, Rodney J., E-mail: bartlett@qtp.ufl.edu

This manuscript presents the second, consistent density functional in the QTP (Quantum Theory Project) family, that is, the CAM-QTP(01). It is a new range-separated exchange-correlation functional in which the non-local exchange contribution is 100% at large separation. It follows the same basic principle of this family that the Kohn-Sham eigenvalues of the occupied orbitals approximately equal the vertical ionization energies, which is not fulfilled by most of the traditional density functional methods. This new CAM-QTP(01) functional significantly improves the accuracy of the vertical excitation energies, especially for the Rydberg states in the test set. It also reproduces many other properties such as geometries, reaction barrier heights, and atomization energies.

  18. Extraction of azadirachtin A from neem seed kernels by supercritical fluid and its evaluation by HPLC and LC/MS.

    PubMed

    Ambrosino, P; Fresa, R; Fogliano, V; Monti, S M; Ritieni, A

    1999-12-01

A new supercritical extraction methodology was applied to extract azadirachtin A (AZA-A) from neem seed kernels. Supercritical and liquid carbon dioxide (CO(2)) were used as extractive agents in a three-separation-stage supercritical pilot plant. Subcritical conditions were also tested. Comparisons were carried out by calculating the efficiency of the pilot plant with respect to the milligrams per kilogram of seeds (ms/mo) of AZA-A extracted. The most convenient extraction was achieved using an ms/mo ratio of 119 rather than 64. For supercritical extraction, a separation of cuticular waxes from oil was set up in the pilot plant. HPLC and electrospray mass spectrometry were used to monitor the yield of AZA-A extraction.

  19. RF assisted switching in magnetic Josephson junctions

    NASA Astrophysics Data System (ADS)

    Caruso, R.; Massarotti, D.; Bolginov, V. V.; Ben Hamida, A.; Karelina, L. N.; Miano, A.; Vernik, I. V.; Tafuri, F.; Ryazanov, V. V.; Mukhanov, O. A.; Pepe, G. P.

    2018-04-01

We test the effect of an external RF field on the switching processes of magnetic Josephson junctions (MJJs) suitable for the realization of fast, scalable cryogenic memories compatible with Single Flux Quantum logic. We show that the combined application of microwaves and magnetic field pulses can improve the performance of the device, increasing the separation between the critical current levels corresponding to logical "0" and "1." The enhancement of the current level separation can be as high as 80% using an optimal set of parameters. We demonstrate that external RF fields can be used as an additional tool to manipulate the memory states, and we expect that this approach may lead to the development of new methods of selecting MJJs and manipulating their states in memory arrays for various applications.

  20. Fear and anxiety as separable emotions: an investigation of the revised reinforcement sensitivity theory of personality.

    PubMed

    Perkins, Adam M; Kemp, Samantha E; Corr, Philip J

    2007-05-01

    The Gray and McNaughton (2000) theory draws on a wide range of animal data to hypothesize that the emotions of fear and anxiety are separable. The authors tested their hypothesis in two studies. The first study examined associations between scores on questionnaire measures of fear, anxiety, and neuroticism; correlational analysis revealed that fear and anxiety are not interchangeable constructs. The second study examined associations between scores on questionnaire measures of fear/anxiety and performance in a military training setting; regression analysis revealed that fear captured significant variance in performance that was not shared with anxiety. These results imply that hypotheses derived from nonhuman animal data may hold important implications for understanding human emotion and motivation, especially in relation to fear and anxiety.

  1. Fluorescence-based classification of Caribbean coral reef organisms and substrates

    USGS Publications Warehouse

    Zawada, David G.; Mazel, Charles H.

    2014-01-01

    A diverse group of coral reef organisms, representing several phyla, possess fluorescent pigments. We investigated the potential of using the characteristic fluorescence emission spectra of these pigments to enable unsupervised, optical classification of coral reef habitats. We compiled a library of characteristic fluorescence spectra through in situ and laboratory measurements from a variety of specimens throughout the Caribbean. Because fluorescent pigments are not species-specific, the spectral library is organized in terms of 15 functional groups. We investigated the spectral separability of the functional groups in terms of the number of wavebands required to distinguish between them, using the similarity measures Spectral Angle Mapper (SAM), Spectral Information Divergence (SID), SID-SAM mixed measure, and Mahalanobis distance. This set of measures represents geometric, stochastic, joint geometric-stochastic, and statistical approaches to classifying spectra. Our hyperspectral fluorescence data were used to generate sets of 4-, 6-, and 8-waveband spectra, including random variations in relative signal amplitude, spectral peak shifts, and water-column attenuation. Each set consisted of 2 different band definitions: ‘optimally-picked’ and ‘evenly-spaced.’ The optimally-picked wavebands were chosen to coincide with as many peaks as possible in the functional group spectra. Reference libraries were formed from half of the spectra in each set and used for training purposes. Average classification accuracies ranged from 76.3% for SAM with 4 evenly-spaced wavebands to 93.8% for Mahalanobis distance with 8 evenly-spaced wavebands. The Mahalanobis distance consistently outperformed the other measures. In a second test, empirically-measured spectra were classified using the same reference libraries and the Mahalanobis distance for just the 8 evenly-spaced waveband case. 
Average classification accuracies were 84% and 87%, corresponding to the extremes in modeled water-column attenuation. The classification results from both tests indicate that a high degree of separability among the 15 fluorescent-spectra functional groups is possible using only a modest number of spectral bands.
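The Spectral Angle Mapper used above is simple to state concretely; a minimal numpy sketch of SAM-based library matching (the 4-waveband reference spectra and group names here are hypothetical illustrations, not the study's 15 functional groups):

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral Angle Mapper: the angle between two spectra viewed as vectors.
    Insensitive to overall amplitude scaling; sensitive only to spectral shape."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify_sam(spectrum, library):
    """Assign the label of the reference spectrum with the smallest angle."""
    angles = {name: spectral_angle(spectrum, ref) for name, ref in library.items()}
    return min(angles, key=angles.get)

# Hypothetical 4-waveband reference library (illustrative values only).
library = {
    "coral_host": np.array([0.1, 0.8, 0.3, 0.1]),
    "algae":      np.array([0.1, 0.2, 0.3, 0.9]),
}
# A brighter spectrum with the same shape as "coral_host" should still match it.
label = classify_sam(np.array([0.2, 1.5, 0.6, 0.2]), library)
```

The amplitude-invariance is why SAM is classed as a geometric measure in the text; Mahalanobis distance, by contrast, weights each waveband by the statistics of the training library.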

  2. Investigations of detail design issues for the high speed acoustic wind tunnel using a 60th scale model tunnel. Part 2: Tests with the closed circuit

    NASA Technical Reports Server (NTRS)

    Barna, P. Stephen

    1991-01-01

This report summarizes the tests on the 1:60 scale model of the High Speed Acoustic Wind Tunnel (HSAWT) performed during the period June - August 1991. Throughout the testing, the tunnel was operated in the 'closed circuit mode,' that is, with the airflow set up by an axial flow fan located inside the tunnel circuit and directly driven by a motor. The tests were first performed with the closed test section and were subsequently repeated with the open test section, the latter operating with the nozzle-diffuser at its optimum setting. On this subject, reference is made to the report (1) issued January 1991, under contract 17-GFY900125, which summarizes the results obtained with the tunnel operating in the 'open circuit mode.' The tests confirmed the viability of the tunnel design, and the flow distributions in most of the tunnel components were considered acceptable. However, some locations were found where the flow distribution requires improvement. This applies to the flow upstream of the fan, where the flow was found to be skewed, thus affecting the flow downstream. As a result, the flow appeared separated at the outer side of the end of the large diffuser. All tests were performed at NASA LaRC.

  3. Dairy manure nutrient analysis using quick tests.

    PubMed

    Singh, A; Bicudo, J R

    2005-05-01

Rapid on-farm assessment of manure nutrient content can be achieved with the use of quick tests. These tests can be used to indirectly measure the nutrient content in animal slurries immediately before manure is applied on agricultural fields. The objective of this study was to assess the reliability of hydrometers, electrical conductivity meter and pens, and Agros N meter against standard laboratory methods. Manure samples were collected from 34 dairy farms in the Mammoth Cave area in central Kentucky. Regression equations were developed for combined and individual counties located in the area (Barren, Hart and Monroe). Our results indicated that accuracy in nutrient estimation could be improved if separate linear regressions were developed for farms with similar facilities in a county. Direct hydrometer estimates of total nitrogen were among the most accurate when separate regression equations were developed for each county (R² = 0.61, 0.93, and 0.74 for Barren, Hart and Monroe county, respectively). Reasonably accurate estimates (R² > 0.70) were also obtained for total nitrogen and total phosphorus using hydrometers, either by relating specific gravity to nutrient content or to total solids content. Estimation of ammoniacal nitrogen with the Agros N meter and electrical conductivity meter/pens correlated well with standard laboratory determinations, especially when using the individual data sets from Hart County (R² = 0.70 to 0.87). This study indicates that the use of quick test calibration equations developed for a small area or region where farms are similar in terms of manure handling and management, housing, and feed ration is more appropriate than using "universal" equations usually developed with combined data sets. Accuracy is expected to improve if individual farms develop their own calibration curves.
Nevertheless, we suggest confidence intervals always be specified for nutrients estimated through quick testing for any specific region, county, or farm.

  4. Supporting Employers in the Reserve Operational Forces Era: Appendixes

    DTIC Science & Technology

    2013-01-01

volunteer (to replace the draftee) and to induce separating members to continue to serve in the reserve forces" (Manson, 1999, p. 57). VEVRAA was...but not as an adjunct to, the "Total Force." (Manson, 1999, p. 58) Although not a legislative development, the change in DoD policy set the stage for...test America's new defense posture and, consequently, the first significant chance to see its effects on the personnel" (Manson, 1999, p. 58

  5. Use of New Industrial Coatings for the U.S. Navy Waterfront Structures

    DTIC Science & Technology

    2008-12-01

utilized as a coating for the interior and exterior of piping systems, which either are located in harsh environments or are transporting substances with...typical application process, a separate set of test [Table 7. MCU Coating Systems (SSPC SP 10 Surfaces): zinc-rich urethane and micaceous iron oxide (MIO)- and aluminum-filled urethane systems with dry film thicknesses] as well as an

  6. Improved definition of crustal anomalies for Magsat data

    NASA Technical Reports Server (NTRS)

    1981-01-01

A scheme was developed for separating the portions of the magnetic field measured by the Magsat 1 satellite that arise from internal and external sources. To test this method, a set of sample coefficients was used to compute the field values along a simulated satellite orbit. These data were then used to try to recover the original coefficients. Matrix inversion and recursive least squares methods were used to solve for the input coefficients. The accuracies of the two methods are compared.
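The recovery step described above is an overdetermined linear inverse problem; a minimal numpy sketch under the assumption that the simulated observations depend linearly on the source coefficients (the design matrix and coefficient values are synthetic; the recursive least squares variant mentioned in the text would update the same estimate sample by sample instead of solving in one batch):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: field values along an "orbit" are a linear function of
# unknown source coefficients through a design matrix A (illustrative only).
true_coeffs = np.array([3.0, -1.5, 0.5, 2.0])
A = rng.standard_normal((200, 4))                           # 200 simulated orbit samples
obs = A @ true_coeffs + 1e-6 * rng.standard_normal(200)     # nearly noise-free data

# Matrix-inversion (normal equations) solution ...
coeffs_inv = np.linalg.solve(A.T @ A, A.T @ obs)
# ... versus a numerically better-conditioned least-squares solve.
coeffs_lstsq, *_ = np.linalg.lstsq(A, obs, rcond=None)
```

With clean simulated data both routes recover the input coefficients; comparing them under noise and ill-conditioning is exactly the kind of accuracy comparison the abstract describes.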

  7. 49 CFR 173.121 - Class 3-Assignment of packing group.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

...-cup) Initial boiling point I ≤35°C (95 °F) II >35 °C (95 °F) III ≥23 °C, ≤60 °C (≥73 °F, ≤140 °F) >35... determined at 23 °C (73.4 °F) using the ISO standard cup with a 4 mm (0.16 inch) jet as set forth in ISO 2431... carried out using the ISO standard cup with a 6 mm (0.24 inch) jet. (ii) Solvent Separation Test. This...

  8. Analysis of NASA Common Research Model Dynamic Data

    NASA Technical Reports Server (NTRS)

    Balakrishna, S.; Acheson, Michael J.

    2011-01-01

    Recent NASA Common Research Model (CRM) tests at the Langley National Transonic Facility (NTF) and Ames 11-foot Transonic Wind Tunnel (11-foot TWT) have generated an experimental database for CFD code validation. The database consists of force and moment, surface pressures and wideband wing-root dynamic strain/wing Kulite data from continuous sweep pitch polars. The dynamic data sets, acquired at 12,800 Hz sampling rate, are analyzed in this study to evaluate CRM wing buffet onset and potential CRM wing flow separation.

  9. KC-46 Tanker Modernization: Delivery of First Fully Capable Aircraft Has Been Delayed over One Year and Additional Delays Are Possible

    DTIC Science & Technology

    2017-03-01

delays. As shown, the remaining schedule was modified to allow Boeing to deliver the first 18 aircraft and pods separately by October 2018, 14...testing. Among other things, Boeing is contractually required to deliver a total of 18 aircraft and 9 wing air refueling pod sets by August 2017...

  10. Effects of programming threshold and maplaw settings on acoustic thresholds and speech discrimination with the MED-EL COMBI 40+ cochlear implant.

    PubMed

    Boyd, Paul J

    2006-12-01

The principal task in the programming of a cochlear implant (CI) speech processor is the setting of the electrical dynamic range (output) for each electrode, to ensure that a comfortable loudness percept is obtained for a range of input levels. This typically involves separate psychophysical measurement of electrical threshold (θe) and upper tolerance levels using short current bursts generated by the fitting software. Anecdotal clinical experience and some experimental studies suggest that the measurement of θe is relatively unimportant and that the setting of upper tolerance limits is more critical for processor programming. The present study aims to test this hypothesis and examines in detail how acoustic thresholds and speech recognition are affected by setting of the lower limit of the output ("Programming threshold" or "PT") to understand better the influence of this parameter and how it interacts with certain other programming parameters. Test programs (maps) were generated with PT set to artificially high and low values and tested on users of the MED-EL COMBI 40+ CI system. Acoustic thresholds and speech recognition scores (sentence tests) were measured for each of the test maps. Acoustic thresholds were also measured using maps with a range of output compression functions ("maplaws"). In addition, subjective reports were recorded regarding the presence of "background threshold stimulation", which is occasionally reported by CI users if PT is set to relatively high values when using the CIS strategy. Manipulation of PT was found to have very little effect. Setting PT to minimum produced a mean 5 dB (S.D. = 6.25) increase in acoustic thresholds, relative to thresholds with PT set normally, and had no statistically significant effect on speech recognition scores on a sentence test.
On the other hand, maplaw setting was found to have a significant effect on acoustic thresholds (raised as maplaw is made more linear), which provides some theoretical explanation as to why PT has little effect when using the default maplaw of c = 500. Subjective reports of background threshold stimulation showed that most users could perceive a relatively loud auditory percept, in the absence of microphone input, when PT was set to double the behaviorally measured electrical thresholds (θe), but that this produced little intrusion when microphone input was present. The results of these investigations have direct clinical relevance, showing that setting of PT is indeed relatively unimportant in terms of speech discrimination, but that it is worth ensuring that PT is not set excessively high, as this can produce distracting background stimulation. Indeed, it may even be set to minimum values without deleterious effect.

  11. Field-induced exciton dissociation in PTB7-based organic solar cells

    NASA Astrophysics Data System (ADS)

    Gerhard, Marina; Arndt, Andreas P.; Bilal, Mühenad; Lemmer, Uli; Koch, Martin; Howard, Ian A.

    2017-05-01

The physics of charge separation in organic semiconductors is a topic of ongoing research of relevance to material and device engineering. Herein, we present experimental observations of the field and temperature dependence of charge separation from singlet excitons in PTB7 and PC71BM, and from charge-transfer states created across interfaces in PTB7/PC71BM bulk heterojunction solar cells. We obtain these experimental data by time-resolving the near-infrared emission of the states from 10 K to room temperature and at electric fields from 0 to 2.5 MV cm⁻¹. Examining how the luminescence is quenched by field and temperature gives direct insight into the underlying physics. We observe that singlet excitons can be split by high fields, and that disorder broadens the high threshold fields needed to split the excitons. Charge-transfer (CT) states, on the other hand, can be separated by both field and temperature. Also, the data imply a strong reduction of the activation barrier for charge splitting from the CT state relative to the exciton state. The observations provided herein of the field-dependent separation of CT states as a function of temperature offer a rich data set against which theoretical models of charge separation can be rigorously tested; it should be useful for developing more advanced theoretical models of charge separation.

  12. 20 CFR 416.1231 - Burial spaces and certain funds set aside for burial expenses.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... children and step-children; an individual's brothers, sisters, parents, adoptive parents, and the spouses... are set aside for the burial arrangements of the eligible child's ineligible parent or parent's spouse... separation; i.e., a circumstance beyond an individual's control which makes conversion/separation impossible...

  13. Development of the Attitudes to Domestic Violence Questionnaire for Children and Adolescents.

    PubMed

    Fox, Claire L; Gadd, David; Sim, Julius

    2015-09-01

    To provide a more robust assessment of the effectiveness of a domestic abuse prevention education program, a questionnaire was developed to measure children's attitudes to domestic violence. The aim was to develop a short questionnaire that would be easy to use for practitioners but, at the same time, sensitive enough to pick up on subtle changes in young people's attitudes. We therefore chose to ask children about different situations in which they might be willing to condone domestic violence. In Study 1, we tested a set of 20 items, which we reduced by half to a set of 10 items. The factor structure of the scale was explored and its internal consistency was calculated. In Study 2, we tested the factor structure of the 10-item Attitudes to Domestic Violence (ADV) Scale in a separate calibration sample. Finally, in Study 3, we then assessed the test-retest reliability of the 10-item scale. The ADV Questionnaire is a promising tool to evaluate the effectiveness of domestic abuse education prevention programs. However, further development work is necessary. © The Author(s) 2014.

  14. Ranking Bias in Association Studies

    PubMed Central

    Jeffries, Neal O.

    2009-01-01

Background: It is widely appreciated that genomewide association studies often yield overestimates of the association of a marker with disease when attention focuses upon the marker showing the strongest relationship. For example, in a case-control setting the largest (in absolute value) estimated odds ratio has been found to typically overstate the association as measured in a second, independent set of data. The most common reason given for this observation is that the choice of the most extreme test statistic is often conditional upon first observing a significant p value associated with the marker. A second, less appreciated reason is described here. Under common circumstances it is the multiple testing of many markers and subsequent focus upon those with most extreme test statistics (i.e. highly ranked results) that leads to bias in the estimated effect sizes. Conclusions: This bias, termed ranking bias, is separate from that arising from conditioning on a significant p value and may often be a more important factor in generating bias. An analytic description of this bias, simulations demonstrating its extent, and identification of some factors leading to its exacerbation are presented. PMID:19172085
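The ranking-bias mechanism can be reproduced with a small simulation; a hedged sketch in which every marker shares the same modest true effect, so any excess in the top-ranked estimate is pure selection on the noise (all numbers illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

# 10,000 markers, all sharing the same modest true effect; each replicate
# "study" measures every marker's effect with independent noise.
n_markers, true_effect, noise_sd = 10_000, 0.10, 0.50

top_estimates = []
for _ in range(200):
    estimates = true_effect + noise_sd * rng.standard_normal(n_markers)
    # Focus, as in practice, on the marker with the most extreme estimate.
    top_estimates.append(estimates[np.argmax(np.abs(estimates))])

mean_top = float(np.mean(np.abs(top_estimates)))
# mean_top is far above true_effect: ranking alone inflates the estimate,
# even though no p-value threshold was ever applied.
```

Because no significance filter appears anywhere in the loop, the inflation is attributable entirely to ranking, which is the paper's point.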

  15. Exploring the general motor ability construct.

    PubMed

    Ibrahim, Halijah; Heard, N Paul; Blanksby, Brian

    2011-10-01

    Malaysian students ages 12 to 15 years (N = 330; 165 girls, 165 boys) took the Australian Institute of Sport Talent Identification Test (AIST) and the Balance and Movement Coordination Test (BMC), developed specifically to identify sport talent in Malaysian adolescents. To investigate evidence for general aptitude ("g") in motor ability, a higher-order factor analysis was applied to the motor skills subtests from the AIST and BMC. First-order principal components analysis indicated that scores for the adolescent boys and girls could be described by similar sets of specific motor abilities. In particular, sets of skills identified as Movement Coordination and Postural Control were found, with Balancing Ability also emerging. For the girls, a factor labeled Static Balance was indicated. However, for the boys a more general balance ability labeled Kinesthetic Integration was found, along with an ability labeled Explosive Power. These first-order analyses accounted for 45% to 60% of the variance in the scores on the motor skills tests for the boys and girls, respectively. Separate second-order factor analyses for the boys and girls extracted a single higher-order factor, which was consistent with the existence of a motoric "g".

  16. Rank estimation and the multivariate analysis of in vivo fast-scan cyclic voltammetric data

    PubMed Central

    Keithley, Richard B.; Carelli, Regina M.; Wightman, R. Mark

    2010-01-01

    Principal component regression has been used in the past to separate current contributions from different neuromodulators measured with in vivo fast-scan cyclic voltammetry. Traditionally, a percent cumulative variance approach has been used to determine the rank of the training set voltammetric matrix during model development; however, this approach suffers from several disadvantages, including the use of arbitrary percentages and the requirement of extreme precision of training sets. Here we propose that Malinowski’s F-test, a method based on a statistical analysis of the variance contained within the training set, can be used to improve factor selection for the analysis of in vivo fast-scan cyclic voltammetric data. These two methods of rank estimation were compared at all steps in the calibration protocol including the number of principal components retained, overall noise levels, model validation as determined using a residual analysis procedure, and predicted concentration information. By analyzing 119 training sets from two different laboratories amassed over several years, we were able to gain insight into the heterogeneity of in vivo fast-scan cyclic voltammetric data and study how differences in factor selection propagate throughout the entire principal component regression analysis procedure. Visualizing cyclic voltammetric representations of the data contained in the retained and discarded principal components showed that using Malinowski’s F-test for rank estimation of in vivo training sets allowed for noise to be more accurately removed. Malinowski’s F-test also improved the robustness of our criterion for judging multivariate model validity, even though signal-to-noise ratios of the data varied. In addition, pH change carried the majority of the noise in in vivo training sets, while dopamine prediction was more sensitive to noise. PMID:20527815
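    A hedged sketch of the two rank-estimation approaches contrasted above, applied to a synthetic low-rank matrix. The data dimensions, noise level, and 99.5% variance cutoff are assumptions, and the F-test follows one common formulation of Malinowski's reduced-eigenvalue test, not necessarily the exact variant used in the paper:

    ```python
    import numpy as np
    from scipy.stats import f as f_dist

    def eigvals(X):
        return np.linalg.svd(X, compute_uv=False) ** 2   # eigenvalues of X.T @ X

    def cumvar_rank(X, threshold=0.995):
        """Rank by percent cumulative variance (the traditional approach)."""
        lam = eigvals(X)
        return int(np.searchsorted(np.cumsum(lam) / lam.sum(), threshold) + 1)

    def malinowski_rank(X, alpha=0.05):
        """Pseudorank via an F-test on Malinowski's reduced eigenvalues."""
        r, c = X.shape
        lam = eigvals(X)
        j = np.arange(1, c + 1)
        w = (r - j + 1) * (c - j + 1)        # reduced-eigenvalue weights
        for n in range(1, c):
            # Test factor n's eigenvalue against the pool of remaining ones
            F = (w[n:].sum() * lam[n - 1]) / (w[n - 1] * lam[n:].sum())
            if f_dist.sf(F, 1, c - n) > alpha:   # indistinguishable from noise
                return n - 1
        return c

    # Synthetic "training set": rank-3 signal plus small homoscedastic noise
    rng = np.random.default_rng(2)
    scores = rng.normal(size=(100, 3)) * np.array([10.0, 6.0, 3.0])
    X = scores @ rng.normal(size=(3, 12)) + 0.05 * rng.normal(size=(100, 12))
    print(cumvar_rank(X), malinowski_rank(X))
    ```

    On clean data both methods agree; the F-test's advantage, as the abstract argues, is that its stopping rule is statistical rather than an arbitrary percentage.
    
    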

  17. Multi-annual modes in the 20th century temperature variability in reanalyses and CMIP5 models

    NASA Astrophysics Data System (ADS)

    Järvinen, Heikki; Seitola, Teija; Silén, Johan; Räisänen, Jouni

    2016-11-01

    A performance expectation is that Earth system models simulate well the climate mean state and the climate variability. To test this expectation, we decompose two 20th century reanalysis data sets and 12 CMIP5 model simulations for the years 1901-2005 of the monthly mean near-surface air temperature using randomised multi-channel singular spectrum analysis (RMSSA). Due to the relatively short time span, we concentrate on the representation of multi-annual variability which the RMSSA method effectively captures as separate and mutually orthogonal spatio-temporal components. This decomposition is a unique way to separate statistically significant quasi-periodic oscillations from one another in high-dimensional data sets. The main results are as follows. First, the total spectra for the two reanalysis data sets are remarkably similar in all timescales, except that the spectral power in ERA-20C is systematically slightly higher than in 20CR. Apart from the slow components related to multi-decadal periodicities, ENSO oscillations with approximately 3.5- and 5-year periods are the most prominent forms of variability in both reanalyses. In 20CR, these are slightly more pronounced than in ERA-20C. Since about the 1970s, the amplitudes of the 3.5- and 5-year oscillations have increased, presumably due to some combination of forced climate change, intrinsic low-frequency climate variability, or changes in the global observing network. Second, none of the 12 coupled climate models closely reproduce all aspects of the reanalysis spectra, although some models represent many aspects well. For instance, the GFDL-ESM2M model has two well-separated ENSO periods, although these are too prominent compared with the reanalyses. An extensive Supplement and YouTube videos are provided to illustrate the multi-annual variability of the data sets.
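    RMSSA is a randomized, multi-channel extension of singular spectrum analysis. The underlying decomposition idea can be sketched with basic single-channel SSA; the window length and toy series below are illustrative assumptions, not the reanalysis data:

    ```python
    import numpy as np

    def ssa_components(x, L):
        """Decompose a series into additive components via basic single-channel SSA."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        K = N - L + 1
        X = np.column_stack([x[i:i + L] for i in range(K)])  # L x K trajectory matrix
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        comps = []
        for k in range(len(s)):
            Xk = s[k] * np.outer(U[:, k], Vt[k])             # rank-1 elementary matrix
            # Hankel (anti-diagonal) averaging maps Xk back to a length-N series
            comps.append(np.array([Xk[::-1].diagonal(i - L + 1).mean()
                                   for i in range(N)]))
        return np.array(comps)

    # Toy "temperature" series: annual cycle plus a slow trend
    x = np.sin(2 * np.pi * np.arange(240) / 12) + 0.01 * np.arange(240)
    comps = ssa_components(x, L=24)
    ```

    The components are mutually orthogonal in the trajectory-matrix sense and sum back to the original series exactly; quasi-periodic oscillations typically appear as pairs of components with similar singular values.
    
    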

  18. Separation of plastic waste via the hydraulic separator Multidune under different geometric configurations.

    PubMed

    La Marca, Floriana; Moroni, Monica; Cherubini, Lorenzo; Lupo, Emanuela; Cenedese, Antonio

    2012-07-01

    The recovery of high-quality plastic materials is becoming an increasingly challenging issue for the recycling sector. Technologies for plastic recycling have to guarantee high-quality secondary raw material, complying with specific standards, for use in industrial applications. The variability in waste plastics does not always correspond to evident differences in physical characteristics, making traditional methodologies ineffective for plastic separation. The Multidune separator is a hydraulic channel allowing the sorting of solid particles on the basis of differential transport mechanisms by generating particular fluid dynamic conditions due to its geometric configuration and operational settings. In this paper, the fluid dynamic conditions were investigated by an image analysis technique, allowing the reconstruction of velocity fields generated inside the Multidune, considering two different geometric configurations of the device, Configuration A and Configuration B. Furthermore, tests on mono- and bi-material samples were completed with varying operational conditions under both configurations. In both series of experiments, the bi-material samples were composed of differing proportions (85% vs. 15%) to simulate real conditions in an industrial plant for purifying a useful fraction from a contaminating fraction. The separation results were evaluated in terms of grade and recovery of the useful fraction. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Multiphasic Health Testing in the Clinic Setting

    PubMed Central

    LaDou, Joseph

    1971-01-01

    The economy of automated multiphasic health testing (AMHT) activities patterned after the high-volume Kaiser program can be realized in low-volume settings. AMHT units have been operated at daily volumes of 20 patients in three separate clinical environments. These programs have displayed economics entirely compatible with cost figures published by the established high-volume centers. This experience, plus the expanding capability of small, general purpose, digital computers (minicomputers), indicates that a group of six or more physicians generating 20 laboratory appraisals per day can economically justify a completely automated multiphasic health testing facility. This system would reside in the clinic or hospital where it is used and can be configured to perform analyses such as electrocardiography, generate laboratory reports, and communicate with large computer systems in university medical centers. Experience indicates that the most effective means of implementing these benefits of automation is to make them directly available to the medical community with the physician playing the central role. Economic justification of a dedicated computer through low-volume health testing then allows, as a side benefit, automation of administrative as well as other diagnostic activities—for example, patient billing, computer-aided diagnosis, and computer-aided therapeutics. PMID:4935771

  20. An adaptive toolkit for image quality evaluation in system performance test of digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Zhang, Guozhi; Petrov, Dimitar; Marshall, Nicholas; Bosmans, Hilde

    2017-03-01

    Digital breast tomosynthesis (DBT) is a relatively new diagnostic imaging modality for women. Currently, various models of DBT systems are available on the market and the number of installations is rapidly increasing. EUREF, the European Reference Organization for Quality Assured Breast Screening and Diagnostic Services, has proposed a preliminary guideline protocol for the quality control of the physical and technical aspects of digital breast tomosynthesis systems, with an ultimate aim of providing limiting values guaranteeing proper performance for different applications of DBT. In this work, we introduce an adaptive toolkit developed in accordance with this guideline to facilitate the process of image quality evaluation in DBT performance testing. This toolkit implements robust algorithms to quantify various technical parameters of DBT images and provides a convenient user interface in practice. Each test is built into a separate module with configurations set corresponding to the European guideline, which can be easily adapted to different settings and extended with additional tests. This toolkit greatly improves the efficiency of image quality evaluation for DBT. It will also evolve with the development of protocols in quality control of DBT systems.

  1. Application of digital mapping technology to the display of hydrologic information; a proof-of-concept test in the Fox-Wolf River Basin, Wisconsin

    USGS Publications Warehouse

    Moore, G.K.; Baten, L.G.; Allord, G.J.; Robinove, C.J.

    1983-01-01

    The Fox-Wolf River basin in east-central Wisconsin was selected to test concepts for a water-resources information system using digital mapping technology. This basin of 16,800 sq km is typical of many areas in the country. Fifty digital data sets were included in the Fox-Wolf information system. Many data sets were digitized from 1:500,000 scale maps and overlays. Some thematic data were acquired from WATSTORE and other digital data files. All data were geometrically transformed into a Lambert Conformal Conic map projection and converted to a raster format with a 1-km resolution. The result of this preliminary processing was a group of spatially registered, digital data sets in map form. Parameter evaluation, areal stratification, data merging, and data integration were used to achieve the processing objectives and to obtain analysis results for the Fox-Wolf basin. Parameter evaluation includes the visual interpretation of single data sets and digital processing to obtain new derived data sets. In the areal stratification stage, masks were used to extract from one data set all features that are within a selected area on another data set. Most processing results were obtained by data merging. Merging is the combination of two or more data sets into a composite product, in which the contribution of each original data set is apparent and can be extracted from the composite. One processing result was also obtained by data integration. Integration is the combination of two or more data sets into a single new product, from which the original data cannot be separated or calculated. (USGS)
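    The areal-stratification and data-merging operations described above can be sketched with toy raster grids. The land-cover, basin-mask, and slope values are hypothetical, and the digit-packing scheme is just one simple way to keep each merged theme extractable (in contrast to integration, where the inputs cannot be recovered):

    ```python
    import numpy as np

    # Hypothetical spatially registered 1-km raster grids
    land_cover = np.array([[1, 2, 2],
                           [3, 1, 2],
                           [3, 3, 1]])
    basin_mask = np.array([[0, 1, 1],
                           [0, 1, 1],
                           [0, 0, 0]], dtype=bool)
    slope_class = np.array([[1, 1, 2],
                            [2, 3, 3],
                            [1, 2, 3]])

    # Areal stratification: a mask extracts features of one data set that fall
    # within a selected area of another (0 marks cells outside the basin)
    stratified = np.where(basin_mask, land_cover, 0)

    # Data merging: combine two themes so each contribution stays apparent,
    # here by packing them into separate decimal digits of a composite grid
    merged = land_cover * 10 + slope_class
    recovered = merged // 10        # the original land cover falls straight back out
    ```

    Integration, by contrast, would be an operation like `land_cover * slope_class`, from which neither input grid can be separated afterward.
    
    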

  2. Faithful Squashed Entanglement

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Christandl, Matthias; Yard, Jon

    2011-09-01

    Squashed entanglement is a measure for the entanglement of bipartite quantum states. In this paper we present a lower bound for squashed entanglement in terms of a distance to the set of separable states. This implies that squashed entanglement is faithful, that is, it is strictly positive if and only if the state is entangled. We derive the lower bound on squashed entanglement from a lower bound on the quantum conditional mutual information which is used to define squashed entanglement. The quantum conditional mutual information corresponds to the amount by which strong subadditivity of von Neumann entropy fails to be saturated. Our result therefore sheds light on the structure of states that almost satisfy strong subadditivity with equality. The proof is based on two recent results from quantum information theory: the operational interpretation of the quantum mutual information as the optimal rate for state redistribution and the interpretation of the regularised relative entropy of entanglement as an error exponent in hypothesis testing. The distance to the set of separable states is measured in terms of the LOCC norm, an operationally motivated norm giving the optimal probability of distinguishing two bipartite quantum states, each shared by two parties, using any protocol formed by local quantum operations and classical communication (LOCC) between the parties. A similar result for the Frobenius or Euclidean norm follows as an immediate consequence. The result has two applications in complexity theory. The first application is a quasipolynomial-time algorithm solving the weak membership problem for the set of separable states in LOCC or Euclidean norm. The second application concerns quantum Merlin-Arthur games. Here we show that multiple provers are not more powerful than a single prover when the verifier is restricted to LOCC operations thereby providing a new characterisation of the complexity class QMA.

  3. Very high pressure liquid chromatography using core-shell particles: quantitative analysis of fast gradient separations without post-run times.

    PubMed

    Stankovich, Joseph J; Gritti, Fabrice; Stevenson, Paul G; Beaver, Lois A; Guiochon, Georges

    2014-01-17

    Five methods for controlling the mobile phase flow rate for gradient elution analyses using very high pressure liquid chromatography (VHPLC) were tested to determine thermal stability of the column during rapid gradient separations. To obtain rapid separations, instruments are operated at high flow rates and high inlet pressures, leading to uneven thermal effects across columns and additional time needed to restore thermal equilibrium between successive analyses. The purpose of this study is to investigate means to minimize thermal instability and obtain reliable results by measuring the reproducibility of the results of six replicate gradient separations of a nine component RPLC standard mixture under various experimental conditions with no post-run times. Gradient separations under different conditions were performed: constant flow rates, two sets of constant pressure operation, programmed flow constant pressure operation, and conditions which theoretically should yield a constant net heat loss at the column's wall. The results show that using constant flow rates, programmed flow constant pressures, and constant heat loss at the column's wall all provide reproducible separations. However, performing separations using a high constant pressure with programmed flow reduces the analysis time by 16% compared to constant flow rate methods. For the constant flow rate, programmed flow constant pressure, and constant wall heat experiments, no equilibration time (post-run time) was required to obtain highly reproducible data. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Thermal testing results of an electroformed nickel secondary (M2) mirror

    NASA Astrophysics Data System (ADS)

    Smith, David R.; Gale, David M.; Cabrera Cuevas, Lizeth; Lucero Álvarez, Maribel; Castro Santos, David; Olmos Tapia, Arak

    2016-07-01

    To support higher-frequency operation, the Large Millimeter Telescope/Gran Telescopio Milimetrico (or LMT/GTM) is replacing its existing monolithic aluminum secondary mirror (M2). The new mirror is a segmented design based on the same electroformed nickel reflector panel technology that is already in use for the primary reflector segments. While the new M2 is lighter and has better surface accuracy than the original mirror, the electroformed panels are more sensitive to high temperatures. During the design phase, concerns were raised over the level of temperature increase that could occur at M2 during daytime observations. Although the panel surface is designed to scatter visible light, the LMT primary mirror is large enough to cause substantial solar heating, even at significant angular separation from the Sun. To address these concerns, the project conducted a series of field tests, within the constraint of having minimum impact on night time observations. The supplier sent two coupon samples of a reflector panel prepared identically to their proposed M2 surface. Temperature sensors were mounted on the samples and they were temporarily secured to the existing M2 mirror at different distances from the center. The goal was to obtain direct monitoring of the surface temperature under site thermal conditions and the concentration effects from the primary reflector. With the sensors installed, the telescope was then commanded to track the Sun with an elevation offset. Initially, elevation offsets from as far as 40 degrees to as close as 6 degrees were tested. The 6 degree separation test quickly passed the target maximum temperature and the telescope was returned to a safer separation. Based on these initial results, a second set of tests was performed using elevation separations from 30 degrees to 8 degrees. To account for the variability of site conditions, the temperature data were analyzed using multiple metrics. 
These metrics included maximum temperature, final time-averaged temperature, and a curve fit for heating/cooling. The results indicate that a solar separation angle of 20 degrees should be suitable for full performance operation of the LMT/GTM. This separation is not only sufficient to avoid high temperatures at the mirror, but also provides time to respond to any emergency conditions that could occur (e.g., switching to a generator after a power failure) for observations that are ahead of the motion of the Sun. Additionally, even approaches of 10 to 15 degrees of angular separation on the sky may be achievable for longer wavelength observations, though these would likely be limited to positions that are behind the position of the Sun along its motion.
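    The heating/cooling curve fit mentioned above can be sketched as a Newtonian (exponential-approach) temperature model. The temperatures, time constant, and noise level below are hypothetical stand-ins, not the LMT/GTM coupon data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def newton_heating(t, T_inf, T0, tau):
        """Exponential approach from T0 toward an equilibrium temperature T_inf."""
        return T_inf + (T0 - T_inf) * np.exp(-t / tau)

    # Hypothetical panel-temperature record (deg C vs. minutes)
    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 60.0, 61)
    T_obs = newton_heating(t, 48.0, 15.0, 12.0) + 0.2 * rng.normal(size=t.size)

    popt, _ = curve_fit(newton_heating, t, T_obs, p0=(40.0, 20.0, 10.0))
    T_inf_fit, T0_fit, tau_fit = popt
    print(round(T_inf_fit, 1), round(tau_fit, 1))
    ```

    A fit of this form yields both the projected equilibrium temperature (useful for judging whether a separation angle stays below the panel's limit) and the time constant (useful for judging how quickly an emergency response must occur).
    
    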

  5. Effect of Self-Adhesive and Separate Etch Adhesive Dual Cure Resin Cements on the Bond Strength of Fiber Post to Dentin at Different Parts of the Root.

    PubMed

    Amiri, Ehsan Mohamadian; Balouch, Fariba; Atri, Faezeh

    2017-05-01

    Bonding of fiber posts to intracanal dentin is challenging in the clinical setting. This study aimed to compare the effect of self-adhesive and separate etch adhesive dual cure resin cements on the bond strength of fiber post to dentin at different parts of the root. This in-vitro experimental study was conducted on 20 single-rooted premolars. The teeth were decoronated at 1 mm coronal to the cementoenamel junction (CEJ), and the roots underwent root canal treatment. Post space was prepared in the roots. Afterwards, the samples were randomly divided into two groups. In group 1, the fiber posts were cemented using Rely X Unicem cement, while in group 2, the fiber posts were cemented using Duo-Link cement, according to the manufacturer's instructions. The intracanal post in each root was sectioned into three segments of coronal, middle, and apical, and each cross-section was subjected to push-out bond strength test at a crosshead speed of 1 mm/minute until failure. Push-out bond strength data were analyzed using independent t-test and repeated measures ANOVA. The bond strength at the middle and coronal segments in separate etch adhesive cement group was higher than that in self-adhesive cement group. However, the bond strength at the apical segment was higher in self-adhesive cement group compared to that in the other group. Overall, the bond strength in separate etch adhesive cement group was significantly higher than that in self-adhesive cement group (P<0.001). Bond strength of fiber post to intracanal dentin is higher after the use of separate etch adhesive cement compared to self-adhesive cement.

  6. Physician consideration of patients' out-of-pocket costs in making common clinical decisions.

    PubMed

    Pham, Hoangmai H; Alexander, G Caleb; O'Malley, Ann S

    2007-04-09

    Patients face growing cost-sharing through higher deductibles and other out-of-pocket (OP) expenses, with uncertain effects on clinical decision making. We analyzed data on 6628 respondents to the nationally representative 2004-2005 Community Tracking Study Physician Survey to examine how frequently physicians report considering their insured patients' OP expenses when prescribing drugs, selecting diagnostic tests, and choosing inpatient vs outpatient care settings. Responses were dichotomized as always/usually vs sometimes/rarely/never. In separate multivariate logistic regressions, we examined associations between physicians' reported frequency of considering OP costs for each type of decision and characteristics of individual physicians and their practices. Seventy-eight percent of physicians reported routinely considering OP costs when prescribing drugs, while 51.2% reported doing so when selecting care settings, and 40.2% when selecting diagnostic tests. In adjusted analyses, primary care physicians were more likely than medical specialists to consider patients' OP costs in choosing prescription drugs (85.3% vs 74.5%) (P<.001), care settings (53.9% vs 43.1%) (P<.001), and diagnostic tests (46.3% vs 29.9%) (P<.001). Physicians working in large groups or health maintenance organizations were more likely to consider OP costs in prescribing generic drugs (P<.001 for comparisons with solo and 2-person practices), but those in solo or 2-person practices were more likely to do so in choosing tests and care settings (P<.05 for all comparisons with other practice types). Physicians providing at least 10 hours of charity care a month were more likely than those not providing any to consider OP costs in both diagnostic testing (40.7% vs 35.8%) (P<.001) and care setting decisions (51.4% vs 47.6%) (P<.005). 
Cost-sharing arrangements targeting patients are likely to have limited effects in safely reducing health care spending because physicians do not routinely consider patients' OP costs when making decisions regarding more expensive medical services.

  7. Development of automatic body condition scoring using a low-cost 3-dimensional Kinect camera.

    PubMed

    Spoliansky, Roii; Edan, Yael; Parmet, Yisrael; Halachmi, Ilan

    2016-09-01

    Body condition scoring (BCS) is a farm-management tool for estimating dairy cows' energy reserves. Today, BCS is performed manually by experts. This paper presents a 3-dimensional algorithm that provides a topographical understanding of the cow's body to estimate BCS. An automatic BCS system consisting of a Kinect camera (Microsoft Corp., Redmond, WA) triggered by a passive infrared motion detector was designed and implemented. Image processing and regression algorithms were developed and included the following steps: (1) image restoration, the removal of noise; (2) object recognition and separation, identification and separation of the cows; (3) movie and image selection, selection of movies and frames that include the relevant data; (4) image rotation, alignment of the cow parallel to the x-axis; and (5) image cropping and normalization, removal of irrelevant data, setting the image size to 150×200 pixels, and normalizing image values. All steps were performed automatically, including image selection and classification. Fourteen individual features per cow, derived from the cows' topography, were automatically extracted from the movies and from the farm's herd-management records. These features appear to be measurable in a commercial farm. Manual BCS was performed by a trained expert and compared with the output of the training set. A regression model was developed, correlating the features with the manual BCS references. Data were acquired for 4 d, resulting in a database of 422 movies of 101 cows. Movies containing cows' back ends were automatically selected (389 movies). The data were divided into a training set of 81 cows and a test set of 20 cows; both sets included the identical full range of BCS classes. Accuracy tests gave a mean absolute error of 0.26, median absolute error of 0.19, and coefficient of determination of 0.75, with 100% correct classification within 1 step and 91% correct classification within a half step for BCS classes. 
Results indicated good repeatability, with all standard deviations under 0.33. The algorithm is independent of the background and requires 10 cows for training with approximately 30 movies of 4 s each. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
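    The accuracy figures quoted above (mean absolute error, median absolute error, and correct classification within one step and within a half step) can be computed for any prediction set with a few lines. The scores below are hypothetical, not the study's data:

    ```python
    import numpy as np

    def bcs_accuracy(y_true, y_pred):
        """Accuracy metrics of the kind reported for automatic BCS models."""
        err = np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float))
        return {
            "mae": err.mean(),
            "median_ae": np.median(err),
            "within_half_step": np.mean(err <= 0.5),  # fraction within 0.5 BCS units
            "within_one_step": np.mean(err <= 1.0),
        }

    # Hypothetical expert scores vs. model predictions for a small test set
    y_true = [2.75, 3.00, 3.25, 3.50, 2.50, 3.75]
    y_pred = [2.90, 3.10, 3.00, 3.60, 2.55, 3.20]
    print(bcs_accuracy(y_true, y_pred))
    ```

    The within-half-step and within-one-step fractions are the tolerance-based classification rates; the "step" sizes of 0.5 and 1.0 BCS units follow the abstract's reporting convention.
    
    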

  8. ImmunoRatio: a publicly available web application for quantitative image analysis of estrogen receptor (ER), progesterone receptor (PR), and Ki-67

    PubMed Central

    2010-01-01

    Introduction Accurate assessment of estrogen receptor (ER), progesterone receptor (PR), and Ki-67 is essential in the histopathologic diagnostics of breast cancer. Commercially available image analysis systems are usually bundled with dedicated analysis hardware and, to our knowledge, no easily installable, free software for immunostained slide scoring has been described. In this study, we describe a free, Internet-based web application for quantitative image analysis of ER, PR, and Ki-67 immunohistochemistry in breast cancer tissue sections. Methods The application, named ImmunoRatio, calculates the percentage of positively stained nuclear area (labeling index) by using a color deconvolution algorithm for separating the staining components (diaminobenzidine and hematoxylin) and adaptive thresholding for nuclear area segmentation. ImmunoRatio was calibrated using cell counts defined visually as the gold standard (training set, n = 50). Validation was done using a separate set of 50 ER, PR, and Ki-67 stained slides (test set, n = 50). In addition, Ki-67 labeling indexes determined by ImmunoRatio were studied for their prognostic value in a retrospective cohort of 123 breast cancer patients. Results The labeling indexes by calibrated ImmunoRatio analyses correlated well with those defined visually in the test set (correlation coefficient r = 0.98). Using the median Ki-67 labeling index (20%) as a cutoff, a hazard ratio of 2.2 was obtained in the survival analysis (n = 123, P = 0.01). ImmunoRatio was shown to adapt to various staining protocols, microscope setups, digital camera models, and image acquisition settings. The application can be used directly with web browsers running on modern operating systems (e.g., Microsoft Windows, Linux distributions, and Mac OS). No software downloads or installations are required. ImmunoRatio is open source software, and the web application is publicly accessible on our website. 
Conclusions We anticipate that free web applications, such as ImmunoRatio, will make the quantitative image analysis of ER, PR, and Ki-67 easy and straightforward in the diagnostic assessment of breast cancer specimens. PMID:20663194

  9. ImmunoRatio: a publicly available web application for quantitative image analysis of estrogen receptor (ER), progesterone receptor (PR), and Ki-67.

    PubMed

    Tuominen, Vilppu J; Ruotoistenmäki, Sanna; Viitanen, Arttu; Jumppanen, Mervi; Isola, Jorma

    2010-01-01

    Accurate assessment of estrogen receptor (ER), progesterone receptor (PR), and Ki-67 is essential in the histopathologic diagnostics of breast cancer. Commercially available image analysis systems are usually bundled with dedicated analysis hardware and, to our knowledge, no easily installable, free software for immunostained slide scoring has been described. In this study, we describe a free, Internet-based web application for quantitative image analysis of ER, PR, and Ki-67 immunohistochemistry in breast cancer tissue sections. The application, named ImmunoRatio, calculates the percentage of positively stained nuclear area (labeling index) by using a color deconvolution algorithm for separating the staining components (diaminobenzidine and hematoxylin) and adaptive thresholding for nuclear area segmentation. ImmunoRatio was calibrated using cell counts defined visually as the gold standard (training set, n = 50). Validation was done using a separate set of 50 ER, PR, and Ki-67 stained slides (test set, n = 50). In addition, Ki-67 labeling indexes determined by ImmunoRatio were studied for their prognostic value in a retrospective cohort of 123 breast cancer patients. The labeling indexes by calibrated ImmunoRatio analyses correlated well with those defined visually in the test set (correlation coefficient r = 0.98). Using the median Ki-67 labeling index (20%) as a cutoff, a hazard ratio of 2.2 was obtained in the survival analysis (n = 123, P = 0.01). ImmunoRatio was shown to adapt to various staining protocols, microscope setups, digital camera models, and image acquisition settings. The application can be used directly with web browsers running on modern operating systems (e.g., Microsoft Windows, Linux distributions, and Mac OS). No software downloads or installations are required. ImmunoRatio is open source software, and the web application is publicly accessible on our website. 
We anticipate that free web applications, such as ImmunoRatio, will make the quantitative image analysis of ER, PR, and Ki-67 easy and straightforward in the diagnostic assessment of breast cancer specimens.
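    The core labeling-index computation can be sketched on toy data. This stand-in assumes pre-separated stain channels and a fixed threshold; ImmunoRatio itself first unmixes real images into diaminobenzidine (DAB) and hematoxylin components by color deconvolution and segments nuclei with adaptive, not fixed, thresholding:

    ```python
    import numpy as np

    def labeling_index(dab, hematoxylin, threshold=0.5):
        """Percentage of total nuclear area that is positively (DAB) stained."""
        dab_area = dab > threshold              # positively stained nuclei
        hem_area = hematoxylin > threshold      # counterstained (negative) nuclei
        total_nuclear = np.logical_or(dab_area, hem_area).sum()
        if total_nuclear == 0:
            return 0.0
        return 100.0 * dab_area.sum() / total_nuclear

    # Synthetic 10x10 "channels": 30 DAB-positive pixels, 50 hematoxylin-only pixels
    dab = np.zeros((10, 10)); dab.flat[:30] = 1.0
    hem = np.zeros((10, 10)); hem.flat[30:80] = 1.0
    print(labeling_index(dab, hem))
    ```

    A cutoff on this index (the study uses the median Ki-67 index of 20%) is then what feeds the survival analysis.
    
    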

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Handsteiner, Johannes; Friedman, Andrew S.; Rauch, Dominik

    Bell’s theorem states that some predictions of quantum mechanics cannot be reproduced by a local-realist theory. That conflict is expressed by Bell’s inequality, which is usually derived under the assumption that there are no statistical correlations between the choices of measurement settings and anything else that can causally affect the measurement outcomes. In previous experiments, this “freedom of choice” was addressed by ensuring that selection of measurement settings via conventional “quantum random number generators” was spacelike separated from the entangled particle creation. This, however, left open the possibility that an unknown cause affected both the setting choices and measurement outcomes as recently as mere microseconds before each experimental trial. Here we report a new experimental test of Bell’s inequality that, for the first time, uses distant astronomical sources as “cosmic setting generators.” In our tests with polarization-entangled photons, measurement settings were chosen using real-time observations of Milky Way stars while simultaneously ensuring locality. Assuming fair sampling for all detected photons, and that each stellar photon’s color was set at emission, we observe statistically significant ≳7.31σ and ≳11.93σ violations of Bell’s inequality with estimated p values of ≲1.8 × 10⁻¹³ and ≲4.0 × 10⁻³³, respectively, thereby pushing back by ~600 years the most recent time by which any local-realist influences could have engineered the observed Bell violation.
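    The violation being tested is usually quantified with the CHSH parameter. A minimal sketch of the quantum-mechanical prediction for a singlet state, where the correlation at analyzer angles x and y is E(x, y) = −cos(x − y); this illustrates only the local-realist bound S ≤ 2 and its maximal quantum violation, not the cosmic setting-choice apparatus:

    ```python
    import numpy as np

    def chsh(a, a_prime, b, b_prime):
        """CHSH parameter S for a singlet state, with E(x, y) = -cos(x - y)."""
        E = lambda x, y: -np.cos(x - y)
        return abs(E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime))

    # Standard optimal analyzer angles (radians) give S = 2*sqrt(2) ~ 2.83,
    # exceeding the local-realist bound of 2 (Tsirelson's bound)
    S = chsh(0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4)
    print(S)
    ```

    The freedom-of-choice loophole concerns how the four angle settings are chosen trial by trial; the experiment above chooses them from the colors of photons emitted by distant stars centuries ago.
    
    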

  11. Impact of left ventricular assist device speed adjustment on exercise tolerance and markers of wall stress.

    PubMed

    Hayward, Christopher S; Salamonsen, Robert; Keogh, Anne M; Woodard, John; Ayre, Peter; Prichard, Roslyn; Kotlyar, Eugene; Macdonald, Peter S; Jansz, Paul; Spratt, Phillip

    2015-09-01

    Left ventricular assist devices are crucial in the rehabilitation of patients with end-stage heart failure. Whether cardiopulmonary function is enhanced with higher pump output is unknown. Ten patients (aged 39±16 years, mean±SD) underwent monitored adjustment of pump speed to determine the minimum safe low speed and maximum safe high speed at rest. Patients were then randomized to these speed settings and underwent three 6-minute walk tests (6MWT) and symptom-limited cardiopulmonary stress tests (CPX) on separate days. Pump speed settings (low, normal, and high) resulted in significantly different resting pump flows of 4.43±0.6, 5.03±0.94, and 5.72±1.2 l/min (P<.001). There was a significant enhancement of pump flows (greater at higher speed settings) with exercise (P<.05). Increased pump speed was associated with a trend toward increased 6MWT distance (P=.10) and CPX exercise time (P=.27). Maximum workload achieved and peak oxygen consumption were significantly different only when comparing low to high pump speed settings (P<.05). N-terminal pro-B-type natriuretic peptide release was significantly reduced at higher pump speed with exercise (P<.01). We found that alteration of the pump speed setting resulted in significant variation in estimated pump flow. The high-speed setting was associated with lower natriuretic hormone release, consistent with lower myocardial wall stress. This did not, however, improve exercise tolerance.

  12. Dirac equation in Kerr-NUT-(A)dS spacetimes: Intrinsic characterization of separability in all dimensions

    NASA Astrophysics Data System (ADS)

    Cariglia, Marco; Krtouš, Pavel; Kubizňák, David

    2011-07-01

    We intrinsically characterize separability of the Dirac equation in Kerr-NUT-(A)dS spacetimes in all dimensions. Namely, we explicitly demonstrate that, in such spacetimes, there exists a complete set of first-order mutually commuting operators, one of which is the Dirac operator, that allows for common eigenfunctions which can be found in a separated form and correspond precisely to the general solution of the Dirac equation found by Oota and Yasui [Phys. Lett. B 659, 688 (2008); 10.1016/j.physletb.2007.11.057]. Since all the operators in the set can be generated from the principal conformal Killing-Yano tensor, this establishes the (up-to-now) missing link among the existence of hidden symmetry, presence of a complete set of commuting operators, and separability of the Dirac equation in these spacetimes.

  13. Unsupervised classification of remote multispectral sensing data

    NASA Technical Reports Server (NTRS)

    Su, M. Y.

    1972-01-01

    The new unsupervised technique for classifying multispectral remote sensing data, which can come either from a multispectral scanner or from digitized color-separation aerial photographs, consists of two parts: (a) a sequential statistical clustering, which is a one-pass sequential variance analysis, and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. Applications of the technique using an IBM-7094 computer on multispectral data sets over Purdue's Flight Line C-1 and the Yellowstone National Park test site have been accomplished. Comparisons between the classification maps produced by the unsupervised technique and by the supervised maximum likelihood technique indicate that the classification accuracies are in agreement.
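
    The two-part composite clustering described above can be sketched as follows. The distance threshold, running-mean update, and iteration count are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def sequential_clusters(X, radius):
    """One-pass sequential clustering (part a): each sample joins the nearest
    existing cluster within `radius`, otherwise it seeds a new cluster."""
    centers, counts = [], []
    for x in X:
        if centers:
            d = np.linalg.norm(np.array(centers) - x, axis=1)
            j = int(d.argmin())
            if d[j] < radius:
                counts[j] += 1
                centers[j] += (x - centers[j]) / counts[j]  # running mean
                continue
        centers.append(x.astype(float).copy())
        counts.append(1)
    return np.array(centers)

def kmeans(X, centers, iters=20):
    """Generalized K-means refinement (part b) of the initial clusters."""
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        centers = np.array([X[labels == k].mean(axis=0) if (labels == k).any()
                            else centers[k] for k in range(len(centers))])
    return labels, centers
```

    Feeding the sequential pass's centers into K-means mirrors the paper's design: the one-pass stage fixes the number and rough location of clusters, and the iterative stage refines them.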

  14. Real-Time, Digital Pulse-Shape Discrimination in Non-Hazardous Fast Liquid Scintillation Detectors: Prospects for Safety and Security

    NASA Astrophysics Data System (ADS)

    Joyce, Malcolm J.; Aspinall, Michael D.; Cave, Francis D.; Lavietes, Anthony D.

    2012-08-01

    Pulse-shape discrimination (PSD) in fast, organic scintillation detectors is a long-established technique used to separate neutrons and γ rays in mixed radiation fields. In the analogue domain the method can achieve separation in real time, but all knowledge of the pulses themselves is lost thereby preventing the possibility of any post- or repeated analysis. Also, it is typically reliant on electronic systems that are largely obsolete and which require significant experience to set up. In the digital domain, PSD is often more flexible but significant post-processing has usually been necessary to obtain neutron/γ-ray separation. Moreover, the scintillation media on which the technique relies usually have a low flashpoint and are thus deemed hazardous. This complicates the ease with which they are used in industrial applications. In this paper, results obtained with a new portable digital pulse-shape discrimination instrument are described. This instrument provides real-time, digital neutron/γ-ray separation whilst preserving the synchronization with the time-of-arrival for each event, and realizing throughputs of 3 × 10⁶ events per second. Furthermore, this system has been tested with a scintillation medium that is non-flammable and not hazardous.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joyce, M. J.; Aspinall, M. D.; Cave, F. D.

    Pulse-shape discrimination (PSD) in fast, organic scintillation detectors is a long-established technique used to separate neutrons and γ rays in mixed radiation fields. In the analogue domain the method can achieve separation in real time, but all knowledge of the pulses themselves is lost, thereby preventing the possibility of any post- or repeated analysis. Also, it is typically reliant on electronic systems that are largely obsolete and which require significant experience to set up. In the digital domain, PSD is often more flexible, but significant post-processing has usually been necessary to obtain neutron/γ-ray separation. Moreover, the scintillation media on which the technique relies usually have a low flash point and are thus deemed hazardous. This complicates the ease with which they are used in industrial applications. In this paper, results obtained with a new portable digital pulse-shape discrimination instrument are described. This instrument provides real-time, digital neutron/γ separation whilst preserving the synchronization with the time-of-arrival for each event, and realizing throughputs of 3 × 10⁶ events per second. Furthermore, this system has been tested with a scintillation medium that is non-flammable and not hazardous.

  16. Fabrication and test of inorganic/organic separators. [for silver zinc batteries

    NASA Technical Reports Server (NTRS)

    Smatko, J. S.

    1974-01-01

    Testing and failure analysis of MDC 40 Ah silver zinc cells containing largely inorganic separators were completed. The results showed that the wet-stand and cycle-life objectives of the silver zinc cell development program were met. Two-plate cells employing the three best separators, selected on the basis of extensive screening tests, were built, tested, and subjected to failure analysis. The best separator material in these tests was doped calcium zirconate.

  17. Stereo particle image velocimetry set up for measurements in the wake of scaled wind turbines

    NASA Astrophysics Data System (ADS)

    Campanardi, Gabriele; Grassi, Donato; Zanotti, Alex; Nanos, Emmanouil M.; Campagnolo, Filippo; Croce, Alessandro; Bottasso, Carlo L.

    2017-08-01

    Stereo particle image velocimetry measurements were carried out in the boundary layer test section of the Politecnico di Milano large wind tunnel to survey the wake of a scaled wind turbine model designed and developed by Technische Universität München. The stereo PIV instrumentation was set up to survey the three velocity components on cross-flow planes at different longitudinal locations. The area of investigation covered the entire extent of the wind turbine's wake, which was scanned using two separate traversing systems, one for the laser and one for the cameras. This instrumentation set-up made it possible to rapidly obtain high-quality results suitable for characterising the behaviour of the flow field in the wake of the scaled wind turbine. Such data would be very useful for evaluating the performance of wind farm control methodologies based on wake redirection and for the validation of CFD tools.

  18. Testing Orion's Fairing Separation System

    NASA Technical Reports Server (NTRS)

    Martinez, Henry; Cloutier, Chris; Lemmon, Heber; Rakes, Daniel; Oldham, Joe; Schlagel, Keith

    2014-01-01

    Traditional fairing systems are designed to fully encapsulate and protect their payload from the harsh ascent environment including acoustic vibrations, aerodynamic forces and heating. The Orion fairing separation system performs this function and more by also sharing approximately half of the vehicle structural load during ascent. This load-share condition through launch and during jettison allows for a substantial increase in mass to orbit. A series of component-level development tests were completed to evaluate and characterize each component within Orion's unique fairing separation system. Two full-scale separation tests were performed to verify system-level functionality and provide verification data. This paper summarizes the fairing spring, Pyramidal Separation Mechanism and forward seal system component-level development tests, system-level separation tests, and lessons learned.

  19. Facial recognition using simulated prosthetic pixelized vision.

    PubMed

    Thompson, Robert W; Barnett, G David; Humayun, Mark S; Dagnelie, Gislin

    2003-11-01

    To evaluate a model of simulated pixelized prosthetic vision using noncontiguous circular phosphenes, and to test the effects of phosphene and grid parameters on facial recognition. A video headset was used to view a reference set of four faces, followed by a partially averted image of one of those faces viewed through a square pixelizing grid that contained 10 × 10 to 32 × 32 dots separated by gaps. The grid size, dot size, gap width, dot dropout rate, and gray-scale resolution were varied separately about a standard test condition, for a total of 16 conditions. All tests were first performed at 99% contrast and then repeated at 12.5% contrast. Discrimination speed and performance were influenced by all stimulus parameters. The subjects achieved highly significant facial recognition accuracy for all high-contrast tests except for grids with 70% random dot dropout and two gray levels. In low-contrast tests, significant facial recognition accuracy was achieved for all but the most adverse grid parameters: total grid area less than 17% of the target image, 70% dropout, four or fewer gray levels, and a gap of 40.5 arcmin. For difficult test conditions, a pronounced learning effect was noticed during high-contrast trials, and a more subtle practice effect on timing was evident during subsequent low-contrast trials. These findings suggest that reliable face recognition with crude pixelized grids can be learned and may be possible, even with a crude visual prosthesis.

  20. V/STOL Tandem Fan transition section model test. [in the Lewis Research Center 10-by-10 foot wind tunnel

    NASA Technical Reports Server (NTRS)

    Simpkin, W. E.

    1982-01-01

    An approximately 0.25-scale model of the transition section of a tandem fan variable cycle engine nacelle was tested in the NASA Lewis Research Center 10-by-10 foot wind tunnel. Two 12-inch, tip-turbine driven fans were used to simulate a tandem fan engine. Three testing modes simulated a V/STOL tandem fan airplane. Parallel mode has two separate propulsion streams for maximum low-speed performance: a front inlet, fan, and downward-vectorable nozzle form one stream, while an auxiliary top inlet provides air to the aft fan, supplying the core engine and aft vectorable nozzle. Closing the front nozzle and top inlet and removing a blocker door separating the two streams configures the tandem fan for series-mode operation as a typical aircraft propulsion system. Transition-mode operation is formed by intermediate settings of the front nozzle, blocker door, and top inlet. Emphasis was on the total pressure recovery and flow distortion at the aft fan face. A range of fan flow rates was tested at tunnel airspeeds from 0 to 240 knots and angles-of-attack from -10 to 40 deg for all three modes. In addition to the model variables for the three modes, variants of the top inlet were tested in the parallel mode only. These lip variables were aft-lip boundary layer bleed holes and a three-position turning vane. A bellmouth extension of the top inlet side lips was also tested in parallel mode.

  1. Pressure distributions from subsonic tests of an advanced laminar-flow-control wing with leading- and trailing-edge flaps

    NASA Technical Reports Server (NTRS)

    Applin, Zachary T.; Gentry, Garl L., Jr.

    1988-01-01

    An unswept, semispan wing model equipped with full-span leading- and trailing-edge flaps was tested in the Langley 14- by 22-Foot Subsonic Tunnel to determine the effect of high-lift components on the aerodynamics of an advanced laminar-flow-control (LFC) airfoil section. Chordwise pressure distributions near the midsemispan were measured for four configurations: cruise, trailing-edge flap only, and trailing-edge flap with a leading-edge Krueger flap of either 0.10 or 0.12 chord. Part 1 of this report (under separate cover) presents a representative sample of the plotted pressure distribution data for each configuration tested. Part 2 presents the entire set of plotted and tabulated pressure distribution data. The data are presented without analysis.

  2. Sequoia Messaging Rate Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedley, Andrew

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 × 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
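
    The rank-placement rule described above can be written out as a small helper. Names such as `rank_layout` are ours, not part of the benchmark:

```python
def rank_layout(num_cores, num_nbors):
    """Sketch of the benchmark's rank placement: the first num_cores ranks sit
    on the node under test; each core rank then gets num_nbors neighbor ranks,
    in consecutive blocks, on other nodes."""
    total = num_cores * (1 + num_nbors)
    core_ranks = list(range(num_cores))
    neighbors = {c: list(range(num_cores + c * num_nbors,
                               num_cores + (c + 1) * num_nbors))
                 for c in core_ranks}
    return total, core_ranks, neighbors
```

    For the example in the abstract, `rank_layout(8, 4)` gives 40 total ranks, with ranks 8-11 as the neighbors of core rank 0, ranks 12-15 as the neighbors of core rank 1, and so on.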

  3. Experimental test of the irreducible four-qubit Greenberger-Horne-Zeilinger paradox

    NASA Astrophysics Data System (ADS)

    Su, Zu-En; Tang, Wei-Dong; Wu, Dian; Cai, Xin-Dong; Yang, Tao; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Żukowski, Marek; Pan, Jian-Wei

    2017-03-01

    The paradox of Greenberger-Horne-Zeilinger (GHZ) disproves directly the concept of EPR elements of reality, based on the EPR correlations, in an all-versus-nothing way. A three-qubit experimental demonstration of the GHZ paradox was achieved nearly 20 years ago, followed by demonstrations for more qubits. Still, the GHZ contradictions underlying those tests can be reduced to a three-qubit one. We show an irreducible four-qubit GHZ paradox and report its experimental demonstration. The bound of a three-setting-per-party Bell-GHZ inequality is violated by 7σ. The fidelity of the GHZ state was around 81%, and an entanglement witness reveals a violation of the separability threshold by 19σ.

  4. Stagewise cognitive development: an application of catastrophe theory.

    PubMed

    van der Maas, H L; Molenaar, P C

    1992-07-01

    In this article an overview is given of traditional methodological approaches to stagewise cognitive developmental research. These approaches are evaluated and integrated on the basis of catastrophe theory. In particular, catastrophe theory specifies a set of common criteria for testing the discontinuity hypothesis proposed by Piaget. Separate criteria correspond to distinct methods used in cognitive developmental research. Such criteria are, for instance, the detection of spurts in development, bimodality of test scores, and increased variability of responses during transitional periods. When a genuine stage transition is present, these criteria are expected to be satisfied. A revised catastrophe model accommodating these criteria is proposed for the stage transition in cognitive development from the preoperational to the concrete operational stage.

  5. Nondestructive Detection and Quantification of Blueberry Bruising using Near-infrared (NIR) Hyperspectral Reflectance Imaging

    PubMed Central

    Jiang, Yu; Li, Changying; Takeda, Fumiomi

    2016-01-01

    Currently, blueberry bruising is evaluated by either human visual/tactile inspection or firmness measurement instruments. These methods are destructive, time-consuming, and subjective. The goal of this paper was to develop a non-destructive approach for blueberry bruising detection and quantification. Experiments were conducted on 300 samples of southern highbush blueberry (Camellia, Rebel, and Star) and on 1500 samples of northern highbush blueberry (Bluecrop, Jersey, and Liberty) for hyperspectral imaging analysis, firmness measurement, and human evaluation. An algorithm was developed to automatically calculate a bruise ratio index (ratio of bruised to whole fruit area) for bruise quantification. The spectra of bruised and healthy tissues were statistically separated and the separation was independent of cultivars. Support vector machine (SVM) classification of the spectra from the regions of interest (ROIs) achieved over 94%, 92%, and 96% accuracy on the training set, independent testing set, and combined set, respectively. The statistical results showed that the bruise ratio index was equivalent to the measured firmness but better than the predicted firmness in regard to effectiveness of bruise quantification, and the bruise ratio index had a strong correlation with human assessment (R² = 0.78–0.83). Therefore, the proposed approach and the bruise ratio index are effective to non-destructively detect and quantify blueberry bruising. PMID:27767050
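
    Given segmentation masks for the bruised region and the whole fruit, the bruise ratio index defined above is a simple area ratio. A minimal sketch; the hyperspectral classification that produces the masks in the paper is not reproduced here:

```python
import numpy as np

def bruise_ratio_index(bruise_mask, fruit_mask):
    """Bruise ratio index: bruised pixel area over whole-fruit pixel area,
    from boolean masks produced by any upstream segmentation step."""
    fruit = np.count_nonzero(fruit_mask)
    if fruit == 0:
        return 0.0  # no fruit detected in the frame
    bruised = np.count_nonzero(np.logical_and(bruise_mask, fruit_mask))
    return bruised / fruit
```

    An index of 0 means no detected bruising, and 1 means the entire fruit area is classified as bruised.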

  6. Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.

    PubMed

    Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward

    2006-08-01

    Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
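
    The idea of rate-distortion-optimal allocation across slices can be illustrated with the classic exponential distortion model, under which the optimum equalizes the distortion-rate slope of every slice. This is a textbook sketch under an assumed model D_i(R_i) = σ_i² · 2^(−2R_i), not the paper's algorithm, which works from measured rate-distortion behavior:

```python
import numpy as np

def allocate_bits(variances, total_rate):
    """Allocate a total bit budget across slices so every slice ends up at the
    same distortion-rate slope (minimum total MSE under the assumed model).
    Negative allocations are clipped by iteratively dropping those slices."""
    variances = np.asarray(variances, dtype=float)
    active = np.ones(len(variances), dtype=bool)
    rates = np.zeros(len(variances))
    while True:
        mean_rate = total_rate / active.sum()
        log_gm = np.mean(np.log2(variances[active]))  # log2 of geometric mean
        r = mean_rate + 0.5 * (np.log2(variances[active]) - log_gm)
        if (r >= 0).all():
            rates[active] = r
            return rates
        idx = np.flatnonzero(active)
        active[idx[r < 0]] = False  # slice too flat to deserve any bits
        rates[idx[r < 0]] = 0.0
```

    High-variance slices receive proportionally more of the budget; at the optimum, σ_i² · 2^(−2R_i) is equal for every slice that received a nonzero rate.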

  7. Hydrochemical analysis of groundwater using a tree-based model

    NASA Astrophysics Data System (ADS)

    Litaor, M. Iggy; Brielmann, H.; Reichmann, O.; Shenker, M.

    2010-06-01

    Hydrochemical indices are commonly used to ascertain aquifer characteristics, salinity problems, anthropogenic inputs and resource management, among others. This study was conducted to test the applicability of a binary decision tree model to aquifer evaluation using hydrochemical indices as input. The main advantage of the tree-based model compared to other commonly used statistical procedures such as cluster and factor analyses is the ability to classify groundwater samples with assigned probability and the reduction of a large data set into a few significant variables without creating new factors. We tested the model using data sets collected from headwater springs of the Jordan River, Israel. The model evaluation consisted of several levels of complexity, from simple separation between the calcium-magnesium-bicarbonate water type of karstic aquifers to the more challenging separation of calcium-sodium-bicarbonate water type flowing through perched and regional basaltic aquifers. In all cases, the model assigned measures for goodness of fit in the form of misclassification errors and singled out the most significant variable in the analysis. The model proceeded through a sequence of partitions providing insight into different possible pathways and changing lithology. The model results were extremely useful in constraining the interpretation of geological heterogeneity and constructing a conceptual flow model for a given aquifer. The tree model clearly identified the hydrochemical indices that were excluded from the analysis, thus providing information that can lead to a decrease in the number of routinely analyzed variables and a significant reduction in laboratory cost.
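
    The tree model's key behaviors noted above, singling out the most significant variable and reporting a misclassification error, can be illustrated by the first split a binary decision tree would make. This is a deliberately minimal stump for a binary class label, not the full tree-based model used in the study:

```python
import numpy as np

def best_split(X, y):
    """Find the single feature (e.g., hydrochemical index) and threshold that
    minimize misclassification error for binary labels y -- the root split a
    binary decision tree would choose under a misclassification criterion."""
    n, p = X.shape
    best = (None, None, n + 1)  # (feature index, threshold, error count)
    for j in range(p):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            for sign in (0, 1):  # try both label orientations
                pred = np.where(left, sign, 1 - sign)
                errors = int((pred != y).sum())
                if errors < best[2]:
                    best = (j, float(t), errors)
    return best
```

    A full tree repeats this search recursively on each partition, which is what produces the "sequence of partitions" the abstract describes.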

  8. Towards non- and minimally instrumented, microfluidics-based diagnostic devices

    PubMed Central

    Weigl, Bernhard; Domingo, Gonzalo; LaBarre, Paul; Gerlach, Jay

    2009-01-01

    In many health care settings, it is uneconomical, impractical, or unaffordable to maintain and access a fully equipped diagnostics laboratory. Examples include home health care, developing-country health care, and emergency situations in which first responders are dealing with pandemics or biowarfare agent release. In those settings, fully disposable diagnostic devices that require no instrument support, reagent, or significant training are well suited. Although the only such technology to have found widespread adoption so far is the immunochromatographic rapid assay strip test, microfluidics holds promise to expand the range of assay technologies that can be performed in formats similar to that of a strip test. In this paper, we review progress toward development of disposable, low-cost, easy-to-use microfluidics-based diagnostics that require no instrument at all. We also present examples of microfluidic functional elements—including mixers, separators, and detectors—as well as complete microfluidic devices that function entirely without any moving parts and external power sources. PMID:19023463

  9. Implementing standard setting into the Conjoint MAFP/FRACGP Part 1 examination - Process and issues.

    PubMed

    Chan, S C; Mohd Amin, S; Lee, T W

    2016-01-01

    The College of General Practitioners of Malaysia and the Royal Australian College of General Practitioners held the first Conjoint Member of the College of General Practitioners (MCGP)/Fellow of the Royal Australian College of General Practitioners (FRACGP) examination in 1982; it was later renamed the Conjoint MAFP/FRACGP examination. The examination assesses competency for safe independent general practice and for practice as family medicine specialists in Malaysia. A defensible, standard-set pass mark is therefore imperative to separate the competent from the incompetent. This paper discusses the process and issues encountered in implementing standard setting for the Conjoint Part 1 examination. Critical to success in standard setting were the judges' understanding of the modified Angoff method, the definition of the borderline candidate's characteristics, and the composition of the panel of judges. Difficulties were overcome by repeated hands-on training, provision of detailed guidelines and careful selection of judges. In December 2013, 16 judges successfully standard set the Conjoint Part 1 examination, with high inter-rater reliability: Cronbach's alpha coefficient 0.926 (Applied Knowledge Test) and 0.921 (Key Feature Problems).
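
    Both quantities mentioned above have simple closed forms. A sketch, assuming the judges' ratings are laid out as an items-by-judges matrix; the examination's actual data are of course not reproduced:

```python
import numpy as np

def angoff_cutoff(ratings):
    """Modified Angoff cut score: judges estimate, per item, the probability
    that a borderline candidate answers correctly; the pass mark is the mean
    over all judges and items (one standard formulation)."""
    return float(np.mean(ratings))

def cronbach_alpha(ratings):
    """Inter-rater reliability of the ratings matrix, treating judges as the
    'items' of the reliability analysis and exam items as the 'cases'."""
    R = np.asarray(ratings, dtype=float)        # shape: (exam items, judges)
    k = R.shape[1]                              # number of judges
    judge_vars = R.var(axis=0, ddof=1).sum()    # sum of per-judge variances
    total_var = R.sum(axis=1).var(ddof=1)       # variance of per-item totals
    return k / (k - 1) * (1 - judge_vars / total_var)
```

    Perfectly agreeing judges give an alpha of exactly 1; the 0.92+ values reported above indicate the panel's ratings were highly consistent.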

  10. Studies of dispersion energy in hydrogen-bonded systems. H2O-HOH, H2O-HF, H3N-HF, HF-HF

    NASA Astrophysics Data System (ADS)

    Szcześniak, M. M.; Scheiner, Steve

    1984-02-01

    Dispersion energy is calculated in the systems H2O-HOH, H2O-HF, H3N-HF, and HF-HF as a function of the intermolecular separation using a variety of methods. Møller-Plesset perturbation theory to second and third orders is applied in conjunction with polarized basis sets of 6-311G** type and with an extended basis set including a second set of polarization functions (DZ+2P). These results are compared to a multipole expansion of the dispersion energy, based on the Unsöld approximation, carried out to the inverse tenth power of the intermolecular distance. Pairwise evaluation is also carried out using both atom-atom and bond-bond formulations. The MP3/6-311G** results are in generally excellent accord with the leading R⁻⁶ term of the multipole expansion. This expansion, if carried out to the R⁻¹⁰ term, reproduces extremely well previously reported dispersion energies calculated via variation-perturbation theory. Little damping of the expansion is required for intermolecular distances equal to or greater than the equilibrium separation. Although the asymptotic behavior of the MP2 dispersion energy is somewhat different from that of the other methods, augmentation of the basis set by a second diffuse set of d functions leads to quite good agreement in the vicinity of the minima. Both the atom-atom and bond-bond parametrization schemes are in good qualitative agreement with the other methods tested. All approaches produce a similar dependence of the dispersion energy upon the angular orientation between the two molecules involved in the H bond.
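
    The multipole expansion referred to above, truncated at the inverse tenth power of the intermolecular separation R and with optional damping functions f₂ₙ(R) (which, per the results, can remain close to unity at and beyond the equilibrium separation), takes the standard form

```latex
E_{\mathrm{disp}}(R) \;\approx\; -\sum_{n=3}^{5} f_{2n}(R)\,\frac{C_{2n}}{R^{2n}}
  \;=\; -f_{6}(R)\,\frac{C_{6}}{R^{6}} \;-\; f_{8}(R)\,\frac{C_{8}}{R^{8}}
        \;-\; f_{10}(R)\,\frac{C_{10}}{R^{10}},
```

    where the dispersion coefficients C₂ₙ are specific to each dimer and relative orientation.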

  11. Comparative Evaluation of Tubex TF (Inhibition Magnetic Binding Immunoassay) for Typhoid Fever in Endemic Area.

    PubMed

    Khanna, Ashish; Khanna, Menka; Gill, Karamjit Singh

    2015-11-01

    Typhoid fever remains a significant health problem in endemic countries like India. Various serological tests for the diagnosis of typhoid fever are available commercially. We assessed the usefulness of a rapid test based on magnetic particle separation to detect immunoglobulin against Salmonella typhi O9 lipopolysaccharide. The aim of this study was to compare the sensitivity and specificity of the Widal test, Typhidot and Tubex TF test for the diagnosis of typhoid fever in an endemic country like India. Serum samples collected from 50 patients with typhoid fever, 50 patients with non-typhoid fever and 100 normal healthy individuals residing in Amritsar were subjected to the Widal, Typhidot and Tubex TF tests as per the manufacturers' instructions. The data collected were assessed to find the sensitivity and specificity of these tests in an endemic area. Significant Widal test results were found in 68% of patients with typhoid fever and only 4% of non-typhoid fever patients. Typhidot (IgM or IgG) was positive in 72% of typhoid fever patients, and in 10% and 6% of non-typhoid fever patients and normal healthy individuals, respectively. Tubex TF showed a higher sensitivity of 76% and a specificity of 96-99%, which was higher than Typhidot and comparable to the Widal test. This was the first evaluation of the rapid Tubex TF test in northern India. In countries which can afford the high cost of the test, Tubex TF should be recommended for diagnosis in the acute stage of the disease in the clinical setting. However, there is an urgent need for a highly specific and sensitive test for the diagnosis of typhoid fever in clinical settings in endemic areas.
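
    Sensitivity and specificity as reported above come from a standard 2×2 diagnostic table. The counts below are illustrative back-calculations from the reported percentages (76% of 50 typhoid patients positive; roughly 96% of the 150 non-typhoid subjects negative), not the study's raw data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Diagnostic accuracy from a 2x2 table: sensitivity is the fraction of
    true cases detected, specificity the fraction of non-cases ruled out."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts consistent with the reported Tubex TF figures.
sens, spec = sensitivity_specificity(tp=38, fn=12, tn=144, fp=6)
# sens == 0.76, spec == 0.96
```

    The same function applied to the Widal and Typhidot counts would reproduce the head-to-head comparison the study reports.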

  12. An integrative typology of personality assessment for aggression: implications for predicting counterproductive workplace behavior.

    PubMed

    Bing, Mark N; Stewart, Susan M; Davison, H Kristl; Green, Philip D; McIntyre, Michael D; James, Lawrence R

    2007-05-01

    This study presents an integrative typology of personality assessment for aggression. In this typology, self-report and conditional reasoning (L. R. James, 1998) methodologies are used to assess 2 separate, yet often congruent, components of aggressive personalities. Specifically, self-report is used to assess explicit components of aggressive tendencies, such as self-perceived aggression, whereas conditional reasoning is used to assess implicit components, in particular, the unconscious biases in reasoning that are used to justify aggressive acts. These 2 separate components are then integrated to form a new theoretical typology of personality assessment for aggression. Empirical tests of the typology were subsequently conducted using data gathered across 3 samples in laboratory and field settings and reveal that explicit and implicit components of aggression can interact in the prediction of counterproductive, deviant, and prosocial behaviors. These empirical tests also reveal that when either the self-report or conditional reasoning methodology is used in isolation, the resulting assessment of aggression may be incomplete. Implications for personnel selection, team composition, and executive coaching are discussed.

  13. Kinetic rate constant prediction supports the conformational selection mechanism of protein binding.

    PubMed

    Moal, Iain H; Bates, Paul A

    2012-01-01

    The prediction of protein-protein kinetic rate constants provides a fundamental test of our understanding of molecular recognition, and will play an important role in the modeling of complex biological systems. In this paper, a feature selection and regression algorithm is applied to mine a large set of molecular descriptors and construct simple models for association and dissociation rate constants using empirical data. Using separate test data for validation, the predicted rate constants can be combined to calculate binding affinity with accuracy matching that of state of the art empirical free energy functions. The models show that the rate of association is linearly related to the proportion of unbound proteins in the bound conformational ensemble relative to the unbound conformational ensemble, indicating that the binding partners must adopt a geometry near to that of the bound prior to binding. Mirroring the conformational selection and population shift mechanism of protein binding, the models provide a strong separate line of evidence for the preponderance of this mechanism in protein-protein binding, complementing structural and theoretical studies.
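
    The step of combining predicted association and dissociation rate constants into a binding affinity is standard thermodynamics; a sketch (the paper's descriptor-based rate models themselves are not reproduced here):

```python
import math

def binding_affinity(k_on, k_off, temperature=298.15):
    """Combine an association rate constant (M^-1 s^-1) and a dissociation
    rate constant (s^-1) into the equilibrium dissociation constant Kd (M)
    and the standard-state binding free energy (kJ/mol, vs 1 M)."""
    R = 8.314462618e-3           # gas constant in kJ/(mol*K)
    kd = k_off / k_on            # Kd = k_off / k_on
    dg = R * temperature * math.log(kd)  # dG = RT ln(Kd / 1 M)
    return kd, dg
```

    For example, k_on = 10⁶ M⁻¹s⁻¹ with k_off = 10⁻² s⁻¹ gives Kd = 10⁻⁸ M and a binding free energy of roughly −46 kJ/mol at 298 K.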

  14. Evaluation of a panel of 28 biomarkers for the non-invasive diagnosis of endometriosis.

    PubMed

    Vodolazkaia, A; El-Aalamat, Y; Popovic, D; Mihalyi, A; Bossuyt, X; Kyama, C M; Fassbender, A; Bokor, A; Schols, D; Huskens, D; Meuleman, C; Peeraer, K; Tomassetti, C; Gevaert, O; Waelkens, E; Kasran, A; De Moor, B; D'Hooghe, T M

    2012-09-01

At present, the only way to conclusively diagnose endometriosis is laparoscopic inspection, preferably with histological confirmation. This contributes to the delay in diagnosing endometriosis, which is 6-11 years. So far, non-invasive diagnostic approaches such as ultrasound (US), MRI and blood tests do not have sufficient diagnostic power. Our aim was to develop and validate a non-invasive diagnostic test with a high sensitivity (80% or more) for symptomatic endometriosis patients without US evidence of endometriosis, since this is the group most in need of a non-invasive test. A total of 28 inflammatory and non-inflammatory plasma biomarkers were measured in 353 EDTA plasma samples collected at surgery from 121 controls without endometriosis at laparoscopy and from 232 women with endometriosis (minimal-mild n = 148; moderate-severe n = 84), including 175 women without preoperative US evidence of endometriosis. Surgery was done during the menstrual (n = 83), follicular (n = 135) and luteal (n = 135) phases of the menstrual cycle. For analysis, the data were randomly divided into an independent training (n = 235) and a test (n = 118) data set. Statistical analysis was done using univariate and multivariate (logistic regression and least squares support vector machine (LS-SVM)) approaches in the training and test data sets separately to validate our findings. In the training set, two models of four biomarkers (Model 1: annexin V, VEGF, CA-125 and glycodelin; Model 2: annexin V, VEGF, CA-125 and sICAM-1), analysed in plasma obtained during the menstrual phase, could predict US-negative endometriosis with a high sensitivity (81-90%) and an acceptable specificity (68-81%). The same two models predicted US-negative endometriosis in the independent validation test set with a high sensitivity (82%) and an acceptable specificity (63-75%). 
In plasma samples obtained during menstruation, multivariate analysis of four biomarkers (annexin V, VEGF, CA-125 and sICAM-1 or glycodelin) enabled the diagnosis of endometriosis undetectable by US with a sensitivity of 81-90% and a specificity of 63-81% in the independent training and test data sets. The next step is to apply these models for preoperative prediction of endometriosis in an independent set of patients with infertility and/or pain without US evidence of endometriosis, scheduled for laparoscopy.
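The sensitivity and specificity figures reported above can be illustrated with a minimal sketch of how such figures are computed from binary predictions; the labels below are hypothetical, not the study's data:

```python
def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels (1 = disease present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative labels only (1 = endometriosis), not the study's data:
truth = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
pred  = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1]
sens, spec = sensitivity_specificity(truth, pred)  # 4/5 and 3/5 here
```

In the study's terms, sensitivity is the fraction of US-negative endometriosis cases a model flags, and specificity is the fraction of controls it correctly leaves unflagged.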

  15. Evaluation of prototype air/fluid separator for Space Station Freedom Health Maintenance Facility

    NASA Technical Reports Server (NTRS)

    Billica, Roger; Smith, Maureen; Murphy, Linda; Kizzee, Victor D.

    1991-01-01

A prototype air/fluid separator suction apparatus, proposed as a possible design for use with the Health Maintenance Facility aboard Space Station Freedom (SSF), was evaluated. A KC-135 parabolic flight test was performed for this purpose. The flights followed the standard 40-parabola profile with 20 to 25 seconds of near-zero gravity in each parabola. A protocol was prepared to evaluate the prototype device in several regulator modes (i.e., suction forces), using three fluids of varying viscosity and either continuous or intermittent suction. It was felt that a matrixed approach would best approximate the range of utilization anticipated for medical suction on SSF. The protocols were first performed in one gravity in a laboratory setting to familiarize the team with procedures and techniques. Identical steps were then performed aboard the KC-135 during parabolic flight.

  16. Investigating the effect of traditional Persian music on ECG signals in young women using wavelet transform and neural networks.

    PubMed

    Abedi, Behzad; Abbasi, Ataollah; Goshvarpour, Atefeh

    2017-05-01

In the past few decades, several studies have reported the physiological effects of listening to music; these effects differ across music types and listeners. In the present study, we aimed to examine the effects of listening to traditional Persian music on electrocardiogram (ECG) signals in young women. Twenty-two healthy females participated in this study. ECG signals were recorded under two conditions: rest and music. For each ECG signal, 20 morphological and wavelet-based features were selected. Artificial neural network (ANN) and probabilistic neural network (PNN) classifiers were used to classify ECG signals recorded during and before listening to music. The collected data were separated into two data sets: train and test. Classification accuracies of 88% and 97% were achieved on the train data set using ANN and PNN, respectively. The test data set was then employed to evaluate the classifiers, and classification rates of 84% and 93% were obtained using ANN and PNN, respectively. The present study investigated the effect of music on ECG signals based on wavelet transform and morphological features. The results obtained here can provide researchers with a good understanding of the effects of music on ECG signals.
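As a rough sketch of the kind of wavelet-based feature used in such studies, a one-level Haar transform (an assumption; the abstract does not name the wavelet family) splits a signal into approximation and detail sub-bands whose energies can serve as features:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail) coefficients; len(signal) must be even."""
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

# Toy samples standing in for an ECG segment:
samples = [0.0, 0.1, 0.9, 0.8, 0.2, 0.1, 0.0, 0.0]
a, d = haar_dwt(samples)
# Simple wavelet-based features: energy of each sub-band.
approx_energy = sum(x * x for x in a)
detail_energy = sum(x * x for x in d)
```

Because the Haar transform is orthonormal, the two sub-band energies sum to the energy of the original segment, so the features partition the signal's energy between slow and fast variation.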

  17. Coordinated platooning with multiple speeds

    DOE PAGES

    Luo, Fengqiao; Larson, Jeffrey; Munson, Todd

    2018-03-22

In a platoon, vehicles travel one after another with small intervehicle distances; trailing vehicles in a platoon save fuel because they experience less aerodynamic drag. This work presents a coordinated platooning model with multiple speed options that integrates scheduling, routing, speed selection, and platoon formation/dissolution in a mixed-integer linear program that minimizes the total fuel consumed by a set of vehicles while traveling between their respective origins and destinations. The performance of this model is numerically tested on a grid network and the Chicago-area highway network. We find that the fuel-savings factor of a multivehicle system significantly depends on the time each vehicle is allowed to stay in the network; this time affects vehicles’ available speed choices, possible routes, and the amount of time for coordinating platoon formation. For problem instances with a large number of vehicles, we propose and test a heuristic decomposed approach that applies a clustering algorithm to partition the set of vehicles and then routes each group separately. When the set of vehicles is large and the available computational time is small, the decomposed approach finds significantly better solutions than does the full model.
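The decomposed approach can be sketched in miniature. The clustering criterion below (grouping vehicles whose departure times fall within a gap threshold) is a hypothetical stand-in, since the abstract does not specify the clustering algorithm used:

```python
def cluster_by_departure(vehicles, gap=15):
    """Greedy 1-D clustering: vehicles sorted by departure time are grouped
    whenever consecutive departures are within `gap` minutes.
    `vehicles` is a list of (vehicle_id, departure_minute) tuples."""
    groups, current = [], []
    for vid, dep in sorted(vehicles, key=lambda v: v[1]):
        if current and dep - current[-1][1] > gap:
            groups.append(current)
            current = []
        current.append((vid, dep))
    if current:
        groups.append(current)
    return groups

# Hypothetical fleet; each resulting group would then be routed by a
# separate, much smaller MILP instance instead of one full-fleet model.
fleet = [("a", 0), ("b", 5), ("c", 40), ("d", 42), ("e", 90)]
groups = cluster_by_departure(fleet)
```

The design intuition is that vehicles departing far apart in time rarely platoon anyway, so partitioning sacrifices little fuel saving while shrinking each optimization problem.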

  19. Evaluation of ground calcite/water heavy media cyclone suspensions for production of residual plastic concentrates.

    PubMed

    Gent, Malcolm; Sierra, Héctor Muñiz; Menéndez, Mario; de Cos Juez, Francisco Javier

    2018-01-01

Viable recycled residual plastic (RP) products must be of sufficient quality to be reusable as a plastic or as a source of hydrocarbons or fuel. The varied composition and large volumes of such wastes usually require low-cost, high-throughput recycling methods to eliminate contaminants. Cyclone separation of plastics by density is proposed as a potential method of achieving separations of specific types of plastics. Three ground calcite separation media of different grain size distributions were tested in a cylindrical cyclone to evaluate density separations at 1.09, 1.18 and 1.27 g/cm³. The differences in separation recoveries obtained with these media, caused by density offsets produced as centrifugal settling displaces media particles within the cyclone, are evaluated. The separation density at which 50% of the material of that density is recovered was found to increase from 0.010 to 0.026 g/cm³ as the separation media density increased from 1.09 to 1.27 g/cm³. All separation media were found to have significantly low Ep95 values of 0.012-0.033 g/cm³. It is also demonstrated that an excess content (>75%) of <10 µm calcite media particles resulted in reduced separation efficiencies. Optimum separations were achieved when the media density offset was 0.03-0.04 g/cm³. It is shown that effective heavy media cyclone separations of RP denser than 1.0 g/cm³ can produce three sets of mixed plastics: PS and ABS/SAN at densities of >1.0-1.09 g/cm³; PC and PMMA at densities of 1.09-1.18 g/cm³; and PVC and PET at densities of >1.27 g/cm³. Copyright © 2017 Elsevier Ltd. All rights reserved.
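The cut density and probable error quoted above can be illustrated with a small sketch. The partition data are hypothetical, and the conventional Ep = (ρ75 − ρ25)/2 definition is assumed here (the abstract's Ep95 is a wider-span variant over the 95-5% recovery points):

```python
def interp_density(points, target):
    """Density at which `target` % of feed reports to the dense product,
    by linear interpolation on a partition curve given as
    (density g/cm3, recovery %) pairs sorted by density."""
    for (d1, r1), (d2, r2) in zip(points, points[1:]):
        if r1 <= target <= r2:
            return d1 + (target - r1) * (d2 - d1) / (r2 - r1)
    raise ValueError("target recovery outside curve")

# Hypothetical partition data, not the paper's measurements:
curve = [(1.05, 5), (1.08, 25), (1.10, 50), (1.12, 75), (1.15, 95)]
d50 = interp_density(curve, 50)  # cut density (50% recovery point)
ep = (interp_density(curve, 75) - interp_density(curve, 25)) / 2  # probable error
```

A sharper separation gives a steeper partition curve and hence a smaller Ep; values of 0.012-0.033 g/cm³, as reported, indicate quite sharp density cuts.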

  20. A Low Dose Caffeine and Carbohydrate Supplement does not Improve Athletic Performance during Volleyball Competition

    PubMed Central

    PFEIFER, DAVID R.; ARVIN, KELSEY M.; HERSCHBERGER, COURTNEY N.; HAYNES, NICHOLAS J.; RENFROW, MATTHEW S.

    2017-01-01

Dietary supplements are widely used to enhance sport performance, and the combination of carbohydrate and caffeine (CHO+CAF) has yielded particularly high performance gains. Though the effects of a CHO+CAF supplement have been studied in a laboratory environment, little research exists on the effects of supplementation during competition. Therefore, the purpose of this study was to determine the effects of a CHO+CAF supplement on athletic performance in competition. Eight female collegiate volleyball players completed three testing sessions under three different conditions separated by approximately one week each: CHO+CAF supplement, placebo (PBO), and control (CTL), using a randomized cross-over design. Blood glucose (BG) was assessed prior to supplementation and immediately after set three. The supplement and PBO were administered prior to play and between sets two and three. Following three sets of play, three performance tests were completed: vertical jump (VJ), agility (AGL), and repeated 30-m sprint ability (RSA). While CHO+CAF supplementation significantly increased BG, the performance test results did not differ (p > .05) among the testing conditions. These findings suggest that the amount of supplement used in this study is not beneficial to VJ, AGL, and RSA in female volleyball players. As these performance tests were largely anaerobic and non-glycolytic in nature, the ergogenic potential of the supplement may have been underutilized. Additionally, coaches and athletes should be aware not only of which ingredients are in the supplements they choose but also of the amounts of those ingredients, as these may modify the supplement's efficacy in impacting performance. PMID:28515832

  1. Booster Separation Motor (BSM) Test Fire

    NASA Technical Reports Server (NTRS)

    2007-01-01

This photograph depicts a hot fire test of the Shuttle Booster Separation Motor (BSM) at the Marshall Space Flight Center (MSFC) test stand 116. The objective of the test was to evaluate the aft heat seal in flight configuration. The function of the motor is to separate the Shuttle vehicle from the boosters that carry it into space.

  2. Comparative Evaluation of Physical Surface Changes and Incidence of Separation in Rotary Nickel-Titanium Instruments: An in Vitro SEM Study

    PubMed Central

    Kaul, Rudra; Farooq, Riyaz; Kaul, Vibhuti; Khateeb, Shafayat Ullah; Purra, Aamir Rashid; Mahajan, Roopali

    2014-01-01

Introduction: The aim of the present study was to comparatively evaluate the physical surface changes and incidence of separation in rotary nickel-titanium (NiTi) instruments using scanning electron microscopy (SEM). Methods and Materials: A total of 210 freshly extracted human maxillary and mandibular first molars were selected and distributed among three groups. Three different rotary NiTi instrument systems, namely ProFile (PF), RaCe (RC) and Twisted File (TF), were used to prepare the canals using the crown-down technique. All instruments were evaluated by SEM at 500× and 1500× magnification at four different stages: before use, after preparation of 7 canals, after preparation of 14 canals, and after instrument separation. Photomicrographs were also taken. The data were analyzed using the Kruskal-Wallis test with the level of significance set at 0.001; the Kruskal-Wallis statistic H was 15.316 with 2 degrees of freedom. The groups were also compared using the Student-Newman-Keuls test (P<0.05), and all groups were found to be significantly different. Results: RC showed the maximum surface wear, followed by TF (P<0.05). PF showed the minimum wear except at its tip. There was no correlation between electropolishing and file fracture. No significant difference was observed in the mean number of canals shaped by PF and TF before their separation. Conclusion: Clinically, TF performance was superior, followed by PF and then RC. The RC fracture rate was the greatest after preparing the fewest canals. PMID:25031595
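For reference, the Kruskal-Wallis H statistic reported above (2 degrees of freedom for three groups) is computed from mid-ranks of the pooled sample; a minimal sketch with toy scores, not the study's wear data:

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (ties get mid-ranks; the small
    tie-correction factor is omitted for brevity)."""
    pooled = sorted(x for g in groups for x in g)
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    n = len(pooled)
    s = sum(sum(rank[x] for x in g) ** 2 / len(g) for g in groups)
    return 12 * s / (n * (n + 1)) - 3 * (n + 1)

# Toy scores for three instrument groups:
h = kruskal_wallis_h([1, 2, 3], [4, 5, 6], [7, 8, 9])  # → 7.2
```

H is then compared against a chi-squared distribution with k − 1 degrees of freedom (here 2, matching the abstract).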

  3. Design of Phoneme MIDI Codes Using the MIDI Encoding Tool “Auto-F” and Realizing Voice Synthesizing Functions Based on Musical Sounds

    NASA Astrophysics Data System (ADS)

    Modegi, Toshio

Using our previously developed audio-to-MIDI code converter tool “Auto-F”, we can create MIDI data from vocal acoustic signals that enable playback of voice-like signals on a standard MIDI synthesizer. Applying this tool, we are constructing a MIDI database consisting of simple harmonic-structured MIDI codes converted from recordings of 71 Japanese syllables spoken by male and female speakers. We are also developing a novel voice synthesizing system based on harmonically synthesizing musical sounds, which can generate MIDI data and play back voice signals on a MIDI synthesizer from Japanese plain (kana) text, referring to the syllable MIDI code database. In this paper, we propose an improved MIDI converter tool that can produce temporally higher-resolution MIDI codes. We then propose an algorithm that separates a set of 20 consonant and vowel phoneme MIDI codes from the 71 converted syllable MIDI codes in order to construct a voice synthesizing system. Finally, we present listening-test results on 4-syllable words comparing the voice synthesis quality of the separated phoneme MIDI codes with that of the original syllable MIDI codes.

  4. 46 CFR 162.050-20 - Separator and bilge alarm test fluids.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 6 2010-10-01 2010-10-01 false Separator and bilge alarm test fluids. 162.050-20... § 162.050-20 Separator and bilge alarm test fluids. (a) Tests required in §§ 162.050-23 and 162.050-35 must be performed using the following three types of test fluids: (1) Test Fluid A, which is a marine...

  5. Global discrimination of land cover types from metrics derived from AVHRR pathfinder data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeFries, R.; Hansen, M.; Townshend, J.

    1995-12-01

Global data sets of land cover are a significant requirement for global biogeochemical and climate models. Remotely sensed satellite data are an increasingly attractive source for deriving these data sets due to their internal consistency, reproducibility, and coverage in locations where ground knowledge is sparse. Seasonal changes in the greenness of vegetation, described in remotely sensed data as changes in the normalized difference vegetation index (NDVI) throughout the year, have been the basis for discriminating between cover types in previous attempts to derive land cover from AVHRR data at global and continental scales. This study examines the use of metrics derived from the NDVI temporal profile, as well as metrics derived from observations in red, infrared, and thermal bands, to improve discrimination between 12 cover types on a global scale. According to separability measures calculated from Bhattacharyya distances, average separabilities improved by using 12 of the 16 metrics tested (1.97) compared to separabilities using 12 monthly NDVI values alone (1.88). Overall, the most robust metrics for discriminating between cover types were: mean NDVI, maximum NDVI, NDVI amplitude, AVHRR Band 2 (near-infrared reflectance) and Band 1 (red reflectance) corresponding to the time of maximum NDVI, and maximum land surface temperature. Deciduous and evergreen vegetation can be distinguished by mean NDVI, maximum NDVI, NDVI amplitude, and maximum land surface temperature. Needleleaf and broadleaf vegetation can be distinguished by either mean NDVI and NDVI amplitude or maximum NDVI and NDVI amplitude.
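The separability measure can be sketched in one dimension. Assuming Gaussian class models per metric (the Gaussian parameters below are hypothetical, not the study's statistics), the Bhattacharyya distance between two classes is:

```python
import math

def bhattacharyya_1d(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate normal class models:
    larger distance means the two cover types overlap less on this metric."""
    return (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
            + 0.5 * math.log((var1 + var2) / (2 * math.sqrt(var1 * var2))))

# Hypothetical mean-NDVI distributions for two cover types:
d = bhattacharyya_1d(0.6, 0.01, 0.3, 0.02)
```

Identical class distributions give a distance of zero; the separability scores quoted in the abstract (1.88 vs. 1.97) are averages of such distances over class pairs and metrics.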

  6. Application of the amniotic fluid metabolome to the study of fetal malformations, using Down syndrome as a specific model.

    PubMed

    Huang, Jun; Mo, Jinhua; Zhao, Guili; Lin, Qiyin; Wei, Guanhui; Deng, Weinan; Chen, Dunjin; Yu, Bolan

    2017-11-01

Although monitoring and diagnosis of fetal diseases in utero remains a challenge, metabolomics may provide an additional tool to study the etiology and pathophysiology of fetal diseases at a functional level. In order to explore specific markers of fetal disease, metabolites were analyzed in two separate sets of experiments using amniotic fluid from fetuses with Down syndrome (DS) as a model. Both sets included 10-15 pairs of controls and cases, and amniotic fluid samples were processed separately; metabolomic fingerprinting was then conducted using UPLC-MS. Significantly altered metabolites involved in the respective metabolic pathways were compared between the two experimental sets. In addition, significantly altered metabolic pathways were further compared with the genomic characteristics of the DS fetuses. The data suggested that metabolic profiles varied across experiments; however, alterations in four metabolic pathways (porphyrin metabolism, bile acid metabolism, hormone metabolism and amino acid metabolism) were validated in both experimental sets. Significant changes in the metabolites coproporphyrin III, glycocholic acid, taurochenodeoxycholate, taurocholate, hydrocortisone, pregnenolone sulfate, L-histidine, L-arginine, L-glutamate and L-glutamine were further confirmed. Analysis of these metabolic alterations was linked to aberrant gene expression on chromosome 21 of the DS fetus. The decrease in coproporphyrin III in the DS fetus may portend abnormal erythropoiesis, and the unbalanced glutamine-glutamate concentration was observed to be closely associated with abnormal brain development in the DS fetus. Therefore, alterations in amniotic fluid metabolites may provide important clues to understanding the etiology of fetal disease and help develop diagnostic tests for clinical applications.

  7. Methodology, status and plans for development and assessment of the code ATHLET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teschendorff, V.; Austregesilo, H.; Lerchl, G.

    1997-07-01

The thermal-hydraulic computer code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is being developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) for the analysis of anticipated and abnormal plant transients, small and intermediate leaks as well as large breaks in light water reactors. The aim of the code development is to cover the whole spectrum of design basis and beyond design basis accidents (without core degradation) for PWRs and BWRs with only one code. The main code features are: advanced thermal-hydraulics; modular code architecture; separation between physical models and numerical methods; pre- and post-processing tools; portability. The code has features that are of special interest for applications to small leaks and transients with accident management, e.g. initialization by a steady-state calculation, full-range drift-flux model, dynamic mixture level tracking. The General Control Simulation Module of ATHLET is a flexible tool for the simulation of the balance-of-plant and control systems including the various operator actions in the course of accident sequences with AM measures. The code development is accompanied by a systematic and comprehensive validation program. A large number of integral experiments and separate effect tests, including the major International Standard Problems, have been calculated by GRS and by independent organizations. The ATHLET validation matrix is a well balanced set of integral and separate effects tests derived from the CSNI proposal emphasizing, however, the German combined ECC injection system which was investigated in the UPTF, PKL and LOBI test facilities.

  8. Plant growth using EMCS hardware on the ISS.

    PubMed

    Iversen, Tor-Henning; Fossum, Knut R; Svare, Hakon; Johnsson, Anders; Schiller, Peter

    2002-07-01

Under separate contracts with ESA (the FUMO and ERM studies), and as a step in developing the functionality and biocompatibility of the European Modular Cultivation System (EMCS), plant studies have been performed at the Plant Biocentre in Trondheim, Norway. The main goal was to test whether breadboards containing the major components planned for use in the EMCS would be suitable for space experiments with plant material. The test plans and experimental set-up for verifying biocompatibility and biological functionality included a few model plant species, including cress (Lepidium sativum L.) and Arabidopsis thaliana. The plants were tested at different developmental levels of morphological and physiological complexity, covering illumination, life support, humidity control, water supply, observation, short- and long-term plant growth experiments, and contamination prevention. Results from the tests show that the EMCS concept is suitable for long-duration plant growth on the ISS.

  9. Simple Approaches to Minimally-Instrumented, Microfluidic-Based Point-of-Care Nucleic Acid Amplification Tests

    PubMed Central

    Mauk, Michael G.; Song, Jinzhao; Liu, Changchun; Bau, Haim H.

    2018-01-01

Designs and applications of microfluidics-based devices for molecular diagnostics (Nucleic Acid Amplification Tests, NAATs) in infectious disease testing are reviewed, with emphasis on minimally instrumented, point-of-care (POC) tests for resource-limited settings. Microfluidic cartridges (‘chips’) that combine solid-phase nucleic acid extraction; isothermal enzymatic nucleic acid amplification; pre-stored, paraffin-encapsulated lyophilized reagents; and real-time or endpoint optical detection are described. These chips can be used with a companion module that separates plasma from blood through a combined sedimentation-filtration effect. Three reporter types (fluorescence, colorimetric dyes, and bioluminescence) and a new paradigm for endpoint detection based on a diffusion-reaction column are compared. Multiplexing (parallel amplification and detection of multiple targets) is demonstrated. Low-cost detection and added functionality (data analysis, control, communication) can be realized by pairing the chip with a cellphone platform. Some related and similar-purposed approaches by others are surveyed. PMID:29495424

  10. Pharmacophore Based Virtual Screening Approach to Identify Selective PDE4B Inhibitors

    PubMed Central

    Gaurav, Anand; Gautam, Vertika

    2017-01-01

Phosphodiesterase 4 (PDE4) has been established as a promising target in asthma and chronic obstructive pulmonary disease. PDE4B subtype-selective inhibitors are known to reduce the dose-limiting adverse effects associated with non-selective PDE4 inhibitors. This makes the development of PDE4B subtype-selective inhibitors a desirable research goal. To achieve this goal, a ligand-based pharmacophore modeling approach was employed. Separate pharmacophore hypotheses for PDE4B and PDE4D inhibitors were generated using the HypoGen algorithm and 106 PDE4 inhibitors from the literature having thiopyrano[3,2-d]pyrimidine, 2-arylpyrimidine, and triazine skeletons. Suitable training and test sets were created as per the guidelines for the HypoGen program. The training set was used for hypothesis development, while the test set was used for validation. Fisher validation was also used to test the significance of the developed hypotheses. The validated pharmacophore hypotheses for PDE4B and PDE4D inhibitors were used in sequential virtual screening of the ZINC database of drug-like molecules to identify selective PDE4B inhibitors. The hits were screened by their estimated activity and fit value. The top hit was docked into the active sites of PDE4B and PDE4D to confirm its selectivity for PDE4B. The hits are proposed for further evaluation using in vitro assays. PMID:29201082

  11. A one-stage, high-load capacity separation actuator using anti-friction rollers and redundant shape memory alloy wires.

    PubMed

    Xiaojun, Yan; Dawei, Huang; Xiaoyong, Zhang; Ying, Liu; Qiaolong, Yang

    2015-12-01

This paper proposes an SMA (shape memory alloy) wire-based separation actuator with high load capacity and a simple structure. The novel actuator is based on a one-stage locking mechanism, meaning that separation is driven directly by the SMA wire. To release a large preload, a group of anti-friction rollers is adopted to reduce the triggering force. In addition, two SMA wires are used redundantly to ensure high reliability. After separation, the actuator can be reset automatically without any auxiliary tool or manual operation. Three prototypes of the separation actuator were fabricated and tested. According to the performance test results, the actuator can release a maximum preload of 40 kN. The separation time tends to decrease as the operating current increases and can be as short as 0.5 s at 7.5 A (5.8 V). Lifetime testing indicates that the actuator has a lifetime of more than 50 cycles. Environmental tests demonstrate that the actuator can endure typical thermal and vibration environments without unexpected separation or structural damage, and separates normally after these tests.

  12. Separate Medication Preparation Rooms Reduce Interruptions and Medication Errors in the Hospital Setting: A Prospective Observational Study.

    PubMed

    Huckels-Baumgart, Saskia; Baumgart, André; Buschmann, Ute; Schüpfer, Guido; Manser, Tanja

    2016-12-21

    Interruptions and errors during the medication process are common, but published literature shows no evidence supporting whether separate medication rooms are an effective single intervention in reducing interruptions and errors during medication preparation in hospitals. We tested the hypothesis that the rate of interruptions and reported medication errors would decrease as a result of the introduction of separate medication rooms. Our aim was to evaluate the effect of separate medication rooms on interruptions during medication preparation and on self-reported medication error rates. We performed a preintervention and postintervention study using direct structured observation of nurses during medication preparation and daily structured medication error self-reporting of nurses by questionnaires in 2 wards at a major teaching hospital in Switzerland. A volunteer sample of 42 nurses was observed preparing 1498 medications for 366 patients over 17 hours preintervention and postintervention on both wards. During 122 days, nurses completed 694 reporting sheets containing 208 medication errors. After the introduction of the separate medication room, the mean interruption rate decreased significantly from 51.8 to 30 interruptions per hour (P < 0.01), and the interruption-free preparation time increased significantly from 1.4 to 2.5 minutes (P < 0.05). Overall, the mean medication error rate per day was also significantly reduced after implementation of the separate medication room from 1.3 to 0.9 errors per day (P < 0.05). The present study showed the positive effect of a hospital-based intervention; after the introduction of the separate medication room, the interruption and medication error rates decreased significantly.

  13. Prediction of HPLC retention times of tebipenem pivoxyl and its degradation products in solid state by applying adaptive artificial neural network with recursive features elimination.

    PubMed

    Mizera, Mikołaj; Talaczyńska, Alicja; Zalewski, Przemysław; Skibiński, Robert; Cielecka-Piontek, Judyta

    2015-05-01

A sensitive and fast HPLC method using an ultraviolet diode-array detector (DAD) and electrospray ionization tandem mass spectrometry (Q-TOF-MS/MS) was developed for the determination of tebipenem pivoxyl in the presence of degradation products formed during thermolysis. The chromatographic separations were performed on stationary phases produced in core-shell technology with a particle diameter of 5.0 µm. The mobile phases consisted of formic acid (0.1%) and acetonitrile at different ratios. The flow rate was 0.8 mL/min and the detection wavelength was set at 331 nm. The stability characteristics of tebipenem pivoxyl were studied by performing stress tests in the solid state in dry air (RH = 0%) and at increased relative air humidity (RH = 90%). The validation parameters, including selectivity, accuracy, precision and sensitivity, were found to be satisfactory. Satisfactory selectivity and precision of determination were obtained for the separation of tebipenem pivoxyl from its degradation products using the stationary phase with 5.0 µm particles. The chemical structures of the 9 degradation products of tebipenem pivoxyl were evaluated following separation on this stationary phase using the Q-TOF-MS/MS detector. The main degradation products of tebipenem pivoxyl were identified: a product resulting from the condensation of the substituents of 1-(4,5-dihydro-1,3-thiazol-2-yl)-3-azetidinyl]sulfanyl, and acid and ester forms of tebipenem with an open β-lactam ring in dry air at increased temperature (RH = 0%, T = 393 K), as well as acid and ester forms of tebipenem with an open β-lactam ring at increased relative air humidity and elevated temperature (RH = 90%, T = 333 K). Retention times of tebipenem pivoxyl and its degradation products were used as the training data set for a predictive quantitative structure-retention relationship model. 
An artificial neural network with an adaptation protocol and an extensive feature selection process was created. Input parameters for the model were calculated from molecular geometries optimized with Density Functional Theory. The model was prepared and optimized especially for small data sets, such as the degradation products of a specific compound. Validation of the model with statistical tests against the requirements for QSAR models showed its ability to predict retention times within the given data set. A mean error of 24.75% (0.8 min) was achieved using topological, geometrical and electronic descriptors. Copyright © 2015 Elsevier B.V. All rights reserved.
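The reported mean error (24.75%, 0.8 min) is a mean relative error over predicted retention times; a minimal sketch with hypothetical values, not the paper's chromatographic data:

```python
def mean_relative_error(observed, predicted):
    """Mean absolute relative error (%) between observed and predicted values."""
    return 100 * sum(abs(o - p) / o for o, p in zip(observed, predicted)) / len(observed)

# Hypothetical retention times (min):
obs  = [2.1, 3.4, 5.0, 7.8]
pred = [2.3, 3.1, 5.4, 7.2]
err = mean_relative_error(obs, pred)
```

For QSRR models built on such small data sets, a relative-error metric of this kind is easier to compare across compounds than an absolute error in minutes.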

  14. Etude de la degradation des refractaires aluminosiliceux par abrasion, chocs thermiques et corrosion par l'aluminium: Correlation et interaction des mecanismes

    NASA Astrophysics Data System (ADS)

    Ntakaburimvo, Nicodeme

Aluminosilicate refractories used in melting and holding furnaces, on which the present work focused, are subjected to mechanical damage such as abrasion, mechanical impact and erosion on one hand, and to chemical degradation by corrosion as well as thermal stresses, mostly due to thermal shocks, on the other. This thesis has four main objectives. The first is the design of an experimental set-up allowing abrasion testing of refractories. The second is the separate study of the deterioration of aluminosilicate refractories by abrasion, thermal shock and corrosion. The third is the correlation between these three mechanisms, while the fourth concerns the interaction between thermal shock and corrosion. One contribution of this thesis is the realisation of the above-mentioned experimental set-up, which permits abrasion testing of refractories at both room and high temperature, in the absence or presence of molten metal. Testing a refractory's resistance when it is subjected separately and simultaneously to dynamic corrosion, erosion and abrasion makes it possible to study the influence of each of these three mechanisms on the others. One characteristic of the designed set-up is that it allows the severity of the testing conditions to be adjusted according to the mechanical resistance of the test material. Another important point is that the abrasion tests were carried out in such a manner as to permit quantification of degradation other than by the traditional method of weight-loss measurement, in particular by measuring the wear depth and the residual material properties, such as the rupture force and the strength. A perfect correlation was observed between the wear depth and the loss of weight, both being negatively correlated with the residual rupture force. 
The abrasion resistance was found to be globally positively correlated with the material's original mechanical properties, such as the modulus of rupture, the toughness and the elastic modulus. However, for the same mechanical resistance, castables were more degraded than bricks because they contain more microstructural defects. Moreover, in the case of castables, the original surface facing the mould was more abraded than a rectified surface because of the segregation phenomenon. It was shown in this study that the abrasion process has no effect on the material's strength unless the length of the cracks it promotes exceeds that of the initial defects. (Abstract shortened by UMI.)

  15. Fabrication Control Plan for ORNL RH-LOCA ATF Test Specimens to be Irradiated in the ATR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Field, Kevin G.; Howard, Richard; Teague, Michael

    2014-06-01

    The purpose of this fabrication plan is (1) to summarize the design of a set of rodlets that will be fabricated and then irradiated in the Advanced Test Reactor (ATR) and (2) provide requirements for fabrication and acceptance criteria for inspections of the Light Water Reactor (LWR) Accident Tolerant Fuels (ATF) rodlet components. The functional and operational (F&OR) requirements for the ATF program are identified in the ATF Test Plan. The scope of this document only covers fabrication and inspections of rodlet components detailed in drawings 604496 and 604497. It does not cover the assembly of these items to form a completed test irradiation assembly or the inspection of the final assembly, which will be included in a separate INL final test assembly specification/inspection document. The controls support the requirements that the test irradiations must be performed safely and that subsequent examinations must provide valid results.

  16. 40 CFR 160.43 - Test system care facilities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... facility shall have a sufficient number of animal rooms or other test system areas, as needed, to ensure... a room or area by housing them separately in different chambers or aquaria. Separation of species is... testing facility shall have a number of animal rooms or other test system areas separate from those...

  17. 40 CFR 160.43 - Test system care facilities.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... facility shall have a sufficient number of animal rooms or other test system areas, as needed, to ensure... a room or area by housing them separately in different chambers or aquaria. Separation of species is... testing facility shall have a number of animal rooms or other test system areas separate from those...

  18. Results of a carrier aircraft (model AX13191-4) verification test in the Boeing transonic wind tunnel using a 0.03-scale 747 CAM/orbiter model 45-0 (CA6), volume 1

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Force and moment data were obtained on each vehicle both mated and separated. The investigation included the effects of orbiter incidence, orbiter tail cone, orbiter strut fairings, elevon, and body flap settings. Analysis of the data indicated the 747 is suitable as a carrier of the orbiter in both the ALT launch and ferry mode. The effect of configuration changes on drag and stability was determined.

  19. Transonic Reynolds Number and Leading-Edge Bluntness Effects on a 65 deg Delta Wing

    NASA Technical Reports Server (NTRS)

    Luckring, J. M.

    2003-01-01

    A 65 degree delta wing has been tested in the National Transonic Facility (NTF) at mean aerodynamic chord Reynolds numbers from 6 million to 120 million at subsonic and transonic speeds. The configuration incorporated a systematic variation of the leading edge bluntness. The analysis for this paper is focused on the Reynolds number and bluntness effects at transonic speeds (M = 0.85) from this data set. The results show significant effects of both these parameters on the onset and progression of leading edge vortex separation.

  20. Reynolds Number and Leading-Edge Bluntness Effects on a 65 Deg Delta Wing

    NASA Technical Reports Server (NTRS)

    Luckring, J. M.

    2002-01-01

    A 65 deg delta wing has been tested in the National Transonic Facility (NTF) at mean aerodynamic chord Reynolds numbers from 6 million to 120 million at subsonic and transonic speeds. The configuration incorporated systematic variation of the leading edge bluntness. The analysis for this paper is focused on the Reynolds number and bluntness effects at subsonic speeds (M = 0.4) from this data set. The results show significant effects of both these parameters on the onset and progression of leading-edge vortex separation.

  4. Single Axis Attitude Control and DC Bus Regulation with Two Flywheels

    NASA Technical Reports Server (NTRS)

    Kascak, Peter E.; Jansen, Ralph H.; Kenny, Barbara; Dever, Timothy P.

    2002-01-01

    A computer simulation of a flywheel energy storage single axis attitude control system is described. The simulation models hardware which will be experimentally tested in the future. This hardware consists of two counter rotating flywheels mounted to an air table. The air table allows one axis of rotational motion. An inertia DC bus coordinator is set forth that allows the two control problems, bus regulation and attitude control, to be separated. Simulation results are presented with a previously derived flywheel bus regulator and a simple PID attitude controller.

  5. Salt-water-freshwater transient upconing - An implicit boundary-element solution

    USGS Publications Warehouse

    Kemblowski, M.

    1985-01-01

    The boundary-element method is used to solve the set of partial differential equations describing the flow of salt water and fresh water separated by a sharp interface in the vertical plane. In order to improve the accuracy and stability of the numerical solution, a new implicit scheme was developed for calculating the motion of the interface. The performance of this scheme was tested by means of numerical simulation. The numerical results are compared to experimental results for a salt-water upconing under a drain problem. © 1985.

  6. Continuous recovery of valine in a model mixture of amino acids and salt from Corynebacterium bacteria fermentation using a simulated moving bed chromatography.

    PubMed

    Park, Chanhun; Nam, Hee-Geun; Jo, Se-Hee; Wang, Nien-Hwa Linda; Mun, Sungyong

    2016-02-26

    The economic efficiency of valine production in related industries is largely affected by the performance of the valine separation process, in which valine must be separated from leucine, alanine, and ammonium sulfate. This separation is currently handled by a batch-mode hybrid process based on ion-exchange and crystallization schemes. To make a substantial improvement in the economic efficiency of industrial valine production, such a batch-mode process based on two different separation schemes needs to be converted into a continuous-mode separation process based on a single separation scheme. To address this issue, simulated moving bed (SMB) technology was applied in this study to the development of a continuous-mode valine-separation chromatographic process with uniform adsorbent and liquid phases. It was first found that a Chromalite-PCG600C resin is a suitable adsorbent for such a process, particularly at industrial scale. The intrinsic parameters of each component on the Chromalite-PCG600C adsorbent were determined and then used to select a proper set of configurations for SMB units, columns, and ports, under which the SMB operating parameters were optimized with a genetic algorithm. Finally, the optimized SMB based on the selected configurations was tested experimentally, which confirmed its effectiveness in continuous separation of valine from leucine, alanine, and ammonium sulfate with high purity, high yield, high throughput, and high valine product concentration. It is thus expected that the SMB process developed in this study can serve as a reliable way of improving the economic efficiency of an industrial valine production process. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Electronic Tongue Response to Chemicals in Orange Juice that Change Concentration in Relation to Harvest Maturity and Citrus Greening or Huanglongbing (HLB) Disease.

    PubMed

    Raithore, Smita; Bai, Jinhe; Plotto, Anne; Manthey, John; Irey, Mike; Baldwin, Elizabeth

    2015-12-02

    In an earlier study, an electronic tongue system (e-tongue) was used to differentiate between orange juice made from healthy fruit and juice made from fruit affected by the citrus greening or Huanglongbing (HLB) disease. This study investigated the response of an e-tongue system to the main chemicals in orange juice that impact flavor and health benefits and are also impacted by HLB. Orange juice was spiked with sucrose (0.2-5.0 g/100 mL), citric acid (0.1-3.0 g/100 mL) and potassium chloride (0.1-3.0 g/100 mL) as well as the secondary metabolites nomilin (1-30 µg/mL), limonin (1-30 µg/mL), limonin glucoside (30-200 µg/mL), hesperidin (30-400 µg/mL) and hesperetin (30-400 µg/mL). The performance of Alpha MOS sensor sets #1 (pharmaceutical) and #5 (food) was compared for the same samples, with sensor set #1 generally giving better separation than sensor set #5 for sucrose, sensor set #5 giving better separation for nomilin and limonin, both sets being efficient at separating citric acid, potassium chloride, hesperetin and limonin glucoside, and neither set discriminating hesperidin efficiently. Orange juice made from fruit over the harvest season and from fruit harvested from healthy or HLB-affected trees was separated by harvest maturity, disease state and disease severity.

  8. Electronic Tongue Response to Chemicals in Orange Juice that Change Concentration in Relation to Harvest Maturity and Citrus Greening or Huanglongbing (HLB) Disease

    PubMed Central

    Raithore, Smita; Bai, Jinhe; Plotto, Anne; Manthey, John; Irey, Mike; Baldwin, Elizabeth

    2015-01-01

    In an earlier study, an electronic tongue system (e-tongue) was used to differentiate between orange juice made from healthy fruit and juice made from fruit affected by the citrus greening or Huanglongbing (HLB) disease. This study investigated the response of an e-tongue system to the main chemicals in orange juice that impact flavor and health benefits and are also impacted by HLB. Orange juice was spiked with sucrose (0.2–5.0 g/100 mL), citric acid (0.1–3.0 g/100 mL) and potassium chloride (0.1–3.0 g/100 mL) as well as the secondary metabolites nomilin (1–30 µg/mL), limonin (1–30 µg/mL), limonin glucoside (30–200 µg/mL), hesperidin (30–400 µg/mL) and hesperetin (30–400 µg/mL). The performance of Alpha MOS sensor sets #1 (pharmaceutical) and #5 (food) was compared for the same samples, with sensor set #1 generally giving better separation than sensor set #5 for sucrose, sensor set #5 giving better separation for nomilin and limonin, both sets being efficient at separating citric acid, potassium chloride, hesperetin and limonin glucoside, and neither set discriminating hesperidin efficiently. Orange juice made from fruit over the harvest season and from fruit harvested from healthy or HLB-affected trees was separated by harvest maturity, disease state and disease severity. PMID:26633411

  9. On impact damage detection and quantification for CFRP laminates using structural response data only

    NASA Astrophysics Data System (ADS)

    Sultan, M. T. H.; Worden, K.; Pierce, S. G.; Hickey, D.; Staszewski, W. J.; Dulieu-Barton, J. M.; Hodzic, A.

    2011-11-01

    The overall purpose of the research is to detect and attempt to quantify impact damage in structures made from composite materials. A study that uses simplified coupon specimens made from a Carbon Fibre-Reinforced Polymer (CFRP) prepreg with 11, 12 and 13 plies is presented. PZT sensors were placed at three separate locations in each test specimen to record the responses from impact events. To perform damaging impact tests, an instrumented drop-test machine was used and the impact energy was set to cover a range of 0.37-41.72 J. The response signals captured from each sensor were recorded by a data acquisition system for subsequent evaluation. The impacted specimens were examined with an X-ray technique to determine the extent of the damaged areas and it was found that the apparent damaged area grew monotonically with impact energy. A number of simple univariate and multivariate features were extracted from the sensor signals recorded during impact by computing their spectra and calculating frequency centroids. The concept of discordancy from the statistical discipline of outlier analysis is employed in order to separate the responses from non-damaging and damaging impacts. The results show that the potential damage indices introduced here provide a means of identifying damaging impacts from the response data alone.

  10. Seasonality of Admissions for Mania: Results From a General Hospital Psychiatric Unit in Pondicherry, India

    PubMed Central

    Sarkar, Siddharth

    2015-01-01

    Introduction: Bipolar disorder is affected by variables that modulate circadian rhythm, including seasonal variations. There is evidence of a seasonal pattern of admissions for mania in various geographical settings, though its timing varies by region and climate. Variables such as age and gender have been shown to affect seasonality in some studies. Methodology: Data on monthly admission patterns for mania at a general hospital psychiatry unit in Pondicherry, India, were collected for 4 years (2010–2013) and analyzed for seasonality and seasonal peaks. The effects of age and gender were analyzed separately. Results: There was overall evidence of a seasonal pattern of admissions for mania (P < .01, Friedman test for seasonality), with a peak beginning during the rainy season and ending before summer (P < .01, Ratchet circular scan test). Male sex (P < .005, Ratchet circular scan test) and age > 25 years (P < .005, Ratchet circular scan test) were specifically associated with this seasonal peak. Discussion: The effect of seasons on mania is complex and is modulated by a variety of variables. Our study is consistent with earlier research findings: a greater degree of seasonality for mania in men. It is possible that climatic and individual variables interact to determine seasonal patterns in bipolar disorder in a given setting. PMID:26644962
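    The seasonality screen above applies the Friedman test to monthly admission counts, with years as blocks and months as the related samples. As an illustrative sketch (not the study's code, and with made-up counts in the examples), the Friedman chi-square statistic can be computed in plain Python:

```python
def friedman_statistic(blocks):
    """Friedman chi-square for k related samples.

    blocks: list of blocks (e.g. years), each a list of k values
    (e.g. 12 monthly admission counts)."""
    n = len(blocks)
    k = len(blocks[0])
    rank_sums = [0.0] * k
    for block in blocks:
        # rank values within the block, averaging ranks over ties
        order = sorted(range(k), key=lambda j: block[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and block[order[j + 1]] == block[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
            for t in range(i, j + 1):
                ranks[order[t]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    # chi2_F = 12 / (n k (k+1)) * sum_j R_j^2 - 3 n (k+1)
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)
```

    Referring the statistic to a chi-square distribution with k - 1 = 11 degrees of freedom yields the P-value; scipy.stats.friedmanchisquare does the same computation directly.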

  11. Detection of flow limitation in obstructive sleep apnea with an artificial neural network.

    PubMed

    Norman, Robert G; Rapoport, David M; Ayappa, Indu

    2007-09-01

    During sleep, the development of a plateau on the inspiratory airflow/time contour provides a non-invasive indicator of airway collapsibility. Humans recognize this abnormal contour easily, and this study replicates this with an artificial neural network (ANN) using a normalized shape. Five 10 min segments were selected from each of 18 sleep records (respiratory airflow measured with a nasal cannula) with varying degrees of sleep disordered breathing. Each breath was visually scored for shape, and breaths split randomly into a training and test set. Equally spaced, peak amplitude normalized flow values (representing breath shape) formed the only input to a back propagation ANN. Following training, breath-by-breath agreement of the ANN with the manual classification was tabulated for the training and test sets separately. Agreement of the ANN was 89% in the training set and 70.6% in the test set. When the categories of 'probably normal' and 'normal', and 'probably flow limited' and 'flow limited' were combined, the agreement increased to 92.7% and 89.4% respectively, similar to the intra- and inter-rater agreements obtained by a visual classification of these breaths. On a naive dataset, the agreement of the ANN to visual classification was 57.7% overall and 82.4% when the categories were collapsed. A neural network based only on the shape of inspiratory airflow succeeded in classifying breaths as to the presence/absence of flow limitation. This approach could be used to provide a standardized, reproducible and automated means of detecting elevated upper airway resistance.
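    The network input described above is just the breath shape: equally spaced inspiratory flow values normalized by peak amplitude. A minimal preprocessing sketch, assuming linear interpolation for the resampling (the abstract does not specify the method) and an illustrative n_points value:

```python
def normalize_breath(flow, n_points=40):
    """Resample one breath's inspiratory flow to n_points equally spaced
    samples and normalize by peak amplitude, giving a shape-only vector."""
    peak = max(flow)
    m = len(flow)
    out = []
    for i in range(n_points):
        # linear interpolation at equally spaced positions along the breath
        pos = i * (m - 1) / (n_points - 1)
        lo = int(pos)
        hi = min(lo + 1, m - 1)
        frac = pos - lo
        val = flow[lo] * (1 - frac) + flow[hi] * frac
        out.append(val / peak)
    return out
```

    The resulting fixed-length vector, one value per input node, is what a back-propagation network would be trained on to classify breaths as flow limited or normal.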

  12. 46 CFR 162.050-7 - Approval procedures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... and is tested in accordance with this subpart; (2) The oil content of each sample of separated water effluent taken during approval testing is 15 ppm or less; (3) During Test No. 3A an oily mixture is not observed at the separated water outlet of the separator; (4) During Test No. 5A its operation is continuous...

  13. 46 CFR 162.050-7 - Approval procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... and is tested in accordance with this subpart; (2) The oil content of each sample of separated water effluent taken during approval testing is 15 ppm or less; (3) During Test No. 3A an oily mixture is not observed at the separated water outlet of the separator; (4) During Test No. 5A its operation is continuous...

  14. A comparative analysis of the categorization of multidimensional stimuli: I. Unidimensional classification does not necessarily imply analytic processing; evidence from pigeons (Columba livia), squirrels (Sciurus carolinensis), and humans (Homo sapiens).

    PubMed

    Wills, A J; Lea, Stephen E G; Leaver, Lisa A; Osthaus, Britta; Ryan, Catriona M E; Suret, Mark B; Bryant, Catherine M L; Chapman, Sue J A; Millar, Louise

    2009-11-01

    Pigeons (Columba livia), gray squirrels (Sciurus carolinensis), and undergraduates (Homo sapiens) learned discrimination tasks involving multiple mutually redundant dimensions. First, pigeons and undergraduates learned conditional discriminations between stimuli composed of three spatially separated dimensions, after first learning to discriminate the individual elements of the stimuli. When subsequently tested with stimuli in which one of the dimensions took an anomalous value, the majority of both species categorized test stimuli by their overall similarity to training stimuli. However, some individuals of both species categorized them according to a single dimension. In a second set of experiments, squirrels, pigeons, and undergraduates learned go/no-go discriminations using multiple simultaneous presentations of stimuli composed of three spatially integrated, highly salient dimensions. The tendency to categorize test stimuli including anomalous dimension values unidimensionally was higher than in the first set of experiments and did not differ significantly between species. The authors conclude that unidimensional categorization of multidimensional stimuli is not diagnostic for analytic cognitive processing, and that any differences between humans' and pigeons' behavior in such tasks are not due to special features of avian visual cognition.

  15. Systematic study on the TD-DFT calculated electronic circular dichroism spectra of chiral aromatic nitro compounds: A comparison of B3LYP and CAM-B3LYP.

    PubMed

    Komjáti, Balázs; Urai, Ákos; Hosztafi, Sándor; Kökösi, József; Kováts, Benjámin; Nagy, József; Horváth, Péter

    2016-02-15

    B3LYP is one of the most widely used functionals for the prediction of electronic circular dichroism spectra; however, if the studied molecule contains an aromatic nitro group, computations may fail to produce reliable results. A test set of molecules of known stereochemistry was synthesized to study this phenomenon in detail. Spectra were computed with the B3LYP and CAM-B3LYP functionals and the 6-311++G(2d,2p) basis set. It was found that the range-separated CAM-B3LYP gives better predictions than B3LYP for all test molecules. Fragment population analysis revealed that the nitro groups form highly localized molecular orbitals, but the exact composition depends on the functional. CAM-B3LYP allows sufficient spatial overlap between the nitro group and distant parts of the molecule, which is necessary for an accurate description of excited states, especially charge-transfer states. This phenomenon and the synthesized test molecules can be used to benchmark theoretical methods as well as to help the development of new functionals intended for spectroscopic studies. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Forward Bay Cover Separation Modeling and Testing for the Orion Multi-Purpose Crew Vehicle

    NASA Technical Reports Server (NTRS)

    Ali, Yasmin; Chuhta, Jesse D.; Hughes, Michael P.; Radke, Tara S.

    2015-01-01

    Spacecraft multi-body separation events during atmospheric descent require complex testing and analysis to validate the flight separation dynamics models used to verify no re-contact. The NASA Orion Multi-Purpose Crew Vehicle (MPCV) architecture includes a highly-integrated Forward Bay Cover (FBC) jettison assembly design that combines parachutes and piston thrusters to separate the FBC from the Crew Module (CM) and avoid re-contact. A multi-disciplinary team across numerous organizations examined key model parameters and risk areas to develop a robust but affordable test campaign in order to validate and verify the FBC separation event for Exploration Flight Test-1 (EFT-1). The FBC jettison simulation model is highly complex, consisting of dozens of parameters varied simultaneously, with numerous multi-parameter interactions (coupling and feedback) among the various model elements, and encompassing distinct near-field, mid-field, and far-field regimes. The test campaign was composed of component-level testing (for example gas-piston thrusters and parachute mortars), ground FBC jettison tests, and FBC jettison air-drop tests that were accomplished by a highly multi-disciplinary team. Three ground jettison tests isolated the testing of mechanisms and structures to anchor the simulation models excluding aerodynamic effects. Subsequently, two air-drop tests added aerodynamic and parachute elements, and served as integrated system demonstrations, which had been preliminarily explored during the Orion Pad Abort-1 (PA-1) flight test in May 2010. Both ground and drop tests provided extensive data to validate analytical models and to verify the FBC jettison event for EFT-1. Additional testing will be required to support human certification of this separation event, for which NASA and Lockheed Martin are applying knowledge from Apollo and EFT-1 testing and modeling to develop a robust human-rated FBC separation event.

  17. Analysis of 3D OCT images for diagnosis of skin tumors

    NASA Astrophysics Data System (ADS)

    Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.; Khramov, Alexander G.

    2018-04-01

    Skin cancer is one of the fastest-growing types of cancer and represents the most commonly diagnosed malignancy, surpassing lung, breast, colorectal and prostate cancer. Diagnosing different types of skin cancer at early stages is therefore a major challenge for medicine. New optical imaging techniques have been developed in order to improve diagnostic precision. Optical coherence tomography (OCT) is based on low-coherence interferometry, detecting the intensity of backscattered infrared light from biological tissues by measuring the optical path length. OCT provides the advantage of real-time, in vivo, low-cost imaging of suspicious lesions without having to proceed directly to a tissue biopsy. Post-processing techniques can be used to improve diagnostic precision and to overcome limitations of OCT. Image processing can include noise filtration and evaluation of textural, geometric, morphological, spectral, statistical and other features. The main idea of this investigation is to use information obtained from multiple analyses of 2D and 3D OCT images to differentiate skin tumors. We first tested the computer algorithm on OCT data hypercubes and on separated B- and C-scans. Combining 2D and 3D data gives common information about the tumor (geometric and morphological characteristics) and allows more powerful feature-evaluation algorithms (fractal and textural) to be applied to the separated scans. These groups of features connect closely to the classical, widely used ABCDE criteria (Asymmetry, Border irregularity, Color, Diameter, Evolution). We used a set of features consisting of fractal dimension, Haralick's, Gabor's, Tamura's, Markov-random-field, geometric and many other features. Good results were obtained on the test sets in differentiating between BCC and nevus, and between MM and healthy skin. We separated MM from healthy skin with sensitivity above 90% and specificity above 92% (168 B-scans from 8 subjects) using three of Haralick's features: Contrast, Correlation and Energy. These results are promising for testing on new cases and on larger sets of OCT images.
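    The three Haralick features singled out above (Contrast, Correlation and Energy) are standard statistics of the gray-level co-occurrence matrix (GLCM). A self-contained sketch using the textbook definitions, restricted to a single horizontal offset for brevity (the paper's exact GLCM configuration, offsets and quantization are not given):

```python
def glcm_features(img, levels):
    """Normalized symmetric GLCM for horizontal neighbors, plus the
    Haralick contrast, correlation and energy features."""
    # accumulate co-occurrences of horizontally adjacent gray levels
    P = [[0.0] * levels for _ in range(levels)]
    total = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            P[a][b] += 1
            P[b][a] += 1  # symmetric GLCM
            total += 2
    for i in range(levels):
        for j in range(levels):
            P[i][j] /= total
    # marginal mean and variance (symmetric, so rows and columns agree)
    mu = sum(i * P[i][j] for i in range(levels) for j in range(levels))
    var = sum((i - mu) ** 2 * P[i][j] for i in range(levels) for j in range(levels))
    contrast = sum((i - j) ** 2 * P[i][j] for i in range(levels) for j in range(levels))
    energy = sum(P[i][j] ** 2 for i in range(levels) for j in range(levels))
    correlation = (sum((i - mu) * (j - mu) * P[i][j]
                       for i in range(levels) for j in range(levels)) / var) if var > 0 else 1.0
    return {"contrast": contrast, "correlation": correlation, "energy": energy}
```

    In practice, graycomatrix and graycoprops from scikit-image compute the same quantities over multiple offsets and angles.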

  18. Simulated swimming: a useful tool for evaluating the VO2 max of swimmers in the laboratory.

    PubMed Central

    Kimura, Y; Yeater, R A; Martin, R B

    1990-01-01

    This study was designed to develop a simulated swimming exercise (SS) so that peak VO2 could be assessed in swimmers in a laboratory setting. The subjects assumed a prone position on an incline bench and performed arm cranking on a Monark Rehab Trainer while performing a flutter kick against tension supplied by elastic cords. The SS test was compared to four peak VO2 tests: treadmill running (RN), tethered swimming (TW), bicycle ergometry (B), and arm cranking (AC). Eleven male varsity swimmers underwent each of the five VO2 max tests, and maximal cardiorespiratory indicators (HR, VE, VO2, O2 pulse, and RQ) were measured. The percentage of peak VO2 obtained during SS was compared to RN, TW, B, and AC. The SS test achieved 78 percent of RN, 91 percent of TW, 81 percent of B, and 124 percent of AC. There were no significant differences in VO2 in ml/kg.min between SS and TW. As expected, RN and B were significantly higher, while AC was lower. Ten subjects performed the SS test twice on two separate days within one week. The reliability of VO2 max in ml/kg.min was 0.95. The validity of VO2 max in ml/kg.min in the SS test vs. RN was 0.68. The SS test is reliable and can be used as effectively as TW to assess the VO2 max of swimmers in a laboratory setting. PMID:2078808

  19. Genome-wide differences in hepatitis C- vs alcoholism-associated hepatocellular carcinoma

    PubMed Central

    Derambure, Céline; Coulouarn, Cédric; Caillot, Frédérique; Daveau, Romain; Hiron, Martine; Scotte, Michel; François, Arnaud; Duclos, Celia; Goria, Odile; Gueudin, Marie; Cavard, Catherine; Terris, Benoit; Daveau, Maryvonne; Salier, Jean-Philippe

    2008-01-01

    AIM: To look at a comprehensive picture of etiology-dependent gene abnormalities in hepatocellular carcinoma in Western Europe. METHODS: With a liver-oriented microarray, transcript levels were compared in nodules and cirrhosis from a training set of patients with hepatocellular carcinoma (alcoholism, 12; hepatitis C, 10) and 5 controls. Loose or tight selection of informative transcripts with an abnormal abundance was statistically valid and the tightly selected transcripts were next quantified by qRTPCR in the nodules from our training set (12 + 10) and a test set (6 + 7). RESULTS: A selection of 475 transcripts pointed to significant gene over-representation on chromosome 8 (alcoholism) or -2 (hepatitis C) and ontology indicated a predominant inflammatory response (alcoholism) or changes in cell cycle regulation, transcription factors and interferon responsiveness (hepatitis C). A stringent selection of 23 transcripts whose differences between etiologies were significant in nodules but not in cirrhotic tissue indicated that the above dysregulations take place in tumor but not in the surrounding cirrhosis. These 23 transcripts separated our test set according to etiologies. The inflammation-associated transcripts pointed to limited alterations of free iron metabolism in alcoholic vs hepatitis C tumors. CONCLUSION: Etiology-specific abnormalities (chromosome preference; differences in transcriptomes and related functions) have been identified in hepatocellular carcinoma driven by alcoholism or hepatitis C. This may open novel avenues for differential therapies in this disease. PMID:18350606

  20. Velocity Loss as a Variable for Monitoring Resistance Exercise.

    PubMed

    González-Badillo, Juan José; Yañez-García, Juan Manuel; Mora-Custodio, Ricardo; Rodríguez-Rosell, David

    2017-03-01

    This study aimed to analyze: 1) the pattern of repetition velocity decline during a single set to failure against different submaximal loads (50-85% 1RM) in the bench press exercise; and 2) the reliability of the percentage of performed repetitions, with respect to the maximum possible number that can be completed, when different magnitudes of velocity loss have been reached within each set. Twenty-two men performed 8 tests of maximum number of repetitions (MNR) against loads of 50-55-60-65-70-75-80-85% 1RM, in random order, every 6-7 days. Another 28 men performed 2 separate MNR tests against 60% 1RM. A very close relationship was found between the relative loss of velocity in a set and the percentage of performed repetitions. This relationship was very similar for all loads, but particularly for 50-70% 1RM, even though the number of repetitions completed at each load was significantly different. Moreover, the percentage of performed repetitions for a given velocity loss showed a high absolute reliability. Equations to predict the percentage of performed repetitions from relative velocity loss are provided. By monitoring repetition velocity and using these equations, one can estimate, with considerable precision, how many repetitions are left in reserve in a bench press exercise set. © Georg Thieme Verlag KG Stuttgart · New York.
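    The monitored variable itself is simple: the relative loss of velocity within a set, i.e. the percentage drop from the fastest repetition to the last one performed. A one-function sketch of that variable (the study's own equations predicting the percentage of performed repetitions from this loss are not reproduced here):

```python
def velocity_loss_percent(velocities):
    """Relative velocity loss across a set, in percent: the drop from the
    fastest repetition (usually the first) to the last one performed."""
    best = max(velocities)
    return 100.0 * (best - velocities[-1]) / best
```

    Monitoring this value rep by rep lets a coach terminate a set once a target velocity loss (and hence an estimated number of repetitions left in reserve) is reached.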

  1. Automatic brain caudate nuclei segmentation and classification in diagnostic of Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Igual, Laura; Soliva, Joan Carles; Escalera, Sergio; Gimeno, Roger; Vilarroya, Oscar; Radeva, Petia

    2012-12-01

    We present a fully automatic diagnostic imaging test for Attention-Deficit/Hyperactivity Disorder diagnosis assistance based on previously reported evidence of caudate nucleus volumetric abnormalities. The proposed method consists of two main steps: a new automatic method for external and internal segmentation of the caudate based on Machine Learning methodologies; and the definition of a set of new volume relation features, 3D Dissociated Dipoles, used for caudate representation and classification. We separately validate these contributions using real data from a pediatric population, demonstrating precise internal caudate segmentation and the discriminative power of the diagnostic test, with significant performance improvements in comparison to other state-of-the-art methods. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Plasma Oscillation Characterization of NASA's HERMeS Hall Thruster via High Speed Imaging

    NASA Technical Reports Server (NTRS)

    Huang, Wensheng; Kamhawi, Hani; Haag, Thomas W.

    2016-01-01

    The performance and facility effect characterization tests of NASA's 12.5-kW Hall Effect Rocket with Magnetic Shielding have been completed. As part of these tests, three plasma oscillation characterization studies were performed to help determine operation settings and quantify margins. The studies included the magnetic field strength variation study, background pressure effect study, and cathode flow fraction study. Separate high-speed videos of the thruster including the cathode and of only the cathode were recorded. A breathing mode at 10-15 kHz and a cathode gradient-driven mode at 60-75 kHz were observed. An additional high-frequency (40-70 kHz) global oscillation mode with a sinusoidal probability distribution function was identified.

  3. Validity and extension of the SCS-CN method for computing infiltration and rainfall-excess rates

    NASA Astrophysics Data System (ADS)

    Mishra, Surendra Kumar; Singh, Vijay P.

    2004-12-01

    A criterion is developed for determining the validity of the Soil Conservation Service curve number (SCS-CN) method. According to this criterion, the existing SCS-CN method is found to be applicable when the potential maximum retention, S, is less than or equal to twice the total rainfall amount. The criterion is tested using published data of two watersheds. Separating the steady infiltration from capillary infiltration, the method is extended for predicting infiltration and rainfall-excess rates. The extended SCS-CN method is tested using 55 sets of laboratory infiltration data on soils varying from Plainfield sand to Yolo light clay, and the computed and observed infiltration and rainfall-excess rates are found to be in good agreement.
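    The validity criterion above (S no greater than twice the rainfall) applies to the standard SCS-CN runoff computation, which can be sketched as follows; the rainfall depth and curve number are hypothetical example values, not data from the paper.

    ```python
    # Minimal sketch of the standard SCS-CN rainfall-runoff computation,
    # with the abstract's validity criterion S <= 2P checked alongside it.
    def scs_cn_runoff(P, CN, ia_ratio=0.2):
        """Return (Q, S): direct runoff depth Q in mm and retention S in mm."""
        S = 25400.0 / CN - 254.0   # potential maximum retention (mm)
        Ia = ia_ratio * S          # initial abstraction, conventionally 0.2*S
        if P <= Ia:
            return 0.0, S          # all rainfall absorbed, no runoff
        Q = (P - Ia) ** 2 / (P - Ia + S)
        return Q, S

    P = 100.0                      # hypothetical storm rainfall, mm
    Q, S = scs_cn_runoff(P, CN=80)
    print(S <= 2 * P)              # True: the criterion holds, method applicable
    print(round(Q, 1))             # 50.5
    ```

    When S exceeds 2P, the criterion in the abstract flags the method as inapplicable, which is the kind of check a user would add before trusting Q.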

  4. Landsat test of diffuse reflectance models for aquatic suspended solids measurement

    NASA Technical Reports Server (NTRS)

    Munday, J. C., Jr.; Alfoldi, T. T.

    1979-01-01

    Landsat radiance data were used to test mathematical models relating diffuse reflectance to aquatic suspended solids concentration. Digital CCT data for Landsat passes over the Bay of Fundy, Nova Scotia were analyzed on a General Electric Co. Image 100 multispectral analysis system. Three data sets were studied separately and together in all combinations with and without solar angle correction. Statistical analysis and chromaticity analysis show that a nonlinear relationship between Landsat radiance and suspended solids concentration is better at curve-fitting than a linear relationship. In particular, the quasi-single-scattering diffuse reflectance model developed by Gordon and coworkers is corroborated. The Gordon model applied to 33 points of MSS 5 data combined from three dates produced r = 0.98.

  5. Zero liquid carryover whole-body shower vortex liquid/gas separator

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The development and evaluation of a liquid/gas vortex type separator design eliminating liquid and semi-liquid (suds) carryover into air recirculating system were described. Consideration was given to a number of soaps other than the "Miranol JEM" which was the low sudsing soap used in previous test runs of the space shower. Analysis of test parameters and prototype testing resulted in a revised separator configuration and a better understanding of the suds generating mechanism in the wastewater collection system. The final design of the new separator provides for a wider choice of soaps without leading to the problem of "carryover". Furthermore, no changes in separator-to-shower interfaces were required. The new separator was retrofitted on the "space shower" and satisfactorily demonstrated in one-g testing.

  6. Improved, low cost inorganic-organic separators for rechargeable silver-zinc batteries

    NASA Technical Reports Server (NTRS)

    Sheibley, D. W.

    1979-01-01

    Several flexible, low-cost inorganic-organic separators with performance characteristics and cycle life equal to, or better than, the Lewis Research Center Astropower separator were developed. These new separators can be made on continuous-production equipment at about one-fourth the cost of the Astropower separator produced the same way. In test cells, these new separators demonstrate cycle life improvement, acceptable operating characteristics, and uniform current density. The various separator formulas, test cell construction, and data analysis are described.

  7. Winning Before the Fight: An Armed Suasion Approach to Countering Near Peer Competition

    DTIC Science & Technology

    2017-05-25

    This monograph proposes a tailored set of principles, separate from the principles of joint operations, which allow a planning staff to balance achieving success with managing the risk of unintended escalation. Keywords: Suasion; Deterrence; Compellence; Armed Conflict; Conflict Continuum; Principles of Joint Operations.

  8. Analytical investigation of aerodynamic characteristics of highly swept wings with separated flow

    NASA Technical Reports Server (NTRS)

    Reddy, C. S.

    1980-01-01

    Many modern aircraft designed for supersonic speeds employ highly swept-back and low-aspect-ratio wings with sharp or thin edges. Flow separation occurs near the leading and tip edges of such wings at moderate to high angles of attack. Attempts have been made over the years to develop analytical methods for predicting the aerodynamic characteristics of such aircraft. Before any method can really be useful, it must be tested against a standard set of data to determine its capabilities and limitations. The present work undertakes such an investigation. Three methods are considered: the free-vortex-sheet method (Weber et al., 1975), the vortex-lattice method with suction analogy (Lamar and Gloss, 1975), and the quasi-vortex lattice method of Mehrotra (1977). Both flat and cambered wings of different configurations, for which experimental data are available, are studied and comparisons made.

  9. Turbulence Model Selection for Low Reynolds Number Flows

    PubMed Central

    2016-01-01

    One of the major flow phenomena associated with low Reynolds number flow is the formation of separation bubbles on an airfoil’s surface. The NACA4415 airfoil is commonly used in wind turbine and UAV applications. Its stall characteristics are gradual compared to thin airfoils. The primary criterion set for this work is the capture of the laminar separation bubble. Flow is simulated for a Reynolds number of 120,000. The numerical analysis carried out shows the advantages and disadvantages of several turbulence models. The turbulence models tested were: the one-equation Spalart-Allmaras (S-A), the two-equation SST k-ω, the three-equation Intermittency (γ) SST, k-kl-ω, and finally the four-equation transition γ-Reθ SST. However, the predicted flow physics differs between these turbulence models. The procedure used to establish the accuracy of the simulation against previous experimental results is discussed in detail. PMID:27104354

  10. Turbulence Model Selection for Low Reynolds Number Flows.

    PubMed

    Aftab, S M A; Mohd Rafie, A S; Razak, N A; Ahmad, K A

    2016-01-01

    One of the major flow phenomena associated with low Reynolds number flow is the formation of separation bubbles on an airfoil's surface. The NACA4415 airfoil is commonly used in wind turbine and UAV applications. Its stall characteristics are gradual compared to thin airfoils. The primary criterion set for this work is the capture of the laminar separation bubble. Flow is simulated for a Reynolds number of 120,000. The numerical analysis carried out shows the advantages and disadvantages of several turbulence models. The turbulence models tested were: the one-equation Spalart-Allmaras (S-A), the two-equation SST k-ω, the three-equation Intermittency (γ) SST, k-kl-ω, and finally the four-equation transition γ-Reθ SST. However, the predicted flow physics differs between these turbulence models. The procedure used to establish the accuracy of the simulation against previous experimental results is discussed in detail.

  11. Design Considerations for a New Terminal Area Arrival Scheduler

    NASA Technical Reports Server (NTRS)

    Thipphavong, Jane; Mulfinger, Daniel

    2010-01-01

    Design of a terminal area arrival scheduler depends on the interrelationship between throughput, delay and controller intervention. The main contribution of this paper is an analysis of the above interdependence for several stochastic behaviors of expected system performance distributions in the aircraft's time of arrival at the meter fix and runway. Results of this analysis serve to guide the scheduler design choices for key control variables. Two types of variables are analyzed: separation buffers and terminal delay margins. The choice of these decision variables was tested using sensitivity analysis. Analysis suggests that it is best to set the separation buffer at the meter fix to its minimum and adjust the runway buffer to attain the desired system performance. Delay margin was found to have the least effect. These results help characterize the variables most influential in the scheduling operations of terminal area arrivals.

  12. Classification of LC columns based on the QSRR method and selectivity toward moclobemide and its metabolites.

    PubMed

    Plenis, Alina; Olędzka, Ilona; Bączek, Tomasz

    2013-05-05

    This paper focuses on a comparative study of the column classification system based on quantitative structure-retention relationships (the QSRR method) and column performance in real biomedical analysis. The assay was carried out for the LC separation of moclobemide and its metabolites in human plasma, using a set of 24 stationary phases. The QSRR models established for the studied stationary phases were compared with the column test performance results using two chemometric techniques: principal component analysis (PCA) and hierarchical clustering analysis (HCA). The study confirmed that the stationary phase classes found closely related by the QSRR approach yielded comparable separation for moclobemide and its metabolites. Therefore, the QSRR method could be considered supportive in the selection of a suitable column for biomedical analysis, allowing similar or dissimilar columns to be selected with greater certainty. Copyright © 2013 Elsevier B.V. All rights reserved.
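    As a rough illustration of the chemometric side of such a comparison, the sketch below runs a plain-NumPy PCA on invented QSRR-style column descriptors and checks that similar stationary phases land close together in score space. All descriptor values are hypothetical, not data from the study.

    ```python
    import numpy as np

    def pca_scores(X, n_components=2):
        """Project rows of X onto the first principal components via SVD."""
        Xc = X - X.mean(axis=0)                      # center each descriptor
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:n_components].T              # scores in PC space

    # Rows = stationary phases, columns = QSRR-style descriptors
    # (e.g. hydrophobicity, hydrogen-bonding, steric terms; values invented).
    X = np.array([
        [1.00, 0.10, 0.30],   # phase A
        [0.98, 0.12, 0.28],   # phase B, chemically similar to A
        [0.40, 0.80, 0.90],   # phase C, dissimilar
    ])
    scores = pca_scores(X)
    d_ab = np.linalg.norm(scores[0] - scores[1])
    d_ac = np.linalg.norm(scores[0] - scores[2])
    print(d_ab < d_ac)  # True: similar phases cluster together in PCA space
    ```

    HCA would then be applied to the same score (or descriptor) matrix to cut the dendrogram into column classes; the PCA step above is the part that is easy to show compactly.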

  13. Lung lobe modeling and segmentation with individualized surface meshes

    NASA Astrophysics Data System (ADS)

    Blaffert, Thomas; Barschdorf, Hans; von Berg, Jens; Dries, Sebastian; Franz, Astrid; Klinder, Tobias; Lorenz, Cristian; Renisch, Steffen; Wiemker, Rafael

    2008-03-01

    An automated segmentation of lung lobes in thoracic CT images is of interest for various diagnostic purposes like the quantification of emphysema or the localization of tumors within the lung. Although the separating lung fissures are visible in modern multi-slice CT-scanners, their contrast in the CT-image often does not separate the lobes completely. This makes it impossible to build a reliable segmentation algorithm without additional information. Our approach uses general anatomical knowledge represented in a geometrical mesh model to construct a robust lobe segmentation, which even gives reasonable estimates of lobe volumes if fissures are not visible at all. The paper describes the generation of the lung model mesh including lobes by an average volume model, its adaptation to individual patient data using a special fissure feature image, and a performance evaluation over a test data set showing an average segmentation accuracy of 1 to 3 mm.

  14. Relations of Maternal Socialization and Toddlers' Effortful Control to Children's Adjustment and Social Competence

    PubMed Central

    Spinrad, Tracy L.; Eisenberg, Nancy; Gaertner, Bridget; Popp, Tierney; Smith, Cynthia L.; Kupfer, Anne; Greving, Karissa; Liew, Jeffrey; Hofer, Claire

    2007-01-01

    The authors examined the relations of maternal supportive parenting to effortful control and internalizing problems (i.e., separation distress, inhibition to novelty), externalizing problems, and social competence when toddlers were 18 months old (n = 256) and a year later (n = 230). Mothers completed the Coping With Toddlers' Negative Emotions Scale, and their sensitivity and warmth were observed. Toddlers' effortful control was measured with a delay task and adults' reports (Early Childhood Behavior Questionnaire). Toddlers' social functioning was assessed with the Infant/Toddler Social and Emotional Assessment. Within each age, children's regulation significantly mediated the relation between supportive parenting and low levels of externalizing problems and separation distress, and high social competence. When using stronger tests of mediation, controlling for stability over time, the authors found only partial evidence for mediation. The findings suggest these relations may be set at an early age. PMID:17723043

  15. Gas separation membrane module assembly

    DOEpatents

    Wynn, Nicholas P [Palo Alto, CA; Fulton, Donald A [Fairfield, CA

    2009-03-31

    A gas-separation membrane module assembly and a gas-separation process using the assembly. The assembly includes a set of tubes, each containing gas-separation membranes, arranged within a housing. The housing contains a tube sheet that divides the space within the housing into two gas-tight spaces. A permeate collection system within the housing gathers permeate gas from the tubes for discharge from the housing.

  16. Electrochemical CO2 and O2 separation for crew and plant environments

    NASA Technical Reports Server (NTRS)

    Lee, M. G.; Grigger, David J.; Foerg, Sandra L.

    1992-01-01

    The study describes a closed ecosystem concept that includes electrochemical CO2 and O2 separators and a moisture condenser/separator for maintaining CO2, O2, and humidity levels in the crew and plant habitats at their respective optimal conditions. The key processes of this concept are aqueous electrolyte-based electrochemical CO2 and O2 separations. The principles and cell characteristics of these electrochemical gas separation processes are described. Also presented are descriptions of test hardware and the test results of the Electrochemical CO2 Separator (ECS) and the Electrochemical O2 Separator (EOS), and the combination of the ECS and the EOS. The test results proved that the ECS and EOS processes for the combined concept are viable.

  17. Method and apparatus for processing a test sample to concentrate an analyte in the sample from a solvent in the sample

    DOEpatents

    Turner, Terry D.; Beller, Laurence S.; Clark, Michael L.; Klingler, Kerry M.

    1997-01-01

    A method of processing a test sample to concentrate an analyte in the sample from a solvent in the sample includes: a) boiling the test sample containing the analyte and solvent in a boiling chamber to a temperature greater than or equal to the solvent boiling temperature and less than the analyte boiling temperature to form a rising sample vapor mixture; b) passing the sample vapor mixture from the boiling chamber to an elongated primary separation tube, the separation tube having internal sidewalls and a longitudinal axis, the longitudinal axis being angled between vertical and horizontal and thus having an upper region and a lower region; c) collecting the physically transported liquid analyte on the internal sidewalls of the separation tube; and d) flowing the collected analyte along the angled internal sidewalls of the separation tube to and past the separation tube lower region. The invention also includes passing a turbulence-inducing wave through a vapor mixture to separate physically transported liquid second material from vaporized first material. Apparatus are also disclosed for effecting separations. Further disclosed is a fluidically powered liquid test sample withdrawal apparatus for withdrawing a liquid test sample from a test sample container and for cleaning the test sample container.

  18. Method and apparatus for processing a test sample to concentrate an analyte in the sample from a solvent in the sample

    DOEpatents

    Turner, T.D.; Beller, L.S.; Clark, M.L.; Klingler, K.M.

    1997-10-14

    A method of processing a test sample to concentrate an analyte in the sample from a solvent in the sample includes: (a) boiling the test sample containing the analyte and solvent in a boiling chamber to a temperature greater than or equal to the solvent boiling temperature and less than the analyte boiling temperature to form a rising sample vapor mixture; (b) passing the sample vapor mixture from the boiling chamber to an elongated primary separation tube, the separation tube having internal sidewalls and a longitudinal axis, the longitudinal axis being angled between vertical and horizontal and thus having an upper region and a lower region; (c) collecting the physically transported liquid analyte on the internal sidewalls of the separation tube; and (d) flowing the collected analyte along the angled internal sidewalls of the separation tube to and past the separation tube lower region. The invention also includes passing a turbulence-inducing wave through a vapor mixture to separate physically transported liquid second material from vaporized first material. Apparatus is also disclosed for effecting separations. Further disclosed is a fluidically powered liquid test sample withdrawal apparatus for withdrawing a liquid test sample from a test sample container and for cleaning the test sample container. 8 figs.

  19. Learning the Constellations: From Junior High to Undergraduate Descriptive Astronomy Class

    NASA Astrophysics Data System (ADS)

    Stephens, Denise C.; Hintz, Eric G.; Hintz, Maureen; Lawler, Jeannette; Jones, Michael; Bench, Nathan

    2015-01-01

    As part of two separate studies we have examined the ability of students to learn and remember a group of constellations, bright stars, and deep sky objects. For a group of junior high students we tested their knowledge of only the constellations by giving them a 'constellation quiz' without any instruction. We then provided the students with a lab session, and retested. We also tested a large number of undergraduate students in our descriptive astronomy classes, but in this case the material comprised 30 constellations, 17 bright stars, and 3 deep sky objects. The undergraduate students were tested in a number of ways: 1) pre-testing without instruction, 2) self-reporting of knowledge, 3) normal constellation quizzes as part of the class, and 4) retesting students from previous semesters. This provided us with a set of baseline measurements, allowed us to track the learning curve, and test retention of the material. We will present our early analysis of the data.

  20. Detecting central fixation by means of artificial neural networks in a pediatric vision screener using retinal birefringence scanning.

    PubMed

    Gramatikov, Boris I

    2017-04-27

    Reliable detection of central fixation and eye alignment is essential in the diagnosis of amblyopia ("lazy eye"), which can lead to blindness. Our lab has developed and reported earlier a pediatric vision screener that performs scanning of the retina around the fovea and analyzes changes in the polarization state of light as the scan progresses. Depending on the direction of gaze and the instrument design, the screener produces several signal frequencies that can be utilized in the detection of central fixation. The objective of this study was to compare artificial neural networks with classical statistical methods, with respect to their ability to detect central fixation reliably. A classical feedforward, pattern recognition, two-layer neural network architecture was used, consisting of one hidden layer and one output layer. The network has four inputs, representing normalized spectral powers at four signal frequencies generated during retinal birefringence scanning. The hidden layer contains four neurons. The output suggests presence or absence of central fixation. Backpropagation was used to train the network, using the gradient descent algorithm and the cross-entropy error as the performance function. The network was trained, validated and tested on a set of controlled calibration data obtained from 600 measurements from ten eyes in a previous study, and was additionally tested on a clinical set of 78 eyes, independently diagnosed by an ophthalmologist. In the first part of this study, a neural network was designed around the calibration set. With a proper architecture and training, the network provided performance that was comparable to classical statistical methods, allowing perfect separation between the central and paracentral fixation data, with both the sensitivity and the specificity of the instrument being 100%. In the second part of the study, the neural network was applied to the clinical data. It allowed reliable separation between normal subjects and affected subjects, its accuracy again matching that of the statistical methods. With a proper choice of a neural network architecture and a good, uncontaminated training data set, the artificial neural network can be an efficient classification tool for detecting central fixation based on retinal birefringence scanning.
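    A minimal sketch of the architecture this abstract describes (four inputs of normalized spectral power, one hidden layer of four sigmoid neurons, one sigmoid output, trained by gradient descent on the cross-entropy error) is shown below. The training data here are synthetic stand-ins, not the authors' calibration measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Synthetic toy set: "central fixation" samples carry more power in the
    # first two frequencies, "paracentral" samples in the last two.
    X = rng.normal(0, 0.1, (200, 4))
    y = (rng.random(200) < 0.5).astype(float)
    X[y == 1, :2] += 1.0
    X[y == 0, 2:] += 1.0

    # 4-4-1 network: one hidden layer of four neurons, one sigmoid output.
    W1 = rng.normal(0, 0.5, (4, 4)); b1 = np.zeros(4)
    W2 = rng.normal(0, 0.5, 4);      b2 = 0.0

    lr = 0.5
    for _ in range(2000):                 # batch gradient descent
        H = sigmoid(X @ W1 + b1)          # hidden activations
        p = sigmoid(H @ W2 + b2)          # output probability of central fixation
        # For sigmoid + cross-entropy, the output-layer error term is p - y.
        d2 = (p - y) / len(y)
        dH = np.outer(d2, W2) * H * (1 - H)   # backpropagated hidden error
        W2 -= lr * (H.T @ d2); b2 -= lr * d2.sum()
        W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

    acc = np.mean((p > 0.5) == (y == 1))
    print(acc > 0.95)  # the toy classes are well separated, so accuracy is high
    ```

    The real study additionally held out validation and clinical test sets; the loop above only illustrates the forward pass and the backpropagation updates.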

  1. Graph-matching based CTA.

    PubMed

    Maksimov, Dmitry; Hesser, Jürgen; Brockmann, Carolin; Jochum, Susanne; Dietz, Tiina; Schnitzer, Andreas; Düber, Christoph; Schoenberg, Stefan O; Diehl, Steffen

    2009-12-01

    Separating bone, calcification, and vessels in computed tomography angiography (CTA) allows for a detailed diagnosis of vessel stenosis. This paper presents a new, graph-based technique that solves this difficult problem with high accuracy. The approach requires one native data set and one that is contrast enhanced. On each data set, an attributed level-graph is derived, and the two graphs are matched by dynamic programming to differentiate between bone on the one hand and vessel/calcification on the other. Lumen and calcified regions are then separated by a profile technique. Evaluation is based on data from vessels of the pelvis and lower extremities of elderly patients. Due to substantial calcification and motion of patients between and during the acquisitions, the underlying approach is tested on a class of difficult cases. Analysis requires 3-5 min on a Pentium IV 3 GHz for a 700 MByte data set. Among 37 patients, our approach correctly identifies all three components in 80% of cases, as compared with visual control. Critical inconsistencies with visual inspection were found in 6% of all cases; 70% of these inconsistencies are due to small vessels that 1) have a diameter near the resolution of the CT and 2) pass next to bony structures. All other remaining deviations are found in an incorrect handling of the iliac artery, since the slice thickness is near the diameter of this vessel and since the orientation is not in the cranio-caudal direction. Increasing resolution is thus expected to solve many of the aforementioned difficulties.
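    As a loose illustration of the dynamic-programming matching step, the sketch below aligns two attribute sequences (standing in for per-level graph attributes, each level reduced to one scalar such as a region size) with an edit-distance-style recurrence. The attribute values and gap cost are invented; the paper's actual graphs carry richer attributes.

    ```python
    # Edit-distance-style DP alignment of two attributed sequences.
    def dp_align_cost(a, b, gap=1.0):
        """Minimal alignment cost between attribute sequences a and b."""
        n, m = len(a), len(b)
        D = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            D[i][0] = i * gap              # cost of leaving a-levels unmatched
        for j in range(1, m + 1):
            D[0][j] = j * gap              # cost of leaving b-levels unmatched
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                match = D[i - 1][j - 1] + abs(a[i - 1] - b[j - 1])
                D[i][j] = min(match, D[i - 1][j] + gap, D[i][j - 1] + gap)
        return D[n][m]

    # The native levels match the contrast-enhanced ones except for one extra
    # (contrast-filled vessel) level, so the alignment cost stays low:
    print(dp_align_cost([3.0, 5.0, 2.0], [3.0, 5.0, 4.0, 2.0]))  # 1.0
    ```

    In the paper's setting, levels that align cheaply across both scans correspond to bone, while levels present only in the contrast-enhanced scan point to vessel or calcification.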

  2. Turning off the spigot: reducing drug-resistant tuberculosis transmission in resource-limited settings.

    PubMed

    Nardell, E; Dharmadhikari, A

    2010-10-01

    Ongoing transmission and re-infection, primarily in congregate settings, is a key factor fueling the global multidrug-resistant/extensively drug-resistant tuberculosis (MDR/XDR-TB) epidemic, especially in association with the human immunodeficiency virus. Even as efforts to broadly implement conventional TB transmission control measures begin, current strategies may be incompletely effective under the overcrowded conditions extant in high-burden, resource-limited settings. Longstanding evidence suggesting that TB patients on effective therapy rapidly become non-infectious and that unsuspected, untreated TB cases account for most transmission makes a strong case for the implementation of rapid point-of-care diagnostics coupled with fully supervised effective treatment. Among the most important decisions affecting transmission, the choice of an MDR-TB treatment model that includes community-based treatment may offer important advantages over hospital or clinic-based care, not only in cost and effectiveness, but also in transmission control. In the community, too, rapid identification of infectious cases, especially drug-resistant cases, followed by effective, fully supervised treatment, is critical to stopping transmission. Among the conventional interventions available, we present a simple triage and separation strategy, point out that separation is intimately linked to the design and engineering of clinical space, and call attention to the pros and cons of natural ventilation, simple mechanical ventilation systems, germicidal ultraviolet air disinfection, fit-tested respirators on health care workers, and short-term use of masks on patients before treatment is initiated.

  3. Development and Validation of a qRT-PCR Classifier for Lung Cancer Prognosis

    PubMed Central

    Chen, Guoan; Kim, Sinae; Taylor, Jeremy MG; Wang, Zhuwen; Lee, Oliver; Ramnath, Nithya; Reddy, Rishindra M; Lin, Jules; Chang, Andrew C; Orringer, Mark B; Beer, David G

    2011-01-01

    Purpose This prospective study aimed to develop a robust and clinically-applicable method to identify high-risk early stage lung cancer patients and then to validate this method for use in future translational studies. Patients and Methods Three published Affymetrix microarray data sets representing 680 primary tumors were used in the survival-related gene selection procedure using clustering, Cox model and random survival forest (RSF) analysis. A final set of 91 genes was selected and tested as a predictor of survival using a qRT-PCR-based assay utilizing an independent cohort of 101 lung adenocarcinomas. Results The RSF model built from 91 genes in the training set predicted patient survival in an independent cohort of 101 lung adenocarcinomas, with a prediction error rate of 26.6%. The mortality risk index (MRI) was significantly related to survival (Cox model p < 0.00001) and separated all patients into low, medium, and high-risk groups (HR = 1.00, 2.82, 4.42). The MRI was also related to survival in stage 1 patients (Cox model p = 0.001), separating patients into low, medium, and high-risk groups (HR = 1.00, 3.29, 3.77). Conclusions The development and validation of this robust qRT-PCR platform allows prediction of patient survival with early stage lung cancer. Utilization will now allow investigators to evaluate it prospectively by incorporation into new clinical trials with the goal of personalized treatment of lung cancer patients and improving patient survival. PMID:21792073
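    The three-way risk stratification described above can be sketched as a tertile split on the mortality risk index (MRI); the index values below are hypothetical, not the study's data.

    ```python
    import numpy as np

    # Illustrative sketch: assign low/medium/high-risk labels by tertiles of a
    # mortality risk index, as the abstract's grouping does (hypothetical MRIs).
    def risk_groups(mri):
        """Return 'low'/'medium'/'high' labels by tertile cut-points."""
        lo, hi = np.percentile(mri, [100 / 3, 200 / 3])
        return np.where(mri <= lo, "low",
                        np.where(mri <= hi, "medium", "high"))

    mri = np.array([0.1, 0.4, 0.2, 0.9, 0.5, 0.7, 0.3, 0.8, 0.6])
    print(risk_groups(mri))  # three patients fall into each group
    ```

    In the study itself, each group's hazard ratio relative to the low-risk group was then estimated with a Cox model; the split above is only the grouping step.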

  4. Turning off the spigot: reducing drug-resistant tuberculosis transmission in resource-limited settings

    PubMed Central

    Nardell, E.; Dharmadhikari, A.

    2013-01-01

    Ongoing transmission and re-infection, primarily in congregate settings, is a key factor fueling the global multidrug-resistant/extensively drug-resistant tuberculosis (MDR/XDR-TB) epidemic, especially in association with the human immunodeficiency virus. Even as efforts to broadly implement conventional TB transmission control measures begin, current strategies may be incompletely effective under the overcrowded conditions extant in high-burden, resource-limited settings. Longstanding evidence suggesting that TB patients on effective therapy rapidly become non-infectious and that unsuspected, untreated TB cases account for most transmission makes a strong case for the implementation of rapid point-of-care diagnostics coupled with fully supervised effective treatment. Among the most important decisions affecting transmission, the choice of an MDR-TB treatment model that includes community-based treatment may offer important advantages over hospital or clinic-based care, not only in cost and effectiveness, but also in transmission control. In the community, too, rapid identification of infectious cases, especially drug-resistant cases, followed by effective, fully supervised treatment, is critical to stopping transmission. Among the conventional interventions available, we present a simple triage and separation strategy, point out that separation is intimately linked to the design and engineering of clinical space, and call attention to the pros and cons of natural ventilation, simple mechanical ventilation systems, germicidal ultraviolet air disinfection, fit-tested respirators on health care workers, and short-term use of masks on patients before treatment is initiated. PMID:20843413

  5. Joint Inversion of Vp, Vs, and Resistivity at SAFOD

    NASA Astrophysics Data System (ADS)

    Bennington, N. L.; Zhang, H.; Thurber, C. H.; Bedrosian, P. A.

    2010-12-01

    Seismic and resistivity models at SAFOD have been derived from separate inversions that show significant spatial similarity between the main model features. Previous work [Zhang et al., 2009] used cluster analysis to make lithologic inferences from trends in the seismic and resistivity models. We have taken this one step further by developing a joint inversion scheme that uses the cross-gradient penalty function to achieve structurally similar Vp, Vs, and resistivity images that adequately fit the seismic and magnetotelluric (MT) data without forcing model similarity where none exists. The new inversion code, tomoDDMT, merges the seismic inversion code tomoDD [Zhang and Thurber, 2003] and the MT inversion code Occam2DMT [Constable et al., 1987; deGroot-Hedlin and Constable, 1990]. We are exploring the utility of the cross-gradient penalty function in improving models of fault-zone structure at SAFOD on the San Andreas Fault in the Parkfield, California area. Two different sets of end-member starting models are being tested. One set is the separately inverted Vp, Vs, and resistivity models. The other set consists of simple, geologically based block models developed from borehole information at the SAFOD drill site and a simplified version of features seen in geophysical models at Parkfield. For both starting models, our preliminary results indicate that the inversion produces a converging solution with resistivity, seismic, and cross-gradient misfits decreasing over successive iterations. We also compare the jointly inverted Vp, Vs, and resistivity models to borehole information from SAFOD to provide a "ground truth" comparison.

  6. Aural Acoustic Stapedius-Muscle Reflex Threshold Procedures to Test Human Infants and Adults.

    PubMed

    Keefe, Douglas H; Feeney, M Patrick; Hunter, Lisa L; Fitzpatrick, Denis F

    2017-02-01

    Power-based procedures are described to measure acoustic stapedius-muscle reflex threshold and supra-threshold responses in human adult and infant ears at frequencies from 0.2 to 8 kHz. The stimulus set included five clicks in which four pulsed activators were placed between each pair of clicks, with each stimulus set separated from the next by 0.79 s to allow for reflex decay. Each click response was used to detect the presence of reflex effects across frequency that were elicited by a pulsed broadband-noise or tonal activator in the ipsilateral or contralateral test ear. Acoustic reflex shifts were quantified in terms of the difference in absorbed sound power between the initial baseline click and the later four clicks in each set. Acoustic reflex shifts were measured over a 40-dB range of pulsed activators, and the acoustic reflex threshold was objectively calculated using a maximum likelihood procedure. To illustrate the principles underlying these new reflex tests, reflex shifts in absorbed sound power and absorbance are presented for data acquired in an adult ear with normal hearing and in two infant ears in the initial and follow-up newborn hearing screening exams, one with normal hearing and the other with a conductive hearing loss. The use of absorbed sound power was helpful in classifying an acoustic reflex shift as present or absent. The resulting reflex tests are in use in a large study of wideband clinical diagnosis and monitoring of middle-ear and cochlear function in infant and adult ears.
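    The shift computation described above, absorbed sound power of the four later clicks relative to the baseline click in each set, can be sketched as follows. The numbers and the fixed-criterion threshold rule are illustrative placeholders; the study itself estimates threshold with a maximum-likelihood fit:

```python
import numpy as np

# Hypothetical absorbed-power data: rows = activator level (dB SPL),
# columns = the five clicks in one stimulus set (column 0 is the baseline).
levels = np.array([60, 70, 80, 90, 100])
absorbed_power = np.array([
    [1.00, 1.00, 1.00, 1.00, 1.00],   # no reflex
    [1.00, 0.99, 0.99, 0.99, 0.99],
    [1.00, 0.95, 0.94, 0.94, 0.93],
    [1.00, 0.88, 0.86, 0.85, 0.85],
    [1.00, 0.80, 0.78, 0.77, 0.76],
])

# Reflex shift: dB difference between each of the four later clicks and
# the baseline click, averaged within each stimulus set.
shift_db = 10.0 * np.log10(absorbed_power[:, 1:] / absorbed_power[:, :1])
mean_shift = shift_db.mean(axis=1)

# Simplified threshold rule: lowest activator level whose mean shift
# exceeds a fixed criterion (a stand-in for the maximum-likelihood step).
criterion_db = -0.2
threshold = levels[np.argmax(mean_shift < criterion_db)]
```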

  7. Pilot study of a novel tool for input-free automated identification of transition zone prostate tumors using T2- and diffusion-weighted signal and textural features.

    PubMed

    Stember, Joseph N; Deng, Fang-Ming; Taneja, Samir S; Rosenkrantz, Andrew B

    2014-08-01

    To present results of a pilot study to develop software that identifies regions suspicious for prostate transition zone (TZ) tumor, free of user input. Eight patients with TZ tumors were used to develop the model. In these training cases, the software tiled the TZ with 4 × 4-voxel "supervoxels," 80% of which were used to train a Naïve Bayes classifier to detect tumors based on selection of the most accurate predictors among various signal and textural features on T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) maps. Features tested as inputs were: average signal, signal standard deviation, energy, contrast, correlation, homogeneity, and entropy (all defined on T2WI); and average ADC. A forward selection scheme was applied to the remaining 20% of training-set supervoxels to identify the most important inputs. Each of 100 iterations selected T2WI energy and average ADC, which were therefore deemed the optimal model inputs. The resulting two-feature model was applied blindly to a separate set of ten test patients, half with TZ tumors, again without operator input of suspicious foci. The software correctly predicted the presence or absence of TZ tumor in all test patients. Furthermore, locations of predicted tumors corresponded spatially with locations of biopsies that had confirmed their presence. Preliminary findings suggest that this tool has potential to accurately predict TZ tumor presence and location without operator input. © 2013 Wiley Periodicals, Inc.
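    A minimal sketch of the modeling approach, a Gaussian Naïve Bayes classifier combined with greedy forward feature selection, is shown below on synthetic data. The data, feature roles, and accuracy-based selection criterion are stand-ins, not the study's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for supervoxel features: columns 0 and 1 play the
# roles of the two informative inputs (T2WI energy, average ADC); the
# remaining columns are uninformative noise features.
n = 400
X = rng.normal(size=(n, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def gnb_fit_predict(X_tr, y_tr, X_te):
    """Minimal Gaussian Naive Bayes: per-class feature means and variances."""
    scores = np.zeros((len(X_te), 2))
    for c in (0, 1):
        Xc = X_tr[y_tr == c]
        mu, var = Xc.mean(axis=0), Xc.var(axis=0) + 1e-9
        log_lik = -0.5 * (np.log(2 * np.pi * var) + (X_te - mu) ** 2 / var)
        scores[:, c] = log_lik.sum(axis=1) + np.log(len(Xc) / len(X_tr))
    return scores.argmax(axis=1)

def forward_select(X, y, k):
    """Greedy forward selection: repeatedly add the feature that most
    improves classification accuracy of the Naive Bayes model."""
    chosen = []
    for _ in range(k):
        accs = {j: (gnb_fit_predict(X[:, chosen + [j]], y,
                                    X[:, chosen + [j]]) == y).mean()
                for j in range(X.shape[1]) if j not in chosen}
        chosen.append(max(accs, key=accs.get))
    return chosen

selected = forward_select(X, y, k=2)  # picks the two informative columns
```

    Repeating this selection over resampled subsets, as the study does over 100 iterations, indicates how stable the chosen feature pair is.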

  8. Alteration of topoisomerase II-alpha gene in human breast cancer: association with responsiveness to anthracycline-based chemotherapy.

    PubMed

    Press, Michael F; Sauter, Guido; Buyse, Marc; Bernstein, Leslie; Guzman, Roberta; Santiago, Angela; Villalobos, Ivonne E; Eiermann, Wolfgang; Pienkowski, Tadeusz; Martin, Miguel; Robert, Nicholas; Crown, John; Bee, Valerie; Taupin, Henry; Flom, Kerry J; Tabah-Fisch, Isabelle; Pauletti, Giovanni; Lindsay, Mary-Ann; Riva, Alessandro; Slamon, Dennis J

    2011-03-01

    Approximately 35% of HER2-amplified breast cancers have coamplification of the topoisomerase II-alpha (TOP2A) gene encoding an enzyme that is a major target of anthracyclines. This study was designed to evaluate whether TOP2A gene alterations may predict incremental responsiveness to anthracyclines in some breast cancers. A total of 4,943 breast cancers were analyzed for alterations in TOP2A and HER2. Primary tumor tissues from patients with metastatic breast cancer treated in a trial of chemotherapy plus/minus trastuzumab were studied for amplification/deletion of TOP2A and HER2 as a test set followed by evaluation of malignancies from two separate, large trials for changes in these same genes as a validation set. Association between these alterations and clinical outcomes was determined. Test set cases containing HER2 amplification treated with doxorubicin and cyclophosphamide (AC) plus trastuzumab demonstrated longer progression-free survival compared to those treated with AC alone (P = .0002). However, patients treated with AC alone whose tumors contained HER2/TOP2A coamplification experienced a similar improvement in survival (P = .004). Conversely, for patients treated with paclitaxel, HER2/TOP2A coamplification was not associated with improved outcomes. These observations were confirmed in a larger validation set, where HER2/TOP2A coamplification was again associated with longer survival when only anthracycline-containing chemotherapy was used for treatment compared with outcome in HER2-positive cancers lacking TOP2A coamplification. In a study involving nearly 5,000 breast malignancies, both test set and validation set demonstrate that TOP2A coamplification, not HER2 amplification, is the clinically useful predictive marker of an incremental response to anthracycline-based chemotherapy. Absence of HER2/TOP2A coamplification may indicate a more restricted efficacy advantage for breast cancers than previously thought.

  9. Alteration of Topoisomerase II–Alpha Gene in Human Breast Cancer: Association With Responsiveness to Anthracycline-Based Chemotherapy

    PubMed Central

    Press, Michael F.; Sauter, Guido; Buyse, Marc; Bernstein, Leslie; Guzman, Roberta; Santiago, Angela; Villalobos, Ivonne E.; Eiermann, Wolfgang; Pienkowski, Tadeusz; Martin, Miguel; Robert, Nicholas; Crown, John; Bee, Valerie; Taupin, Henry; Flom, Kerry J.; Tabah-Fisch, Isabelle; Pauletti, Giovanni; Lindsay, Mary-Ann; Riva, Alessandro; Slamon, Dennis J.

    2011-01-01

    Purpose Approximately 35% of HER2-amplified breast cancers have coamplification of the topoisomerase II-alpha (TOP2A) gene encoding an enzyme that is a major target of anthracyclines. This study was designed to evaluate whether TOP2A gene alterations may predict incremental responsiveness to anthracyclines in some breast cancers. Methods A total of 4,943 breast cancers were analyzed for alterations in TOP2A and HER2. Primary tumor tissues from patients with metastatic breast cancer treated in a trial of chemotherapy plus/minus trastuzumab were studied for amplification/deletion of TOP2A and HER2 as a test set followed by evaluation of malignancies from two separate, large trials for changes in these same genes as a validation set. Association between these alterations and clinical outcomes was determined. Results Test set cases containing HER2 amplification treated with doxorubicin and cyclophosphamide (AC) plus trastuzumab demonstrated longer progression-free survival compared to those treated with AC alone (P = .0002). However, patients treated with AC alone whose tumors contained HER2/TOP2A coamplification experienced a similar improvement in survival (P = .004). Conversely, for patients treated with paclitaxel, HER2/TOP2A coamplification was not associated with improved outcomes. These observations were confirmed in a larger validation set, where HER2/TOP2A coamplification was again associated with longer survival when only anthracycline-containing chemotherapy was used for treatment compared with outcome in HER2-positive cancers lacking TOP2A coamplification. Conclusion In a study involving nearly 5,000 breast malignancies, both test set and validation set demonstrate that TOP2A coamplification, not HER2 amplification, is the clinically useful predictive marker of an incremental response to anthracycline-based chemotherapy. Absence of HER2/TOP2A coamplification may indicate a more restricted efficacy advantage for breast cancers than previously thought.
PMID:21189395

  10. Updated Prognostic Model for Predicting Overall Survival in First-Line Chemotherapy for Patients With Metastatic Castration-Resistant Prostate Cancer

    PubMed Central

    Halabi, Susan; Lin, Chen-Yen; Kelly, W. Kevin; Fizazi, Karim S.; Moul, Judd W.; Kaplan, Ellen B.; Morris, Michael J.; Small, Eric J.

    2014-01-01

    Purpose Prognostic models for overall survival (OS) for patients with metastatic castration-resistant prostate cancer (mCRPC) are dated and do not reflect significant advances in treatment options available for these patients. This work developed and validated an updated prognostic model to predict OS in patients receiving first-line chemotherapy. Methods Data from a phase III trial of 1,050 patients with mCRPC were used (Cancer and Leukemia Group B CALGB-90401 [Alliance]). The data were randomly split into training and testing sets. A separate phase III trial served as an independent validation set. Adaptive least absolute shrinkage and selection operator selected eight factors prognostic for OS. A predictive score was computed from the regression coefficients and used to classify patients into low- and high-risk groups. The model was assessed for its predictive accuracy using the time-dependent area under the curve (tAUC). Results The model included Eastern Cooperative Oncology Group performance status, disease site, lactate dehydrogenase, opioid analgesic use, albumin, hemoglobin, prostate-specific antigen, and alkaline phosphatase. Median OS values in the high- and low-risk groups, respectively, in the testing set were 17 and 30 months (hazard ratio [HR], 2.2; P < .001); in the validation set they were 14 and 26 months (HR, 2.9; P < .001). The tAUCs were 0.73 (95% CI, 0.70 to 0.73) and 0.76 (95% CI, 0.72 to 0.76) in the testing and validation sets, respectively. Conclusion An updated prognostic model for OS in patients with mCRPC receiving first-line chemotherapy was developed and validated on an external set. This model can be used to predict OS, as well as to better select patients to participate in trials on the basis of their prognosis. PMID:24449231
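    The risk-grouping step described above can be illustrated as follows: compute a linear predictor from the fitted regression coefficients and dichotomize it at a cutoff. The coefficients, data, and median-split rule below are hypothetical stand-ins; the paper derives its score and cutoff from the CALGB-90401 training data:

```python
import numpy as np

# Hypothetical coefficients for the eight prognostic factors (ECOG
# performance status, disease site, LDH, opioid analgesic use, albumin,
# hemoglobin, PSA, alkaline phosphatase). Signs and magnitudes are
# illustrative only, not the paper's fitted values.
coef = np.array([0.45, 0.30, 0.40, 0.35, -0.50, -0.40, 0.25, 0.30])

rng = np.random.default_rng(1)
Z = rng.normal(size=(200, 8))   # standardized factor values, 200 patients

risk_score = Z @ coef           # linear predictor of the survival model

# Dichotomize at the median score: patients above the cutoff form the
# high-risk group, the remainder the low-risk group.
cutoff = np.median(risk_score)
high_risk = risk_score > cutoff
```

    Predictive accuracy of such a score is then summarized with a time-dependent AUC, which requires the survival times and censoring indicators omitted from this sketch.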

  11. Free-Flight Tests of 0.11-Scale North American F-100 Airplane Wings to Investigate the Possibility of Flutter in Transonic Speed Range at Varying Angles of Attack

    NASA Technical Reports Server (NTRS)

    O'Kelly, Burke R.

    1954-01-01

    Free-flight tests in the transonic speed range utilizing rocket-propelled models have been made on three pairs of 0.11-scale North American F-100 airplane wings having an aspect ratio of 3.47, a taper ratio of 0.308, 45 degree sweepback at the quarter-chord line, and thickness ratios of 5 and 9 percent to investigate the possibility of flutter. Data from tests of two other rocket-propelled models which accidentally fluttered during a drag investigation of the North American F-100 airplane are also presented. The first set of wings (5 percent thick) was tested on a model which was disturbed in pitch by a moving tail and reached a maximum Mach number of 0.85. The wings encountered mild oscillations near the first-bending frequency at high lift coefficients. The second set of wings (9 percent thick) was tested up to a maximum Mach number of 0.95 at two angles of attack provided by small rocket motors installed in the nose of the model. No oscillations resembling flutter were encountered during the coasting flight between separation from the booster and sustainer firing (Mach numbers from 0.86 to 0.82) or during the sustainer firing at accelerations of about 8g up to the maximum Mach number of the test (0.95). The third set of wings was similar to the first set and was tested up to a maximum Mach number of 1.24. A mild flutter at frequencies near the first-bending frequency of the wings was encountered between Mach numbers of 1.15 and 1.06 during both accelerating and coasting flight. The two drag models, which were 0.11-scale models of the North American F-100 airplane configuration, reached a maximum Mach number of 1.77. The wings of these models had bending and torsional frequencies which were 40 and 89 percent, respectively, of the calculated scaled frequencies of the full-scale 7-percent-thick wing. Both models experienced flutter of the same type as that experienced by the third set of wings.

  12. Using factorial experimental design to evaluate the separation of plastics by froth flotation.

    PubMed

    Salerno, Davide; Jordão, Helga; La Marca, Floriana; Carvalho, M Teresa

    2018-03-01

    This paper proposes the use of factorial experimental design as a standard experimental method in the application of froth flotation to plastic separation, instead of the commonly used OVAT method (manipulation of one variable at a time). Furthermore, as is common practice in minerals flotation, the parameters of the kinetic model were used as process responses rather than the recovery of plastics in the separation products. To explain and illustrate the proposed methodology, a set of 32 experimental tests was performed using mixtures of two polymers with approximately the same density, PVC and PS (with mineral charges), with particle size ranging from 2 to 4 mm. The manipulated variables were frother concentration, air flow rate and pH. A three-level full factorial design was conducted. The models establishing the relationships between the manipulated variables and their interactions with the responses (first-order kinetic model parameters) were built. The Corrected Akaike Information Criterion was used to select the best-fit model, and an analysis of variance (ANOVA) was conducted to identify the statistically significant terms of the model. It was shown that froth flotation can be used to efficiently separate PVC from PS with mineral charges by reducing the floatability of PVC, which largely depends on the action of pH. Within the tested interval, this is the factor that most affects the flotation rate constants. The results obtained show that the pure error may be of the same magnitude as the sum of squares of the errors, suggesting that there is significant variability within the same experimental conditions. Thus, special care is needed when evaluating and generalizing the process. Copyright © 2017 Elsevier Ltd. All rights reserved.
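    A three-level full factorial in three factors enumerates 3^3 = 27 runs, which can be generated directly, and the first-order kinetic response used as the modeled output can be written alongside it. The factor names come from the abstract; the relation between the 27 factorial runs and the 32 reported tests (e.g., replicated runs) is an assumption, and the kinetic form is the standard first-order flotation model:

```python
import numpy as np
from itertools import product

# Coded levels (-1, 0, +1) for the three manipulated variables.
levels = (-1, 0, 1)
factors = ("frother_conc", "air_flow", "pH")

# Full factorial: every combination of the three levels for each factor.
design = [dict(zip(factors, combo))
          for combo in product(levels, repeat=len(factors))]
# 3 factors at 3 levels -> 3**3 = 27 distinct runs.

def first_order_recovery(t, r_max, k):
    """First-order flotation kinetics R(t) = R_max * (1 - exp(-k * t));
    the fitted (r_max, k) pairs serve as the process responses."""
    return r_max * (1.0 - np.exp(-k * np.asarray(t, dtype=float)))
```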

  13. Flight Test Analysis of the Forces and Moments Imparted on a B737-100 Airplane During Wake Vortex Encounters

    NASA Technical Reports Server (NTRS)

    Roberts, Chistopher L.

    2001-01-01

    Aircraft travel has become a major form of transportation. Several of our major airports are operating near their capacity limit, increasing congestion and delays for travelers. As a result, the National Aeronautics and Space Administration (NASA) has been working in conjunction with the Federal Aviation Administration (FAA), airline operators, and the airline industry to increase airport capacity without sacrificing public safety. One solution to the problem is to increase the number of airports and build new runways; yet, this solution is becoming increasingly difficult due to limited space. A better solution is to increase the throughput per runway. This solution increases the possibility that one aircraft will encounter the trailing wake of another aircraft. Hazardous wake vortex encounters occur when an aircraft encounters the wake produced by a heavier aircraft. This heavy-load aircraft produces high-intensity wake turbulence that redistributes the aerodynamic loads of trailing smaller aircraft. This situation is particularly hazardous for smaller aircraft during takeoffs and landings. In order to gain a better understanding of the wake-vortex/aircraft encounter phenomena, NASA Langley Research Center conducted a series of flight tests from 1995 through 1997. These tests were designed to gather data for the development of a wake-encounter and wake-measurement data set with the accompanying atmospheric state information. This data set is being compiled into a database that can be used by wake vortex researchers to compare with experimental and computational results. The purpose of this research is to derive and implement a procedure for calculating the wake-vortex/aircraft interaction portion of that database by using the data recorded during those flight tests. There were three objectives to this research. Initially, the wake-induced forces and moments from each flight were analyzed based on varying flap deflection angles. 
The flap setting alternated between 15 and 30 degrees while the separation distance remained constant. This examination was performed to determine if increases in flap deflection would increase or decrease the effects of the wake-induced forces and moments. Next, the wake-induced forces and moments from each flight were analyzed based on separation distances of 1-3 nautical miles. In this comparison, flap deflection was held constant at 30 degrees. The purpose of this study was to determine if increased separation distances reduced the effects of the wake vortex on the aircraft. The last objective compared the wake-induced forces and moments of each flight as it executed a series of maneuvers through the wake vortex. This analysis was conducted to examine the impact of the wake on the B737 as it traversed the wake horizontally and vertically. Results from the first analysis indicated that there was no difference in wake effect at flap deflections of 15 and 30 degrees. This conclusion is evidenced by the cases of the wake-induced sideforce, rolling moment, and yawing moment. The wake-induced lift, drag, and pitching moment cases yielded less conclusive results. The second analysis compared the wake-induced forces and moments at separation distances of 1-3 nautical miles. Results indicated that there was no significant difference in the wake-induced lift, drag, sideforce, or yawing moment coefficients. The third analysis compared the wake-induced forces and moments based on different flight maneuvers. It was found that the wake-induced forces and moments had the greatest impact on out-to-in and in-to-out maneuvers.

  14. The performance of cleaner wrasse, Labroides dimidiatus, in a reversal learning task varies across experimental paradigms.

    PubMed

    Gingins, Simon; Marcadier, Fanny; Wismer, Sharon; Krattinger, Océane; Quattrini, Fausto; Bshary, Redouan; Binning, Sandra A

    2018-01-01

    Testing performance in controlled laboratory experiments is a powerful tool for understanding the extent and evolution of cognitive abilities in non-human animals. However, cognitive testing is prone to a number of potential biases, which, if unnoticed or unaccounted for, may affect the conclusions drawn. We examined whether slight modifications to the experimental procedure and apparatus used in a spatial task and reversal learning task affected performance outcomes in the bluestreak cleaner wrasse, Labroides dimidiatus (hereafter "cleaners"). Using two-alternative forced-choice tests, fish had to learn to associate a food reward with a side (left or right) in their holding aquarium. Individuals were tested in one of four experimental treatments that differed slightly in procedure and/or physical set-up. Cleaners from all four treatment groups were equally able to solve the initial spatial task. However, groups differed in their ability to solve the reversal learning task: no individuals solved the reversal task when tested in small tanks with a transparent partition separating the two options, whereas over 50% of individuals solved the task when performed in a larger tank, or with an opaque partition. These results clearly show that seemingly insignificant details to the experimental set-up matter when testing performance in a spatial task and might significantly influence the outcome of experiments. These results echo previous calls for researchers to exercise caution when designing methodologies for cognition tasks to avoid misinterpretations.

  15. Ares I Stage Separation System Design Certification Testing

    NASA Technical Reports Server (NTRS)

    Mayers, Stephen L.; Beard, Bernard B.; Smith, R. Kenneth; Patterson, Alan

    2009-01-01

    NASA is committed to the development of a new crew launch vehicle, the Ares I, that can support human missions to low Earth orbit (LEO) and the moon with unprecedented safety and reliability. NASA's Constellation program comprises the Ares I and Ares V launch vehicles, the Orion crew vehicle, and the Altair lunar lander. Based on historical precedent, stage separation is one of the most significant technical and systems engineering challenges that must be addressed in order to achieve this commitment. This paper surveys historical separation system tests that have been completed in order to ensure staging of other launch vehicles. Key separation system design trades evaluated for Ares I include single vs. dual separation plane options, retro-rockets vs. pneumatic gas actuators, small solid motor quantity/placement/timing, and continuous vs. clamshell interstage configuration options. Both subscale and full-scale tests are required to address the prediction of complex dynamic loading scenarios present during staging events. Test objectives such as separation system functionality, and pyroshock and debris field measurements for the full-scale tests are described. Discussion about the test article, support infrastructure and instrumentation are provided.

  16. Organic aerosol components derived from 25 AMS data sets across Europe using a consistent ME-2 based source apportionment approach

    NASA Astrophysics Data System (ADS)

    Crippa, M.; Canonaco, F.; Lanz, V. A.; Äijälä, M.; Allan, J. D.; Carbone, S.; Capes, G.; Ceburnis, D.; Dall'Osto, M.; Day, D. A.; DeCarlo, P. F.; Ehn, M.; Eriksson, A.; Freney, E.; Hildebrandt Ruiz, L.; Hillamo, R.; Jimenez, J. L.; Junninen, H.; Kiendler-Scharr, A.; Kortelainen, A.-M.; Kulmala, M.; Laaksonen, A.; Mensah, A. A.; Mohr, C.; Nemitz, E.; O'Dowd, C.; Ovadnevaite, J.; Pandis, S. N.; Petäjä, T.; Poulain, L.; Saarikoski, S.; Sellegri, K.; Swietlicki, E.; Tiitta, P.; Worsnop, D. R.; Baltensperger, U.; Prévôt, A. S. H.

    2014-06-01

    Organic aerosols (OA) represent one of the major constituents of submicron particulate matter (PM1) and comprise a huge variety of compounds emitted by different sources. Three intensive measurement field campaigns to investigate the aerosol chemical composition across Europe were carried out within the framework of the European Integrated Project on Aerosol Cloud Climate and Air Quality Interactions (EUCAARI) and the intensive campaigns of the European Monitoring and Evaluation Programme (EMEP) during 2008 (May-June and September-October) and 2009 (February-March). In this paper we focus on the identification of the main organic aerosol sources and we define a standardized methodology to perform source apportionment using positive matrix factorization (PMF) with the multilinear engine (ME-2) on Aerodyne aerosol mass spectrometer (AMS) data. Our source apportionment procedure is tested and applied on 25 data sets covering two urban, several rural and remote, and two high-altitude sites; it is therefore likely suitable for the treatment of AMS-related ambient data sets. For most of the sites, four organic components are retrieved, significantly improving on previous source apportionment results, in which only a separation into primary and secondary OA sources was possible. Generally, our solutions include two primary OA sources, i.e. hydrocarbon-like OA (HOA) and biomass burning OA (BBOA), and two secondary OA components, i.e. semi-volatile oxygenated OA (SV-OOA) and low-volatility oxygenated OA (LV-OOA). For specific sites, cooking-related (COA) and marine-related (MSA) sources are also separated. Finally, our work provides a large overview of organic aerosol sources in Europe and an interesting set of highly time resolved data for modeling purposes.
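    PMF, like the ME-2 implementation used here, solves a bilinear nonnegative factorization of the time series of mass spectra. The toy multiplicative-update NMF below illustrates only that bilinear model; it deliberately omits the measurement-uncertainty weighting and the user-specified profile constraints that distinguish PMF/ME-2 from plain NMF:

```python
import numpy as np

def nmf(X, n_factors, n_iter=500, seed=0):
    """Tiny multiplicative-update NMF: X ~= G @ F with G, F >= 0.

    G holds factor time series (source contributions), F holds factor
    profiles (spectra). PMF/ME-2 additionally weights residuals by the
    measurement uncertainties and accepts a-priori constraints.
    """
    rng = np.random.default_rng(seed)
    G = rng.random((X.shape[0], n_factors)) + 0.1
    F = rng.random((n_factors, X.shape[1])) + 0.1
    for _ in range(n_iter):
        F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
        G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    return G, F

# Synthetic "time x m/z" matrix built from two nonnegative source factors.
rng = np.random.default_rng(1)
X = rng.random((100, 2)) @ rng.random((2, 30))
G, F = nmf(X, n_factors=2)
rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
```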

  17. Clinical Trials With Large Numbers of Variables: Important Advantages of Canonical Analysis.

    PubMed

    Cleophas, Ton J

    2016-01-01

    Canonical analysis assesses the combined effects of a set of predictor variables on a set of outcome variables, but it is little used in clinical trials despite the omnipresence of multiple variables. The aim of this study was to assess the performance of canonical analysis as compared with traditional multivariate methods using multivariate analysis of covariance (MANCOVA). As an example, a simulated data file with 12 gene expression levels and 4 drug efficacy scores was used. The correlation coefficient between the 12 predictor and 4 outcome variables was 0.87 (P = 0.0001), meaning that 76% of the variability in the outcome variables was explained by the 12 covariates. Repeated testing after the removal of 5 unimportant predictor variables and 1 outcome variable produced virtually the same overall result. The MANCOVA identified identical unimportant variables, but it was unable to provide overall statistics. (1) Canonical analysis is remarkable, because it can handle many more variables than traditional multivariate methods such as MANCOVA can. (2) At the same time, it accounts for the relative importance of the separate variables, their interactions, and differences in units. (3) Canonical analysis provides overall statistics of the effects of sets of variables, whereas traditional multivariate methods only provide the statistics of the separate variables. (4) Unlike other methods for combining the effects of multiple variables such as factor analysis/partial least squares, canonical analysis is scientifically entirely rigorous. (5) Limitations include that it is less flexible than factor analysis/partial least squares, because only 2 sets of variables are used and because multiple solutions instead of one are offered. We do hope that this article will stimulate clinical investigators to start using this remarkable method.
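    The overall statistic described here, the first canonical correlation between a predictor set and an outcome set, can be computed as the top singular value of the whitened cross-covariance matrix. The sketch below mirrors the example's structure (12 predictors, 4 outcomes) with simulated data, not the article's data file:

```python
import numpy as np

def first_canonical_corr(X, Y):
    """Largest canonical correlation between variable sets X and Y,
    i.e. the top singular value of Sxx^(-1/2) @ Sxy @ Syy^(-1/2)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)

    def inv_sqrt(S):
        # Inverse matrix square root via the symmetric eigendecomposition.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Sxx, Syy, Sxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)[0]

# Simulated analogue: 12 "gene expression" predictors and 4 "drug
# efficacy" outcomes driven by one shared latent signal plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))
X = latent @ rng.normal(size=(1, 12)) + 0.3 * rng.normal(size=(300, 12))
Y = latent @ rng.normal(size=(1, 4)) + 0.3 * rng.normal(size=(300, 4))
r = first_canonical_corr(X, Y)   # close to 1 for strongly linked sets
```

    Squaring the canonical correlation gives the proportion of outcome-set variability explained by the predictor set, which is how the 0.87 correlation in the example translates into 76% explained variance.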

  18. Dogs with separation-related problems show a "less pessimistic" cognitive bias during treatment with fluoxetine (Reconcile™) and a behaviour modification plan.

    PubMed

    Karagiannis, Christos I; Burman, Oliver Hp; Mills, Daniel S

    2015-03-28

    Canine separation-related problems (SRP) (also described as "separation anxiety" or "separation distress") are among the most common behavioural complaints of dog owners. Treatment with psychoactive medication in parallel with a behaviour modification plan is well documented in the literature, but it is unknown if this is associated with an improvement in underlying affective state (emotion and mood) or simply an inhibition of the behaviour. Cognitive judgement bias tasks have been proposed as a method for assessing underlying affective state and so we used this approach to identify if any change in clinical signs during treatment was associated with a consistent change in cognitive bias (affective state). Five dogs showing signs of SRP (vocalising, e.g. barking or howling; destruction of property; and toileting, i.e. urination or defecation, when alone) were treated with fluoxetine chewable tablets (Reconcile™) and placed on a standard behaviour modification plan for two months. Questionnaires and interviews of the owners were used to monitor the clinical progress of the dogs. Subjects were also evaluated using a spatial cognitive bias test to infer changes in underlying affect prior to, and during, treatment. Concurrently, seven other dogs without signs of SRP were tested in the same way to act as controls. Furthermore, possible correlations between cognitive bias and clinical measures were also assessed for dogs with SRP. Prior to treatment, the dogs with SRP responded to ambiguous positions in the cognitive bias test negatively (i.e. with slower running speeds) compared to control dogs (p < 0.05). On weeks 2 and 6 of treatment, SRP dogs displayed similar responses in the cognitive bias test to control dogs, consistent with the possible normalization of affect during treatment, with this effect more pronounced at week 6 (p > 0.05). Questionnaire-based clinical measures were significantly correlated among themselves and with performance in the cognitive bias test. 
These results demonstrate for the first time that the clinical treatment of a negative affective state and associated behaviours in a non-human species can produce a shift in cognitive bias. These findings demonstrate how the outcome of an intervention on a clinical problem can be evaluated to determine not only that the subject's behaviour has improved, but also its psychological state (welfare).

  19. Performance in complex motor tasks deteriorates in hyperthermic humans.

    PubMed

    Piil, Jacob F; Lundbye-Jensen, Jesper; Trangmar, Steven J; Nybo, Lars

    2017-01-01

    Heat stress, leading to elevations in whole-body temperature, has a marked impact on both physical performance and cognition in ecological settings. Lab experiments confirm this for physically demanding activities, whereas observations are inconsistent for tasks involving cognitive processing of information or decision-making prior to responding. We hypothesized that divergences could relate to task complexity and developed a protocol consisting of 1) a simple motor task [TARGET_pinch], 2) a complex motor task [visuo-motor tracking], 3) a simple math task [MATH_type], and 4) a combined motor-math task [MATH_pinch]. Furthermore, visuo-motor tracking performance was assessed both in a separate and in a multipart protocol (complex motor tasks alternating with the three other tasks). Following familiarization, each of the 10 male subjects completed the separate and multipart protocols in randomized order in the heat (40°C) or control condition (20°C), with testing at baseline (seated rest) and, in the same seated position, following exercise-induced hyperthermia (core temperature ∼39.5°C in the heat and 38.2°C in the control condition). All task scores were unaffected by control exercise or passive heat exposure, but visuo-motor tracking performance was reduced by 10.7 ± 6.5% following exercise-induced hyperthermia when integrated in the multipart protocol and 4.4 ± 5.7% when tested separately (both P < 0.05). TARGET_pinch precision declined by 2.6 ± 1.3% (P < 0.05), while no significant changes were observed for the math tasks. These results indicate that heat per se has little impact on simple motor or cognitive test performance, but complex motor performance is impaired by hyperthermia, especially when multiple tasks are combined.

  20. A modified Pegasus rocket ignites moments after release from the B-52B, beginning the acceleration of the X-43A over the Pacific Ocean on March 27, 2004

    NASA Image and Video Library

    2004-03-27

    The second X-43A hypersonic research aircraft and its modified Pegasus booster rocket accelerate after launch from NASA's B-52B launch aircraft over the Pacific Ocean on March 27, 2004. The mission originated from the NASA Dryden Flight Research Center at Edwards Air Force Base, Calif. Minutes later the X-43A separated from the Pegasus booster and accelerated to its intended speed of Mach 7. In a combined research effort involving Dryden, Langley, and several industry partners, NASA demonstrated the value of its X-43A hypersonic research aircraft, as it became the first air-breathing, unpiloted, scramjet-powered plane to fly freely by itself. The March 27 flight, originating from NASA's Dryden Flight Research Center, began with the Agency's B-52B launch aircraft carrying the X-43A out to the test range over the Pacific Ocean off the California coast. The X-43A was boosted up to its test altitude of about 95,000 feet, where it separated from its modified Pegasus booster and flew freely under its own power. Two very significant aviation milestones occurred during this test flight: first, controlled accelerating flight at Mach 7 under scramjet power, and second, the successful stage separation at high dynamic pressure of two non-axisymmetric vehicles. To top it all off, the flight set a new aeronautical speed record. The X-43A reached a speed of over Mach 7 (about 5,000 miles per hour), faster than any known aircraft powered by an air-breathing engine has ever flown.

  1. Statistical performance of image cytometry for DNA, lipids, cytokeratin, & CD45 in a model system for circulating tumor cell detection.

    PubMed

    Futia, Gregory L; Schlaepfer, Isabel R; Qamar, Lubna; Behbakht, Kian; Gibson, Emily A

    2017-07-01

    Detection of circulating tumor cells (CTCs) in a blood sample is limited by the sensitivity and specificity of the biomarker panel used to identify CTCs over other blood cells. In this work, we present Bayesian theory that shows how test sensitivity and specificity set the rarity of cell that a test can detect. We perform our calculation of sensitivity and specificity on our image cytometry biomarker panel by testing on pure disease positive (D + ) populations (MCF7 cells) and pure disease negative populations (D - ) (leukocytes). In this system, we performed multi-channel confocal fluorescence microscopy to image biomarkers of DNA, lipids, CD45, and Cytokeratin. Using custom software, we segmented our confocal images into regions of interest consisting of individual cells and computed the image metrics of total signal, second spatial moment, spatial frequency second moment, and the product of the spatial-spatial frequency moments. We present our analysis of these 16 features. The best performing of the 16 features produced an average separation of three standard deviations between D + and D - and an average detectable rarity of ∼1 in 200. We performed multivariable regression and feature selection to combine multiple features for increased performance and showed an average separation of seven standard deviations between the D + and D - populations making our average detectable rarity of ∼1 in 480. Histograms and receiver operating characteristics (ROC) curves for these features and regressions are presented. We conclude that simple regression analysis holds promise to further improve the separation of rare cells in cytometry applications. © 2017 International Society for Advancement of Cytometry.
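
    The abstract's claim that sensitivity and specificity set the rarity of cell a test can detect follows directly from Bayes' rule. A minimal sketch of that relationship (the function names and the 0.995 illustration values below are ours, not the paper's panel values):

```python
def ppv(sens: float, spec: float, prevalence: float) -> float:
    """Bayes' rule: probability that a test-positive cell is truly positive."""
    true_pos = sens * prevalence
    false_pos = (1.0 - spec) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

def detectable_rarity(sens: float, spec: float, min_ppv: float = 0.5) -> float:
    """Smallest prevalence p at which the PPV still reaches min_ppv.

    Derived by solving min_ppv = sens*p / (sens*p + (1-spec)*(1-p)) for p:
    the prevalence odds are p/(1-p) = min_ppv*(1-spec) / (sens*(1-min_ppv)).
    """
    odds = (min_ppv * (1.0 - spec)) / (sens * (1.0 - min_ppv))
    return odds / (1.0 + odds)  # convert odds back to a probability
```

    For example, sensitivity and specificity of 0.995 each give a detectable rarity of about 1 in 200 (the scale discussed in the abstract); pushing rarity to ~1 in 480 requires correspondingly better separation of the D+ and D- populations.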

  2. Item Banking Enables Stand-Alone Measurement of Driving Ability.

    PubMed

    Khadka, Jyoti; Fenwick, Eva K; Lamoureux, Ecosse L; Pesudovs, Konrad

    2016-12-01

    To explore whether large item sets, as used in item banking, enable important latent traits, such as driving, to form stand-alone measures. The 88-item activity limitation (AL) domain of the glaucoma module of the Eye-tem Bank was interviewer-administered to patients with glaucoma. Rasch analysis was used to calibrate all items in the AL domain on the same interval-level scale and test its psychometric properties. Based on Rasch dimensionality metrics, the AL scale was separated into subscales. These subscales underwent separate Rasch analyses to test whether they could form stand-alone measures. Independence of these measures was tested with the Bland and Altman (B&A) Limit of Agreement (LOA). The AL scale was completed by 293 patients (median age, 71 years). It demonstrated excellent precision (3.12). However, Rasch dimensionality metrics indicated that the domain arguably contained other dimensions: driving, luminance, and reading. Once separated, the remaining AL items and the driving and luminance subscales were unidimensional, with excellent precision of 4.25, 2.94, and 2.22, respectively. The reading subscale showed poor precision (1.66), so it was not examined further. The luminance subscale demonstrated excellent agreement (mean bias, 0.2 logit; 95% LOA, -2.2 to 3.3 logit); however, the driving subscale demonstrated poor agreement (mean bias, 1.1 logit; 95% LOA, -4.8 to 7.0 logit) with the AL scale. These findings indicate that driving items in the AL domain of the glaucoma module were perceived and responded to differently from the other AL items, but the reading and luminance items were not. Therefore, item banking enables stand-alone measurement of driving ability in glaucoma.
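
    The independence check above relies on Bland and Altman limits of agreement: the mean of the paired differences (the bias) plus or minus 1.96 sample standard deviations. A generic sketch of that computation (illustrative only, not the study's data or code):

```python
def bland_altman_loa(a, b):
    """Mean bias and 95% limits of agreement between paired measurements.

    a, b: equal-length sequences of paired scores (e.g., subscale vs. full
    scale person measures in logits). Returns (bias, (lower, upper)).
    """
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    # sample standard deviation of the differences
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

    A wide LOA interval, as reported for the driving subscale, indicates that individual patients' subscale scores can diverge substantially from the full-scale scores even when the mean bias is modest.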

  3. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations.

    PubMed

    Tučník, Petr; Bureš, Vladimír

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting, and with all configurations tested separately with the server parameter deactivated and activated, altogether 12,800 data points were collected and subsequently analyzed. An illustrative decision-making scenario that allows the mutual comparison of all of the selected decision-making methods was used. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method completed the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
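
    Of the four methods compared, the weighted product model (WPM) is the simplest to state: each alternative's score is the product of its criterion values raised to the criterion weights, and alternatives are ranked by score. A minimal sketch of that scoring rule (the data and function names are hypothetical, not the study's scenario):

```python
def wpm_score(values, weights):
    """Weighted product model score: prod(v_j ** w_j) over criteria j.

    Assumes benefit criteria normalized to positive values and weights
    summing to 1; cost criteria would first be inverted or rescaled.
    """
    score = 1.0
    for v, w in zip(values, weights):
        score *= v ** w
    return score

def rank_alternatives(alternatives, weights):
    """Return alternative names ordered best-first by WPM score.

    alternatives: dict mapping name -> list of normalized criterion values.
    """
    return sorted(alternatives,
                  key=lambda name: wpm_score(alternatives[name], weights),
                  reverse=True)
```

    Because WPM needs only one pass of multiplications per agent decision, its per-call cost is low, which is one reason methods like it are candidates for simulations with tens of thousands of agents.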

  4. Communication between functional and denervated muscles using radiofrequency.

    PubMed

    Jacob, Doreen K; Stefko, Susan Tonya; Hackworth, Steven A; Lovell, Michael R; Mickle, Marlin H

    2006-05-01

    This article focuses on establishing communication between a functional muscle and a denervated muscle using a radiofrequency communications link. The ultimate objective of the project is to restore the eye blink in patients with facial nerve paralysis. Two sets of experiments were conducted using the gastrocnemius leg muscles of Sprague-Dawley rats. In the initial tests, varying magnitudes of voltages ranging from 0.85 to 2.5 V were applied directly to a denervated muscle to determine the voltage required to produce visible contraction. The second set of experiments was then conducted to determine the voltage output from an in vivo muscle contraction that could be sensed and used to coordinate a signal for actuation of a muscle in a separate limb. After designing the appropriate external communication circuitry, a third experiment was performed to verify that a signal between a functional and a denervated muscle can be generated and used as a stimulus. Voltages below 2 V at a 10-millisecond pulse width elicited a gentle, controlled contraction of the denervated muscle in vivo. It was also observed that with longer pulse widths, higher stimulation voltages were required to produce sufficient contractions. It is possible to detect contraction of a muscle, use this to generate a signal to an external base station, and subsequently cause a separate, denervated muscle to contract in response to the signal. This demonstration in vivo of a signaling system for pacing of electrical stimulation of 1 muscle to spontaneous contraction of another, separate muscle, using radiofrequency communication without direct connection, may be used in numerous ways to overcome nerve damage.

  5. Inferring Causalities in Landscape Genetics: An Extension of Wright's Causal Modeling to Distance Matrices.

    PubMed

    Fourtune, Lisa; Prunier, Jérôme G; Paz-Vinas, Ivan; Loot, Géraldine; Veyssière, Charlotte; Blanchet, Simon

    2018-04-01

    Identifying landscape features that affect functional connectivity among populations is a major challenge in fundamental and applied sciences. Landscape genetics combines landscape and genetic data to address this issue, with the main objective of disentangling direct and indirect relationships among an intricate set of variables. Causal modeling has strong potential to address the complex nature of landscape genetic data sets. However, this statistical approach was not initially developed to address the pairwise distance matrices commonly used in landscape genetics. Here, we aimed to extend the applicability of two causal modeling methods-that is, maximum-likelihood path analysis and the directional separation test-by developing statistical approaches aimed at handling distance matrices and improving functional connectivity inference. Using simulations, we showed that these approaches greatly improved the robustness of the absolute (using a frequentist approach) and relative (using an information-theoretic approach) fits of the tested models. We used an empirical data set combining genetic information on a freshwater fish species (Gobio occitaniae) and detailed landscape descriptors to demonstrate the usefulness of causal modeling to identify functional connectivity in wild populations. Specifically, we demonstrated how direct and indirect relationships involving altitude, temperature, and oxygen concentration influenced within- and between-population genetic diversity of G. occitaniae.

  6. High throughput image cytometry for detection of suspicious lesions in the oral cavity

    NASA Astrophysics Data System (ADS)

    MacAulay, Calum; Poh, Catherine F.; Guillaud, Martial; Michele Williams, Pamela; Laronde, Denise M.; Zhang, Lewei; Rosin, Miriam P.

    2012-08-01

    The successful management of oral cancer depends upon early detection, which relies heavily on the clinician's ability to discriminate sometimes subtle alterations of the infrequent premalignant lesions from the more common reactive and inflammatory conditions in the oral mucosa. Even among experienced oral specialists this can be challenging, particularly when using new wide field-of-view direct fluorescence visualization devices clinically introduced for the recognition of at-risk tissue. The objective of this study is to examine whether quantitative cytometric analysis of oral brushing samples could facilitate the assessment of the risk of visually ambiguous lesions. In total, 369 cytological samples were collected and analyzed: (1) 148 samples from pathology-proven sites of SCC, carcinoma in situ, or severe dysplasia; (2) 77 samples from sites with inflammation, infection, or trauma; and (3) 144 samples from normal sites. These were randomly separated into training and test sets. The best algorithm correctly recognized 92.5% of the normal samples, 89.4% of the abnormal samples, and 86.2% of the confounders in the training set, as well as 100% of the normal samples and 94.4% of the abnormal samples in the test set. These data suggest that quantitative cytology could reduce by more than 85% the number of visually suspect lesions requiring further assessment by biopsy.

  7. Cognitive dysfunction in Body Dysmorphic Disorder: New implications for nosological systems & neurobiological models

    PubMed Central

    Jefferies-Sewell, K; Chamberlain, SR; Fineberg, NA; Laws, KR

    2017-01-01

    Background: Body dysmorphic disorder (BDD) is a debilitating disorder, characterised by obsessions and compulsions relating specifically to perceived appearance, newly classified within the DSM-5 Obsessive-Compulsive and Related Disorders grouping. Until now, little research has been conducted into the cognitive profile of this disorder. Materials and Methods: Participants with BDD (n=12) and healthy controls (n=16) were tested using a computerised neurocognitive battery investigating attentional set-shifting (Intra/Extra Dimensional Set Shift Task), decision-making (Cambridge Gamble Task), motor response-inhibition (Stop-Signal Reaction Time Task) and affective processing (Affective Go-No Go Task). The groups were matched for age, IQ and education. Results: In comparison to controls, patients with BDD showed significantly impaired attentional set shifting, abnormal decision-making, impaired response inhibition and greater omission and commission errors on the emotional processing task. Conclusions: Despite the modest sample size, our results showed that individuals with BDD performed poorly compared to healthy controls on tests of cognitive flexibility, reward and motor impulsivity and affective processing. Results from separate studies in OCD patients suggest similar cognitive dysfunction. Therefore, these findings are consistent with the re-classification of BDD alongside OCD. These data also hint at additional areas of decision-making abnormalities that might contribute specifically to the psychopathology of BDD. PMID:27899165

  8. The actuation of microflaps inspired by shark scales deeply embedded in a boundary layer

    NASA Astrophysics Data System (ADS)

    Morris, Jackson; Lang, Amy; Hubner, Paul

    2016-11-01

    Thanks to millions of years of natural selection, sharks have evolved into quick apex predators. Shark skin is made up of microscopic scales on the order of 0.2 mm in size. This array of scales is hypothesized to be a flow control mechanism, with individual scales capable of being passively actuated by reversed flow in water due to their preferential orientation to attached flow. Previous research has shown shark skin to reduce flow separation in water, which would result in lower pressure drag. We believe shark scales are strategically sized to interact with the lower 5 percent of the boundary layer, where reversed flow occurs close to the wall. To test the capability of microflaps to be actuated in air, various sets of flaps inspired by shark scale geometry were rapidly prototyped. These microflaps were tested in a low-speed wind tunnel at various flow speeds and boundary layer thicknesses. Boundary layer flow conditions were measured using a hot-wire probe, and microflap actuation was observed. Microflap actuation in airflow would mean that this bio-inspired separation control mechanism found on shark skin has potential application for aircraft. Boeing.

  9. PCR and restriction fragment length polymorphism of a pel gene as a tool to identify Erwinia carotovora in relation to potato diseases.

    PubMed Central

    Darrasse, A; Priou, S; Kotoujansky, A; Bertheau, Y

    1994-01-01

    Using a sequenced pectate lyase-encoding gene (pel gene), we developed a PCR test for Erwinia carotovora. A set of primers allowed the amplification of a 434-bp fragment in E. carotovora strains. Among the 89 E. carotovora strains tested, only the Erwinia carotovora subsp. betavasculorum strains were not detected. A restriction fragment length polymorphism (RFLP) study was undertaken on the amplified fragment with seven endonucleases. The Sau3AI digestion pattern specifically identified the Erwinia carotovora subsp. atroseptica strains, and the whole set of data identified the Erwinia carotovora subsp. wasabiae strains. However, Erwinia carotovora subsp. carotovora and Erwinia carotovora subsp. odorifera could not be separated. Phenetic and phylogenic analyses of RFLP results showed E. carotovora subsp. atroseptica as a homogeneous group while E. carotovora subsp. carotovora and E. carotovora subsp. odorifera strains exhibited a genetic diversity that may result from a nonmonophyletic origin. The use of RFLP on amplified fragments in epidemiology and for diagnosis is discussed. PMID:7912502

  10. The First Geodetic VLBI Field Test of LIFT: A 550-km-long Optical Fiber Link for Remote Antenna Synchronization

    NASA Astrophysics Data System (ADS)

    Perini, Federico; Bortolotti, Claudio; Roma, Mauro; Ambrosini, Roberto; Negusini, Monia; Maccaferri, Giuseppe; Stagni, Matteo; Nanni, Mauro; Clivati, Cecilia; Frittelli, Matteo; Mura, Alberto; Levi, Filippo; Zucco, Massimo; Calonico, Davide; Bertarini, Alessandra; Artz, Thomas

    2016-12-01

    We present the first field test of the implementation of a coherent optical fiber link for remote antenna synchronization realized in Italy between the Italian Metrological Institute (INRIM) and the Medicina radio observatory of the National Institute for Astrophysics (INAF). The Medicina VLBI antenna participated in the EUR137 experiment carried out in September 2015 using, as reference systems, both the local H-maser and a remote H-maser hosted at the INRIM labs in Turin, separated by about 550 km. In order to assess the quality of the remote clock, the observed radio sources were split into two sets, using either the local or the remote H-maser. A system to switch automatically between the two references was integrated into the antenna field system. The observations were correlated in Bonn and preliminary results are encouraging since fringes were detected with both time references along the full 24 hours of the session. The experimental set-up, the results, and the perspectives for future radio astronomical and geodetic experiments are presented.

  11. Group vs. single mindfulness meditation: exploring avoidance, impulsivity, and weight management in two separate mindfulness meditation settings.

    PubMed

    Mantzios, Michail; Giannou, Kyriaki

    2014-07-01

    Recent research has identified that mindfulness meditation in group settings supports people who are trying to lose weight. The present research investigated mindfulness meditation in group and individual settings, and explored the potential impact on weight loss and other factors (i.e. mindfulness, impulsivity, and avoidance) that may assist or hinder weight loss. Specifically, the hypotheses tested were that the group setting assisted dieters more than the individual setting by reducing weight, cognitive-behavioral avoidance, and impulsivity and by increasing mindfulness. Participants (n = 170) who were trying to lose weight were randomly assigned to practice meditation for 6 weeks within a group or independently. Measurements in mindfulness, cognitive-behavioral avoidance, impulsivity, and weight occurred twice (pre- and post-intervention). Results indicated that participants in the group setting lost weight and lowered their levels of cognitive-behavioral avoidance, while impulsivity and mindfulness remained stable. On the other hand, participants in the individual condition lost less weight, while there was an increase in cognitive-behavioral avoidance and mindfulness scores, but a decrease in impulsivity. Because the benefits and limitations observed in group settings were not replicated when people meditated alone, this study concluded that mindfulness meditation in individual settings should be used with caution, although there are some potential benefits that could aid future weight loss research. © 2014 The International Association of Applied Psychology.

  12. Wind Tunnel Test of a Risk-Reduction Wing/Fuselage Model to Examine Juncture-Flow Phenomena

    NASA Technical Reports Server (NTRS)

    Kegerise, Michael A.; Neuhart, Dan H.

    2016-01-01

    A wing/fuselage wind-tunnel model was tested in the Langley 14- by 22-Foot Subsonic Wind Tunnel in preparation for a highly instrumented Juncture Flow Experiment to be conducted in the same facility. This test, which was sponsored by the NASA Transformational Tools and Technologies Project, is part of a comprehensive set of experimental and computational research activities to develop revolutionary, physics-based aeronautics analysis and design capability. The objectives of this particular test were to examine the surface and off-body flow on a generic wing/body combination to: 1) choose a final wing for a future, highly instrumented model, 2) use the results to facilitate unsteady pressure sensor placement on the model, 3) determine the area to be surveyed with an embedded laser Doppler velocimetry (LDV) system, 4) investigate the primary juncture corner-flow separation region using particle image velocimetry (PIV) to see if the particle seeding is adequately entrained and to examine the structure in the separated region, and 5) determine the similarity of observed flow features with those predicted by computational fluid dynamics (CFD). This report documents the results of the above experiment that specifically address the first three goals. Multiple wing configurations were tested at a chord Reynolds number of 2.4 million. Flow patterns on the surface of the wings and in the region of the wing/fuselage juncture were examined using oil-flow visualization and infrared thermography. A limited number of unsteady pressure sensors on the fuselage around the wing leading and trailing edges were used to identify any dynamic effects of the horseshoe vortex on the flow field. The area of separated flow in the wing/fuselage juncture near the wing trailing edge was observed for all wing configurations at various angles of attack. All of the test objectives were met.
The staff of the 14- by 22-foot Subsonic Wind Tunnel provided outstanding support and delivered exceptional value to the experiment, which exceeded expectations. The results of this test will directly inform the planning for the first of a series of instrumented-model tests at the same Reynolds number. These tests will be performed on a slightly larger-scale model with the selected wing, and will include off-body measurements with LDV and PIV, steady and unsteady pressure measurements, and the flow-visualization techniques that are discussed in this report.

  13. Coherent Pulsed Lidar Sensing of Wake Vortex Position and Strength, Winds and Turbulence in the Terminal Area

    NASA Technical Reports Server (NTRS)

    Brockman, Philip; Barker, Ben C., Jr.; Koch, Grady J.; Nguyen, Dung Phu Chi; Britt, Charles L., Jr.; Petros, Mulugeta

    1999-01-01

    NASA Langley Research Center (LaRC) has field tested a 2.0-µm, 100-Hz pulsed coherent lidar to detect and characterize wake vortices and to measure atmospheric winds and turbulence. The quantification of aircraft wake-vortex hazards is being addressed by the Wake Vortex Lidar (WVL) Project as part of the Aircraft Vortex Spacing System (AVOSS), which is under the Reduced Spacing Operations Element of the Terminal Area Productivity (TAP) Program. These hazards currently set the minimum, fixed separation distance between two aircraft and affect the number of takeoff and landing operations on a single runway under Instrument Meteorological Conditions (IMC). The AVOSS concept seeks to safely reduce aircraft separation distances, when weather conditions permit, to increase the operational capacity of major airports. The current NASA wake-vortex research efforts focus on developing and validating wake vortex encounter models, wake decay and advection models, and wake sensing technologies. These technologies will be incorporated into an automated AVOSS that can properly select safe separation distances for different weather conditions, based on the aircraft pair and predicted/measured vortex behavior. The sensor subsystem efforts focus on developing and validating wake sensing technologies. The lidar system has been field-tested to provide real-time wake vortex trajectory and strength data to AVOSS for wake prediction verification. Wake vortices, atmospheric winds, and turbulence products have been generated from processing the lidar data collected during deployments to Norfolk (ORF), John F. Kennedy (JFK), and Dallas/Fort Worth (DFW) International Airports.

  14. a Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    NASA Astrophysics Data System (ADS)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm rests on the assumption that the point cloud can be modeled as a mixture of Gaussian distributions, so that separating ground points from non-ground points reduces to separating the components of a Gaussian mixture. Expectation-maximization (EM) is applied to perform this separation: EM computes maximum-likelihood estimates of the mixture parameters, and from the estimated parameters the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is assigned to the component with the larger likelihood. Furthermore, intensity information is used to refine the filtering results obtained with the EM method. The proposed algorithm was tested on two real-world datasets. Experimental results showed that the proposed method filters non-ground points effectively. For quantitative evaluation, the dataset provided by the ISPRS was used; the proposed algorithm obtained a 4.48% total error, much lower than most of the eight classical filtering algorithms reported by the ISPRS.
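
    The core idea, fitting a two-component Gaussian mixture with EM and labeling each point by the more likely component, can be sketched in one dimension (e.g., on point elevations). This is an illustrative re-implementation of the general technique under that 1-D assumption, not the authors' code:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a univariate normal distribution."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def em_two_gaussians(xs, iters=60):
    """Fit a 2-component 1-D Gaussian mixture by EM; label each point.

    Returns (labels, means): labels[i] is 0/1 for the more likely
    component, means are the fitted component means.
    """
    mu = [min(xs), max(xs)]     # crude but deterministic initialization
    sigma = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            w = [pi[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)]
            s = w[0] + w[1]
            resp.append([w[0] / s, w[1] / s])
        # M-step: maximum-likelihood update of weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            sigma[k] = math.sqrt(max(var, 1e-6))  # guard against collapse
    labels = [0 if r[0] >= r[1] else 1 for r in resp]
    return labels, mu
```

    In the paper's setting the "ground" component would typically be the one with the lower mean elevation; the intensity-based refinement described above is a separate step not shown here.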

  15. Spatial resolution test of a beam diagnostic system for DESIREE

    NASA Astrophysics Data System (ADS)

    Das, Susanta; Kallberg, A.

    2010-11-01

    A diagnostic system based on the observation of low-energy (~10 eV) secondary electrons (SE), produced when a beam strikes a metallic foil, has been built to monitor beams over the wide range of intensities and energies of the Double ElectroStatic Ion Ring ExpEriment (DESIREE) [1,2]. The system consists of a Faraday cup to measure the beam current, a collimator with circular apertures of different diameters to measure the spatial resolution of the system, a beam profile monitoring system (BPMS), and a control unit. The BPMS, in turn, consists of an aluminum (Al) foil, a grid placed in front of the Al foil to accelerate the SE, a position-sensitive MCP, a fluorescent screen, and a CCD camera to capture the images. The collimator contains a set of circular holes of different diameters and separations (d) between them; it cuts from the beam areas equal to the holes, with separation d mm between the beam centers, creating well-separated (distinguishable) narrow beams of approximately the same intensity close to each other. A 10 keV proton beam was used. The spatial resolution of the system was tested for different Al plate and MCP voltages, and a resolution of better than 2 mm was achieved. Refs.: [1] K. Kruglov et al., NIM A 441 (2000) 595; NIM A 701 (2002) 193c. [2] MSL and Atomic Physics, Stockholm Univ. (www.msl.se, http://www.atom.physto.se/Cederquist/desiree/web/hc.html).

  16. The structure of Turkish trait-descriptive adjectives.

    PubMed

    Somer, O; Goldberg, L R

    1999-03-01

    This description of the Turkish lexical project reports some initial findings on the structure of Turkish personality-related variables. In addition, it provides evidence on the effects of target evaluative homogeneity vs. heterogeneity (e.g., samples of well-liked target individuals vs. samples of both liked and disliked targets) on the resulting factor structures, and thus it provides a first test of the conclusions reached by D. Peabody and L. R. Goldberg (1989) using English trait terms. In 2 separate studies, and in 2 types of data sets, clear versions of the Big Five factor structure were found. And both studies replicated and extended the findings of Peabody and Goldberg; virtually orthogonal factors of relatively equal size were found in the homogeneous samples, and a more highly correlated set of factors with relatively large Agreeableness and Conscientiousness dimensions was found in the heterogeneous samples.

  17. Extending the granularity of representation and control for the MIL-STD CAIS 1.0 node model

    NASA Technical Reports Server (NTRS)

    Rogers, Kathy L.

    1986-01-01

    The Common APSE (Ada Programming Support Environment) Interface Set (CAIS) (DoD85) node model provides an excellent baseline for interfaces in a single-host development environment. To encompass the entire spectrum of computing, however, the CAIS model should be extended in four areas. It should provide the interface between the engineering workstation and the host system throughout the entire lifecycle of the system. It should provide a basis for communication and integration functions needed by distributed host environments. It should provide common interfaces for communications mechanisms to and among target processors. It should provide facilities for integration, validation, and verification of test beds extending to distributed systems on geographically separate processors with heterogeneous instruction set architectures (ISAs). Additions to the PROCESS NODE model to extend the CAIS into these four areas are proposed.

  18. Coordinate transformation by minimizing correlations between parameters

    NASA Technical Reports Server (NTRS)

    Kumar, M.

    1972-01-01

    The aim of this investigation was to determine the transformation parameters (three rotations, three translations, and a scale factor) between two Cartesian coordinate systems from sets of coordinates given in both systems. The objective was the determination of well-separated transformation parameters with reduced correlations between each other, a problem especially relevant when the sets of coordinates are not well distributed. This objective is achieved by first determining the three rotational parameters and the scale factor from the respective direction cosines and chord distances between the common points (both being independent of the translation parameters), and then computing all seven parameters in a solution in which the rotations and the scale factor enter as weighted constraints according to the variances and covariances obtained in the preliminary solutions. Numerical tests involving two geodetic reference systems were performed to evaluate the effectiveness of this approach.
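
    The preliminary step exploits the fact that chord distances between common points are invariant to rotation and translation, so the scale factor can be estimated from chord-distance ratios alone. A minimal sketch of that one step (an unweighted average for illustration; the paper's full seven-parameter constrained adjustment is not shown):

```python
import math

def estimate_scale(pts_a, pts_b):
    """Scale factor between two Cartesian systems from chord-distance ratios.

    pts_a, pts_b: matched lists of (x, y, z) coordinates of the same
    physical points in the two systems. Chord distances are unaffected by
    rotation and translation, so their ratios isolate the scale factor.
    """
    ratios = []
    for i in range(len(pts_a)):
        for j in range(i + 1, len(pts_a)):
            d_a = math.dist(pts_a[i], pts_a[j])
            d_b = math.dist(pts_b[i], pts_b[j])
            ratios.append(d_b / d_a)
    # simple average; a rigorous solution would weight by distance precision
    return sum(ratios) / len(ratios)
```

    The same invariance argument applies to the rotations via direction cosines, which is what lets the rotations and scale enter the final solution as weighted constraints.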

  19. A confluence of contexts: asymmetric versus global failures of selective attention to stroop dimensions.

    PubMed

    Sabri, M; Melara, R D; Algom, D

    2001-06-01

    In 6 experiments probing selective attention through Stroop classification, 4 factors of context were manipulated: (a) psychophysical context, the distinctiveness of values along the color and word dimensions; (b) set size context, the number of stimulus values tested; (c) production context, the mode used to respond; and (d) covariate context, the correlation between the dimensions. The psychophysical and production contexts mainly caused an asymmetry in selective attention failure between colors and words, whereas the set size and covariate contexts contributed primarily to the average or global magnitudes of attentional disruption across dimensions. The results suggest that (a) Stroop dimensions are perceptually separable, (b) J.R. Stroop's (1935) classic findings arose from his particular combination of contexts, and (c) stimulus uncertainty and dimensional imbalance are the primary sources of task and congruity effects in the Stroop paradigm.

  20. A clinical database management system for improved integration of the Veterans Affairs Hospital Information System.

    PubMed

    Andrews, R D; Beauchamp, C

    1989-12-01

    The Department of Veterans Affairs (VA) Decentralized Hospital Computer Program (DHCP) contains data modules derived from separate ancillary services (e.g., Lab, Pharmacy, and Radiology). It is currently difficult to integrate information between the modules. A prototype is being developed to integrate ancillary data by storing clinical data oriented to the patient, so that data from multiple services can interact easily. A set of program utilities provides user-defined functions for decision support, queries, and reports. The information can be used to monitor quality of care by providing feedback in the form of reports and reminders. Initial testing has indicated that the prototype's design and implementation are feasible (in terms of space requirements, speed, and ease of use) in outpatient and inpatient settings. The design, development, and clinical use of this prototype are described.

  1. Reproducibility and Prognosis of Quantitative Features Extracted from CT Images

    PubMed Central

    Balagurunathan, Yoganand; Gu, Yuhua; Wang, Hua; Kumar, Virendra; Grove, Olya; Hawkins, Sam; Kim, Jongphil; Goldgof, Dmitry B; Hall, Lawrence O; Gatenby, Robert A; Gillies, Robert J

    2014-01-01

    We study the reproducibility of quantitative imaging features that are used to describe tumor shape, size, and texture from computed tomography (CT) scans of non-small cell lung cancer (NSCLC). CT images are dependent on various scanning factors. We focus on characterizing image features that are reproducible in the presence of variations due to patient factors and segmentation methods. Thirty-two NSCLC nonenhanced lung CT scans were obtained from the Reference Image Database to Evaluate Response data set. The tumors were segmented using both manual (radiologist expert) and ensemble (software-automated) methods. A set of features (219 three-dimensional and 110 two-dimensional) was computed, and quantitative image features were statistically filtered to identify a subset of reproducible and nonredundant features. The variability in the repeated experiment was measured by the test-retest concordance correlation coefficient (CCC_TreT). The natural range in the features, normalized to variance, was measured by the dynamic range (DR). In this study, 29 features across segmentation methods were found with CCC_TreT and DR ≥ 0.9 and R²_Bet ≥ 0.95. These reproducible features were tested for predicting the radiologist prognostic score; some texture features (run-length and Laws kernels) had an area under the curve of 0.9. The representative features were tested for their prognostic capabilities using an independent NSCLC data set (59 lung adenocarcinomas), where one of the texture features, run-length gray-level nonuniformity, was statistically significant in separating the samples into survival groups (P ≤ .046). PMID:24772210
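The test-retest filtering step described above can be sketched with Lin's concordance correlation coefficient; the data, feature count, and 0.9 cutoff below are illustrative stand-ins, not the study's actual values:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between test and retest values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()           # population covariance
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Hypothetical filtering step: keep features whose test-retest CCC >= 0.9
rng = np.random.default_rng(0)
test = rng.normal(size=(30, 5))                          # 30 tumors x 5 features (test scan)
retest = test + rng.normal(scale=0.05, size=test.shape)  # retest scan with small noise
ccc = np.array([concordance_ccc(test[:, j], retest[:, j]) for j in range(5)])
reproducible = ccc >= 0.9                                # boolean mask of stable features
```

Unlike the Pearson correlation, CCC penalizes both location and scale shifts between the two scans, which is why it is the usual choice for test-retest agreement.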

  2. Development of a Semi-Quantitative Food Frequency Questionnaire to Assess the Dietary Intake of a Multi-Ethnic Urban Asian Population.

    PubMed

    Neelakantan, Nithya; Whitton, Clare; Seah, Sharna; Koh, Hiromi; Rebello, Salome A; Lim, Jia Yi; Chen, Shiqi; Chan, Mei Fen; Chew, Ling; van Dam, Rob M

    2016-08-27

    Assessing habitual food consumption is challenging in multi-ethnic cosmopolitan settings. We systematically developed a semi-quantitative food frequency questionnaire (FFQ) in a multi-ethnic population in Singapore, using data from two 24-h dietary recalls from a nationally representative sample of 805 Singapore residents of Chinese, Malay and Indian ethnicity aged 18-79 years. Key steps included combining reported items on 24-h recalls into standardized food groups, developing a food list for the FFQ, pilot testing of different question formats, and cognitive interviews. Percentage contribution analysis and stepwise regression analysis were used to identify foods contributing cumulatively ≥90% to intakes and individually ≥1% to intake variance of key nutrients, for the total study population and for each ethnic group separately. Differences between ethnic groups were observed in proportions of consumers of certain foods (e.g., lentil stews, 1%-47%; and pork dishes, 0%-50%). The number of foods needed to explain variability in nutrient intakes differed substantially by ethnic groups and was substantially larger for the total population than for separate ethnic groups. A 163-item FFQ covered >95% of total population intake for all key nutrients. The methodological insights provided in this paper may be useful in developing similar FFQs in other multi-ethnic settings.

  3. Ares I-X Flight Evaluation Tasks in Support of Ares I Development

    NASA Technical Reports Server (NTRS)

    Huebner, Lawrence D.; Richards, James S.; Coates, Ralph H., III; Cruit, Wendy D.; Ramsey, Matthew N.

    2010-01-01

    NASA's Constellation Program successfully launched the Ares I-X Flight Test Vehicle on October 28, 2009. The Ares I-X flight was a development flight test that offered a unique opportunity for early engineering data to impact the design and development of the Ares I crew launch vehicle. As the primary customer for flight data from the Ares I-X mission, the Ares Projects Office established a set of 33 flight evaluation tasks to correlate flight results with prospective design assumptions and models. Included within these tasks were direct comparisons of flight data with pre-flight predictions and post-flight assessments utilizing models and modeling techniques being applied to design and develop Ares I. A discussion of the similarities and differences in those comparisons and the need for discipline-level model updates based upon those comparisons form the substance of this paper. The benefits of development flight testing were made evident by implementing these tasks, which used Ares I-X data to partially validate tools and methodologies in technical disciplines that will ultimately influence the design and development of Ares I and future launch vehicles. The areas in which partial validation from the flight test was most significant included flight control system algorithms to predict liftoff clearance, ascent, and stage separation; structural models from rollout to separation; thermal models that have been updated based on these data; pyroshock attenuation; and the ability to predict complex flow fields during time-varying conditions including plume interactions.

  4. Main devices design of submarine oil-water separation system

    NASA Astrophysics Data System (ADS)

    Cai, Wen-Bin; Liu, Bo-Hong

    2017-11-01

    In offshore oil production, in order to thoroughly separate oil from the produced fluid, solve the environmental problems caused by oily sewage, and improve the economic benefit of offshore drilling, a set of submarine oil-water separation devices was designed in this paper from the perspective of a new approach to oil-water separation, based on the adsorption and desorption of crude oil by polymer materials. The paper introduces the basic structure of the gas-solid separation device, the periodic separation device, and the adsorption device, and demonstrates the rationality and feasibility of the design.

  5. Mechanical Properties of Degraded PMR-15 Resin

    NASA Technical Reports Server (NTRS)

    Tsuji, Luis C.; McManus, Hugh L.; Bowles, Kenneth J.

    1998-01-01

    Thermo-oxidative aging produces a non-uniform degradation state in PMR-15 resin. A surface layer, usually attributed to oxidative degradation, forms. This surface layer has different properties from the inner material. A set of material tests was designed to separate the properties of the oxidized surface layer from the properties of interior material. Test specimens were aged at 316 C in either air or nitrogen, for durations of up to 800 hours. The thickness of the oxidized surface layer in air aged specimens, and the shrinkage and Coefficient of Thermal Expansion (CTE) of nitrogen aged specimens were measured directly. Four-point-bend tests were performed to determine modulus of both the oxidized surface layer and the interior material. Bimaterial strip specimens consisting of oxidized surface material and unoxidized interior material were constructed and used to determine surface layer shrinkage and CTE. Results confirm that the surface layer and core materials have substantially different properties.

  6. Gravity packaging final waste recovery based on gravity separation and chemical imaging control.

    PubMed

    Bonifazi, Giuseppe; Serranti, Silvia; Potenza, Fabio; Luciani, Valentina; Di Maio, Francesco

    2017-02-01

    Plastic polymers are characterized by a high calorific value. Post-consumer plastic waste can thus be considered, in many cases, a typical secondary solid fuel according to the European Commission directive on End of Waste (EoW). In Europe, incineration is considered one of the solutions for waste disposal, for energy recovery and, as a consequence, for the reduction of waste sent to landfill. A full characterization of these products represents the first step toward utilizing them correctly and profitably. Several techniques were investigated in this paper in order to separate and characterize post-consumer plastic packaging waste toward these goals: gravity separation (i.e., a Reflux Classifier), FT-IR spectroscopy, NIR HyperSpectral Imaging (HSI) based techniques, and calorimetric tests. The study demonstrated that the proposed separation technique and the HyperSpectral NIR Imaging approach can separate and recognize the different polymers (i.e., PolyVinyl Chloride (PVC), PolyStyrene (PS), PolyEthylene (PE), PolyEthylene Terephthalate (PET), and PolyPropylene (PP)) in order to maximize the removal of the PVC fraction from the plastic waste and to perform full quality control of the resulting products. These techniques can be profitably utilized to set up analytical/control strategies aimed at a low PVC content in the final Solid Recovered Fuel (SRF), thus enhancing SRF quality, increasing its value, and reducing the "final waste". Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Separation of Doppler radar-based respiratory signatures.

    PubMed

    Lee, Yee Siong; Pathirana, Pubudu N; Evans, Robin J; Steinfort, Christopher L

    2016-08-01

    Respiration detection using microwave Doppler radar has attracted significant interest primarily due to its unobtrusive form of measurement. With less preparation in comparison with attaching physical sensors on the body or wearing special clothing, Doppler radar for respiration detection and monitoring is particularly useful for long-term monitoring applications such as sleep studies (e.g. sleep apnoea, SIDS). However, motion artefacts and interference from multiple sources limit the widespread use and the scope of potential applications of this technique. Utilising the recent advances in independent component analysis (ICA) and multiple antenna configuration schemes, this work investigates the feasibility of decomposing the Doppler-based measurements into the respiratory signatures of each subject. Experimental results demonstrated that FastICA is capable of separating two distinct respiratory signatures from two subjects adjacent to each other, even in the presence of apnoea. In each test scenario, the separated respiratory patterns correlated closely with the reference respiration strap readings. The effectiveness of FastICA in dealing with the mixed Doppler radar respiration signals confirms its applicability in healthcare applications, especially in long-term home-based monitoring, as such monitoring usually involves at least two people in the same environment (e.g. two people sleeping next to each other). Further, the use of FastICA to separate involuntary movements such as the arm swing from the respiratory signatures of a single subject was explored in a multiple antenna environment. The separated respiratory signal indeed demonstrated a high correlation with the measurements made by a respiratory strap used currently in clinical settings.
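A minimal sketch of the separation idea, using scikit-learn's FastICA on two synthetic "breathing" signals mixed into two antenna channels; the signals, rates, and mixing matrix are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic "respiratory" sources with different breathing rates
t = np.linspace(0, 60, 3000)                 # 60 s sampled at 50 Hz
s1 = np.sin(2 * np.pi * 0.25 * t)            # subject 1: ~15 breaths/min
s2 = np.sin(2 * np.pi * 0.40 * t + 1.0)      # subject 2: ~24 breaths/min
S = np.c_[s1, s2]

# Each antenna observes a different linear mixture of the two chest motions
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                   # assumed mixing matrix
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                 # recovered sources (order/scale/sign arbitrary)

# Match each estimate to a reference source by absolute correlation
corr = np.abs(np.corrcoef(S.T, S_est.T)[:2, 2:])
```

ICA recovers sources only up to permutation, scale, and sign, so matching against the reference strap readings (here, `corr`) is part of any evaluation.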

  8. 46 CFR 162.050-7 - Approval procedures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... accordance with this subpart; (2) The oil content of each sample of separated water effluent taken during approval testing is 15 ppm or less; (3) During Test No. 3A an oily mixture is not observed at the separated water outlet of the separator; (4) During Test No. 5A its operation is continuous; and (5) Any substance...

  9. 46 CFR 162.050-7 - Approval procedures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... accordance with this subpart; (2) The oil content of each sample of separated water effluent taken during approval testing is 15 ppm or less; (3) During Test No. 3A an oily mixture is not observed at the separated water outlet of the separator; (4) During Test No. 5A its operation is continuous; and (5) Any substance...

  10. The complete set of Cassini's UVIS occultation observations of Enceladus plume: model fits

    NASA Astrophysics Data System (ADS)

    Portyankina, G.; Esposito, L. W.; Hansen, C. J.

    2017-12-01

    Since its discovery in 2005, the plume of Enceladus has been observed by most of the instruments onboard the Cassini spacecraft. The Ultraviolet Imaging Spectrograph (UVIS) observed the Enceladus plume, and the collimated jets embedded in it, in occultation geometry on six different occasions. We have constructed a 3D direct simulation Monte Carlo (DSMC) model for the Enceladus jets and apply it to the analysis of the full set of UVIS occultation observations conducted during Cassini's mission from 2005 to 2017. The Monte Carlo model tracks test particles from their source at the surface into space. The initial positions of all test particles for a single jet are fixed to one of the 100 jet sources identified by Porco et al. (2014). The initial three-dimensional velocity of each particle contains two components: a velocity Vz perpendicular to the surface, and a thermal velocity that is isotropic in the upward hemisphere. The direction and speed of the thermal velocity of each particle are chosen randomly, but the ensemble moves isotropically at speeds satisfying a Boltzmann distribution for a given temperature Tth. A range of reasonable Vz is then determined by requiring that modeled jet widths match the observed ones. Each model run results in a set of coordinates and velocities for a given set of test particles. These are converted to test-particle number densities and then integrated along the line of sight (LoS) for each time step of the occultation observation. The geometry of the observation is calculated using SPICE. The overarching result of a simulation run is the test-particle number density along the LoS, at each time point during the occultation observation, for each of the jets separately. To fit the model to the data, we integrate over all jets crossed by the LoS at each point during an observation. The relative strengths of the jets must be determined to fit the observed UVIS curves. The results of the fits are sets of active jets for each occultation. Each UVIS occultation observation was done under a unique observational geometry. Consequently, the model fits produce different sets of active jets and different minimum Vz. We discuss and compare the results of fitting all UVIS occultation observations.
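The particle-initialization step the abstract describes, a fixed vertical speed Vz plus an isotropic upward thermal component with speeds drawn from a Boltzmann distribution at temperature Tth, can be sketched as follows; the molecular mass, temperature, and Vz values are assumed for illustration, not taken from the paper:

```python
import numpy as np

def sample_jet_velocities(n, vz, t_th, m=2.99e-26, seed=0):
    """Initial velocities for n test particles of one jet: a fixed vertical
    speed vz plus a thermal component that is isotropic in the upward
    hemisphere, with speeds set by a Boltzmann distribution at t_th (K).
    m defaults to the mass of a water molecule (kg)."""
    k_b = 1.380649e-23                            # Boltzmann constant, J/K
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(k_b * t_th / m)               # 1-D thermal speed scale
    v = rng.normal(scale=sigma, size=(n, 3))      # isotropic Gaussian components
    v[:, 2] = np.abs(v[:, 2])                     # restrict thermal motion upward
    v[:, 2] += vz                                 # add the bulk vertical speed
    return v

# Assumed, illustrative values: 700 m/s bulk speed, 190 K vapour temperature
v = sample_jet_velocities(10000, vz=700.0, t_th=190.0)
```

Reflecting the z-component of an isotropic Gaussian keeps the direction distribution uniform over the upward hemisphere while preserving the Boltzmann speed distribution.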

  11. Asynchronous glimpsing of speech: Spread of masking and task set-size

    PubMed Central

    Ozmeral, Erol J.; Buss, Emily; Hall, Joseph W.

    2012-01-01

    Howard-Jones and Rosen [(1993). J. Acoust. Soc. Am. 93, 2915–2922] investigated the ability to integrate glimpses of speech that are separated in time and frequency using a “checkerboard” masker, with asynchronous amplitude modulation (AM) across frequency. Asynchronous glimpsing was demonstrated only for spectrally wide frequency bands. It is possible that the reduced evidence of spectro-temporal integration with narrower bands was due to spread of masking at the periphery. The present study tested this hypothesis with a dichotic condition, in which the even- and odd-numbered bands of the target speech and asynchronous AM masker were presented to opposite ears, minimizing the deleterious effects of masking spread. For closed-set consonant recognition, thresholds were 5.1–8.5 dB better for dichotic than for monotic asynchronous AM conditions. Results were similar for closed-set word recognition, but for open-set word recognition the benefit of dichotic presentation was more modest and level dependent, consistent with the effects of spread of masking being level dependent. There was greater evidence of asynchronous glimpsing in the open-set than closed-set tasks. Presenting stimuli dichotically supported asynchronous glimpsing with narrower frequency bands than previously shown, though the magnitude of glimpsing was reduced for narrower bandwidths even in some dichotic conditions. PMID:22894234

  12. Investigation of Spiral Bevel Gear Condition Indicator Validation via AC-29-2C Combining Test Rig Damage Progression Data with Fielded Rotorcraft Data

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.

    2015-01-01

    This is the final of three reports published on the results of this project. In the first report, results were presented on nineteen tests performed in the NASA Glenn Spiral Bevel Gear Fatigue Test Rig on spiral bevel gear sets designed to simulate helicopter fielded failures. In the second report, fielded helicopter HUMS data from forty helicopters were processed with the same techniques that were applied to spiral bevel rig test data. Twenty of the forty helicopters experienced damage to the spiral bevel gears, while the other twenty helicopters had no known anomalies within the time frame of the datasets. In this report, results from the rig and helicopter data analysis will be compared for differences and similarities in condition indicator (CI) response. Observations and findings using sub-scale rig failure progression tests to validate helicopter gear condition indicators will be presented. In the helicopter, gear health monitoring data was measured when damage occurred and after the gear sets were replaced, at two helicopter regimes. For all helicopters, or tails, data was taken in the flat pitch ground 101 rotor speed (FPG101) regime. For nine tails, data was also taken at the 120 knots true airspeed (120KTA) regime. In the test rig, gear sets were tested until damage initiated and progressed, while gear health monitoring data and operational parameters were measured and tooth damage progression was documented. For the rig tests, the gear speed was maintained at 3500 rpm; a one hour run-in was performed at 4000 in-lb gear torque, then the torque was increased to 8000 in-lb. The HUMS gear condition indicator data evaluated included Figure of Merit 4 (FM4), Root Mean Square (RMS) or Diagnostic Algorithm 1 (DA1), ±3 Sideband Index (SI3) and ±1 Sideband Index (SI1). These were selected based on their sensitivity in detecting contact fatigue damage modes from analytical, experimental and historical helicopter data. 
For this report, the helicopter dataset was reduced to fourteen tails and the test rig data set was reduced to eight tested gear sets. The damage modes compared were separated into three cases. For case one, both the gear and pinion showed signs of contact fatigue or scuffing damage. For case two, only the pinion showed signs of contact fatigue damage or scuffing. Case three was limited to the gear tests when scuffing occurred immediately after the gear run-in. Results of this investigation highlighted the importance of understanding the complete monitored systems, for both the helicopter and test rig, before interpreting health monitoring data. Further work is required to better define these two systems that include better state awareness of the fielded systems, new sensing technologies, new experimental methods or models that quantify the effect of system design on CI response and new methods for setting thresholds that take into consideration the variance of each system.
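Two of the condition indicators named above have standard textbook forms; as an illustration on a synthetic residual signal (not rig data), RMS is the root-mean-square level and FM4 is the normalized kurtosis of the difference signal:

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a vibration record (the RMS/DA1-style CI)."""
    x = np.asarray(x, float)
    return np.sqrt(np.mean(x ** 2))

def fm4(d):
    """FM4: kurtosis of the difference signal d (time-synchronous average with
    regular gear-mesh components removed), normalized by the squared variance.
    Near 3 for a healthy Gaussian residual; grows with localized tooth damage."""
    d = np.asarray(d, float)
    d = d - d.mean()
    n = len(d)
    return n * np.sum(d ** 4) / np.sum(d ** 2) ** 2

rng = np.random.default_rng(1)
healthy = rng.normal(size=4096)       # synthetic residual, undamaged gear
damaged = healthy.copy()
damaged[2000:2008] += 8.0             # a short, localized damage impulse
```

On the synthetic signals, `fm4(healthy)` stays near 3 while the eight-sample impulse pushes `fm4(damaged)` well above it, which is the sensitivity to localized contact fatigue the report exploits.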

  13. NEUROPSYCHOLOGICAL PERFORMANCE WITHIN-PERSON VARIABILITY IS ASSOCIATED WITH REDUCED TREATMENT CONSENT CAPACITY

    PubMed Central

    Gurrera, Ronald J.; Karel, Michele J.; Azar, Armin R.; Moye, Jennifer

    2013-01-01

    OBJECTIVES The capacity of older adults to make health care decisions is often impaired in dementia and has been linked to performance on specific neuropsychological tasks. Within-person across-test neuropsychological performance variability has been shown to predict future dementia. This study examined the relationship of within-person across-test neuropsychological performance variability to a current construct of treatment decision (consent) capacity. DESIGN Participants completed a neuropsychological test battery and a standardized capacity assessment. Standard scores were used to compute mean neuropsychological performance and within-person across-test variability. SETTING Assessments were performed in the participant’s preferred location (e.g., outpatient clinic office, senior center, or home). PARTICIPANTS Participants were recruited from the community with fliers and advertisements, and consisted of men (N=79) and women (N=80) with (N=83) or without (N=76) significant cognitive impairment. MEASUREMENTS Participants completed the MacArthur Competence Assessment Tool - Treatment (MacCAT-T) and 11 neuropsychological tests commonly used in the cognitive assessment of older individuals. RESULTS Neuropsychological performance and within-person variability were independently associated with continuous and dichotomous measures of capacity, and within-person neuropsychological variability was significantly associated with within-person decisional ability variability. Prevalence of incapacity was greater than expected in participants with and without significant cognitive impairment when decisional abilities were considered separately. CONCLUSIONS These findings are consistent with an emerging construct of consent capacity in which discrete decisional abilities are differentially associated with cognitive processes, and indicate that the sensitivity and accuracy of consent capacity assessments can be improved by evaluating decisional abilities separately. 
PMID:23831178

  14. Characterization of Photoreceivers for LISA

    NASA Technical Reports Server (NTRS)

    Cervantes, F. Guzman; Livas, J.; Silverberg, R.; Buchanan, E.; Stebbins, R.

    2010-01-01

    LISA will use quadrant photoreceivers as front-end devices for the phase meter measuring the motion of drag-free test masses in both angular orientation and separation. We have set up a laboratory testbed for the characterization of photoreceivers. Some of the limiting noise sources have been identified, and their contribution has been either measured or determined from the measured data. We have built a photoreceiver with a 0.5 mm diameter quadrant photodiode with an equivalent input noise better than 1.8 pA/√Hz below 20 MHz and a 3 dB bandwidth of 34 MHz.

  15. Investigation of exomic variants associated with overall survival in ovarian cancer

    PubMed Central

    Ann Chen, Yian; Larson, Melissa C; Fogarty, Zachary C; Earp, Madalene A; Anton-Culver, Hoda; Bandera, Elisa V; Cramer, Daniel; Doherty, Jennifer A; Goodman, Marc T; Gronwald, Jacek; Karlan, Beth Y; Kjaer, Susanne K; Levine, Douglas A; Menon, Usha; Ness, Roberta B; Pearce, Celeste L; Pejovic, Tanja; Rossing, Mary Anne; Wentzensen, Nicolas; Bean, Yukie T; Bisogna, Maria; Brinton, Louise A; Carney, Michael E; Cunningham, Julie M; Cybulski, Cezary; deFazio, Anna; Dicks, Ed M; Edwards, Robert P; Gayther, Simon A; Gentry-Maharaj, Aleksandra; Gore, Martin; Iversen, Edwin S; Jensen, Allan; Johnatty, Sharon E; Lester, Jenny; Lin, Hui-Yi; Lissowska, Jolanta; Lubinski, Jan; Menkiszak, Janusz; Modugno, Francesmary; Moysich, Kirsten B; Orlow, Irene; Pike, Malcolm C; Ramus, Susan J; Song, Honglin; Terry, Kathryn L; Thompson, Pamela J; Tyrer, Jonathan P; van den Berg, David J; Vierkant, Robert A; Vitonis, Allison F; Walsh, Christine; Wilkens, Lynne R; Wu, Anna H; Yang, Hannah; Ziogas, Argyrios; Berchuck, Andrew; Chenevix-Trench, Georgia; Schildkraut, Joellen M; Permuth-Wey, Jennifer; Phelan, Catherine M; Pharoah, Paul D P; Fridley, Brooke L

    2016-01-01

    Background While numerous susceptibility loci for epithelial ovarian cancer (EOC) have been identified, few associations have been reported with overall survival. In the absence of common prognostic genetic markers, we hypothesize that rare coding variants may be associated with overall EOC survival and assessed their contribution in two exome-based genotyping projects of the Ovarian Cancer Association Consortium (OCAC). Methods The primary patient set (Set 1) included 14 independent EOC studies (4293 patients) and 227,892 variants, and a secondary patient set (Set 2) included six additional EOC studies (1744 patients) and 114,620 variants. Because power to detect rare variants individually is reduced, gene-level tests were conducted. Sets were analyzed separately at individual variants and by gene, and then combined with meta-analyses (73,203 variants and 13,163 genes overlapped). Results No individual variant reached genome-wide statistical significance. A SNP previously implicated to be associated with EOC risk and, to a lesser extent, survival, rs8170, showed the strongest evidence of association with survival and similar effect size estimates across sets (P_meta=1.1E-6, HR_Set1=1.17, HR_Set2=1.14). Rare variants in ATG2B, an autophagy gene important for apoptosis, were significantly associated with survival after multiple testing correction (P_meta=1.1E-6; P_corrected=0.01). Conclusions Common variant rs8170 and rare variants in ATG2B may be associated with EOC overall survival, although further study is needed. Impact This study represents the first exome-wide association study of EOC survival to include rare variant analyses, and suggests that complementary single variant and gene-level analyses in large studies are needed to identify rare variants that warrant follow-up study. PMID:26747452
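The abstract does not state which meta-analytic combination OCAC used; as one generic possibility, a sample-size-weighted Stouffer combination of per-set p-values looks like this (the p-values below are hypothetical, and the weights simply reuse the two sets' patient counts):

```python
import math
from statistics import NormalDist

def stouffer(p_values, weights):
    """Weighted Stouffer combination of one-sided p-values across study sets.
    (A generic choice; the abstract does not specify OCAC's exact method.)"""
    nd = NormalDist()
    z = [nd.inv_cdf(1 - p) for p in p_values]        # p -> z per set
    z_comb = sum(w * zi for w, zi in zip(weights, z)) / math.sqrt(
        sum(w * w for w in weights))
    return 1 - nd.cdf(z_comb)                        # combined z -> p

# Hypothetical per-set p-values for one gene, weighted by patient counts
p_meta = stouffer([1e-4, 5e-3], weights=[4293, 1744])
```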

  16. International Life Science Institute North America Cronobacter (Formerly Enterobacter sakazakii) isolate set.

    PubMed

    Ivy, Reid A; Farber, Jeffrey M; Pagotto, Franco; Wiedmann, Martin

    2013-01-01

    Foodborne pathogen isolate collections are important for the development of detection methods, for validation of intervention strategies, and to develop an understanding of pathogenesis and virulence. We have assembled a publicly available Cronobacter (formerly Enterobacter sakazakii) isolate set that consists of (i) 25 Cronobacter sakazakii isolates, (ii) two Cronobacter malonaticus isolates, (iii) one Cronobacter muytjensii isolate, which displays some atypical phenotypic characteristics, biochemical profiles, and colony color on selected differential media, and (iv) two nonclinical Enterobacter asburiae isolates, which show some phenotypic characteristics similar to those of Cronobacter spp. The set consists of human (n = 10), food (n = 11), and environmental (n = 9) isolates. Analysis of partial 16S rDNA sequence and seven-gene multilocus sequence typing data allowed for reliable identification of these isolates to species and identification of 14 isolates as sequence type 4, which had previously been shown to be the most common C. sakazakii sequence type associated with neonatal meningitis. Phenotypic characterization was carried out with API 20E and API 32E test strips and streaking on two selective chromogenic agars; isolates were also assessed for sorbitol fermentation and growth at 45°C. Although these strategies typically produced the same classification as sequence-based strategies, based on a panel of four biochemical tests, one C. sakazakii isolate yielded inconclusive data and one was classified as C. malonaticus. EcoRI automated ribotyping and pulsed-field gel electrophoresis (PFGE) with XbaI separated the set into 23 unique ribotypes and 30 unique PFGE types, respectively, indicating subtype diversity within the set. Subtype and source data for the collection are publicly available in the PathogenTracker database (www.pathogentracker.net), which allows for continuous updating of information on the set, including links to publications that include information on isolates from this collection.

  17. Method for in-situ calibration of electrophoretic analysis systems

    DOEpatents

    Liu, Changsheng; Zhao, Hequan

    2005-05-08

    An electrophoretic system having a plurality of separation lanes is provided with an automatic calibration feature in which each lane is separately calibrated. For each lane, the calibration coefficients map a spectrum of received channel intensities onto values reflective of the relative likelihood of each of a plurality of dyes being present. Individual peaks, reflective of the influence of a single dye, are isolated from among the various sets of detected light intensity spectra, and these can be used to both detect the number of dye components present, and also to establish exemplary vectors for the calibration coefficients which may then be clustered and further processed to arrive at a calibration matrix for the system. The system of the present invention thus permits one to use different dye sets to tag DNA nucleotides in samples which migrate in separate lanes, and also allows for in-situ calibration with new, previously unused dye sets.
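The mapping the patent describes, from a spectrum of channel intensities to values reflecting each dye's contribution via per-lane calibration coefficients, reduces to a linear least-squares unmixing; the 4-channel calibration matrix and baseline offset below are invented for illustration:

```python
import numpy as np

# Hypothetical 4-channel, 4-dye calibration matrix: column j holds the channel
# response of a unit amount of dye j (in practice assembled from isolated
# single-dye peaks found during in-situ calibration)
C = np.array([[0.8, 0.1, 0.0, 0.0],
              [0.2, 0.7, 0.2, 0.0],
              [0.0, 0.2, 0.7, 0.2],
              [0.0, 0.0, 0.1, 0.8]])

# A detected spectrum from one lane: pure dye 1 plus a small baseline offset
observed = C @ np.array([0.0, 1.0, 0.0, 0.0]) + 0.01

# Map channel intensities onto per-dye contributions (relative likelihoods)
dye_amounts, *_ = np.linalg.lstsq(C, observed, rcond=None)
```

Because each lane gets its own calibration matrix, the same unmixing works even when different dye sets migrate in different lanes.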

  18. Effect of emulsifier and viscosity on oil separation in ready-to-use therapeutic food

    USDA-ARS?s Scientific Manuscript database

    Oil separation is a common food quality problem in ready-to-use therapeutic food (RUTF), the shelf-stable, peanut-based food used to treat severe acute malnutrition in home settings. Our objective was to evaluate the effect on oil separation of three emulsifiers at different concentrations in RUTF. ...

  19. 5 CFR 330.704 - Eligibility.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... employee has been or is being separated, that does not have a greater promotion potential than the position from which the employee has been or is being separated; (4) Occupies, or was displaced from a position... requirements set forth in paragraph (a) of this section (e.g., the employee is no longer being separated by RIF...

  20. A "Natural Experiment" in Childrearing Ecologies and Adolescents' Attachment and Separation Representations.

    ERIC Educational Resources Information Center

    Scharf, Miri

    2001-01-01

    Explored long-term effects of different childrearing contexts on attachment and separation representations of Israeli 16- to 18-year-olds. Found that adolescents raised in a kibbutz communal setting showed higher incidence of nonautonomous attachment representations and less competent coping with imagined separations than adolescents raised in a…

  1. High-resolution Identification and Separation of Living Cell Types by Multiple microRNA-responsive Synthetic mRNAs.

    PubMed

    Endo, Kei; Hayashi, Karin; Saito, Hirohide

    2016-02-23

    The precise identification and separation of living cell types is critical to both study cell function and prepare cells for medical applications. However, intracellular information to distinguish live cells remains largely inaccessible. Here, we develop a method for high-resolution identification and separation of cell types by quantifying multiple microRNA (miRNA) activities in live cell populations. We found that a set of miRNA-responsive, in vitro synthesized mRNAs identify a specific cell population as a sharp peak and clearly separate different cell types based on less than two-fold differences in miRNA activities. Increasing the number of miRNA-responsive mRNAs enhanced the capability for cell identification and separation, as we precisely and simultaneously distinguished different cell types with similar miRNA profiles. In addition, the set of synthetic mRNAs separated HeLa cells into subgroups, uncovering heterogeneity of the cells and the level of resolution achievable. Our method could identify target live cells and improve the efficiency of cell purification from heterogeneous populations.

  2. Neural Network for Nanoscience Scanning Electron Microscope Image Recognition.

    PubMed

    Modarres, Mohammad Hadi; Aversa, Rossella; Cozzini, Stefano; Ciancio, Regina; Leto, Angelo; Brandino, Giuseppe Piero

    2017-10-16

    In this paper we applied transfer learning techniques for image recognition, automatic categorization, and labeling of nanoscience images obtained by scanning electron microscope (SEM). Roughly 20,000 SEM images were manually classified into 10 categories to form a labeled training set, which can be used as a reference set for future applications of deep learning enhanced algorithms in the nanoscience domain. The categories chosen spanned the range of 0-Dimensional (0D) objects such as particles, 1D nanowires and fibres, 2D films and coated surfaces, and 3D patterned surfaces such as pillars. The training set was used to retrain several convolutional neural network models (Inception-v3, Inception-v4, ResNet) on the SEM dataset and to compare them. We obtained compatible results by performing feature extraction with the different models on the same dataset. We performed additional analysis of the classifier on a second test set to further investigate the results, both on particular cases and from a statistical point of view. Our algorithm was able to successfully classify around 90% of a test dataset consisting of SEM images, while reduced accuracy was found in the case of images at the boundary between two categories or containing elements of multiple categories. In these cases, the image classification did not identify a predominant category with a high score. We used the statistical outcomes from testing to deploy a semi-automatic workflow able to classify and label images generated by the SEM. Finally, a separate training was performed to determine the volume fraction of coherently aligned nanowires in SEM images. The results were compared with what was obtained using the Local Gradient Orientation method. This example demonstrates the versatility and the potential of transfer learning to address specific tasks of interest in nanoscience applications.
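The transfer-learning recipe the paper applies, freeze a pretrained CNN as a feature extractor and retrain only the classifier head on the new labels, can be sketched with stand-in features; the 2048-D vectors below are synthetic, not real Inception/ResNet outputs, and all counts are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for a frozen pretrained CNN: in the paper's workflow Inception/ResNet
# layers yield one feature vector per SEM image; here we fabricate 2048-D
# features for three of the ten categories.
rng = np.random.default_rng(0)
n_per_class, n_feat, n_classes = 60, 2048, 3
centers = rng.normal(size=(n_classes, n_feat))
X = np.vstack([c + 0.5 * rng.normal(size=(n_per_class, n_feat)) for c in centers])
y = np.repeat(np.arange(n_classes), n_per_class)

# "Retraining" here means fitting only this new linear head on the new labels,
# while the (simulated) feature extractor stays fixed
clf = LogisticRegression(max_iter=1000).fit(X, y)
accuracy = clf.score(X, y)
```

Training only the head is what makes ~20,000 labeled images sufficient: the millions of convolutional parameters stay at their pretrained values.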

  3. Enhancement of face recognition learning in patients with brain injury using three cognitive training procedures.

    PubMed

    Powell, Jane; Letson, Susan; Davidoff, Jules; Valentine, Tim; Greenwood, Richard

    2008-04-01

    Twenty patients with impairments of face recognition, in the context of a broader pattern of cognitive deficits, were administered three new training procedures derived from contemporary theories of face processing to enhance their learning of new faces: semantic association (being given additional verbal information about the to-be-learned faces); caricaturing (presentation of caricatured versions of the faces during training and veridical versions at recognition testing); and part recognition (focusing patients on distinctive features during the training phase). Using a within-subjects design, each training procedure was applied to a different set of 10 previously unfamiliar faces and entailed six presentations of each face. In a "simple exposure" control procedure (SE), participants were given six presentations of another set of faces using the same basic protocol but with no further elaboration. Order of the four procedures was counterbalanced, and each condition was administered on a different day. A control group of 12 patients with similar levels of face recognition impairment were trained on all four sets of faces under SE conditions. Compared to the SE condition, all three training procedures resulted in more accurate discrimination between the 10 studied faces and 10 distractor faces in a post-training recognition test. This did not reflect any intrinsic lesser memorability of the faces used in the SE condition, as evidenced by the comparable performance across face sets by the control group. At the group level, the three experimental procedures were of similar efficacy, and associated cognitive deficits did not predict which technique would be most beneficial to individual patients; however, there was limited power to detect such associations. Interestingly, a pure prosopagnosic patient who was tested separately showed benefit only from the part recognition technique. 
Possible mechanisms for the observed effects, and implications for rehabilitation, are discussed.

  4. Interpolatability distinguishes LOCC from separable von Neumann measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childs, Andrew M.; Leung, Debbie; Mančinska, Laura

    2013-11-15

    Local operations with classical communication (LOCC) and separable operations are two classes of quantum operations that play key roles in the study of quantum entanglement. Separable operations are strictly more powerful than LOCC, but no simple explanation of this phenomenon is known. We show that, in the case of von Neumann measurements, the ability to interpolate measurements is an operational principle that sets apart LOCC and separable operations.

  5. Parametric Studies of Flow Separation using Air Injection

    NASA Technical Reports Server (NTRS)

    Zhang, Wei

    2004-01-01

Boundary layer separation causes the airfoil to stall and therefore imposes dramatic performance degradation on the airfoil. In recent years, flow separation control has been one of the active research areas in the field of aerodynamics due to its promising performance improvements for lifting devices. Active flow separation control techniques include steady and unsteady air injection as well as suction on the airfoil surface. This paper focuses on steady and unsteady air injection on the airfoil. Although wind tunnel experiments have revealed performance improvements on the airfoil using injection techniques, the details of how key variables such as air injection slot geometry and injection angle impact the effectiveness of flow separation control via air injection have not been studied. A parametric study of both steady and unsteady air injection active flow control will be the main objective for this summer. For steady injection, the key variables include the slot geometry, orientation, spacing, air injection velocity, and injection angle. For unsteady injection, the injection frequency will also be investigated. Key metrics such as lift coefficient, drag coefficient, total pressure loss, and total injection mass will be used to measure the effectiveness of the control technique. A design of experiments using the Box-Behnken design is set up in order to determine how each of the variables affects each of the key metrics. Design of experiments is used so that the number of experimental runs is kept to a minimum while still identifying which variables are the key contributors to the responses. The experiments will then be conducted in the 1 ft by 1 ft wind tunnel according to the design-of-experiments settings.
The data obtained from the experiments will be imported into JMP, a statistical software package, to generate sets of response surface equations that represent the statistical empirical model for each of the metrics as a function of the key variables. Next, variables such as the slot geometry can be optimized using the built-in optimizer within JMP. Finally, wind tunnel testing will be conducted using the optimized slot geometry and other key variables to verify the empirical statistical model. The long-term goal of this effort is to assess the impact of active flow control using air injection at the system level, as one of the tasks included in NASA's URETI program with the Georgia Institute of Technology.
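A Box-Behnken design places each run at the midpoints of the edges of the factor cube (every pair of factors at their ±1 levels while the others sit at the center), plus replicated center runs. A minimal generator for coded levels, shown for the three factors in this study (the factor count and number of center runs here are illustrative):

```python
from itertools import combinations, product

def box_behnken(k, n_center=3):
    """Three-level Box-Behnken design for k factors in coded units (-1/0/+1).

    Each run varies one pair of factors at their +/-1 levels while the
    remaining factors are held at the midpoint; center runs are appended.
    """
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * k
            run[i], run[j] = a, b
            runs.append(run)
    runs.extend([[0] * k for _ in range(n_center)])
    return runs

design = box_behnken(3)      # 12 edge-midpoint runs + 3 center runs
for run in design:
    print(run)
```

For three factors this gives 15 runs instead of the 27 of a full three-level factorial, which is the run-saving property the abstract relies on.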

  6. UTM, a universal simulator for lightcurves of transiting systems

    NASA Astrophysics Data System (ADS)

    Deeg, Hans

    2009-02-01

The Universal Transit Modeller (UTM) is a light-curve simulator for all kinds of transiting or eclipsing configurations between arbitrary numbers of several types of objects, which may be stars, planets, planetary moons, and planetary rings. Applications of UTM to date have been mainly in the generation of light-curves for the testing of detection algorithms. For the preparation of such tests for the Corot Mission, a special version has been used to generate multicolour light-curves in Corot's passbands. A separate fitting program, UFIT (Universal Fitter), is part of the UTM distribution and may be used to derive best fits to light-curves for any set of continuously variable parameters. UTM/UFIT is written in IDL code and its source is released in the public domain under the GNU General Public License.

  7. The development of STS payload environmental engineering standards

    NASA Technical Reports Server (NTRS)

    Bangs, W. F.

    1982-01-01

    The presently reported effort to provide a single set of standards for the design, analysis and testing of Space Transportation System (STS) payloads throughout the NASA organization must be viewed as essentially experimental, since the concept of incorporating the diverse opinions and experiences of several separate field research centers may in retrospect be judged too ambitious or perhaps even naive. While each STS payload may have unique characteristics, and the project should formulate its own criteria for environmental design, testing and evaluation, a reference source document providing coordinated standards is expected to minimize the duplication of effort and limit random divergence of practices among the various NASA payload programs. These standards would provide useful information to all potential STS users, and offer a degree of standardization to STS users outside the NASA organization.

  8. Neutron/Gamma-ray discrimination through measures of fit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amiri, Moslem; Prenosil, Vaclav; Cvachovec, Frantisek

    2015-07-01

Statistical tests and their underlying measures of fit can be utilized to separate neutron/gamma-ray pulses in a mixed radiation field. In this article, first the application of a sample statistical test is explained. Fit measurement-based methods require true pulse shapes to be used as reference for discrimination. This requirement makes practical implementation of these methods difficult; typically another discrimination approach should be employed to capture samples of neutrons and gamma-rays before running the fit-based technique. In this article, we also propose a technique to eliminate this requirement. These approaches are applied to several sets of mixed neutron and gamma-ray pulses obtained through different digitizers using a stilbene scintillator in order to analyze them and measure their discrimination quality.
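The core of a fit-measure discriminator is simple: compute a goodness-of-fit statistic between a measured pulse and each reference shape, and assign the pulse to the better-fitting reference. A sketch with synthetic double-exponential scintillation pulses (the decay constants and slow-component fractions below are assumed illustrative values, not stilbene calibration data):

```python
import numpy as np

t = np.linspace(0, 200, 400)   # illustrative time base (e.g. ns)

def pulse(slow_frac, tau_slow, tau_fast=5.0):
    """Normalized scintillation pulse: fast + slow decay components."""
    p = (1 - slow_frac) * np.exp(-t / tau_fast) + slow_frac * np.exp(-t / tau_slow)
    return p / p.max()

# Reference shapes: neutron pulses carry a larger slow component (assumed values).
ref_gamma = pulse(slow_frac=0.05, tau_slow=50.0)
ref_neutron = pulse(slow_frac=0.25, tau_slow=50.0)

def chi2(measured, reference):
    """Sum-of-squared-residuals fit measure against a reference shape."""
    return np.sum((measured - reference) ** 2)

def classify(measured):
    return ("neutron" if chi2(measured, ref_neutron) < chi2(measured, ref_gamma)
            else "gamma")

rng = np.random.default_rng(1)
noisy_neutron = pulse(0.25, 50.0) + rng.normal(0, 0.01, t.size)
noisy_gamma = pulse(0.05, 50.0) + rng.normal(0, 0.01, t.size)
print(classify(noisy_neutron), classify(noisy_gamma))
```

The practical obstacle the abstract describes is visible here: `ref_gamma` and `ref_neutron` must come from somewhere, which is the reference-shape requirement the proposed technique aims to eliminate.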

  9. Microparticle Separation by Cyclonic Separation

    NASA Astrophysics Data System (ADS)

    Karback, Keegan; Leith, Alexander

    2017-11-01

The ability to separate particles based on their size has wide-ranging applications from the industrial to the medical. Currently, cyclonic separators are primarily used in agriculture and manufacturing to siphon out contaminants or products from an air supply. This has led us to believe that cyclonic separation has applications beyond the agricultural and industrial. Using the OpenFOAM computational package, we were able to determine the flow parameters of a vortex in a cyclonic separator in order to segregate dust particles to a cutoff size of tens of nanometers. To test the model, we constructed an experiment to separate a test dust of various-sized particles. We filled a chamber with Arizona test dust and utilized an acoustic suspension technique to segregate particles finer than a coarse cutoff size and introduce them into the cyclonic separation apparatus, where they were further separated via a vortex following our computational model. The size of the particles separated from this experiment will be used to further refine our model. Metropolitan State University of Denver, Colorado University of Denver, Dr. Randall Tagg, Dr. Richard Krantz.

  10. Innovative high-performance liquid chromatography method development for the screening of 19 antimalarial drugs based on a generic approach, using design of experiments, independent component analysis and design space.

    PubMed

    Debrus, B; Lebrun, P; Kindenge, J Mbinze; Lecomte, F; Ceccato, A; Caliaro, G; Mbay, J Mavar Tayey; Boulanger, B; Marini, R D; Rozet, E; Hubert, Ph

    2011-08-05

An innovative methodology based on design of experiments (DoE), independent component analysis (ICA) and design space (DS) was developed in previous works and was tested with a mixture of 19 antimalarial drugs. This global LC method development methodology (i.e. DoE-ICA-DS) was used to optimize the separation of 19 antimalarial drugs to obtain a screening method. The DoE-ICA-DS methodology is fully compliant with the current trend of quality by design. DoE was used to define the set of experiments to model the retention times at the beginning, the apex and the end of each peak. Furthermore, ICA was used to numerically separate coeluting peaks and estimate their unbiased retention times. Gradient time, temperature and pH were selected as the factors of a full factorial design. These retention times were modelled by stepwise multiple linear regressions. A recently introduced critical quality attribute, namely the separation criterion (S), was also used to assess the quality of separations rather than the resolution. Furthermore, the resulting mathematical models were studied from a chromatographic point of view to understand and investigate the chromatographic behaviour of each compound. Good agreement was found between the mathematical models and the behaviours predicted by chromatographic theory. Finally, focusing on quality risk management, the DS was computed as the multidimensional subspace where the probability for the separation criterion to lie in acceptance limits was higher than a defined quality level. The DS was computed by propagating the prediction error from the modelled responses to the quality criterion using Monte Carlo simulations. DoE-ICA-DS allowed optimal operating conditions to be found, yielding a robust screening method for the 19 considered antimalarial drugs in the framework of the fight against counterfeit medicines.
Moreover and only on the basis of the same data set, a dedicated method for the determination of three antimalarial compounds in a pharmaceutical formulation was optimized to demonstrate both the efficiency and flexibility of the methodology proposed in the present study. Copyright © 2011 Elsevier B.V. All rights reserved.
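The Monte Carlo design-space step can be sketched in one dimension: for each candidate operating condition, draw from the model's predictive distribution of the separation criterion S and estimate the probability that S meets its acceptance limit; the DS is the set of conditions where that probability exceeds the quality level. The quadratic model, prediction error, and limits below are hypothetical illustration values, not the paper's fitted models:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical fitted model: predicted separation criterion S as a
# function of a single factor x (e.g. gradient time), peaking at x = 30.
def predict_S(x):
    return 2.5 - 0.004 * (x - 30.0) ** 2

pred_se = 0.25         # assumed prediction standard error
S_min = 2.0            # acceptance limit on the criterion
quality_level = 0.90   # required probability of meeting the limit

xs = np.linspace(10, 50, 41)
n_draws = 20000
probs = []
for x in xs:
    # Propagate the prediction error by Monte Carlo sampling.
    draws = rng.normal(predict_S(x), pred_se, n_draws)
    probs.append(np.mean(draws >= S_min))
probs = np.array(probs)

design_space = xs[probs >= quality_level]
print(f"DS: x in [{design_space.min():.1f}, {design_space.max():.1f}]")
```

Note how the DS is narrower than the region where the *mean* prediction meets the limit: the probability requirement prices in the prediction uncertainty, which is the risk-management point of the approach.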

  11. Test-Retest Reliability and Practice Effects of the Stability Evaluation Test.

    PubMed

    Williams, Richelle M; Corvo, Matthew A; Lam, Kenneth C; Williams, Travis A; Gilmer, Lesley K; McLeod, Tamara C Valovich

    2017-01-17

Postural control plays an essential role in concussion evaluation. The Stability Evaluation Test (SET) aims to objectively analyze postural control by measuring sway velocity on NeuroCom's VSR portable force platform (Natus, San Carlos, CA). To assess the test-retest reliability and practice effects of the SET protocol. Cohort. Research Laboratory. Fifty healthy adults (males=20, females=30, age=25.30±3.60 years, height=166.60±12.80 cm, mass=68.80±13.90 kg). All participants completed four trials of the SET. Each trial consisted of six 20-second balance tests with eyes closed, under the following conditions: double-leg firm (DFi), single-leg firm (SFi), tandem firm (TFi), double-leg foam (DFo), single-leg foam (SFo), and tandem foam (TFo). Trials were separated by 5-minute seated rest periods. The dependent variable was sway velocity (deg/sec), with lower values indicating better balance. Sway velocity was recorded for each of the six conditions as well as a composite score for each trial. Test-retest reliability was analyzed across the four trials with intraclass correlation coefficients. Practice effects were analyzed with repeated-measures analysis of variance, followed by Tukey post-hoc comparisons for any significant main effects (p<.05). Sway velocity reliability values were good to excellent: DFi (ICC=0.88;95%CI:0.81,0.92), SFi (ICC=0.75;95%CI:0.61,0.85), TFi (ICC=0.84;95%CI:0.75,0.90), DFo (ICC=0.83;95%CI:0.74,0.90), SFo (ICC=0.82;95%CI:0.72,0.89), TFo (ICC=0.81;95%CI:0.69,0.88), and composite score (ICC=0.93;95%CI:0.88,0.95). Significant practice effects (p<.05) were noted on the SFi, DFo, SFo, TFo conditions, and composite scores. Our results suggest the SET has good to excellent reliability for the assessment of postural control in healthy adults. Due to the practice effects noted, a familiarization session is recommended (i.e., all 6 conditions) prior to recording the data.
Future studies should evaluate injured patients to determine meaningful change scores during various injuries.
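ICCs of this kind come from a two-way repeated-measures ANOVA decomposition of a participants × trials score matrix. A minimal sketch of the single-measure consistency form, ICC(3,1), on synthetic sway-velocity data (the choice of the 3,1 form and all data values are illustrative assumptions; the abstract does not state which ICC model was used):

```python
import numpy as np

def icc_consistency(scores):
    """ICC(3,1): two-way mixed-effects, single-measure, consistency form.

    scores: (n_subjects, k_trials) array of repeated measurements.
    """
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)   # between subjects
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)   # between trials
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

rng = np.random.default_rng(7)
# Synthetic sway velocities: 50 participants x 4 trials, a stable
# per-participant trait plus small trial-to-trial noise (values illustrative).
trait = rng.normal(1.0, 0.3, size=(50, 1))
scores = trait + rng.normal(0, 0.05, size=(50, 4))
icc = icc_consistency(scores)
print(f"ICC(3,1) = {icc:.2f}")
```

Because the trial-to-trial noise is small relative to between-participant variation, the ICC lands in the "excellent" range, mirroring the composite-score result reported above.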

  12. High-Reynolds-Number Test of a 5-Percent-Thick Low-Aspect-Ratio Semispan Wing in the Langley 0.3-Meter Transonic Cryogenic Tunnel: Wing Pressure Distributions

    NASA Technical Reports Server (NTRS)

    Chu, Julio; Lawing, Pierce L.

    1990-01-01

A high Reynolds number test of a 5 percent thick low aspect ratio semispan wing was conducted in the adaptive wall test section of the Langley 0.3 m Transonic Cryogenic Tunnel. The model tested had a planform and a NACA 64A-105 airfoil section similar to those of the pressure-instrumented canard on the X-29 experimental aircraft. Chordwise pressure data for Mach numbers of 0.3, 0.7, and 0.9 were measured for an angle-of-attack range of -4 to 15 deg. The associated Reynolds numbers, based on the geometric mean chord, encompass most of the flight regime of the canard. This test was a free transition investigation. A summary of the wing pressures is presented without analysis, as well as adapted test section top and bottom wall pressure signatures. However, the presented graphical data indicate Reynolds-number-dependent complex leading-edge separation phenomena. This data set supplements the existing high Reynolds number database and is useful for comparison with computational codes.

  13. The influence of tyre characteristics on measures of rolling performance during cross-country mountain biking.

    PubMed

    Macdermid, Paul William; Fink, Philip W; Stannard, Stephen R

    2015-01-01

    This investigation sets out to assess the effect of five different models of mountain bike tyre on rolling performance over hard-pack mud. Independent characteristics included total weight, volume, tread surface area and tread depth. One male cyclist performed multiple (30) trials of a deceleration field test to assess reliability. Further tests performed on a separate occasion included multiple (15) trials of the deceleration test and six fixed power output hill climb tests for each tyre. The deceleration test proved to be reliable as a means of assessing rolling performance via differences in initial and final speed (coefficient of variation (CV) = 4.52%). Overall differences between tyre performance for both deceleration test (P = 0.014) and hill climb (P = 0.032) were found, enabling significant (P < 0.0001 and P = 0.049) models to be generated, allowing tyre performance prediction based on tyre characteristics. The ideal tyre for rolling and climbing performance on hard-pack surfaces would be to decrease tyre weight by way of reductions in tread surface area and tread depth while keeping volume high.
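The tyre-performance models described here are multiple linear regressions of a rolling metric on tyre characteristics. A minimal ordinary-least-squares sketch with synthetic tyres (the characteristic ranges, the "true" coefficients, and the speed-loss response below are all illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic tyre characteristics: weight (g), tread surface area (cm^2),
# tread depth (mm) -- ranges are illustrative.
n_tyres = 30
X = np.column_stack([
    rng.uniform(500, 900, n_tyres),   # weight
    rng.uniform(50, 150, n_tyres),    # tread surface area
    rng.uniform(1, 5, n_tyres),       # tread depth
])

# Assumed relation: heavier tyres and larger/deeper tread roll worse,
# i.e. lose more speed in the deceleration test (coefficients illustrative).
true_beta = np.array([0.004, 0.01, 0.3])
speed_loss = X @ true_beta + 0.5 + rng.normal(0, 0.05, n_tyres)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n_tyres), X])
coef, *_ = np.linalg.lstsq(A, speed_loss, rcond=None)
print("intercept and coefficients:", np.round(coef, 3))
```

A model of this shape is what lets tyre performance be predicted from characteristics alone, which is the practical conclusion the abstract draws.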

  14. Characterization of intra-annual reflectance properties of land cover classes in southeastern South Dakota using Landsat TM and ETM+ data

    USGS Publications Warehouse

    Vogelmann, James E.; DeFelice, Thomas P.

    2003-01-01

    Landsat-7 and Landsat-5 have orbits that are offset from each other by 8 days. During the time that the sensors on both satellites are operational, there is an opportunity for conducting analyses that incorporate multiple intra-annual high spatial resolution data sets for characterizing the Earth's land surface. In the current study, nine Landsat thematic mapper (TM) and enhanced thematic mapper plus (ETM+) data sets, covering the same path and row on different dates, were acquired during a 1-year time interval for a region in southeastern South Dakota and analyzed. Scenes were normalized using pseudoinvariant objects, and digital data from a series of test sites were extracted from the imagery and converted to surface reflectance. Sunphotometer data acquired on site were used to atmospherically correct the data. Ground observations that were made throughout the growing season by a large group of volunteers were used to help interpret spectroradiometric patterns and trends. Normalized images were found to be very effective in portraying the seasonal patterns of reflectance change that occurred throughout the region. Many of the radiometric patterns related to plant growth and development, but some also related to different background properties. The different kinds of land cover in the region were spectrally and radiometrically characterized and were found to have different seasonal patterns of reflectance. The degree to which the land cover classes could be separated spectrally and radiometrically, however, depended on the time of year during which the data sets were acquired, and no single data set appeared to be adequate for separating all types of land cover. This has practical implications for classification studies because known patterns of seasonal reflectance properties for the different types of land cover within a region will facilitate selection of the most appropriate data sets for producing land cover classifications.
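Normalization with pseudoinvariant objects amounts to fitting a linear gain/offset between the digital numbers of invariant targets in a subject scene and a reference scene, then applying that mapping to the whole subject image. A sketch with synthetic DNs (the gain, offset, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Reference-scene DNs of pseudoinvariant targets (e.g. roads, rooftops, deep water).
ref_dn = rng.uniform(20, 200, 40)
# Subject scene: same targets under different illumination/atmosphere,
# modeled here as an assumed linear gain/offset plus noise.
subj_dn = 1.3 * ref_dn + 12.0 + rng.normal(0, 1.0, ref_dn.size)

# Fit the subject -> reference mapping from the invariant pixels only.
gain, offset = np.polyfit(subj_dn, ref_dn, deg=1)

# Apply the mapping to a full (synthetic) subject image to normalize it.
subject_image = 1.3 * rng.uniform(0, 255, (4, 4)) + 12.0
normalized = gain * subject_image + offset
print(f"gain={gain:.3f}, offset={offset:.2f}")
```

Because only the invariant pixels drive the fit, seasonal changes in vegetation and soil elsewhere in the scene are preserved rather than normalized away, which is what makes the multi-date reflectance comparisons in the study meaningful.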

  15. Genomic Prediction of Gene Bank Wheat Landraces.

    PubMed

    Crossa, José; Jarquín, Diego; Franco, Jorge; Pérez-Rodríguez, Paulino; Burgueño, Juan; Saint-Pierre, Carolina; Vikram, Prashant; Sansaloni, Carolina; Petroli, Cesar; Akdemir, Deniz; Sneller, Clay; Reynolds, Matthew; Tattaris, Maria; Payne, Thomas; Guzman, Carlos; Peña, Roberto J; Wenzl, Peter; Singh, Sukhwinder

    2016-07-07

    This study examines genomic prediction within 8416 Mexican landrace accessions and 2403 Iranian landrace accessions stored in gene banks. The Mexican and Iranian collections were evaluated in separate field trials, including an optimum environment for several traits, and in two separate environments (drought, D and heat, H) for the highly heritable traits, days to heading (DTH), and days to maturity (DTM). Analyses accounting and not accounting for population structure were performed. Genomic prediction models include genotype × environment interaction (G × E). Two alternative prediction strategies were studied: (1) random cross-validation of the data in 20% training (TRN) and 80% testing (TST) (TRN20-TST80) sets, and (2) two types of core sets, "diversity" and "prediction", including 10% and 20%, respectively, of the total collections. Accounting for population structure decreased prediction accuracy by 15-20% as compared to prediction accuracy obtained when not accounting for population structure. Accounting for population structure gave prediction accuracies for traits evaluated in one environment for TRN20-TST80 that ranged from 0.407 to 0.677 for Mexican landraces, and from 0.166 to 0.662 for Iranian landraces. Prediction accuracy of the 20% diversity core set was similar to accuracies obtained for TRN20-TST80, ranging from 0.412 to 0.654 for Mexican landraces, and from 0.182 to 0.647 for Iranian landraces. The predictive core set gave similar prediction accuracy as the diversity core set for Mexican collections, but slightly lower for Iranian collections. Prediction accuracy when incorporating G × E for DTH and DTM for Mexican landraces for TRN20-TST80 was around 0.60, which is greater than without the G × E term. For Iranian landraces, accuracies were 0.55 for the G × E model with TRN20-TST80. Results show promising prediction accuracies for potential use in germplasm enhancement and rapid introgression of exotic germplasm into elite materials. 
Copyright © 2016 Crossa et al.
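Strategy (1) above splits the accessions into 20% training and 80% testing and scores accuracy as the correlation between predicted and observed phenotypes. A structural sketch with a synthetic marker matrix and ridge regression standing in for the genomic prediction model (accession counts, marker counts, effect sizes, and the ridge penalty are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-ins: 500 accessions x 200 markers, additive trait.
n, p = 500, 200
M = rng.integers(0, 3, size=(n, p)).astype(float)   # marker scores 0/1/2
effects = rng.normal(0, 0.2, p)
y = M @ effects + rng.normal(0, 1.0, n)             # phenotype with noise

# TRN20-TST80: 20% of accessions train the model, 80% are predicted.
idx = rng.permutation(n)
trn, tst = idx[: n // 5], idx[n // 5:]

# Ridge regression as a simple stand-in for a genomic prediction model.
lam = 10.0
beta = np.linalg.solve(M[trn].T @ M[trn] + lam * np.eye(p),
                       M[trn].T @ y[trn])

pred = M[tst] @ beta
accuracy = np.corrcoef(pred, y[tst])[0, 1]
print(f"prediction accuracy (r): {accuracy:.2f}")
```

The study repeats such splits (and compares them against the 10% and 20% core-set strategies); this sketch shows only the mechanics of a single TRN20-TST80 partition.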

  16. Attentional sets influence perceptual load effects, but not dilution effects.

    PubMed

    Benoni, Hanna; Zivony, Alon; Tsal, Yehoshua

    2014-01-01

    Perceptual load theory [Lavie, N. (1995). Perceptual load as a necessary condition for selective attention. Journal of Experimental Psychology: Human Perception and Performance, 21, 451-468.; Lavie, N., & Tsal, Y. (1994) Perceptual load as a major determinant of the locus of selection in visual attention. Perception & Psychophysics, 56, 183-197.] proposes that interference from distractors can only be avoided in situations of high perceptual load. This theory has been supported by blocked design manipulations separating low load (when the target appears alone) and high load (when the target is embedded among neutral letters). Tsal and Benoni [(2010a). Diluting the burden of load: Perceptual load effects are simply dilution effects. Journal of Experimental Psychology: Human Perception and Performance, 36, 1645-1656.; Benoni, H., & Tsal, Y. (2010). Where have we gone wrong? Perceptual load does not affect selective attention. Vision Research, 50, 1292-1298.] have recently shown that these manipulations confound perceptual load with "dilution" (the mere presence of additional heterogeneous items in high-load situations). Theeuwes, Kramer, and Belopolsky [(2004). Attentional set interacts with perceptual load in visual search. Psychonomic Bulletin & Review, 11, 697-702.] independently questioned load theory by suggesting that attentional sets might also affect distractor interference. When high load and low load were intermixed, and participants could not prepare for the presentation that followed, both the low-load and high-load trials showed distractor interference. This result may also challenge the dilution account, which proposes a stimulus-driven mechanism. In the current study, we presented subjects with both fixed and mixed blocks, including a mix of dilution trials with low-load trials and with high-load trials. We thus separated the effect of dilution from load and tested the influence of attentional sets on each component. 
The results revealed that whereas perceptual load effects are influenced by attentional sets, the dilution component is not. This strengthens the notion that dilution is a stimulus-driven mechanism, which enables effective selectivity.

  17. Genomic Prediction of Gene Bank Wheat Landraces

    PubMed Central

    Crossa, José; Jarquín, Diego; Franco, Jorge; Pérez-Rodríguez, Paulino; Burgueño, Juan; Saint-Pierre, Carolina; Vikram, Prashant; Sansaloni, Carolina; Petroli, Cesar; Akdemir, Deniz; Sneller, Clay; Reynolds, Matthew; Tattaris, Maria; Payne, Thomas; Guzman, Carlos; Peña, Roberto J.; Wenzl, Peter; Singh, Sukhwinder

    2016-01-01

    This study examines genomic prediction within 8416 Mexican landrace accessions and 2403 Iranian landrace accessions stored in gene banks. The Mexican and Iranian collections were evaluated in separate field trials, including an optimum environment for several traits, and in two separate environments (drought, D and heat, H) for the highly heritable traits, days to heading (DTH), and days to maturity (DTM). Analyses accounting and not accounting for population structure were performed. Genomic prediction models include genotype × environment interaction (G × E). Two alternative prediction strategies were studied: (1) random cross-validation of the data in 20% training (TRN) and 80% testing (TST) (TRN20-TST80) sets, and (2) two types of core sets, “diversity” and “prediction”, including 10% and 20%, respectively, of the total collections. Accounting for population structure decreased prediction accuracy by 15–20% as compared to prediction accuracy obtained when not accounting for population structure. Accounting for population structure gave prediction accuracies for traits evaluated in one environment for TRN20-TST80 that ranged from 0.407 to 0.677 for Mexican landraces, and from 0.166 to 0.662 for Iranian landraces. Prediction accuracy of the 20% diversity core set was similar to accuracies obtained for TRN20-TST80, ranging from 0.412 to 0.654 for Mexican landraces, and from 0.182 to 0.647 for Iranian landraces. The predictive core set gave similar prediction accuracy as the diversity core set for Mexican collections, but slightly lower for Iranian collections. Prediction accuracy when incorporating G × E for DTH and DTM for Mexican landraces for TRN20-TST80 was around 0.60, which is greater than without the G × E term. For Iranian landraces, accuracies were 0.55 for the G × E model with TRN20-TST80. Results show promising prediction accuracies for potential use in germplasm enhancement and rapid introgression of exotic germplasm into elite materials. 
PMID:27172218

  18. SU-E-T-378: Limits and Possibilities of a Simplistic Approach to Whole Breast Radiation Therapy Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hipp, E; Osa, E; No, H

    2014-06-01

Purpose: Challenges for radiation therapy in developing countries include unreliable infrastructure and high patient load. We propose a system to treat whole breast in the prone position without computed tomography and/or planning software. Methods: Six parameters are measured using calipers and levels with the patient prone in the treatment position. (1) The largest separation; (2) the angle that separation makes with the horizontal; (3) the separation 2 cm posterior to the nipple; (4) the vertical distance between these two separations; (5) the sup/inf length and (6) angle of the desired posterior field edge. The data in (5), (6), and (2) provide field length, collimator and gantry angles. Isocenter is set to the midpoint of (1), anterior jaw setting is 20 cm (half-beam setup), and the dose is prescribed to a point 1.5 cm anterior to isocenter. MUs and wedge angles are calculated using an MU calculator and by requiring 100% dose at that point and 100-105% at the midpoint of (3). Measurements on 30 CT scans were taken to obtain the data 1-6. To test the resulting MU/wedge combinations, they were entered into Eclipse (Varian) and dose distributions were calculated. The MU/wedge combinations were recorded and tabulated. Results: Performing a dose volume histogram analysis, the contoured breast V95 was 90.5%, and the average V90 was 94.1%. The maximum dose never exceeded 114.5% (average 108%). The lung V20 was <5% for 96.7%, and the heart V5 was <10% for 93.3% of our sample. Conclusion: A method to provide prone whole breast treatment without CT planning was developed. The method provides reasonable coverage and normal tissue sparing. This approach is not recommended if imaging and planning capabilities are available; it was designed to specifically avoid the need for CT planning and should be reserved for clinics that need to avoid that step.

  19. Optimization of digital droplet polymerase chain reaction for quantification of genetically modified organisms

    PubMed Central

    Gerdes, Lars; Iwobi, Azuka; Busch, Ulrich; Pecoraro, Sven

    2016-01-01

    Digital PCR in droplets (ddPCR) is an emerging method for more and more applications in DNA (and RNA) analysis. Special requirements when establishing ddPCR for analysis of genetically modified organisms (GMO) in a laboratory include the choice between validated official qPCR methods and the optimization of these assays for a ddPCR format. Differentiation between droplets with positive reaction and negative droplets, that is setting of an appropriate threshold, can be crucial for a correct measurement. This holds true in particular when independent transgene and plant-specific reference gene copy numbers have to be combined to determine the content of GM material in a sample. Droplets which show fluorescent units ranging between those of explicit positive and negative droplets are called ‘rain’. Signals of such droplets can hinder analysis and the correct setting of a threshold. In this manuscript, a computer-based algorithm has been carefully designed to evaluate assay performance and facilitate objective criteria for assay optimization. Optimized assays in return minimize the impact of rain on ddPCR analysis. We developed an Excel based ‘experience matrix’ that reflects the assay parameters of GMO ddPCR tests performed in our laboratory. Parameters considered include singleplex/duplex ddPCR, assay volume, thermal cycler, probe manufacturer, oligonucleotide concentration, annealing/elongation temperature, and a droplet separation evaluation. We additionally propose an objective droplet separation value which is based on both absolute fluorescence signal distance of positive and negative droplet populations and the variation within these droplet populations. The proposed performance classification in the experience matrix can be used for a rating of different assays for the same GMO target, thus enabling employment of the best suited assay parameters. Main optimization parameters include annealing/extension temperature and oligonucleotide concentrations. 
The droplet separation value allows for easy and reproducible assay performance evaluation. The combination of separation value with the experience matrix simplifies the choice of adequate assay parameters for a given GMO event. PMID:27077048
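The separation value described above combines the distance between the positive and negative droplet fluorescence populations with the variation within them. The authors' exact formula is not given in the abstract; a plausible resolution-style formulation (an assumption, not their definition) can be sketched on synthetic droplet amplitudes:

```python
import numpy as np

rng = np.random.default_rng(6)

def separation_value(neg, pos):
    """Resolution-style separation metric (assumed form): distance between
    population means scaled by the sum of their spreads."""
    return (pos.mean() - neg.mean()) / (pos.std() + neg.std())

# Synthetic droplet fluorescence amplitudes (arbitrary units, illustrative).
negatives = rng.normal(1000, 120, 5000)
well_separated = rng.normal(8000, 400, 3000)   # clean assay, little 'rain'
rainy = rng.normal(2500, 900, 3000)            # heavy 'rain', smeared positives

s_clean = separation_value(negatives, well_separated)
s_rainy = separation_value(negatives, rainy)
print(f"clean assay: {s_clean:.1f}, rainy assay: {s_rainy:.1f}")
```

A metric of this shape rewards assays whose positive and negative clouds are far apart and tight, and penalizes 'rain', which is how it can rank alternative assay parameters in the experience matrix.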

  20. Optimization of digital droplet polymerase chain reaction for quantification of genetically modified organisms.

    PubMed

    Gerdes, Lars; Iwobi, Azuka; Busch, Ulrich; Pecoraro, Sven

    2016-03-01

    Droplet digital PCR (ddPCR) is an emerging method for a growing range of applications in DNA (and RNA) analysis. Establishing ddPCR for the analysis of genetically modified organisms (GMO) in a laboratory involves choosing among validated official qPCR methods and optimizing those assays for the ddPCR format. Differentiating droplets with a positive reaction from negative droplets, that is, setting an appropriate threshold, can be crucial for a correct measurement. This holds true in particular when independent transgene and plant-specific reference gene copy numbers have to be combined to determine the content of GM material in a sample. Droplets whose fluorescence falls between that of clearly positive and clearly negative droplets are called 'rain'; their signals can hinder analysis and the correct setting of a threshold. In this manuscript, a computer-based algorithm was designed to evaluate assay performance and provide objective criteria for assay optimization. Optimized assays in turn minimize the impact of rain on ddPCR analysis. We developed an Excel-based 'experience matrix' that reflects the assay parameters of the GMO ddPCR tests performed in our laboratory. Parameters considered include singleplex/duplex ddPCR, assay volume, thermal cycler, probe manufacturer, oligonucleotide concentration, annealing/elongation temperature, and a droplet separation evaluation. We additionally propose an objective droplet separation value based on both the absolute distance in fluorescence between the positive and negative droplet populations and the variation within these populations. The proposed performance classification in the experience matrix can be used to rate different assays for the same GMO target, enabling selection of the best-suited assay parameters. The main optimization parameters are annealing/extension temperature and oligonucleotide concentrations. The droplet separation value allows easy and reproducible evaluation of assay performance, and its combination with the experience matrix simplifies the choice of adequate assay parameters for a given GMO event.
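The abstract names the two ingredients of the separation value, the absolute fluorescence distance between the positive and negative droplet populations and the variation within them, but not the exact formula. A minimal sketch assuming a resolution-style ratio (the function name and scale factor `k` are illustrative, not the authors'):

```python
import statistics

def separation_value(neg_droplets, pos_droplets, k=3.0):
    """Illustrative separation score: distance between the mean fluorescence
    of the positive and negative droplet populations, scaled by k standard
    deviations of combined within-population spread. The exact formula used
    by the authors is not given in the abstract."""
    mu_neg = statistics.mean(neg_droplets)
    mu_pos = statistics.mean(pos_droplets)
    sd_neg = statistics.stdev(neg_droplets)
    sd_pos = statistics.stdev(pos_droplets)
    return (mu_pos - mu_neg) / (k * (sd_pos + sd_neg))

# Well-separated populations score high; 'rain' (spread-out fluorescence
# values) inflates the within-population spread and pushes the score down.
clean = separation_value([980, 1000, 1020, 990, 1010],
                         [4980, 5000, 5020, 4990, 5010])
rainy = separation_value([900, 1100, 1000, 950, 1050],
                         [3000, 5000, 4000, 3500, 4500])
```

A score like this is reproducible across operators, which is the point of replacing visual threshold-setting with an objective criterion.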

  1. Effect of temperature on composite sandwich structures subjected to low velocity impact [aircraft construction materials]

    NASA Technical Reports Server (NTRS)

    Sharma, A. V.

    1980-01-01

    The effect of low velocity projectile impact on sandwich-type structural components was investigated. The materials used in the fabrication of the impact surface were graphite-, Kevlar-, and boron-fibers with appropriate epoxy matrices. The testing of the specimens was performed at moderately low and high temperatures as well as at room temperature to assess the impact-initiated strength degradation of the laminates. Eleven laminates with different stacking sequences, orientations, and thicknesses were tested. The low energy projectile impact is considered to simulate the damage caused by runway debris, the dropping of hand tools during servicing, etc., on secondary aircraft structures fabricated with composite materials. The results show the preload and impact energy combinations necessary to cause catastrophic failure in the laminates tested. A set of faired curves indicating the failure thresholds is shown separately for the tension- and compression-loaded laminates. The specific strengths and moduli for the various laminates tested are also given.

  2. Impact-initiated damage thresholds in composites

    NASA Technical Reports Server (NTRS)

    Sharma, A. V.

    1980-01-01

    An experimental investigation was conducted to study the effect of low velocity projectile impact on the sandwich-type structural components. The materials used in the fabrication of the impact surface were graphite-, Kevlar-, and boron-fibers with appropriate epoxy matrices. The testing of the specimens was performed at moderately low- and high-temperatures as well as at room temperature to assess the impact-initiated strength degradation of the laminates. Eleven laminates with different stacking sequences, orientations, and thicknesses were tested. The low energy projectile impact is considered to simulate the damage caused by runway debris, dropping of the hand tools during servicing, etc., on the secondary aircraft structures fabricated with the composite materials. The results show the preload and the impact energy combinations necessary to cause catastrophic failures in the laminates tested. A set of faired curves indicating the failure thresholds is shown separately for the tension- and compression-loaded laminates. The specific-strengths and -moduli for the various laminates tested are also given.

  3. 2011 Release of the Evaluated Nuclear Data Library (ENDL2011.0)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, D. A.; Beck, B.; Descalles, M. A.

    LLNL’s Computational Nuclear Physics Group and Nuclear Theory and Modeling Group have collaborated to produce the last of three major releases of LLNL’s evaluated nuclear database, ENDL2011. ENDL2011 is designed to support LLNL’s current and future nuclear data needs by providing the best nuclear data available to our programmatic customers. This library contains many new evaluations for radiochemical diagnostics, structural materials, and thermonuclear reactions. We have made an effort to eliminate all holes in reaction networks, allowing in-line isotopic creation and depletion calculations. We have striven to keep ENDL2011 at the leading edge of nuclear data library development by reviewing and incorporating new evaluations as they are made available to the nuclear data community. Finally, this release is our most highly tested release, as we have strengthened our already rigorous testing regime by adding tests against IPPE Activation Ratio Measurements, many more new critical assemblies, and a more complete set of classified testing (to be detailed separately).

  4. Adoption of recommended practices and basic technologies in a low-income setting

    PubMed Central

    English, Mike; Gathara, David; Mwinga, Stephen; Ayieko, Philip; Opondo, Charles; Aluvaala, Jalemba; Kihuba, Elesban; Mwaniki, Paul; Were, Fred; Irimu, Grace; Wasunna, Aggrey; Mogoa, Wycliffe; Nyamai, Rachel

    2014-01-01

    Objective: In global health considerable attention is focused on the search for innovations; however, reports tracking their adoption in routine hospital settings from low-income countries are absent. Design and setting: We used data collected on a consistent panel of indicators during four separate cross-sectional hospital surveys in Kenya to track changes over a period of 11 years (2002–2012). Main outcome measures: Basic resource availability, use of diagnostics and uptake of recommended practices. Results: There appeared little change in availability of a panel of 28 basic resources (median 71% in 2002 to 82% in 2012), although availability of specific feeds for severe malnutrition and vitamin K improved. Use of blood glucose and HIV testing increased but remained inappropriately low throughout. Commonly (malaria) and uncommonly (lumbar puncture) performed diagnostic tests frequently failed to inform practice, while pulse oximetry, a simple and cheap technology, was rarely available even in 2012. However, increasing adherence to prescribing guidance occurred during a period from 2006 to 2012 in which efforts were made to disseminate guidelines. Conclusions: Findings suggest changes in clinical practices possibly linked to dissemination of guidelines at reasonable scale. However, full availability of basic resources was not attained, and major gaps likely exist between the potential and actual impacts of simple diagnostics and technologies, representing problems with availability, adoption and successful utilisation. These findings are relevant to debates on scaling up in low-income settings and to those developing novel therapeutic or diagnostic interventions. PMID:24482351

  5. Are mind wandering rates an artifact of the probe-caught method? Using self-caught mind wandering in the classroom to test, and reject, this possibility.

    PubMed

    Varao-Sousa, Trish L; Kingstone, Alan

    2018-06-26

    Mind wandering (MW) reports often rely on individuals responding to specific external thought probes. Researchers have used this probe-caught method almost exclusively, due to its reliability across a wide range of testing situations. However, it remains an open question whether the probe-caught MW rates in more complex settings converge with those for simpler tasks, because of the rather artificial and controlled nature of the probe-caught methodology itself, which is shared across the different settings. To address this issue, we measured MW in a real-world lecture, during which students indicated whether they were mind wandering by simply catching themselves (as one would normally do in real life) or by catching themselves and responding to thought probes. Across three separate lectures, self-caught MW reports were stable and unaffected by the inclusion of MW probes. That the probe rates were similar to those found in prior classroom research and did not affect the self-caught MW rates strongly suggests that the past consistency of probe-caught MW rates across a range of different settings is not an artifact of the thought-probe method. Our study also indicates that the self-caught MW methodology is a reliable way to acquire MW data. The extension of measurement techniques to include students' self-caught reports provides valuable information about how to successfully and naturalistically monitor MW in lecture settings, outside the laboratory.

  6. Implementation of tuberculosis infection control measures in designated hospitals in Zhejiang Province, China: are we doing enough to prevent nosocomial tuberculosis infections?

    PubMed Central

    Chen, Bin; Liu, Min; Gu, Hua; Wang, Xiaomeng; Qiu, Wei; Shen, Jian; Jiang, Jianmin

    2016-01-01

    Objectives: Tuberculosis (TB) infection control measures are very important to prevent nosocomial transmission and protect healthcare workers (HCWs) in hospitals. The TB infection control situation in TB treatment institutions in southeastern China has not been studied previously. Therefore, the aim of this study was to investigate the implementation of TB infection control measures in TB-designated hospitals in Zhejiang Province, China. Design: Cross-sectional survey using observation and interviews. Setting: All TB-designated hospitals (n=88) in Zhejiang Province, China in 2014. Primary and secondary outcome measures: Managerial, administrative, environmental and personal infection control measures were assessed using descriptive analyses and univariate logistic regression analysis. Results: The TB-designated hospitals treated a median of 3030 outpatients (IQR 764–7094) and 279 patients with confirmed TB (IQR 154–459) annually, and 160 patients with TB (IQR 79–426) were hospitalised in the TB wards. Most infection control measures were performed by the TB-designated hospitals. Measures including regular monitoring of TB infection control in high-risk areas (49%), shortening wait times (42%), and providing a separate waiting area for patients with suspected TB (46%) were sometimes neglected. N95 respirators were available in 85 (97%) hospitals, although only 44 (50%) hospitals checked that they fit. Hospitals with more TB staff and higher admission rates of patients with TB were more likely to set up a dedicated sputum collection area and to conduct annual respirator fit testing. Conclusions: TB infection control measures were generally implemented by the TB-designated hospitals. Measures including separation of suspected patients, regular monitoring of infection control practices, and regular fit testing of respirators should be strengthened. Infection control measures for sputum collection and respirator fit testing should be improved in hospitals with lower admission rates of patients with TB. PMID:26940111

  7. Scalar relativistic computations of nuclear magnetic shielding and g-shifts with the zeroth-order regular approximation and range-separated hybrid density functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aquino, Fredy W.; Govind, Niranjan; Autschbach, Jochen

    2011-10-01

    Density functional theory (DFT) calculations of NMR chemical shifts and molecular g-tensors with Gaussian-type orbitals are implemented via second-order energy derivatives within the scalar relativistic zeroth order regular approximation (ZORA) framework. Nonhybrid functionals, standard (global) hybrids, and range-separated (Coulomb-attenuated, long-range corrected) hybrid functionals are tested. Origin invariance of the results is ensured by use of gauge-including atomic orbital (GIAO) basis functions. The new implementation in the NWChem quantum chemistry package is verified by calculations of nuclear shielding constants for the heavy atoms in HX (X = F, Cl, Br, I, At) and H2X (X = O, S, Se, Te, Po), and Te chemical shifts in a number of tellurium compounds. The basis set and functional dependence of g-shifts is investigated for 14 radicals with light and heavy atoms. The problem of accurately predicting F NMR shielding in UF6-nCln, n = 1 to 6, is revisited. The results are sensitive to approximations in the density functionals, indicating a delicate balance of DFT self-interaction vs. correlation. For the uranium halides, the results with the range-separated functionals are mixed.

  8. 7 CFR 3300.37 - Testing of a mechanical refrigerating appliance.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Testing of a mechanical refrigerating appliance. 3300... SPECIAL EQUIPMENT Procedures for Separate Testing of Mechanical Refrigerating Appliances § 3300.37 Testing of a mechanical refrigerating appliance. For separate testing of a mechanical refrigerating appliance...

  9. 3D-QSAR comparative molecular field analysis on opioid receptor antagonists: pooling data from different studies.

    PubMed

    Peng, Youyi; Keenan, Susan M; Zhang, Qiang; Kholodovych, Vladyslav; Welsh, William J

    2005-03-10

    Three-dimensional quantitative structure-activity relationship (3D-QSAR) models were constructed using comparative molecular field analysis (CoMFA) on a series of opioid receptor antagonists. To obtain statistically significant and robust CoMFA models, a sizable data set of naltrindole and naltrexone analogues was assembled by pooling biological and structural data from independent studies. A process of "leave one data set out", similar to the traditional "leave one out" cross-validation procedure employed in partial least squares (PLS) analysis, was utilized to study the feasibility of pooling data in the present case. These studies indicate that our approach yields statistically significant and highly predictive CoMFA models from the pooled data set of delta, mu, and kappa opioid receptor antagonists. All models showed excellent internal predictability and self-consistency: q(2) = 0.69/r(2) = 0.91 (delta), q(2) = 0.67/r(2) = 0.92 (mu), and q(2) = 0.60/r(2) = 0.96 (kappa). The CoMFA models were further validated using two separate test sets: one test set was selected randomly from the pooled data set, while the other test set was retrieved from other published sources. The overall excellent agreement between CoMFA-predicted and experimental binding affinities for a structurally diverse array of ligands across all three opioid receptor subtypes gives testimony to the predictive power of these models. CoMFA field analysis demonstrated that the variations in binding affinity of opioid antagonists are dominated by steric rather than electrostatic interactions with the three opioid receptor binding sites. The CoMFA steric-electrostatic contour maps corresponding to the delta, mu, and kappa opioid receptor subtypes reflected the characteristic similarities and differences in the familiar "message-address" concept of opioid receptor ligands. Structural modifications to increase selectivity for the delta over mu and kappa opioid receptors have been predicted on the basis of the CoMFA contour maps. The structure-activity relationships (SARs) together with the CoMFA models should find utility for the rational design of subtype-selective opioid receptor antagonists.
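The "leave one data set out" scheme described above can be sketched in a few lines. In this illustrative version, ordinary least squares on synthetic descriptors stands in for the PLS model fitted to CoMFA fields, so only the validation logic, not the chemistry, is represented; all study names and data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for pooled descriptor data from three independent
# studies that share one underlying structure-activity trend (hypothetical).
w_true = np.array([1.5, -2.0, 0.5])
studies = {}
for name in ("study_A", "study_B", "study_C"):
    X = rng.normal(size=(20, 3))
    y = X @ w_true + rng.normal(scale=0.1, size=20)
    studies[name] = (X, y)

def leave_one_data_set_out(studies):
    """Hold out each study in turn, fit on the remaining pooled data, and
    report predictive R^2 on the held-out study (a q^2-style statistic)."""
    scores = {}
    for held_out in studies:
        X_tr = np.vstack([X for n, (X, y) in studies.items() if n != held_out])
        y_tr = np.hstack([y for n, (X, y) in studies.items() if n != held_out])
        w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
        X_te, y_te = studies[held_out]
        resid = y_te - X_te @ w
        scores[held_out] = 1.0 - resid.var() / y_te.var()
    return scores

scores = leave_one_data_set_out(studies)
```

If the pooled studies are mutually consistent, each held-out score stays high; a study that cannot be predicted from the others would flag a pooling problem.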

  10. Audiologist-driven versus patient-driven fine tuning of hearing instruments.

    PubMed

    Boymans, Monique; Dreschler, Wouter A

    2012-03-01

    Two methods of fine tuning the initial settings of hearing aids were compared: an audiologist-driven approach using real-ear measurements, and a patient-driven fine-tuning approach using feedback from real-life situations. The patient-driven fine tuning was conducted with the Amplifit(®) II system using audiovisual clips. The audiologist-driven fine tuning was based on the NAL-NL1 prescription rule. Both settings were compared using the same hearing aids in two 6-week trial periods following a randomized, blinded cross-over design. After each trial period, the settings were evaluated by insertion-gain measurements. Performance was evaluated by speech tests in quiet, in noise, and in time-reversed speech, presented at 0° and with spatially separated sound sources. Subjective results were evaluated using extensive questionnaires and audiovisual video clips. A total of 73 participants were included. On average, higher gain values were found for the audiologist-driven settings than for the patient-driven settings, especially at 1000 and 2000 Hz. Better objective performance was obtained with the audiologist-driven settings for speech perception in quiet and in time-reversed speech. This was supported by better scores on a number of subjective judgments and in the subjective ratings of video clips. The perception of loud sounds was rated higher with the audiologist-driven settings than with the patient-driven settings, and the overall preference was in favor of the audiologist-driven settings for 67% of the participants.

  11. 40 CFR 792.43 - Test system care facilities.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (a) A testing facility shall have a sufficient number of animal rooms or other test system areas, as... accomplished within a room or area by housing them separately in different chambers or aquaria. Separation of... different tests. (b) A testing facility shall have a number of animal rooms or other test system areas...

  12. 40 CFR 792.43 - Test system care facilities.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (a) A testing facility shall have a sufficient number of animal rooms or other test system areas, as... accomplished within a room or area by housing them separately in different chambers or aquaria. Separation of... different tests. (b) A testing facility shall have a number of animal rooms or other test system areas...

  13. 40 CFR 792.43 - Test system care facilities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (a) A testing facility shall have a sufficient number of animal rooms or other test system areas, as... accomplished within a room or area by housing them separately in different chambers or aquaria. Separation of... different tests. (b) A testing facility shall have a number of animal rooms or other test system areas...

  14. Extraction-Separation Performance and Dynamic Modeling of Orion Test Vehicles with Adams Simulation: 2nd Edition

    NASA Technical Reports Server (NTRS)

    Fraire, Usbaldo, Jr.; Anderson, Keith; Varela, Jose G.; Bernatovich, Michael A.

    2015-01-01

    NASA's Orion Capsule Parachute Assembly System (CPAS) project has advanced into the third generation of its parachute test campaign and requires technically comprehensive modeling capabilities to simulate multi-body dynamics (MBD) of test articles released from a C-17. Safely extracting a 30,000 lbm mated test article from a C-17 and performing stable mid-air separation maneuvers requires an understanding of the interaction between elements in the test configuration and how they are influenced by extraction parachute performance, aircraft dynamics, aerodynamics, separation dynamics, and kinetic energy experienced by the system. During the real-time extraction and deployment sequences, these influences can be highly unsteady and difficult to bound. An avionics logic window based on time, pitch, and pitch rate is used to account for these effects and target a favorable separation state in real time. The Adams simulation has been employed to fine-tune this window, as well as predict and reconstruct the coupled dynamics of the Parachute Test Vehicle (PTV) and Cradle Platform Separation System (CPSS) from aircraft extraction through the mid-air separation event. The test-technique for the extraction of CPAS test articles has evolved with increased complexity and requires new modeling concepts to ensure the test article is delivered to a stable test condition for the programmer phase. Prompted by unexpected dynamics and hardware malfunctions in drop tests, these modeling improvements provide a more accurate loads prediction by incorporating a spring-damper line-model derived from the material properties. The qualification phase of CPAS testing is on the horizon and modeling increasingly complex test-techniques with Adams is vital to successfully qualify the Orion parachute system for human spaceflight.

  15. Air separation with temperature and pressure swing

    DOEpatents

    Cassano, Anthony A.

    1986-01-01

    A chemical absorbent air separation process is set forth which uses a temperature swing absorption-desorption cycle in combination with a pressure swing wherein the pressure is elevated in the desorption stage of the process.

  16. 40 CFR 246.201-5 - Recommended procedures: Methods of separation and collection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... compartmentalized vehicles. (b) For multi-family dwellings, separated materials may be placed in bulk containers... stations may be set up at convenient locations to which residents bring recyclables. These stations should...

  17. Measuring sexual orientation in adolescent health surveys: evaluation of eight school-based surveys.

    PubMed

    Saewyc, Elizabeth M; Bauer, Greta R; Skay, Carol L; Bearinger, Linda H; Resnick, Michael D; Reis, Elizabeth; Murphy, Aileen

    2004-10-01

    To examine the performance of various items measuring sexual orientation within 8 school-based adolescent health surveys in the United States and Canada from 1986 through 1999. Analyses examined nonresponse and unsure responses to sexual orientation items compared with other survey items, demographic differences in responses, tests for response set bias, and congruence of responses to multiple orientation items; analytical methods included frequencies, contingency tables with Chi-square, and ANOVA with least significant differences (LSD) post hoc tests; all analyses were conducted separately by gender. In all surveys, nonresponse rates for orientation questions were similar to other sexual questions, but not higher; younger students, immigrants, and students with learning disabilities were more likely to skip items or select "unsure." Sexual behavior items had the lowest nonresponse, but fewer than half of all students reported sexual behavior, limiting its usefulness for indicating orientation. Item placement in the survey, wording, and response set bias all appeared to influence nonresponse and unsure rates. Specific recommendations include standardizing wording across future surveys, and pilot testing items with diverse ages and ethnic groups of teens before use. All three dimensions of orientation should be assessed where possible; when limited to single items, sexual attraction may be the best choice. Specific wording suggestions are offered for future surveys.

  18. Use of Solid Phase Extraction in the Biochemistry Laboratory to Separate Different Lipids

    ERIC Educational Resources Information Center

    Flurkey, William H.

    2005-01-01

    Solid-phase extraction (SPE) was used to demonstrate how various lipids and lipid classes could be separated in a biochemistry laboratory setting. Three different SPE methods were chosen for their ability to separate a lipid mixture consisting of a combination of either a fatty acid, a triacylglycerol, a mono- or diacylglycerol, phospholipid,…

  19. High speed flow cytometric separation of viable cells

    DOEpatents

    Sasaki, D.T.; Van den Engh, G.J.; Buckie, A.M.

    1995-11-14

    Hematopoietic cell populations are separated to provide cell sets and subsets as viable cells with high purity and high yields, based on the number of original cells present in the mixture. High-speed flow cytometry is employed using light characteristics of the cells to separate the cells, where high flow speeds are used to reduce the sorting time.

  20. High speed flow cytometric separation of viable cells

    DOEpatents

    Sasaki, Dennis T.; Van den Engh, Gerrit J.; Buckie, Anne-Marie

    1995-01-01

    Hematopoietic cell populations are separated to provide cell sets and subsets as viable cells with high purity and high yields, based on the number of original cells present in the mixture. High-speed flow cytometry is employed using light characteristics of the cells to separate the cells, where high flow speeds are used to reduce the sorting time.

  1. Spectral multi-energy CT texture analysis with machine learning for tissue classification: an investigation using classification of benign parotid tumours as a testing paradigm.

    PubMed

    Al Ajmi, Eiman; Forghani, Behzad; Reinhold, Caroline; Bayat, Maryam; Forghani, Reza

    2018-06-01

    There is a rich amount of quantitative information in spectral datasets generated from dual-energy CT (DECT). In this study, we compare the performance of texture analysis performed on multi-energy datasets to that of virtual monochromatic images (VMIs) at 65 keV only, using classification of the two most common benign parotid neoplasms as a testing paradigm. Forty-two patients with pathologically proven Warthin tumour (n = 25) or pleomorphic adenoma (n = 17) were evaluated. Texture analysis was performed on VMIs ranging from 40 to 140 keV in 5-keV increments (multi-energy analysis) or 65-keV VMIs only, which is typically considered equivalent to single-energy CT. Random forest (RF) models were constructed for outcome prediction using separate randomly selected training and testing sets or the entire patient set. Using multi-energy texture analysis, tumour classification in the independent testing set had accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of 92%, 86%, 100%, 100%, and 83%, compared to 75%, 57%, 100%, 100%, and 63%, respectively, for single-energy analysis. Multi-energy texture analysis demonstrates superior performance compared to single-energy texture analysis of VMIs at 65 keV for classification of benign parotid tumours. • We present and validate a paradigm for texture analysis of DECT scans. • Multi-energy dataset texture analysis is superior to single-energy dataset texture analysis. • DECT texture analysis has high accuracy for diagnosis of benign parotid tumours. • DECT texture analysis with machine learning can enhance non-invasive diagnostic tumour evaluation.
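As a rough sketch of the modeling step described above, a random forest can be trained and evaluated on separate randomly selected training and testing sets. The synthetic feature matrix below merely stands in for texture features computed from the 40-140 keV VMIs; the class counts match the study, but everything else is invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical stand-in for texture features extracted from 21 VMIs
# (40-140 keV in 5-keV steps); real features would come from the images.
n_energies = 21
warthin = rng.normal(loc=0.0, scale=1.0, size=(25, n_energies))
pleomorphic = rng.normal(loc=1.0, scale=1.0, size=(17, n_energies))

X = np.vstack([warthin, pleomorphic])
y = np.array([0] * 25 + [1] * 17)  # 0 = Warthin, 1 = pleomorphic adenoma

# Separate randomly selected training and testing sets, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)  # fraction of held-out cases classified correctly
```

Evaluating only on the held-out split, rather than the entire patient set, is what makes the reported accuracy an honest estimate of generalization.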

  2. Testing ΛCDM at the lowest redshifts with SN Ia and galaxy velocities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huterer, Dragan; Shafer, Daniel L.; Scolnic, Daniel M.

    2017-05-01

    Peculiar velocities of objects in the nearby universe are correlated due to the gravitational pull of large-scale structure. By measuring these velocities, we have a unique opportunity to test the cosmological model at the lowest redshifts. We perform this test, using current data to constrain the amplitude of the "signal" covariance matrix describing the velocities and their correlations. We consider a new, well-calibrated "Supercal" set of low-redshift SNe Ia as well as a set of distances derived from the fundamental plane relation of 6dFGS galaxies. Analyzing the SN and galaxy data separately, both results are consistent with the peculiar velocity signal of our fiducial ΛCDM model, ruling out the noise-only model with zero peculiar velocities at greater than 7σ (SNe) and 8σ (galaxies). When the two data sets are combined appropriately, the precision of the test increases slightly, resulting in a constraint on the signal amplitude of A = 1.05 (+0.25/−0.21), where A = 1 corresponds to our fiducial model. Equivalently, we report an 11% measurement of the product of the growth rate and amplitude of mass fluctuations evaluated at z_eff = 0.02, fσ8 = 0.428 (+0.048/−0.045), valid for our fiducial ΛCDM model. We explore the robustness of the results to a number of conceivable variations in the analysis and find that individual variations shift the preferred signal amplitude by less than ∼0.5σ. We briefly discuss our Supercal SN Ia results in comparison with our previous results using the JLA compilation.

  3. Enhancement of the NMSU Channel Error Simulator to Provide User-Selectable Link Delays

    NASA Technical Reports Server (NTRS)

    Horan, Stephen; Wang, Ru-Hai

    2000-01-01

    This is the third in a continuing series of reports describing the development of the Space-to-Ground Link Simulator (SGLS) to be used for testing data transfers under simulated space channel conditions. The SGLS is based upon Virtual Instrument (VI) software techniques for managing the error generation, link data rate configuration, and, now, selection of the link delay value. In this report we detail the changes that needed to be made to the SGLS VI configuration to permit link delays to be added to the basic error generation and link data rate control capabilities. This was accomplished by modifying the rate-splitting VIs to include a buffer to hold the incoming data for the duration selected by the user to emulate the channel link delay. In sample tests of this configuration, the TCP/IP ftp service and the SCPS-fp service were used to transmit 10-KB data files using both symmetric (both forward and return links set to 115200 bps) and unsymmetric (forward link set at 2400 bps and return link set at 115200 bps) link configurations. Transmission times were recorded at bit error rates of 0 through 10^-5 to give an indication of the link performance. In these tests, we noted separate timings for the protocol setup time to initiate the file transfer and the variation in the actual file transfer time caused by channel errors. Both protocols showed similar performance to that seen earlier for the symmetric and unsymmetric channels. This time, the protocol setup delays showed that they could double the transmission time and need to be accounted for in mission planning. Both protocols also showed difficulty in transmitting large data files over large link delays. In these tests, there was no clear favorite between the TCP/IP ftp and the SCPS-fp. Based upon these tests, further testing is recommended to extend the results to different file transfer configurations.
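The delay mechanism described above, buffering incoming data for the user-selected link delay before releasing it, can be sketched as a timestamped FIFO. This is an illustrative Python sketch, not the SGLS itself, which is built from LabVIEW-style VIs; the class and method names are invented:

```python
from collections import deque

class DelayBuffer:
    """Minimal link-delay emulator sketch: data entering the buffer is
    released only after the configured one-way delay has elapsed."""

    def __init__(self, delay_s):
        self.delay_s = delay_s
        self._queue = deque()  # FIFO of (arrival_time, payload) pairs

    def push(self, now, payload):
        """Record a payload arriving at time `now` (seconds)."""
        self._queue.append((now, payload))

    def pop_ready(self, now):
        """Return every payload whose delay has expired by time `now`."""
        out = []
        while self._queue and now - self._queue[0][0] >= self.delay_s:
            out.append(self._queue.popleft()[1])
        return out

buf = DelayBuffer(delay_s=1.25)   # hypothetical one-way link delay
buf.push(0.0, b"frame-1")
buf.push(0.5, b"frame-2")
early = buf.pop_ready(1.0)        # nothing has aged past 1.25 s yet
late = buf.pop_ready(1.3)         # frame-1 released (1.3 - 0.0 >= 1.25)
```

Inserting such a buffer between the error-generation and rate-control stages leaves those capabilities untouched, which mirrors how the report describes modifying only the rate-splitting VIs.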

  4. Experimental study of separator effect and shift angle on crossflow wind turbine performance

    NASA Astrophysics Data System (ADS)

    Fahrudin; Tjahjana, Dominicus Danardono Dwi Prija; Santoso, Budi

    2018-02-01

    This paper presents experimental results on the influence of a separator and of shift angle on a crossflow vertical-axis wind turbine. Modification with a separator and a shift angle is expected to improve the thrust on the blades and thus the efficiency. The wind turbine designs were tested at different wind speeds. Two crossflow turbine configurations were analyzed experimentally: a 3-stage crossflow rotor and a 2-stage crossflow rotor with a shift angle. The maximum power coefficient obtained was Cpmax = 0.13 at a wind speed of 4.05 m/s for 1 separator, and Cpmax = 0.12 for a 12° shift angle at a wind speed of 4.05 m/s. In this study, the power characteristics of the crossflow rotor with separator and shift angle were tested. The experimental data were collected with variations of 2 separators, shift angles of 0°, 6°, and 12°, and wind speeds of 3.01 - 4.85 m/s.

  5. Measurement of glomerular filtration rate in the conscious rat.

    PubMed

    Pestel, Sabine; Krzykalla, Volker; Weckesser, Gerhard

    2007-01-01

    Glomerular filtration rate (GFR) is an important parameter for studying drug-induced impairment of renal function in rats. The GFR is calculated from the concentrations of creatinine and blood urea nitrogen (BUN) in serum and in urine, respectively. Following current protocols, serum and urine samples must be taken from the same animal; thus, to determine time-dependent effects it is necessary to use one separate group of animals for each time point. We developed a statistical test that allows the GFR to be analyzed from two different groups of animals: one used for repeated serum analysis and the other for repeated urine analysis. Serum and urine samples were taken from two different sets of rats that were otherwise treated identically, i.e., same drug doses, routes of administration (per os or per inhalation) and tap water loading. For each dose group, the GFR mean and standard deviation were determined, and statistical analysis was performed to identify differences between the dose groups. After determination of the optimal time points for measurement, the effect on GFR of three reference compounds, furosemide, hydrochlorothiazide and formoterol, was calculated. The results showed that the diuretic drugs furosemide and hydrochlorothiazide decreased the GFR and the antidiuretic drug formoterol increased the GFR, as counter-regulation of urine loss or urine retention, respectively. A mathematical model and the corresponding algorithm were developed that can be used to calculate the GFR and to test for differences between groups from two separate sets of rats, one used for urine and the other for serum analysis. This new method has the potential to reduce the number of animals needed and to improve the quality of data generated from various groups of animals in renal function studies.
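The GFR computation the abstract refers to is, in essence, a renal clearance: C = (U × V) / P, the urine concentration of the marker times the urine flow rate, divided by its serum concentration. A minimal sketch with invented rat-scale numbers (the units and values are illustrative, not from the study):

```python
def clearance(urine_conc, urine_flow, serum_conc):
    """Renal clearance, e.g. creatinine clearance as a GFR estimate:
    C = (U * V) / P, with U and P in the same concentration units and
    V the urine flow rate; the result has the units of V."""
    return urine_conc * urine_flow / serum_conc

# Hypothetical rat-scale values: urine creatinine 50 mg/dL, urine flow
# 0.02 mL/min, serum creatinine 0.5 mg/dL.
gfr = clearance(50.0, 0.02, 0.5)   # -> 2.0 mL/min
```

The study's contribution is statistical: because U·V and P enter the formula independently, they can be estimated from two separate sets of rats rather than paired samples from the same animal.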

  6. Bio-geographic classification of the Caspian Sea

    NASA Astrophysics Data System (ADS)

    Fendereski, F.; Vogt, M.; Payne, M. R.; Lachkar, Z.; Gruber, N.; Salmanmahiny, A.; Hosseini, S. A.

    2014-03-01

    Like other inland seas, the Caspian Sea (CS) has been influenced by climate change and anthropogenic disturbance during recent decades, yet the scientific understanding of this water body remains poor. In this study, an eco-geographical classification of the CS based on physical information derived from space and in-situ data is developed and tested against a set of biological observations. We used a two-step classification procedure, consisting of (i) a data reduction with self-organizing maps (SOMs) and (ii) a synthesis of the most relevant features into a reduced number of marine ecoregions using the Hierarchical Agglomerative Clustering (HAC) method. From an initial set of 12 potential physical variables, 6 independent variables were selected for the classification algorithm, i.e., sea surface temperature (SST), bathymetry, sea ice, seasonal variation of sea surface salinity (DSSS), total suspended matter (TSM) and its seasonal variation (DTSM). The classification results reveal a robust separation between the northern and the middle/southern basins as well as a separation of the shallow near-shore waters from those off-shore. The observed patterns in ecoregions can be attributed to differences in climate and geochemical factors such as distance from river, water depth and currents. A comparison of the annual and monthly mean Chl a concentrations between the different ecoregions shows significant differences (Kruskal-Wallis rank test, P < 0.05). In particular, we found differences in phytoplankton phenology, with differences in the date of bloom initiation, its duration and amplitude between ecoregions. A first qualitative evaluation of differences in community composition based on recorded presence-absence patterns of 27 different species of plankton, fish and benthic invertebrate also confirms the relevance of the ecoregions as proxies for habitats with common biological characteristics.
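    The between-ecoregion differences in Chl a were assessed with the Kruskal-Wallis rank test. For reference, the H statistic behind that test can be sketched in pure Python (function name illustrative; tie correction omitted for brevity); H is then compared against a chi-square distribution with k-1 degrees of freedom to obtain the P value:

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction) for k independent groups."""
    pooled = sorted(x for g in groups for x in g)
    rank_of = {}
    i = 0
    while i < len(pooled):                    # assign average ranks over ties
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    n = len(pooled)
    s = sum(len(g) * (sum(rank_of[x] for x in g) / len(g)) ** 2 for g in groups)
    return 12.0 / (n * (n + 1)) * s - 3 * (n + 1)
```

For fully separated groups such as [1,2,3], [4,5,6], [7,8,9] the statistic reaches its maximum for n = 9, H = 7.2.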

  7. A novel automated spike sorting algorithm with adaptable feature extraction.

    PubMed

    Bestel, Robert; Daus, Andreas W; Thielemann, Christiane

    2012-10-15

    To study the electrophysiological properties of neuronal networks, in vitro studies based on microelectrode arrays have become a viable tool for analysis. Although the field is in constant progress, a challenging task remains: the development of an efficient spike sorting algorithm that allows an accurate signal analysis at the single-cell level. Most sorting algorithms currently available only extract a specific feature type, such as the principal components or Wavelet coefficients of the measured spike signals, in order to separate different spike shapes generated by different neurons. However, due to the great variety in the obtained spike shapes, the derivation of an optimal feature set is still a very complex issue that current algorithms struggle with. To address this problem, we propose a novel algorithm that (i) extracts a variety of geometric, Wavelet and principal component-based features and (ii) automatically derives a feature subset most suitable for sorting an individual set of spike signals. This new approach evaluates the probability distribution of the obtained spike features and consequently determines the candidates most suitable for the actual spike sorting. These candidates can be formed into an individually adjusted set of spike features, allowing a separation of the various shapes present in the obtained neuronal signal by a subsequent expectation maximisation clustering algorithm. Test results with simulated data files and data obtained from chick embryonic neurons cultured on microelectrode arrays showed an excellent classification result, indicating the superior performance of the described algorithm. Copyright © 2012 Elsevier B.V. All rights reserved.
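    The adaptive feature-selection step can be illustrated with a toy stand-in. The criterion below is an assumption, not the authors' method: a feature whose distribution is multimodal (low excess kurtosis) tends to separate spike clusters well, so we keep the k lowest-kurtosis feature columns:

```python
import numpy as np

def select_spike_features(features, k=3):
    """Toy sketch of adaptive feature selection (assumed criterion, not the
    paper's): standardize each feature column, score it by excess kurtosis,
    and keep the k columns whose distributions look most multimodal."""
    x = (features - features.mean(axis=0)) / features.std(axis=0)
    excess_kurtosis = (x ** 4).mean(axis=0) - 3.0
    keep = np.argsort(excess_kurtosis)[:k]  # lowest kurtosis first
    return features[:, keep]
```

A clearly bimodal column (values clustered at -1 and +1) scores an excess kurtosis of -2 and is selected ahead of uniform or peaked columns.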

  8. Characterization of microporous separators for lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Venugopal, Ganesh; Moore, John; Howard, Jason; Pendalwar, Shekhar

    Several properties, including porosity, pore-size distribution, thickness, electrochemical stability and mechanical properties, have to be optimized before a membrane can qualify as a separator for a lithium-ion battery. In this paper we present results of characterization studies carried out on some commercially available lithium-ion battery separators. The relevance of these results to battery performance and safety is also discussed. Porosity values were measured using a simple liquid absorption test, and gas permeabilities were measured using a novel pressure drop technique that is similar in principle to the Gurley test. For separators from one particular manufacturer, the trend observed in the pressure drop times was found to be in agreement with the Gurley numbers reported by the separator manufacturer. Shutdown characteristics of the separators were studied by measuring the impedance of batteries containing the separators as a function of temperature. Overcharge tests were also performed to confirm that separator shutdown is indeed a useful mechanism for preventing thermal runaway situations. Polyethylene-containing separators, in particular trilayer laminates of polypropylene, polyethylene and polypropylene, appear to have the most attractive properties for preventing thermal runaway in lithium-ion cells.
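    The liquid-absorption porosity measurement mentioned above reduces to a simple mass balance: the mass gained on wetting, divided by the liquid density, gives the pore volume. A sketch with illustrative variable names:

```python
def porosity_from_absorption(dry_mass_g, wet_mass_g,
                             liquid_density_g_cm3, membrane_volume_cm3):
    """Porosity = volume of absorbed liquid (filling the pores)
    divided by the geometric volume of the membrane sample."""
    pore_volume_cm3 = (wet_mass_g - dry_mass_g) / liquid_density_g_cm3
    return pore_volume_cm3 / membrane_volume_cm3
```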

  9. CFD Validation Studies for Hypersonic Flow Prediction

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2001-01-01

    A series of experiments to measure pressure and heating for code validation involving hypersonic, laminar, separated flows was conducted at the Calspan-University at Buffalo Research Center (CUBRC) in the Large Energy National Shock (LENS) tunnel. The experimental data serves as a focus for a code validation session but are not available to the authors until the conclusion of this session. The first set of experiments considered here involve Mach 9.5 and Mach 11.3 N2 flow over a hollow cylinder-flare with 30 degree flare angle at several Reynolds numbers sustaining laminar, separated flow. Truncated and extended flare configurations are considered. The second set of experiments, at similar conditions, involves flow over a sharp, double cone with fore-cone angle of 25 degrees and aft-cone angle of 55 degrees. Both sets of experiments involve 30 degree compressions. Location of the separation point in the numerical simulation is extremely sensitive to the level of grid refinement in the numerical predictions. The numerical simulations also show a significant influence of Reynolds number on extent of separation. Flow unsteadiness was easily introduced into the double cone simulations using aggressive relaxation parameters that normally promote convergence.

  11. Parental genomes mix in mule and human cell nuclei.

    PubMed

    Hepperger, Claudia; Mayer, Andreas; Merz, Julia; Vanderwall, Dirk K; Dietzel, Steffen

    2009-06-01

    Whether chromosome sets inherited from father and mother occupy separate spaces in the cell nucleus is a question first asked over 110 years ago. Recently, the nuclear organization of the genome has come increasingly into focus as an important level of epigenetic regulation. In this context, it is indispensable to know whether or not parental genomes are spatially separated. Genome separation had been demonstrated for plant hybrids and for the early mammalian embryo. Conclusive studies for somatic mammalian cell nuclei are lacking because homologous chromosomes from the two parents cannot be distinguished within a species. We circumvented this problem by investigating the three-dimensional distribution of chromosomes in mule lymphocytes and fibroblasts. Genomic DNA of horse and donkey was used as probes in fluorescence in situ hybridization under conditions where only tandem repetitive sequences were detected. We thus could determine the distribution of maternal and paternal chromosome sets in structurally preserved interphase nuclei for the first time. In addition, we investigated the distribution of several pairs of chromosomes in human bilobed granulocytes. Qualitative and quantitative image evaluation did not reveal any evidence for the separation of parental genomes. On the contrary, we observed mixing of maternal and paternal chromosome sets.

  12. Testing for ontological errors in probabilistic forecasting models of natural systems

    PubMed Central

    Marzocchi, Warner; Jordan, Thomas H.

    2014-01-01

    Probabilistic forecasting models describe the aleatory variability of natural systems as well as our epistemic uncertainty about how the systems work. Testing a model against observations exposes ontological errors in the representation of a system and its uncertainties. We clarify several conceptual issues regarding the testing of probabilistic forecasting models for ontological errors: the ambiguity of the aleatory/epistemic dichotomy, the quantification of uncertainties as degrees of belief, the interplay between Bayesian and frequentist methods, and the scientific pathway for capturing predictability. We show that testability of the ontological null hypothesis derives from an experimental concept, external to the model, that identifies collections of data, observed and not yet observed, that are judged to be exchangeable when conditioned on a set of explanatory variables. These conditional exchangeability judgments specify observations with well-defined frequencies. Any model predicting these behaviors can thus be tested for ontological error by frequentist methods; e.g., using P values. In the forecasting problem, prior predictive model checking, rather than posterior predictive checking, is desirable because it provides more severe tests. We illustrate experimental concepts using examples from probabilistic seismic hazard analysis. Severe testing of a model under an appropriate set of experimental concepts is the key to model validation, in which we seek to know whether a model replicates the data-generating process well enough to be sufficiently reliable for some useful purpose, such as long-term seismic forecasting. Pessimistic views of system predictability fail to recognize the power of this methodology in separating predictable behaviors from those that are not. PMID:25097265
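    The frequentist testing of a probabilistic forecast described here can be sketched as a Monte-Carlo prior predictive check: draw many synthetic datasets from the model's prior predictive distribution, and report the fraction whose test statistic is at least as extreme as the observed one. The names and the one-sided notion of "extreme" below are illustrative assumptions:

```python
import random

def prior_predictive_p(observed_stat, simulate_stat, n_sims=20000, seed=1):
    """Monte-Carlo prior predictive check: the P value is the fraction of
    test statistics, computed on datasets drawn from the prior predictive
    distribution, that are at least as extreme as the observed statistic."""
    rng = random.Random(seed)
    count = sum(simulate_stat(rng) >= observed_stat for _ in range(n_sims))
    return count / n_sims
```

A small P value flags an ontological error: the observed data sit in the tail of what the model, including its stated uncertainties, considers plausible.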

  13. rPM6 parameters for phosphorous and sulphur-containing open-shell molecules

    NASA Astrophysics Data System (ADS)

    Saito, Toru; Takano, Yu

    2018-03-01

    In this article, we have introduced a reparameterisation of PM6 (rPM6) for phosphorus and sulphur to achieve a better description of open-shell species containing the two elements. Two sets of parameters have been optimised separately using our training sets. The performance of the spin-unrestricted rPM6 (UrPM6) method with the optimised parameters is evaluated against 14 radical species, each containing either a phosphorus or a sulphur atom, and compared with the original UPM6 and spin-unrestricted density functional theory (UDFT) methods. The standard UPM6 calculations fail to describe the adiabatic singlet-triplet energy gaps correctly, and may cause significant structural mismatches with UDFT-optimised geometries. Leaving aside three difficult cases, tests on 11 open-shell molecules strongly indicate the superior performance of UrPM6, which provides much better agreement with the results of UDFT methods for geometric and electronic properties.

  14. Application of point-to-point matching algorithms for background correction in on-line liquid chromatography-Fourier transform infrared spectrometry (LC-FTIR).

    PubMed

    Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M

    2010-03-15

    A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
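    The core of a point-to-point matching scheme like this can be sketched in a few lines: within a user-chosen spectral window, find the reference spectrum closest to the sample spectrum and subtract it as the eluent background. This is a simplified illustration under assumed names and a summed-absolute-difference metric, not the authors' exact implementation:

```python
import numpy as np

def background_correct(sample, references, window):
    """Pick the reference spectrum with the smallest summed absolute
    point-to-point difference inside `window` (index pair), then
    subtract it from the full sample spectrum."""
    lo, hi = window
    scores = [np.abs(sample[lo:hi] - ref[lo:hi]).sum() for ref in references]
    best = int(np.argmin(scores))
    return sample - references[best], best
```

Choosing the window over background-dominated wavenumbers keeps analyte peaks from biasing the match.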

  15. Cycle accurate and cycle reproducible memory for an FPGA based hardware accelerator

    DOEpatents

    Asaad, Sameh W.; Kapur, Mohit

    2016-03-15

    A method, system and computer program product are disclosed for using a Field Programmable Gate Array (FPGA) to simulate operations of a device under test (DUT). The DUT includes a device memory having a first number of input ports, and the FPGA is associated with a target memory having a second number of input ports, the second number being less than the first. In one embodiment, a given set of inputs is applied to the device memory at a frequency Fd and in a defined cycle of time, and the given set of inputs is applied to the target memory at a frequency Ft. Ft is greater than Fd, and cycle accuracy is maintained between the device memory and the target memory. In an embodiment, a cycle accurate model of the DUT memory is created by separating the DUT memory interface protocol from the target memory storage array.
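    One plausible reading of the Ft > Fd requirement (an assumption for illustration, not the patent's stated formula) is that the target memory time-multiplexes the DUT memory's ports: with fewer physical ports, it must complete all of the device's port accesses within one device cycle, so the target clock must be faster by at least the port ratio:

```python
import math

def min_target_frequency(fd, device_ports, target_ports):
    """Assumed time-multiplexing bound: to serve `device_ports` accesses per
    device cycle through `target_ports` physical ports, the target memory
    needs Ft >= Fd * ceil(device_ports / target_ports)."""
    return fd * math.ceil(device_ports / target_ports)
```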

  16. Correct machine learning on protein sequences: a peer-reviewing perspective.

    PubMed

    Walsh, Ian; Pollastri, Gianluca; Tosatto, Silvio C E

    2016-09-01

    Machine learning methods are becoming increasingly popular to predict protein features from sequences. Machine learning in bioinformatics can be powerful but carries also the risk of introducing unexpected biases, which may lead to an overestimation of the performance. This article espouses a set of guidelines to allow both peer reviewers and authors to avoid common machine learning pitfalls. Understanding biology is necessary to produce useful data sets, which have to be large and diverse. Separating the training and test process is imperative to avoid over-selling method performance, which is also dependent on several hidden parameters. A novel predictor has always to be compared with several existing methods, including simple baseline strategies. Using the presented guidelines will help nonspecialists to appreciate the critical issues in machine learning. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
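    The guideline on separating training and test data is easy to violate with protein sequences when windows or residues from one protein leak into both sides of a split. A minimal sketch of a leakage-avoiding split that assigns whole proteins to one side only (identifiers and fraction are illustrative; in practice one would also cluster by sequence similarity before splitting):

```python
import random

def split_by_protein(protein_ids, test_fraction=0.2, seed=0):
    """Assign whole proteins to train or test, so no sequence (or any
    window derived from it) appears on both sides of the split."""
    ids = sorted(protein_ids)
    random.Random(seed).shuffle(ids)
    n_test = max(1, int(len(ids) * test_fraction))
    test = set(ids[:n_test])
    train = [p for p in sorted(protein_ids) if p not in test]
    return train, sorted(test)
```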

  17. Assessment of two-dimensional induced accelerations from measured kinematic and kinetic data.

    PubMed

    Hof, A L; Otten, E

    2005-11-01

    A simple algorithm is presented to calculate the induced accelerations of body segments in human walking for the sagittal plane. The method essentially consists of setting up 2x4 force equations, 4 moment equations, 2x3 joint constraint equations and two constraints related to the foot-ground interaction. Besides masses and moments of inertia, the data needed for the equations are the positions of the ankle, knee and hip. This set of equations is put in the form of an 18x18 or 20x20 matrix, the solution of which can be found by inversion. By applying input vectors related to gravity, to centripetal accelerations or to muscle moments, the 'induced' accelerations and reaction forces related to these inputs can be found separately. The method was tested for walking in one subject. Good agreement was found with published results obtained by much more complicated three-dimensional forward dynamic models.
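    The "induced" decomposition works because the assembled system is linear: solving A x = b separately for each input vector b (gravity, centripetal terms, or a single muscle moment) gives contributions that superpose to the full response. A sketch under assumed names, using a tiny matrix in place of the paper's 18x18 or 20x20 system:

```python
import numpy as np

def induced_response(system_matrix, input_vector):
    """Solve A x = b for the accelerations/reaction forces x induced by one
    input vector b. Linearity means responses to separate inputs add up."""
    return np.linalg.solve(system_matrix, input_vector)
```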

  18. Comparison of plateletpheresis on the Fresenius AS.TEC 204 and Haemonetics MCS 3p.

    PubMed

    Ranganathan, Sudha

    2007-02-01

    This is an attempt at comparing two cell separators for plateletpheresis, namely the Fresenius AS.TEC 204 and Haemonetics MCS 3p, at a tertiary care center in India. Donors who weighed between 55-75 kg, had a hematocrit of 41-43%, and had platelet counts of 250x10(3)-400x10(3)/microl were selected for the study. The comparability of the donors who donated on the two cell separators was analysed with an independent-samples t-test and no significant differences were found (P>0.05). The features compared were time taken for the procedure, volume processed on the separators, adverse reactions of the donors, quality control of the product, separation efficiency of the separators, platelet loss in the donors after the procedure, and the predicted versus the actual yield of platelets given by the cell separator. The volume processed to get a target yield of >3x10(11) was 2.8-3.2 l and equal for both cell separators. Symptoms of citrate toxicity were seen in 4 and 2.5% of donors who donated on the MCS 3p and the AS.TEC 204, respectively, and 3 and 1% of donors, respectively, had vasovagal reactions. All the platelet products collected had a platelet count of >3x10(11); 90% of the platelet products collected on the AS.TEC 204 attained the predicted yield that was set on the cell separator, whereas 75% of the platelet products collected on the MCS 3p attained the target yield. Quality control of the platelets collected on both cell separators complied with the standards, except that 3% of the platelets collected on the MCS 3p had visible red cell contamination. The separation efficiency of the MCS 3p was higher, 50-52%, compared to 40-45% on the AS.TEC 204. The provision of double venous access, fewer adverse reactions, negligible RBC contamination and a better predicted yield of platelets make the AS.TEC 204 a safer and more reliable alternative to the widely used Haemonetics MCS 3p. Copyright (c) 2006 Wiley-Liss, Inc.

  19. Extended version of the "Sniffin' Sticks" identification test: test-retest reliability and validity.

    PubMed

    Sorokowska, A; Albrecht, E; Haehner, A; Hummel, T

    2015-03-30

    The extended, 32-item version of the Sniffin' Sticks identification test was developed in order to create a precise tool enabling repeated, longitudinal testing of individual olfactory subfunctions. Odors of the previous test version had to be changed for technical reasons, and the odor identification test needed re-investigation in terms of reliability, validity, and normative values. In our study we investigated the olfactory abilities of a group of 100 patients with olfactory dysfunction and 100 controls. We reconfirmed the high test-retest reliability of the extended version of the Sniffin' Sticks identification test and high correlations between the new and the original part of this tool. In addition, we confirmed the validity of the test as it discriminated clearly between controls and patients with olfactory loss. The additional set of 16 odor identification sticks can either be included in the current olfactory test, thus creating a more detailed diagnostic tool, or it can be used separately, enabling olfactory function to be followed over time. Additionally, the normative values presented in our paper might provide useful guidelines for interpretation of the extended identification test results. The revised version of the Sniffin' Sticks 32-item odor identification test is a reliable and valid tool for the assessment of olfactory function. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. The efficacy of a whole body sprint-interval training intervention in an office setting: A feasibility study.

    PubMed

    Gurd, Brendon J; Patel, Jugal; Edgett, Brittany A; Scribbans, Trisha D; Quadrilatero, Joe; Fischer, Steven L

    2018-05-28

    Whole body sprint-interval training (WB-SIT) represents a mode of exercise training that is both time-efficient and does not require access to an exercise facility. The current study examined the feasibility of implementing a WB-SIT intervention in a workplace setting. A total of 747 employees from a large office building were invited to participate, with 31 individuals being enrolled in the study. Anthropometrics, aerobic fitness, core and upper body strength, and lower body mobility were assessed before and after a 12-week exercise intervention consisting of 2-4 training sessions per week. Each training session required participants to complete eight 20-second intervals (separated by 10 seconds of rest) of whole body exercise. The proportion of participation was 4.2%, while the response rate was 35% (11/31 participants completed post-training testing). In responders, compliance with the prescribed training was 83±17%, and significant (p < 0.05) improvements were observed for aerobic fitness, push-up performance and lower body mobility. These results demonstrate the efficacy of WB-SIT for improving fitness and mobility in an office setting, but highlight the difficulties in achieving high rates of participation and response in this setting.

  1. Asset Analysis and Operational Concepts for Separation Assurance Flight Testing at Dryden Flight Research Center

    NASA Technical Reports Server (NTRS)

    Costa, Guillermo J.; Arteaga, Ricardo A.

    2011-01-01

    A preliminary survey of existing separation assurance and collision avoidance advancements, technologies, and efforts has been conducted in order to develop a concept of operations for flight testing autonomous separation assurance at Dryden Flight Research Center. This effort was part of the Unmanned Aerial Systems in the National Airspace System project. The survey focused primarily on separation assurance projects validated through flight testing (including lessons learned); however, current forays into the field were also examined. Comparisons between current Dryden flight and range assets were conducted using House of Quality matrices in order to allow project management to make determinations regarding asset utilization for future flight tests. This was done to establish a body of knowledge of the current collision avoidance landscape, and thus focus Dryden's efforts more effectively toward providing assets and test ranges for future flight testing within this research field.

  2. Cost-effectiveness of rapid syphilis screening in prenatal HIV testing programs in Haiti.

    PubMed

    Schackman, Bruce R; Neukermans, Christopher P; Fontain, Sandy N Nerette; Nolte, Claudine; Joseph, Patrice; Pape, Jean W; Fitzgerald, Daniel W

    2007-05-01

    New rapid syphilis tests permit simple and immediate diagnosis and treatment at a single clinic visit. We compared the cost-effectiveness, projected health outcomes, and annual cost of screening pregnant women using a rapid syphilis test as part of scaled-up prenatal testing to prevent mother-to-child HIV transmission in Haiti. A decision analytic model simulated health outcomes and costs separately for pregnant women in rural and urban areas. We compared syphilis syndromic surveillance (rural standard of care), the rapid plasma reagin test with results and treatment at 1-wk follow-up (urban standard of care), and a new rapid test with immediate results and treatment. Test performance data were from a World Health Organization-Special Programme for Research and Training in Tropical Diseases field trial conducted at the GHESKIO Center (Groupe Haitien d'Etude du Sarcome de Kaposi et des Infections Opportunistes) in Port-au-Prince. Health outcomes were projected using historical data on prenatal syphilis treatment efficacy and included disability-adjusted life years (DALYs) of newborns, congenital syphilis cases, neonatal deaths, and stillbirths. Cost-effectiveness ratios are in US dollars/DALY from a societal perspective; annual costs are in US dollars from a payer perspective. Rapid testing with immediate treatment has a cost-effectiveness ratio of $6.83/DALY in rural settings and $9.95/DALY in urban settings. Results are sensitive to regional syphilis prevalence, rapid test sensitivity, and the return rate for follow-up visits. Integrating rapid syphilis testing into a scaled-up national HIV testing and prenatal care program would prevent 1,125 congenital syphilis cases and 1,223 stillbirths or neonatal deaths annually at a cost of $525,000. In Haiti, integrating a new rapid syphilis test into prenatal care and HIV testing would prevent congenital syphilis cases and stillbirths, and is cost-effective. A similar approach may be beneficial in other resource-poor countries that are scaling up prenatal HIV testing.

  3. SEPARATIONS AND WASTE FORMS CAMPAIGN IMPLEMENTATION PLAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vienna, John D.; Todd, Terry A.; Peterson, Mary E.

    2012-11-26

    This Separations and Waste Forms Campaign Implementation Plan provides summary-level detail describing how the Campaign will achieve the objectives set forth by the Fuel Cycle Research and Development (FCRD) Program. This implementation plan will be maintained as a living document and will be updated as needed in response to changes or progress in separations and waste forms research and the FCRD Program priorities.

  4. Localization of skeletal and aortic landmarks in trauma CT data based on the discriminative generalized Hough transform

    NASA Astrophysics Data System (ADS)

    Lorenz, Cristian; Hansis, Eberhard; Weese, Jürgen; Carolus, Heike

    2016-03-01

    Computed tomography is the modality of choice for poly-trauma patients to rapidly assess the skeletal and vascular integrity of the whole body. Often several scans with and without contrast medium or with different spatial resolution are acquired. Efficient reading of the resulting extensive set of image data is vital, since it is often time-critical to initiate the necessary therapeutic actions. A set of automatically found landmarks can facilitate navigation in the data and enables anatomy-oriented viewing. Following this intention, we selected a comprehensive set of 17 skeletal and 5 aortic landmarks. Landmark localization models for the Discriminative Generalized Hough Transform (DGHT) were automatically created based on a set of about 20 training images with ground truth landmark positions. A hierarchical setup with 4 resolution levels was used. Localization results were evaluated on a separate test set, consisting of 50 to 128 images (depending on the landmark) with available ground truth landmark locations. The image data cover a large amount of variability caused by differences in field-of-view, resolution, contrast agent, patient gender and pathologies. The median localization error for the set of aortic landmarks was 14.4 mm and for the set of skeletal landmarks 5.5 mm. Median localization errors for individual landmarks ranged from 3.0 mm to 31.0 mm. The runtime for the whole landmark set is about 5 s on a typical PC.
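    The evaluation metric quoted above (median localization error in mm) is simply the median Euclidean distance between predicted and ground-truth landmark positions. A sketch with illustrative names:

```python
import math

def median_localization_error(predicted, ground_truth):
    """Median Euclidean distance (in the coordinates' units, e.g. mm)
    between paired predicted and ground-truth landmark positions."""
    errors = sorted(math.dist(p, g) for p, g in zip(predicted, ground_truth))
    mid = len(errors) // 2
    if len(errors) % 2:
        return errors[mid]
    return (errors[mid - 1] + errors[mid]) / 2
```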

  5. Propagations of fluctuations and flow separation on an unsteadily loaded airfoil

    NASA Astrophysics Data System (ADS)

    Tenney, Andrew; Lewalle, Jacques

    2014-11-01

    We analyze pressure data from 18 taps located along the surface of a DU-96-W180 airfoil in both unsteady and steady flow conditions. The conditions were set to mimic the flow experienced by a wind turbine blade under unsteady loading, in order to test and quantify the effects of several flow control schemes. Here we are interested in the propagation of fluctuations along the pressure and suction sides, particularly in relation to the fluctuating separation point. An unsteady phase of the incoming fluctuations is defined using Morlet wavelets, and phase-conditioned cross-correlations are calculated. Using wavelet-based pattern recognition, individual events in the pressure data are identified with several different algorithms utilizing both the original time series pressure signals and their corresponding scalograms. The data analyzed in this study were collected by G. Wang in the Skytop anechoic chamber at Syracuse University in the spring of 2013; the work of Zhe Bai on this data is also acknowledged.
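    Defining an unsteady phase with a Morlet wavelet amounts to correlating the signal with a complex Gaussian-windowed oscillation at the frequency of interest and taking the angle of the result. The sketch below is a generic single-scale version under assumed names, not the authors' analysis code (`numpy.correlate` conjugates its second argument, which is what makes the phase increase with time):

```python
import numpy as np

def morlet_phase(signal, freq_hz, fs_hz, n_cycles=6.0):
    """Instantaneous phase of `signal` at `freq_hz`, via correlation with a
    complex Morlet wavelet spanning roughly `n_cycles` oscillations."""
    sigma = n_cycles / (2.0 * np.pi * freq_hz)  # Gaussian width in seconds
    t = np.arange(-4.0 * sigma, 4.0 * sigma, 1.0 / fs_hz)
    wavelet = np.exp(2j * np.pi * freq_hz * t) * np.exp(-t**2 / (2 * sigma**2))
    analytic = np.correlate(signal, wavelet, mode='same')
    return np.angle(analytic)
```

Away from the record edges, the unwrapped phase of a pure cosine at `freq_hz` advances by 2*pi*freq_hz/fs_hz per sample, so the frequency can be recovered from the phase slope.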

  6. Influence of the temperature on the cement disintegration in cement-retained implant restorations.

    PubMed

    Linkevicius, Tomas; Vindasiute, Egle; Puisys, Algirdas; Linkeviciene, Laura; Svediene, Olga

    2012-01-01

    The aim of this study was to estimate the average disintegration temperature of three dental cements used for the cementation of implant-supported prostheses. One hundred and twenty metal frameworks were fabricated and cemented on prosthetic abutments with different dental cements. After heat treatment in a dental furnace, the samples were subjected to separation tests to assess the integrity of the cement. Results showed that resin-modified glass-ionomer cement (RGIC) exhibited the lowest disintegration temperature (p<0.05), but there was no difference between zinc phosphate cement (ZPC) and dual-cure resin cement (RC) (p>0.05). Average separation temperatures were 306 ± 23 °C for RGIC and 363 ± 71 °C for RC; a value could not be calculated for ZPC because eight specimens remained unseparated. Within the limitations of the study, it could be concluded that RGIC disintegrates at the lowest temperature and that ZPC is not prone to break down after exposure to heat.

  7. Effect of food service form on eating rate: meal served in a separated form might lower eating rate.

    PubMed

    Suh, Hyung Joo; Jung, Eun Young

    2016-01-01

    In this study, we investigated the association between food form (mixed vs separated) and eating rate. The experiment used a within-subjects design (n=29, young healthy women with normal weight). Test meals (white rice and side dishes) with the same content and volume were served at lunch in a mixed or separated form. The form in which the food was served had significant effects on consumption volume and eating rate; subjects ate significantly more (p<0.05) when a test meal was served as a mixed form (285 g, 575 kcal) compared to a separated form (244 g, 492 kcal). Moreover, subjects also ate significantly faster (p<0.05) when the test meal was served as a mixed form (22.4 g/min) as compared to a separated form (16.2 g/min). Despite consuming more when the test meal was served as a mixed form than when served as a separated form, the subjects did not feel significantly fuller. In conclusion, we confirmed that meals served in a separated form might lower the eating rate and, moreover, slower eating might be associated with less energy intake, without compromising satiety.

  8. Embryogenic callus proliferation and regeneration conditions for genetic transformation of diverse sugarcane cultivars.

    PubMed

    Basnayake, Shiromani W V; Moyle, Richard; Birch, Robert G

    2011-03-01

    Amenability to the tissue culture stages required for gene transfer, selection and plant regeneration is the main determinant of genetic transformation efficiency via particle bombardment in sugarcane. The technique is moving from the experimental phase, where it is sufficient to work in a few amenable genotypes, to practical application in a diverse and changing set of elite cultivars. Therefore, we investigated the response to the callus initiation, proliferation, regeneration and selection steps required for microprojectile-mediated transformation in a diverse set of Australian sugarcane cultivars. Twelve of 16 tested cultivars were sufficiently amenable to existing routine tissue-culture conditions for practical genetic transformation. Three cultivars required adjustments to 2,4-D levels during callus proliferation, geneticin concentration during selection, and/or light intensity during regeneration. One cultivar gave an extreme necrotic response in leaf spindle explants and produced no callus tissue under the tested culture conditions. It was helpful to obtain spindle explants for tissue culture from plants with a good water supply for growth, especially for genotypes that were harder to culture. It was generally possible to obtain several independent transgenic plants per bombardment, with time in callus culture limited to 11-15 weeks. A caution with this efficient transformation system is that separate shoots arose from different primary transformed cells in more than half of tested calli after selection for geneticin resistance. The results across this diverse cultivar set are likely to be a useful guide to key variables for rapid optimisation of tissue culture conditions for efficient genetic transformation of other sugarcane cultivars.

  9. Comparison of the Predictive Accuracy of DNA Array-Based Multigene Classifiers across cDNA Arrays and Affymetrix GeneChips

    PubMed Central

    Stec, James; Wang, Jing; Coombes, Kevin; Ayers, Mark; Hoersch, Sebastian; Gold, David L.; Ross, Jeffrey S; Hess, Kenneth R.; Tirrell, Stephen; Linette, Gerald; Hortobagyi, Gabriel N.; Symmans, W. Fraser; Pusztai, Lajos

    2005-01-01

    We examined how well differentially expressed genes and multigene outcome classifiers retain their class-discriminating values when tested on data generated by different transcriptional profiling platforms. RNA from 33 stage I-III breast cancers was hybridized to both Affymetrix GeneChip and Millennium Pharmaceuticals cDNA arrays. Only 30% of all corresponding gene expression measurements on the two platforms had Pearson correlation coefficient r ≥ 0.7 when UniGene was used to match probes. There was substantial variation in correlation between different Affymetrix probe sets matched to the same cDNA probe. When cDNA and Affymetrix probes were matched by basic local alignment tool (BLAST) sequence identity, the correlation increased substantially. We identified 182 genes in the Affymetrix and 45 in the cDNA data (including 17 common genes) that accurately separated 91% of cases in supervised hierarchical clustering in each data set. Cross-platform testing of these informative genes resulted in lower clustering accuracy of 45 and 79%, respectively. Several sets of accurate five-gene classifiers were developed on each platform using linear discriminant analysis. The best 100 classifiers showed average misclassification error rate of 2% on the original data that rose to 19.5% when tested on data from the other platform. Random five-gene classifiers showed misclassification error rate of 33%. We conclude that multigene predictors optimized for one platform lose accuracy when applied to data from another platform due to missing genes and sequence differences in probes that result in differing measurements for the same gene. PMID:16049308
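    The cross-platform concordance criterion above (Pearson r ≥ 0.7 for a matched probe pair) can be sketched as follows. This is a generic Pearson correlation; the expression values are invented for illustration and are not taken from the study.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical log-expression of one gene across five samples on each platform.
affy = [2.1, 3.5, 1.2, 4.8, 2.9]
cdna = [2.0, 3.9, 1.0, 4.5, 3.1]

# Would this probe pair count as concordant under the r >= 0.7 threshold?
print(pearson(affy, cdna) >= 0.7)  # → True
```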

  10. CT Urography: Segmentation of Urinary Bladder using CLASS with Local Contour Refinement

    PubMed Central

    Cha, Kenny; Hadjiiski, Lubomir; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.; Zhou, Chuan

    2016-01-01

    Purpose We are developing a computerized system for bladder segmentation on CT urography (CTU), as a critical component for computer-aided detection of bladder cancer. Methods The presence of regions filled with intravenous contrast and without contrast presents a challenge for bladder segmentation. Previously, we proposed a Conjoint Level set Analysis and Segmentation System (CLASS). In case the bladder is partially filled with contrast, CLASS segments the non-contrast (NC) region and the contrast-filled (C) region separately and automatically conjoins the NC and C region contours; however, inaccuracies in the NC and C region contours may cause the conjoint contour to exclude portions of the bladder. To alleviate this problem, we implemented a local contour refinement (LCR) method that exploits model-guided refinement (MGR) and energy-driven wavefront propagation (EDWP). MGR propagates the C region contours if the level set propagation in the C region stops prematurely due to substantial non-uniformity of the contrast. EDWP with regularized energies further propagates the conjoint contours to the correct bladder boundary. EDWP uses changes in energies, smoothness criteria of the contour, and previous slice contour to determine when to stop the propagation, following decision rules derived from training. A data set of 173 cases was collected for this study: 81 cases in the training set (42 lesions, 21 wall thickenings, 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, 13 normal bladders). For all cases, 3D hand segmented contours were obtained as reference standard and used for the evaluation of the computerized segmentation accuracy. 
Results For CLASS with LCR, the average volume intersection ratio, average volume error, absolute average volume error, average minimum distance and Jaccard index were 84.2±11.4%, 8.2±17.4%, 13.0±14.1%, 3.5±1.9 mm, 78.8±11.6%, respectively, for the training set and 78.0±14.7%, 16.4±16.9%, 18.2±15.0%, 3.8±2.3 mm, 73.8±13.4% respectively, for the test set. With CLASS only, the corresponding values were 75.1±13.2%, 18.7±19.5%, 22.5±14.9%, 4.3±2.2 mm, 71.0±12.6%, respectively, for the training set and 67.3±14.3%, 29.3±15.9%, 29.4±15.6%, 4.9±2.6 mm, 65.0±13.3%, respectively, for the test set. The differences between the two methods for all five measures were statistically significant (p<0.001) for both the training and test sets. Conclusions The results demonstrate the potential of CLASS with LCR for segmentation of the bladder. PMID:24801066
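    Two of the overlap measures reported above can be sketched on toy voxel masks; the voxel sets below are invented cubes rather than CT segmentations, and the function names are illustrative.

```python
def jaccard_index(auto, ref):
    """Jaccard index |A ∩ B| / |A ∪ B| between two voxel sets."""
    return len(auto & ref) / len(auto | ref)

def volume_intersection_ratio(auto, ref):
    """Fraction of the reference volume covered by the automatic contour."""
    return len(auto & ref) / len(ref)

# Toy 4x4x4 reference mask and an automatic mask shifted by one voxel in x.
reference = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}
automatic = {(x, y, z) for x in range(1, 5) for y in range(4) for z in range(4)}

print(jaccard_index(automatic, reference))              # → 0.6
print(volume_intersection_ratio(automatic, reference))  # → 0.75
```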

  11. Study of recreational land and open space using Skylab imagery

    NASA Technical Reports Server (NTRS)

    Sattinger, I. J. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. An analysis of the statistical uniqueness of each of the signatures of the Gratiot-Saginaw State Game Area was made by computing a matrix of probabilities of misclassification for all possible signature pairs. Within each data set, the 35 signatures were then aggregated into a smaller set of composite signatures by combining groups of signatures having high probabilities of misclassification. Computer separation of forest density classes was poor with multispectral scanner data collected on 5 August 1973. Signatures from the scanner data were further analyzed to determine the ranking of spectral channels for computer separation of the scene classes. Probabilities of misclassification were computed for composite signatures using four separate combinations of data source and channel selection.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Todd; Hamlet, Jason; Martin, Mitchell Tyler

    We are using the DoD MIL-STD as our guide for microelectronics aging (MIL-STD 883J, Method 1016.2: Life/Reliability Characterization Tests). In that document they recommend aging at 3 temperatures between 200-300C, separated by at least 25C, with the supply voltage at the maximum recommended voltage for the devices at 125C (3.6V in our case). If that voltage causes excessive current or power then it can be reduced and the duration of the tests extended. The MIL-STD also recommends current limiting resistors in series with the supply. Since we don't have much time and we may not have enough ovens and other equipment, two temperatures separated by at least 50C would be an acceptable backup plan. To ensure a safe range of conditions is used, we are executing 24-hour step tests. For these, we will apply the stress for 24 hours and then measure the device to make sure it wasn't damaged. During the stress the PUFs should be exercised, but we don't need to measure their response. Rather, at set intervals our devices should be returned to nominal temperature (under bias), and then measured. The MIL-STD puts these intervals at 4, 8, 16, 32, 64, 128, 256, 512 and 1000 hours, although the test can be stopped early if 75% of the devices have failed. A final recommendation per the MIL-STD is that at least 40 devices should be measured under each condition. Since we only have 25 parts, we will place 10 devices in each of two stress conditions.

  13. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations

    PubMed Central

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting, and separate testing of all configurations with the -server parameter activated and deactivated, altogether 12,800 data points were collected and consequently analyzed. An illustrative decision-making scenario was used that allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models. PMID:27806061
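    To illustrate how one of the compared methods operates, here is a minimal pure-Python TOPSIS sketch. The decision matrix, weights, and criterion directions are invented, and this is not the study's implementation (the study's winner, VIKOR, ranks alternatives by a different aggregation).

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution."""
    # Vector-normalize each criterion column.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix))
             for j in range(len(weights))]
    v = [[w * row[j] / norms[j] for j, w in enumerate(weights)]
         for row in matrix]
    # Ideal and anti-ideal points per criterion (max for benefit, min for cost).
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient
    return scores

# 3 alternatives x 2 criteria (first is a benefit, second is a cost).
m = [[250, 16], [200, 20], [300, 12]]
s = topsis(m, weights=[0.5, 0.5], benefit=[True, False])
print(max(range(3), key=lambda i: s[i]))  # → 2
```

    The third alternative dominates on both criteria, so it coincides with the ideal point and receives a closeness score of 1.0.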

  14. Pre-test genetic counseling services for hereditary breast and ovarian cancer delivered by non-genetics professionals in the state of Florida.

    PubMed

    Vadaparampil, S T; Scherr, C L; Cragun, D; Malo, T L; Pal, T

    2015-05-01

    Genetic counseling and testing for hereditary breast and ovarian cancer now includes practitioners from multiple healthcare professions, specialties, and settings. This study examined whether non-genetics professionals (NGPs) perform guideline-based patient intake and informed consent before genetic testing. NGPs offering BRCA testing services in Florida (n = 386) were surveyed about clinical practices. Among 81 respondents (response rate = 22%), approximately half reported: sometimes scheduling a separate session for pre-test counseling lasting 11-30 min prior to testing, discussing familial implications of testing, benefits and limitations of risk management options, and discussing the potential psychological impact and insurance-related issues. Few constructed a three-generation pedigree, discussed alternative hereditary cancer syndromes, or the meaning of a variant result. This lack of adherence to guideline-based practice may result in direct harm to patients and their family members. NGPs who are unable to deliver guideline adherent cancer genetics services should focus on identification and referral of at-risk patients to in person or telephone services provided by genetics professionals. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. Mycobacterium paratuberculosis detection in cow's milk in Argentina by immunomagnetic separation-PCR.

    PubMed

    Gilardoni, Liliana Rosa; Fernández, Bárbara; Morsella, Claudia; Mendez, Laura; Jar, Ana María; Paolicchi, Fernando Alberto; Mundo, Silvia Leonor

    2016-01-01

    The aim of this study was to standardize a diagnostic procedure to detect Mycobacterium avium subsp. paratuberculosis (Map) DNA in raw cow milk samples under field conditions. A procedure that combines immunomagnetic separation with IS900-PCR detection (IMS-IS1 PCR) was employed on milk samples from 265 lactating Holstein cows from Map infected and uninfected herds in Argentina. IMS-IS1 PCR results were analyzed and compared with those obtained from milk and fecal culture and serum ELISA. The extent of agreement between tests was determined by the Kappa test. IMS-IS1 PCR showed a detection limit of 10¹ CFU of Map/mL of milk when a 50:50 mix of monoclonal and polyclonal antibodies was used to coat the magnetic beads. All of the 118 samples from the Map uninfected herds were negative in all of the tests. In Map infected herds, 80 out of 147 cows tested positive by milk IMS-IS1 PCR (55%), of which 2 (1.4%) were also positive by milk culture, 15 (10%) by fecal culture, and 20 (14%) by serum ELISA. Kappa statistics (95% CI) showed slight agreement between the different tests (<0.20), and the proportions of agreement were ≤0.55. The IMS-IS1 PCR method detected Map in milk from cows that were not positive by the other techniques. This is the first report dealing with the application of IMS-IS1 PCR in the detection of Map in raw milk samples under field conditions in Argentina. Copyright © 2016 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.
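    The Kappa agreement statistic used above can be sketched for a pair of binary tests. The 2x2 counts below are hypothetical, chosen only to be loosely consistent with the reported totals (147 cows in infected herds, 80 IMS-IS1 PCR positive); the resulting value lands in the "slight agreement" range (<0.20) the paper reports.

```python
def cohens_kappa(both_pos, pos_neg, neg_pos, both_neg):
    """Cohen's kappa from a 2x2 cross-classification of two binary tests."""
    n = both_pos + pos_neg + neg_pos + both_neg
    p_observed = (both_pos + both_neg) / n
    # Chance agreement from each test's marginal positive rate.
    p1_pos = (both_pos + pos_neg) / n
    p2_pos = (both_pos + neg_pos) / n
    p_expected = p1_pos * p2_pos + (1 - p1_pos) * (1 - p2_pos)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical IMS-IS1 PCR (rows) vs fecal culture (columns) counts, n = 147.
print(round(cohens_kappa(15, 65, 5, 62), 3))  # → 0.105, i.e. slight agreement
```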

  16. Development Status of Low-Shock Payload Separation Mechanism for H-IIA Launch Vehicle

    NASA Astrophysics Data System (ADS)

    Terashima, Keita; Kamita, Toru; Horie, Youichi; Kobayashi, Masakazu; Onikura, Hiroki

    2013-09-01

    This paper presents the design, analysis and test results of the low-shock payload separation mechanism for the H-IIA launch vehicle. The mechanism is based on a simple and reliable four-bar linkage, which makes the release speed of the marman clamp band tension lower than in the current system. The adequacy of the principle for the low-shock mechanism was evaluated by simulations and the results of fundamental tests. Then, we established the reliability design model of this mechanism, and the adequacy of this model was evaluated by elemental tests. Finally, we conducted the system separation tests using the payload adapter to which the mechanism was assembled, to confirm that the actual separation shock level satisfied our target.

  17. Postactivation Potentiation Effects From Accommodating Resistance Combined With Heavy Back Squats on Short Sprint Performance.

    PubMed

    Wyland, Timothy P; Van Dorin, Joshua D; Reyes, G Francis Cisco

    2015-11-01

    Applying accommodating resistance combined with isoinertial resistance has been demonstrated to be effective in improving neuromuscular attributes important for sport performance. The main purpose of this study was to determine whether short sprints can be acutely enhanced after several sets of back squats with or without accommodating resistance. Twenty recreationally resistance-trained males (age: 23.3 ± 4.4 years; height: 178.9 ± 6.5 cm; weight: 88.3 ± 10.8 kg) performed pre-post testing on 9.1-m sprint time. Three different interventions were implemented in randomized order between the pre-post 9.1-m sprints. On 3 separate days, subjects either sat for 5 minutes (CTRL), performed 5 sets of 3 repetitions at 85% of their 1 repetition maximum (1RM) with isoinertial load (STND), or performed 5 sets of 3 repetitions at 85% of their 1RM with 30% of the total resistance coming from elastic band tension (BAND) between pre-post 9.1-m sprint testing. Posttesting for 9.1-m sprint time occurred immediately after the last set of squats (Post-Immediate) and at every minute for 4 minutes after the last set of squats (Post-1min, Post-2min, Post-3min, and Post-4min). Repeated-measures analysis of variance revealed no significant changes in sprint time across posttesting times during the CTRL and STND conditions. During the BAND condition, sprint time significantly decreased from Post-Immediate to Post-4min (p = 0.002). The uniqueness of accommodating resistance could create an optimal postactivation potentiation effect to increase neuromuscular performance. Coaches and athletes can add heavy accommodating resistance exercises to their warm-up when improving acute sprint time is desired.

  18. GNSS Signal Authentication Via Power and Distortion Monitoring

    NASA Astrophysics Data System (ADS)

    Wesson, Kyle D.; Gross, Jason N.; Humphreys, Todd E.; Evans, Brian L.

    2018-04-01

    We propose a simple low-cost technique that enables civil Global Positioning System (GPS) receivers and other civil global navigation satellite system (GNSS) receivers to reliably detect carry-off spoofing and jamming. The technique, which we call the Power-Distortion detector, classifies received signals as interference-free, multipath-afflicted, spoofed, or jammed according to observations of received power and correlation function distortion. It does not depend on external hardware or a network connection and can be readily implemented on many receivers via a firmware update. Crucially, the detector can with high probability distinguish low-power spoofing from ordinary multipath. In testing against over 25 high-quality empirical data sets yielding over 900,000 separate detection tests, the detector correctly alarms on all malicious spoofing or jamming attacks while maintaining a <0.6% single-channel false alarm rate.

  19. Wireless physiological monitoring system for psychiatric patients.

    PubMed

    Rademeyer, A J; Blanckenberg, M M; Scheffer, C

    2009-01-01

    Patients in psychiatric hospitals that are sedated or secluded are at risk of death or injury if they are not continuously monitored. Some psychiatric patients are restless and aggressive, and hence the monitoring device should be robust and must transmit the data wirelessly. Two devices, a glove that measures oxygen saturation and a dorsally-mounted device that measures heart rate, skin temperature and respiratory rate were designed and tested. Both devices connect to one central monitoring station using two separate Bluetooth connections, ensuring a completely wireless setup. A Matlab graphical user interface (GUI) was developed for signal processing and monitoring of the vital signs of the psychiatric patient. Detection algorithms were implemented to detect ECG arrhythmias such as premature ventricular contraction and atrial fibrillation. The prototypes were manufactured and tested in a laboratory setting on healthy volunteers.

  20. Probing gravity at cosmological scales by measurements which test the relationship between gravitational lensing and matter overdensity.

    PubMed

    Zhang, Pengjie; Liguori, Michele; Bean, Rachel; Dodelson, Scott

    2007-10-05

    The standard cosmology is based on general relativity (GR), includes dark matter and dark energy, and predicts a fixed relationship between the gravitational potentials responsible for gravitational lensing and the matter overdensity. Alternative theories of gravity often make different predictions. We propose a set of measurements which can test this relationship, thereby distinguishing between dark energy or dark matter models and models in which gravity differs from GR. Planned surveys will be able to measure E(G), an observational quantity whose expectation value is equal to the ratio of the Laplacian of the Newtonian potentials to the peculiar velocity divergence, to percent accuracy. This will easily separate alternatives such as the cold dark matter model with a cosmological constant, Dvali-Gabadadze-Porrati, TeVeS, and f(R) gravity.
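    Schematically, the quantity described above can be written as follows. Metric-potential sign conventions differ between papers, so this is a sketch of the general form rather than the paper's exact notation.

```latex
% E_G: Laplacian of the metric potentials over the peculiar velocity
% divergence, smoothed over large scales.
E_G \;\equiv\; \frac{\nabla^{2}(\psi-\phi)}{3H_{0}^{2}\,a^{-1}\,\theta},
\qquad
\theta \;\equiv\; \frac{\nabla\cdot\mathbf{v}}{H(a)}
% In GR with a cosmological constant the expectation value reduces to
% E_G = \Omega_{m,0}/f(z), where f is the linear growth rate.
```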

  1. Scheduling Earth Observing Satellites with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.
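    A minimal simulated-annealing sketch for a toy slot-assignment problem, in the spirit of the comparison above; the problem encoding, cost function, and cooling schedule are illustrative assumptions, not the paper's scheduler.

```python
import math
import random

random.seed(0)
N_OBS, N_SLOTS = 12, 6  # 12 observation requests, 6 time slots

def conflicts(schedule):
    """Cost: number of observation pairs assigned to the same slot."""
    return sum(schedule.count(s) * (schedule.count(s) - 1) // 2
               for s in range(N_SLOTS))

def anneal(steps=5000, temp=2.0, cooling=0.999):
    schedule = [random.randrange(N_SLOTS) for _ in range(N_OBS)]
    cost = conflicts(schedule)
    for _ in range(steps):
        i = random.randrange(N_OBS)
        old_slot = schedule[i]
        schedule[i] = random.randrange(N_SLOTS)  # random neighbor move
        new_cost = conflicts(schedule)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(-delta/temp).
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
        else:
            schedule[i] = old_slot  # revert the move
        temp *= cooling
    return cost

# The global optimum here is 6 conflicts (two observations per slot).
print(anneal())
```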

  2. Treatment of atomic and molecular line blanketing by opacity sampling

    NASA Technical Reports Server (NTRS)

    Johnson, H. R.; Krupp, B. M.

    1976-01-01

    A sampling technique for treating the radiative opacity of large numbers of atomic and molecular lines in cool stellar atmospheres is subjected to several tests. In this opacity sampling (OS) technique, the global opacity is sampled at only a selected set of frequencies, and at each of these frequencies the total monochromatic opacity is obtained by summing the contribution of every relevant atomic and molecular line. In accord with previous results, we find that the structure of atmospheric models is accurately fixed by the use of 1000 frequency points, and 100 frequency points are adequate for many purposes. The effects of atomic and molecular lines are separately studied. A test model computed using the OS method agrees very well with a model having identical atmospheric parameters, but computed with the giant line (opacity distribution function) method.
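    The OS idea, summing every relevant line's contribution to the opacity at each of a small set of sampled frequencies, can be sketched as below; the line list, Lorentzian profile, and frequency grid are invented for illustration.

```python
def lorentzian(nu, nu0, strength, gamma):
    """Toy line profile: opacity contribution of one line at frequency nu."""
    return strength * gamma / ((nu - nu0) ** 2 + gamma ** 2)

# A few made-up lines: (center frequency, strength, half-width).
lines = [(100.0, 1.0, 0.5), (101.5, 2.0, 0.3), (103.0, 0.5, 0.8)]

# Sample the total opacity at a coarse grid instead of resolving every line.
sample_freqs = [99.0 + 0.5 * k for k in range(12)]
opacity = [sum(lorentzian(nu, *line) for line in lines) for nu in sample_freqs]

peak = max(range(len(sample_freqs)), key=lambda i: opacity[i])
print(sample_freqs[peak])  # → 101.5, the grid point at the strongest line
```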

  3. Electrical characterization of standard and radiation-hardened RCA CDP1856D 4-BIT, CMOS, bus buffer/separator

    NASA Technical Reports Server (NTRS)

    Stokes, R. L.

    1979-01-01

    Tests performed to determine accuracy and efficiency of bus separators used in microprocessors are presented. Functional, AC parametric, and DC parametric tests were performed in a Tektronix S-3260 automated test system. All the devices passed the functional tests and yielded nominal values in the parametric test.

  4. Network-based differential gene expression analysis suggests cell cycle related genes regulated by E2F1 underlie the molecular difference between smoker and non-smoker lung adenocarcinoma

    PubMed Central

    2013-01-01

    Background Differential gene expression (DGE) analysis is commonly used to reveal the deregulated molecular mechanisms of complex diseases. However, traditional DGE analysis (e.g., the t test or the rank sum test) tests each gene independently without considering interactions between them. Top-ranked differentially regulated genes prioritized by such analysis may not directly relate to the coherent molecular changes underlying complex diseases. Joint analyses of co-expression and DGE have been applied to reveal the deregulated molecular modules underlying complex diseases. Most of these methods consist of separate steps: first identifying gene-gene relationships under the studied phenotype and then integrating them with gene expression changes to prioritize signature genes, or vice versa. This warrants a method that can simultaneously consider gene-gene co-expression strength and the corresponding expression-level changes, so that both types of information can be leveraged optimally. Results In this paper, we develop a gene module based method for differential gene expression analysis, named network-based differential gene expression (nDGE) analysis, a one-step integrative process for prioritizing deregulated genes and grouping them into gene modules. We demonstrate that nDGE outperforms existing methods in prioritizing deregulated genes and discovering deregulated gene modules using simulated data sets. When tested on a series of smoker and non-smoker lung adenocarcinoma data sets, we show that the top differentially regulated genes identified by the rank sum test in different sets are not consistent, while the top-ranked genes defined by nDGE in different data sets significantly overlap. nDGE results suggest that a differentially regulated gene module, which is enriched for cell cycle related genes and E2F1 targeted genes, plays a role in the molecular differences between smoker and non-smoker lung adenocarcinoma.
Conclusions In this paper, we develop nDGE to prioritize deregulated genes and group them into gene modules by simultaneously considering gene expression level changes and gene-gene co-regulations. When applied to both simulated and empirical data, nDGE outperforms the traditional DGE method. More specifically, when applied to smoker and non-smoker lung cancer sets, nDGE results illustrate the molecular differences between smoker and non-smoker lung cancer. PMID:24341432

  5. Exploring the genetic architecture and improving genomic prediction accuracy for mastitis and milk production traits in dairy cattle by mapping variants to hepatic transcriptomic regions responsive to intra-mammary infection.

    PubMed

    Fang, Lingzhao; Sahana, Goutam; Ma, Peipei; Su, Guosheng; Yu, Ying; Zhang, Shengli; Lund, Mogens Sandø; Sørensen, Peter

    2017-05-12

    A better understanding of the genetic architecture of complex traits can contribute to improved genomic prediction. We hypothesized that genomic variants associated with mastitis and milk production traits in dairy cattle are enriched in hepatic transcriptomic regions that are responsive to intra-mammary infection (IMI). Genomic markers [e.g. single nucleotide polymorphisms (SNPs)] from those regions, if included, may improve the predictive ability of a genomic model. We applied a genomic feature best linear unbiased prediction model (GFBLUP) to implement the above strategy by considering the hepatic transcriptomic regions responsive to IMI as genomic features. GFBLUP, an extension of GBLUP, includes a separate genomic effect of SNPs within a genomic feature, and allows differential weighting of the individual marker relationships in the prediction equation. Since GFBLUP is computationally intensive, we investigated whether a SNP set test could be a computationally fast way to preselect predictive genomic features. The SNP set test assesses the association between a genomic feature and a trait based on single-SNP genome-wide association studies. We applied these two approaches to mastitis and milk production traits (milk, fat and protein yield) in Holstein (HOL, n = 5056) and Jersey (JER, n = 1231) cattle. We observed that a majority of genomic features were enriched in genomic variants that were associated with mastitis and milk production traits. Compared to GBLUP, the accuracy of genomic prediction with GFBLUP was marginally improved (3.2 to 3.9%) in within-breed prediction. The highest increase (164.4%) in prediction accuracy was observed in across-breed prediction. The significance of genomic features based on the SNP set test was correlated with changes in the prediction accuracy of GFBLUP (P < 0.05). 
GFBLUP provides a framework for integrating multiple layers of biological knowledge to provide novel insights into the biological basis of complex traits, and to improve the accuracy of genomic prediction. The SNP set test might be used as a first step to improve GFBLUP models. Approaches like GFBLUP and the SNP set test will become increasingly useful as the functional annotations of genomes keep accumulating for a range of species and traits.

  6. Acoustical Detection Of Leakage In A Combustor

    NASA Technical Reports Server (NTRS)

    Puster, Richard L.; Petty, Jeffrey L.

    1993-01-01

    Abnormal combustion excites a characteristic standing wave. The acoustical leak-detection system gives early warning of failure, enabling operating personnel to stop the combustion process and repair the spray bar before a leak grows large enough to cause damage. It is applicable to engines, gas turbines, furnaces, and other machines in which acoustic emissions at known frequencies signify the onset of damage. Bearings in rotating machines can be monitored for the emergence of characteristic frequencies shown in previous tests to be associated with incipient failure. It is also possible to monitor for signs of trouble at multiple frequencies by feeding the output of the transducer simultaneously to multiple band-pass filters and associated circuitry, including a separate trigger circuit set to an appropriate level for each frequency.
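    Monitoring acoustic energy at a known characteristic frequency, as described above, can be sketched in software with the Goertzel algorithm (a single-bin DFT); the sample rate, tone frequencies, and amplitudes below are invented.

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Squared magnitude of the DFT bin nearest target_hz (Goertzel)."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

rate = 8000
t = [i / rate for i in range(800)]
normal = [math.sin(2 * math.pi * 440 * ti) for ti in t]  # healthy tone only
leaky = [x + 0.5 * math.sin(2 * math.pi * 1200 * ti)     # plus "leak" tone
         for x, ti in zip(normal, t)]

# Energy in the characteristic 1200 Hz bin jumps when the "leak" appears.
print(goertzel_power(leaky, rate, 1200) > 100 * goertzel_power(normal, rate, 1200))  # → True
```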

  7. Human speckle perception threshold for still images from a laser projection system.

    PubMed

    Roelandt, Stijn; Meuret, Youri; Jacobs, An; Willaert, Koen; Janssens, Peter; Thienpont, Hugo; Verschaffelt, Guy

    2014-10-06

    We study the perception of speckle by human observers in a laser projector, based on a survey of 40 persons. The speckle contrast is first objectively measured using a well-defined speckle measurement method. We statistically analyse the results of the user quality scores, revealing that speckle perception is influenced not only by the speckle contrast settings of the projector but also, strongly, by the type of image shown. Based on the survey, we derive a speckle contrast threshold above which speckle can be seen, and separately we investigate a speckle disturbance limit that is tolerated by the majority of test persons.

  8. Specialized Laboratory Information Systems.

    PubMed

    Dangott, Bryan

    2015-06-01

    Some laboratories or laboratory sections have unique needs that traditional anatomic and clinical pathology systems may not address. A specialized laboratory information system (LIS), which is designed to perform a limited number of functions, may perform well in areas where a traditional LIS falls short. Opportunities for specialized LISs continue to evolve with the introduction of new testing methodologies. These systems may take many forms, including stand-alone architecture, a module integrated with an existing LIS, a separate vendor-supplied module, and customized software. This article addresses the concepts underlying specialized LISs, their characteristics, and in what settings they are found. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Performance of a Diaphragmed Microlens for a Packaged Microspectrometer

    PubMed Central

    Lo, Joe; Chen, Shih-Jui; Fang, Qiyin; Papaioannou, Thanassis; Kim, Eun-Sok; Gundersen, Martin; Marcu, Laura

    2009-01-01

    This paper describes the design, fabrication, packaging and testing of a microlens integrated in a multi-layered MEMS microspectrometer. The microlens was fabricated using modified PDMS molding to form a suspended lens diaphragm. A Gaussian beam propagation model was used to measure the focal length and quantify the M² value of the microlens. A tunable calibration source was set up to measure the response of the packaged device. Dual-wavelength separation by the packaged device was demonstrated by CCD imaging and beam profiling of the spectroscopic output. We demonstrated specific techniques to measure critical parameters of micro-optics systems for future optimization of spectroscopic devices. PMID:22399943

  10. Specialized Laboratory Information Systems.

    PubMed

    Dangott, Bryan

    2016-03-01

    Some laboratories or laboratory sections have unique needs that traditional anatomic and clinical pathology systems may not address. A specialized laboratory information system (LIS), which is designed to perform a limited number of functions, may perform well in areas where a traditional LIS falls short. Opportunities for specialized LISs continue to evolve with the introduction of new testing methodologies. These systems may take many forms, including stand-alone architecture, a module integrated with an existing LIS, a separate vendor-supplied module, and customized software. This article addresses the concepts underlying specialized LISs, their characteristics, and in what settings they are found. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Flight set 360L003 instrumentation final test report, volume 9

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Post-flight instrumentation hardware and data evaluation for flight set 360L003 is summarized. The 360L003 motors were equipped with Developmental Flight Instrumentation (DFI), Operational Flight Instrumentation (OFI), and Ground Environmental Instrumentation (GEI). The DFI was designed to measure strain, temperature, pressure, and vibration at various locations on the motor during flight and is used to validate engineering models in a flight environment. The OFI consists of six Operational Pressure Transducers that monitor chamber pressure during flight; these pressure transducers are used in the SRB separation cue. The GEI measures motor case, igniter flange, and nozzle temperatures prior to launch.

  12. Isolation and characterization of macaroni penguin (Eudyptes chrysolophus) microsatellite loci and their utility in other penguin species (Spheniscidae, AVES).

    PubMed

    Ahmed, Sophia; Hart, Tom; Dawson, Deborah A; Horsburgh, Gavin J; Trathan, Philip N; Rogers, Alex D

    2009-11-01

    We report the characterization of 25 microsatellite loci isolated from the macaroni penguin (Eudyptes chrysolophus). Thirteen loci were arranged into four multiplex sets for future genetic studies of macaroni penguin populations. All 25 loci were tested separately in each of four other penguin species [Adélie penguin (Pygoscelis adeliae), chinstrap penguin (Pygoscelis antarctica), gentoo penguin (Pygoscelis papua) and king penguin (Aptenodytes patagonicus)]. Between eight and 12 loci were polymorphic per species. These loci are expected to be useful for studies of population genetic structure in a range of penguin species. © 2009 Blackwell Publishing Ltd.

  13. Robust Maneuvering Envelope Estimation Based on Reachability Analysis in an Optimal Control Formulation

    NASA Technical Reports Server (NTRS)

    Lombaerts, Thomas; Schuet, Stefan R.; Wheeler, Kevin; Acosta, Diana; Kaneshige, John

    2013-01-01

    This paper discusses an algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time-scale separation and taking into account uncertainties in the aerodynamic derivatives. Starting from the optimal control formulation, the optimization problem can be rewritten as a Hamilton-Jacobi-Bellman equation, which can be solved by level set methods. The approach has been applied to an aircraft example involving structural airframe damage. Monte Carlo validation tests have confirmed that it successfully estimates the safe maneuvering envelope for damaged aircraft.
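    The core idea, computing a reachable set as the sub-zero level set of a value function, can be shown on a toy problem. This sketch uses a hypothetical 1-D system (dynamics x' = u with |u| <= 1, target set |x| <= 0.5), not the paper's aircraft model, and a grid dynamic-programming analogue of the Hamilton-Jacobi-Bellman recursion:

```python
import numpy as np

# Signed distance to the target set: V <= 0 inside the target.
xs = np.linspace(-3.0, 3.0, 601)
dt, steps = 0.01, 100            # horizon T = 1.0 s
V = np.abs(xs) - 0.5

for _ in range(steps):
    # For each admissible control, step the state and look up the value
    # ahead by linear interpolation on the grid.
    candidates = [np.interp(xs + u * dt, xs, V) for u in (-1.0, 1.0)]
    # The optimal control minimizes the value; taking the min with the
    # current V keeps states that already reached the target inside.
    V = np.minimum(V, np.minimum(*candidates))

reach = xs[V <= 0]
print(reach.min(), reach.max())  # roughly [-1.5, 1.5]: target grown by T
```

With speed bound 1 and horizon 1 s, the backward reachable set is the target inflated by 1.0 on each side, which the sub-zero level set recovers. Level set methods solve the same PDE with higher-order schemes on larger state spaces.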

  14. Computer code for gas-liquid two-phase vortex motions: GLVM

    NASA Technical Reports Server (NTRS)

    Yeh, T. T.

    1986-01-01

    A computer program is developed for the phase separation between gas and liquid at zero gravity induced by vortex motion. It utilizes an explicit solution method for a set of equations describing rotating gas-liquid flows; the vortex motion is established by tangential fluid injection. A two-step Lax-Wendroff (MacCormack) numerical scheme is used. The program can be used to study the fluid dynamical behavior of rotating two-phase fluids in a cylindrical tank. It provides a quick and easy sensitivity test on various parameters and thus guidance for the design and use of actual physical systems for handling two-phase fluids.
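    The two-step Lax-Wendroff (MacCormack) scheme is a predictor-corrector method. The abstract's full rotating two-phase equations are not reproduced here, so this sketch applies the scheme to the simplest model problem, 1-D linear advection u_t + a u_x = 0 on a periodic domain:

```python
import numpy as np

a, dx, dt = 1.0, 0.01, 0.005          # CFL number a*dt/dx = 0.5
x = np.arange(0.0, 1.0, dx)
u = np.exp(-200.0 * (x - 0.3) ** 2)   # smooth initial pulse at x = 0.3

for _ in range(80):                   # advance to t = 0.4
    # Predictor: forward difference in space.
    up = u - a * dt / dx * (np.roll(u, -1) - u)
    # Corrector: backward difference on the predicted field, then average.
    u = 0.5 * (u + up - a * dt / dx * (up - np.roll(up, 1)))

# The pulse should have advected from x = 0.3 to about x = 0.7.
print(round(x[np.argmax(u)], 2))
```

The scheme is second-order accurate in space and time for smooth solutions, which is why it suits quick parametric studies of the kind the abstract describes.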

  15. Factors responsible for performance on the day-night task: response set or semantics?

    PubMed

    Simpson, Andrew; Riggs, Kevin J

    2005-07-01

    In a recent study, Diamond, Kirkham and Amso (2002) obtained evidence consistent with the claim that the day-night task requires inhibition because the picture and its corresponding conflicting response are semantically related. In their study, children responded more accurately in a dog-pig condition (see day picture, say "dog"; see night picture, say "pig") than in the standard day-night condition (see day picture, say "night"; see night picture, say "day"). However, another effect may have made the day-night condition harder than the dog-pig condition: the response set effect. In the day-night condition, the names of the two stimuli ("day" and "night") and the two corresponding conflicting responses ("night" and "day") are drawn from the same response set. In the dog-pig condition, the names of the stimuli ("day", "night") and the corresponding responses ("dog", "pig") are drawn from different response sets. In two experiments (Experiment 1 with 4-year-olds (n = 25); Experiment 2 with 4-, 5-, 7- and 11-year-olds (n = 81)), children were tested on four experimental conditions that enabled the effects of semantics and response set to be separated. Overall, our data suggest that response set is a major factor in creating the inhibitory demands of the day-night task in children of all ages. Results are discussed in relation to other inhibitory tasks.

  16. Separator Materials Used in Secondary Alkaline Batteries Characterized and Evaluated

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Nickel-cadmium (Ni/Cd) and nickel-hydrogen (Ni/H2) secondary alkaline batteries are vital to aerospace applications. Battery performance and cycle life are significantly affected by the type of separators used in those batteries. A team from NASA Lewis Research Center's Electrochemical Technology Branch developed standardized testing procedures to characterize and evaluate new and existing separator materials to improve the performance and cycle life of secondary alkaline batteries. Battery separators must function as good electronic insulators and as efficient electrolyte reservoirs. At present, new types of organic and inorganic separator materials are being developed for Ni/Cd and Ni/H2 batteries. The separator material previously used in the NASA standard Ni/Cd cell was Pellon 2505, a 100-percent nylon-6 polymer that must be treated with zinc chloride (ZnCl2) to bond the fibers. Because of stricter Environmental Protection Agency regulation of ZnCl2 emissions, the battery community has been searching for new separators to replace Pellon 2505. To date, two candidate separator materials have been identified; however, neither has performed as well as Pellon 2505. The separator test procedures devised at Lewis are being implemented to expedite the search for new battery separators. The new test procedures, carried out in the Separator Laboratory at Lewis, have been designed to guarantee accurate evaluation of the properties that are critical for sustaining proper battery operation. These properties include physical and chemical stability, chemical purity, gas permeability, electrolyte retention and distribution, uniformity, porosity, and area resistivity. A manual containing a detailed description of 12 separator test procedures has been drafted and will be used by the battery community to evaluate candidate separator materials for specific applications. These standardized procedures will allow for consistent, uniform, and reliable results that will ensure that separator materials have the desired properties for long life and good performance in secondary alkaline cells.

  17. The hippocampus and memory for orderly stimulus relations

    PubMed Central

    Dusek, Jeffery A.; Eichenbaum, Howard

    1997-01-01

    Human declarative memory involves a systematic organization of information that supports generalizations and inferences from acquired knowledge. This kind of memory depends on the hippocampal region in humans, but the extent to which animals also have declarative memory, and whether inferential expression of memory depends on the hippocampus in animals, remains a major challenge in cognitive neuroscience. To examine these issues, we used a test of transitive inference pioneered by Piaget to assess capacities for systematic organization of knowledge and logical inference in children. In our adaptation of the test, rats were trained on a set of four overlapping odor discrimination problems that could be encoded either separately or as a single representation of orderly relations among the odor stimuli. Normal rats learned the problems and demonstrated the relational memory organization through appropriate transitive inferences about items not presented together during training. By contrast, after disconnection of the hippocampus from either its cortical or subcortical pathway, rats succeeded in acquiring the separate discrimination problems but did not demonstrate transitive inference, indicating that they had failed to develop or could not inferentially express the orderly organization of the stimulus elements. These findings strongly support the view that the hippocampus mediates a general declarative memory capacity in animals, as it does in humans. PMID:9192700

  18. A stepped wedge cluster randomized control trial of dried blood spot testing to improve the uptake of hepatitis C antibody testing within UK prisons

    PubMed Central

    Whitaker, Rhiannon; Perrett, Stephanie; Zou, Lu; Hickman, Matthew; Lyons, Marion

    2015-01-01

    Background: The prevalence of hepatitis C (HCV) is elevated within prison populations, yet diagnosis in prisons remains low. Dried blood spot testing (DBST) is a simple procedure for the detection of HCV antibodies; its impact on testing in the prison context is unknown. Methods: We carried out a stepped-wedge cluster-randomized controlled trial of DBST for HCV among prisoners within five male prisons and one female prison. Each prison was a separate cluster. The order in which the intervention (training in use of DBST for HCV testing and logistic support) was introduced was randomized across clusters. The outcome measure was the HCV testing rate by prison. Imputation analysis was carried out to account for missing data. Planned and actual intervention times differed in some prisons; data were thus analysed both by intention to treat (ITT) and by observed step times. Results: There was insufficient evidence of an effect of the intervention on testing rate using either the ITT intervention time (OR: 0.84; 95% CI: 0.68–1.03; P = 0.088) or the actual intervention time (OR: 0.86; 95% CI: 0.71–1.06; P = 0.153). This was confirmed by the pooled results of five imputed data sets. Conclusions: DBST as a stand-alone intervention was insufficient to increase HCV diagnosis within the UK prison setting. Factors such as staff training and allocation of staff time for regular clinics are key to improving service delivery. We demonstrate that prisons can conduct rigorous studies of new interventions, but data collection can be problematic. Trial registration: International Standard Randomized Controlled Trial Number Register (ISRCTN number ISRCTN05628482). PMID:25061233

  19. Distributed modelling of hydrologic regime at three subcatchments of Kopaninský tok catchment

    NASA Astrophysics Data System (ADS)

    Žlábek, Pavel; Tachecí, Pavel; Kaplická, Markéta; Bystřický, Václav

    2010-05-01

    The Kopaninský tok catchment is situated in the crystalline area of the Bohemian-Moravian Highlands, a hilly region with Cambisol cover and predominantly agricultural land use. It has been under long-term observation since the 1980s. Time series (discharge, precipitation, climatic parameters) are now available at a 10-minute time step, and water quality data comprise average daily composite samples plus samples taken during events. A soil survey yielding reference soil hydraulic properties for the individual horizons and a vegetation cover survey, including LAI measurement, have been carried out. All parameters were analysed and used to establish distributed mathematical models of the P6, P52 and P53 subcatchments in the MIKE SHE 2009 WM deterministic hydrologic modelling system. The aim is to simulate the long-term hydrologic regime as well as rainfall-runoff events, providing the basis for modelling the nitrate regime and the influence of agricultural management in the next step. The subcatchments differ in the proportion of artificially drained area, soil types, land use and slope. The models are set up on a regular computational grid of 2 m cell size. The basic time step was set to 2 hours, and the total simulated period covers 3 years. Runoff response and moisture regime are compared using spatially distributed simulation results. A sensitivity analysis revealed the parameters most strongly influencing the model response and underlined the importance of the spatial distribution of initial conditions. Furthermore, runoff components differing in origin, flow paths and travel time were separated in 12 subcatchments of the Kopaninský tok catchment using a combination of two runoff separation techniques, a digital filter and the simple conceptual model GROUND, chosen after testing a number of methods. Ordination diagrams produced with the Canoco software were used to evaluate the influence of catchment parameters on the runoff components.
A canonical ordination analysis (RDA) was used to explain one data set (runoff components: either the volumes of each runoff component or the occurrence of baseflow) with another (catchment parameters: proportion of arable land, proportion of forest, proportion of vulnerable zones with high infiltration capacity, average slope, topographic index and runoff coefficient). The influence was analysed both for the long-term runoff balance and for selected rainfall-runoff events. Keywords: small catchment, water balance modelling, rainfall-runoff modelling, distributed deterministic model, runoff separation, sensitivity analysis
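    The digital-filter approach to runoff separation mentioned above can be sketched with a one-parameter recursive filter of the Lyne-Hollick type. The abstract does not specify which filter was used, so the function name, the single forward pass, and the alpha value below are illustrative assumptions:

```python
# One-parameter recursive digital filter for baseflow separation
# (Lyne-Hollick type, single forward pass; practice often uses three passes).
def separate_baseflow(q, alpha=0.925):
    quick = [0.0]                      # assume the series starts at baseflow
    for i in range(1, len(q)):
        # Filtered quickflow responds to changes in total flow.
        f = alpha * quick[-1] + 0.5 * (1.0 + alpha) * (q[i] - q[i - 1])
        quick.append(max(f, 0.0))      # quickflow cannot be negative
    quick = [min(f, qi) for f, qi in zip(quick, q)]  # quickflow <= total flow
    base = [qi - f for qi, f in zip(q, quick)]
    return base, quick

hydrograph = [1.0, 1.0, 5.0, 12.0, 8.0, 4.0, 2.0, 1.5, 1.2, 1.0]
base, quick = separate_baseflow(hydrograph)
print([round(b, 2) for b in base])     # smooth component under the flood peak
```

The separated volumes of each component are exactly the kind of per-event quantities that can then be related to catchment parameters in an RDA ordination.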

  20. 48 CFR 206.203 - Set-asides for small business concerns.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE ACQUISITION PLANNING COMPETITION REQUIREMENTS Full and Open Competition After Exclusion of Sources 206.203 Set-asides for small business concerns. (b) Also no separate...
