Sample records for discrete sampling test

  1. Goodness-of-fit tests for discrete data: a review and an application to a health impairment scale.

    PubMed

    Horn, S D

    1977-03-01

    We review the advantages and disadvantages of several goodness-of-fit tests which may be used with discrete data: the multinomial test, the likelihood ratio test, the X2 test, the two-stage X2 test and the discrete Kolmogorov-Smirnov test. Although the X2 test is the best known and most widely used of these tests, its use with small sample sizes is controversial. If one has data which fall into ordered categories, then the discrete Kolmogorov-Smirnov test is an exact test which uses the information from the ordering and can be used for small sample sizes. We illustrate these points with an example of several analyses of health impairment data.
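
    The trade-off this abstract describes can be sketched numerically. The counts and hypothesized probabilities below are invented for illustration; the sketch computes a Pearson X2 statistic and, for the ordered-category case, a simple Kolmogorov-Smirnov-type distance on cumulative proportions (a simplified stand-in for the exact discrete test the paper discusses).

```python
# Illustrative sketch only: counts and probabilities are invented.
import numpy as np
from scipy import stats

observed = np.array([18, 25, 32, 15, 10])           # counts in 5 ordered categories
expected_p = np.array([0.2, 0.25, 0.3, 0.15, 0.1])  # hypothesized cell probabilities
expected = expected_p * observed.sum()

# Pearson X2 test: compares per-cell counts; controversial for small n
chi2_stat, chi2_p = stats.chisquare(observed, expected)

# Discrete KS-type distance: compares cumulative proportions, so it uses
# the category ordering that X2 ignores
obs_cdf = np.cumsum(observed) / observed.sum()
exp_cdf = np.cumsum(expected_p)
ks_stat = np.max(np.abs(obs_cdf - exp_cdf))
```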

  2. 40 CFR 1045.505 - How do I test engines using discrete-mode or ramped-modal duty cycles?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false How do I test engines using discrete... MARINE ENGINES AND VESSELS Test Procedures § 1045.505 How do I test engines using discrete-mode or ramped... allow you to perform tests with either discrete-mode or ramped-modal sampling. You must use the modal...

  3. Results from laboratory and field testing of nitrate measuring spectrophotometers

    USGS Publications Warehouse

    Snazelle, Teri T.

    2015-01-01

    In Phase II, the analyzers were deployed in field conditions at three different USGS sites. The measured nitrate concentrations were compared to discrete (reference) samples analyzed by the Direct UV method on a Shimadzu UV1800 bench top spectrophotometer, and by the National Environmental Methods Index (NEMI) method I-2548-11 at the USGS National Water Quality Laboratory. The first deployment at USGS site 0249620 on the East Pearl River in Hancock County, Mississippi, tested the ability of the TriOs ProPs (10-mm path length), Hach NITRATAX (5 mm), Satlantic SUNA (10 mm), and the S::CAN Spectro::lyser (5 mm) to accurately measure low-level (less than 2 mg-N/L) nitrate concentrations while observing the effect turbidity and colored dissolved organic matter (CDOM) would have on the analyzers' measurements. The second deployment at USGS site 01389005 Passaic River below Pompton River at Two Bridges, New Jersey, tested the analyzers' accuracy in mid-level (2-8 mg-N/L) nitrate concentrations. This site provided the means to test the analyzers' performance in two distinct matrices: the Passaic and the Pompton Rivers. In this deployment, three instruments tested in Phase I (TriOS, Hach, and SUNA) were deployed with the S::CAN Spectro::lyser (35 mm) already placed by the New Jersey Water Science Center (WSC). The third deployment at USGS site 05579610 Kickapoo Creek at 2100E Road near Bloomington, Illinois, tested the ability of the analyzers to measure high nitrate concentrations (greater than 8 mg-N/L) in turbid waters. For Kickapoo Creek, the HIF provided the TriOS (10 mm) and S::CAN (5 mm) from Phase I, and a SUNA V2 (5 mm) to be deployed adjacent to the Illinois WSC-owned Hach (2 mm). A total of 40 discrete samples were collected from the three deployment sites and analyzed. The nitrate concentration of the samples ranged from 0.3–22.2 mg-N/L. The average absolute difference between the TriOS measurements and discrete samples was 0.46 mg-N/L.
For the combined data from the Hach 5-mm and 2-mm analyzers, the average absolute difference between the Hach samples and the discrete samples was 0.13 mg-N/L. For the SUNA and SUNA V2 combined data, the average absolute difference between the SUNA samples and the discrete samples was 0.66 mg-N/L. The average absolute difference between the S::CAN samples and the discrete samples was 0.63 mg-N/L.

  4. Hydraulically controlled discrete sampling from open boreholes

    USGS Publications Warehouse

    Harte, Philip T.

    2013-01-01

    Groundwater sampling from open boreholes in fractured-rock aquifers is particularly challenging because of mixing and dilution of fluid within the borehole from multiple fractures. This note presents an alternative to traditional sampling in open boreholes with packer assemblies. The alternative system, called ZONFLO (zonal flow), is based on hydraulic control of borehole flow conditions. Fluid from discrete fracture zones is hydraulically isolated, allowing for the collection of representative samples. In rough-faced open boreholes and formations with less competent rock, hydraulic containment may offer an attractive alternative to physical containment with packers. Preliminary test results indicate that a discrete zone can be effectively hydraulically isolated from other zones within a borehole for the purpose of groundwater sampling using this new method.

  5. Comparisons of discrete and integrative sampling accuracy in estimating pulsed aquatic exposures.

    PubMed

    Morrison, Shane A; Luttbeg, Barney; Belden, Jason B

    2016-11-01

    Most current-use pesticides have short half-lives in the water column, and thus the most relevant exposure scenarios for many aquatic organisms are pulsed exposures. Quantifying exposure using discrete water samples may not be accurate, as few studies are able to sample frequently enough to accurately determine time-weighted average (TWA) concentrations of short aquatic exposures. Integrative sampling methods that continuously sample freely dissolved contaminants over time intervals (such as integrative passive samplers) have been demonstrated to be a promising measurement technique. We conducted several modeling scenarios to test the assumption that integrative methods may require far fewer samples for accurate estimation of peak 96-h TWA concentrations. We compared the accuracies of discrete point samples and integrative samples while varying sampling frequencies over a range of contaminant water half-lives (t50 = 0.5, 2, and 8 d). Differences in the predictive accuracy of discrete point samples and integrative samples were greatest at low sampling frequencies. For example, when the half-life was 0.5 d, discrete point samples required 7 sampling events to ensure median values > 50% and no sampling events reporting highly inaccurate results (defined as < 10% of the true 96-h TWA). Across all water half-lives investigated, integrative sampling required only two samples to prevent highly inaccurate results and to produce median values > 50% of the true concentration. Regardless, the need for integrative sampling diminished as water half-life increased. For an 8-d water half-life, two discrete samples produced accurate estimates and median values greater than those obtained for two integrative samples. Overall, integrative methods are the more accurate method for monitoring contaminants with short water half-lives due to reduced frequency of extreme values, especially with uncertainties around the timing of pulsed events.
However, the acceptability of discrete sampling methods for providing accurate concentration measurements increases with increasing aquatic half-lives. Copyright © 2016 Elsevier Ltd. All rights reserved.
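
    The modeling comparison can be illustrated with a toy calculation (not the authors' actual simulation): a single pulse decaying exponentially with a given water half-life, a 96-h true TWA computed analytically, and estimates from discrete point samples versus back-to-back integrative samples. All constants below are invented.

```python
# Toy model, not the authors' simulation: one exponentially decaying pulse.
import math

C0 = 10.0                 # peak concentration at t = 0 (arbitrary units)
t50 = 0.5 * 24.0          # water half-life: 0.5 d, expressed in hours
T = 96.0                  # TWA window (hours)
k = math.log(2) / t50     # first-order decay rate

def conc(t):
    return C0 * math.exp(-k * t)

def conc_integral(a, b):  # exact integral of conc over [a, b]
    return C0 * (math.exp(-k * a) - math.exp(-k * b)) / k

def true_twa():           # analytic 96-h time-weighted average
    return conc_integral(0.0, T) / T

def discrete_twa(n):      # mean of n evenly spaced point samples
    return sum(conc(T * (i + 0.5) / n) for i in range(n)) / n

def integrative_twa(n):   # n back-to-back integrative (time-averaged) samples
    edges = [T * i / n for i in range(n + 1)]
    return sum(conc_integral(a, b) / (b - a) for a, b in zip(edges, edges[1:])) / n
```

With these constants, two integrative samples reproduce the true TWA exactly (their interval averages tile the whole window), while two point samples miss much of the early pulse and underestimate it.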

  6. What influences participation in genetic carrier testing? Results from a discrete choice experiment.

    PubMed

    Hall, Jane; Fiebig, Denzil G; King, Madeleine T; Hossain, Ishrat; Louviere, Jordan J

    2006-05-01

    This study explores factors that influence participation in genetic testing programs and the acceptance of multiple tests. Tay Sachs and cystic fibrosis are both genetically determined recessive disorders with differing severity, treatment availability, and prevalence in different population groups. We used a discrete choice experiment with a general community and an Ashkenazi Jewish sample; data were analysed using multinomial logit with random coefficients. Although Jewish respondents were more likely to be tested, both groups seem to be making very similar tradeoffs across attributes when they make genetic testing choices.
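
    A minimal sketch of the estimation machinery named in this abstract: choice probabilities under a plain multinomial logit (without the random coefficients the authors used). The attribute matrix and taste coefficients are invented for illustration.

```python
# Invented coefficients and attributes; plain MNL, no random coefficients.
import numpy as np

beta = np.array([1.2, -0.8, 0.5])   # tastes for three hypothetical attributes
alternatives = np.array([
    [1.0, 0.5, 1.0],                # hypothetical testing program A
    [0.8, 0.2, 0.0],                # hypothetical testing program B
    [0.0, 0.0, 0.0],                # opt out of testing
])
v = alternatives @ beta             # systematic utilities
p = np.exp(v) / np.exp(v).sum()     # MNL choice probabilities
```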

  7. Cone penetrometer testing and discrete-depth ground water sampling techniques: A cost-effective method of site characterization in a multiple-aquifer setting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemo, D.A.; Pierce, Y.G.; Gallinatti, J.D.

    Cone penetrometer testing (CPT), combined with discrete-depth ground water sampling methods, can significantly reduce the time and expense required to characterize large sites that have multiple aquifers. Results from the screening site characterization can then be used to design and install a cost-effective monitoring well network. At a site in northern California, it was necessary to characterize the stratigraphy and the distribution of volatile organic compounds (VOCs). To expedite characterization, a five-week field screening program was implemented that consisted of a shallow ground water survey, CPT soundings and pore-pressure measurements, and discrete-depth ground water sampling. Based on continuous lithologic information provided by the CPT soundings, four predominantly coarse-grained, water yielding stratigraphic packages were identified. Seventy-nine discrete-depth ground water samples were collected using either shallow ground water survey techniques, the BAT Enviroprobe, or the QED HydroPunch I, depending on subsurface conditions. Using results from these efforts, a 20-well monitoring network was designed and installed to monitor critical points within each stratigraphic package. Good correlation was found for hydraulic head and chemical results between discrete-depth screening data and monitoring well data. Understanding the vertical VOC distribution and concentrations produced substantial time and cost savings by minimizing the number of permanent monitoring wells and reducing the number of costly conductor casings that had to be installed. Additionally, significant long-term cost savings will result from reduced sampling costs, because fewer wells comprise the monitoring network. The authors estimate these savings to be 50% for site characterization costs, 65% for site characterization time, and 60% for long-term monitoring costs.

  8. USGS Arctic Ocean Carbon Cruise 2012: Field Activity L-01-12-AR to collect carbon data in the Arctic Ocean, August-September 2012

    USGS Publications Warehouse

    Robbins, Lisa L.; Wynn, Jonathan; Knorr, Paul O.; Onac, Bogdan; Lisle, John T.; McMullen, Katherine Y.; Yates, Kimberly K.; Byrne, Robert H.; Liu, Xuewu

    2014-01-01

    During the cruise, continuous and discrete water samples were collected underway, and discrete water samples were collected at stations, to document the carbonate chemistry of the Arctic waters and quantify the saturation state of seawater with respect to calcium carbonate. These data are critical for providing baseline information in areas where no data existed previously and will also be used to test existing models and predict future trends.

  9. Investigation of discrete component chip mounting technology for hybrid microelectronic circuits

    NASA Technical Reports Server (NTRS)

    Caruso, S. V.; Honeycutt, J. O.

    1975-01-01

    The use of polymer adhesives for high reliability microcircuit applications is a radical deviation from past practices in electronic packaging. Bonding studies were performed using two gold-filled conductive adhesives, 10/90 tin/lead solder and Indalloy no. 7 solder. Various types of discrete components were mounted on ceramic substrates using both thick-film and thin-film metallization. Electrical and mechanical testing were performed on the samples before and after environmental exposure to MIL-STD-883 screening tests.

  10. GEE-based SNP set association test for continuous and discrete traits in family-based association studies.

    PubMed

    Wang, Xuefeng; Lee, Seunggeun; Zhu, Xiaofeng; Redline, Susan; Lin, Xihong

    2013-12-01

    Family-based genetic association studies of related individuals provide opportunities to detect genetic variants that complement studies of unrelated individuals. Most statistical methods for family association studies for common variants are single marker based, which test one SNP at a time. In this paper, we consider testing the effect of an SNP set, e.g., SNPs in a gene, in family studies, for both continuous and discrete traits. Specifically, we propose a generalized estimating equation (GEE) based kernel association test, a variance component based testing method, to test for the association between a phenotype and multiple variants in an SNP set jointly using family samples. The proposed approach allows for both continuous and discrete traits, where the correlation among family members is taken into account through the use of an empirical covariance estimator. We derive the theoretical distribution of the proposed statistic under the null and develop analytical methods to calculate the P-values. We also propose an efficient resampling method for correcting for small sample size bias in family studies. The proposed method allows for easily incorporating covariates and SNP-SNP interactions. Simulation studies show that the proposed method properly controls type I error rates under both random and ascertained sampling schemes in family studies. We demonstrate through simulation studies that our approach has superior performance for association mapping compared to the single marker based minimum P-value GEE test for an SNP-set effect over a range of scenarios. We illustrate the application of the proposed method using data from the Cleveland Family GWAS Study. © 2013 WILEY PERIODICALS, INC.
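
    The flavor of a variance-component kernel score statistic can be sketched as follows. This is a simplified stand-in for the paper's GEE-based test: it uses simulated unrelated samples, an intercept-only null model, and a linear kernel, and it omits the empirical covariance estimator that handles family correlation.

```python
# Simulated, simplified stand-in: unrelated samples, intercept-only null
# model, linear kernel K = G G^T; not the paper's full GEE machinery.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5                                        # individuals, SNPs in the set
G = rng.integers(0, 3, size=(n, p)).astype(float)    # genotype dosages in {0, 1, 2}
y = rng.normal(size=n)                               # continuous trait under the null

resid = y - y.mean()                                 # null-model residuals
K = G @ G.T                                          # kernel over the SNP set
Q = resid @ K @ resid                                # score-type statistic; large Q
                                                     # suggests a joint SNP-set effect
```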

  11. Microfabricated, flowthrough porous apparatus for discrete detection of binding reactions

    DOEpatents

    Beattie, Kenneth L.

    1998-01-01

    An improved microfabricated apparatus for conducting a multiplicity of individual and simultaneous binding reactions is described. The apparatus comprises a substrate on which are located discrete and isolated sites for binding reactions. The apparatus is characterized by discrete and isolated regions that extend through said substrate and terminate on a second surface thereof, such that when a test sample is applied to the substrate, it is capable of penetrating through each such region during the course of said binding reaction. The apparatus is especially useful for sequencing by hybridization of DNA molecules.

  12. Parametric methods outperformed non-parametric methods in comparisons of discrete numerical variables.

    PubMed

    Fagerland, Morten W; Sandvik, Leiv; Mowinckel, Petter

    2011-04-13

    The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for analysis and presentation of results for discrete numerical variables. Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well for almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
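
    The recommended analysis is easy to reproduce on toy data. Below, two invented discrete samples with outcomes in {0, 1, 2, 3} are compared with Welch's t test (the paper's "Welch U test", i.e., the T test with adjustment for unequal variances).

```python
# Invented discrete data with outcomes in {0, 1, 2, 3}.
from scipy import stats

group_a = [0, 1, 1, 2, 0, 3, 1, 2, 1, 0, 2, 1]
group_b = [1, 2, 2, 3, 1, 3, 2, 2, 3, 1, 2, 2]

# equal_var=False gives Welch's t test (unequal-variance adjustment)
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
```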

  13. Input-output characterization of an ultrasonic testing system by digital signal analysis

    NASA Technical Reports Server (NTRS)

    Williams, J. H., Jr.; Lee, S. S.; Karagulle, H.

    1986-01-01

    Ultrasonic test system input-output characteristics were investigated by directly coupling the transmitting and receiving transducers face to face without a test specimen. Some of the fundamentals of digital signal processing were summarized. Input and output signals were digitized by using a digital oscilloscope, and the digitized data were processed in a microcomputer by using digital signal-processing techniques. The continuous-time test system was modeled as a discrete-time, linear, shift-invariant system. In estimating the unit-sample response and frequency response of the discrete-time system, it was necessary to use digital filtering to remove low-amplitude noise, which interfered with deconvolution calculations. A digital bandpass filter constructed with the assistance of a Blackman window and a rectangular time window were used. Approximations of the impulse response and the frequency response of the continuous-time test system were obtained by linearly interpolating the defining points of the unit-sample response and the frequency response of the discrete-time system. The test system behaved as a linear-phase bandpass filter in the frequency range 0.6 to 2.3 MHz. These frequencies were selected in accordance with the criterion that they were 6 dB below the maximum peak of the amplitude of the frequency response. The output of the system to various inputs was predicted and the results were compared with the corresponding measurements on the system.
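
    The estimation step this abstract describes (dividing output by input spectra while guarding the deconvolution against low-amplitude noise) can be sketched with synthetic signals. The sampling rate, system kernel, and regularization constant below are assumptions, and a Tikhonov-style guard term stands in for the paper's Blackman-window bandpass filtering; both serve the same purpose of keeping low-amplitude spectral bins from blowing up the deconvolution.

```python
# Synthetic signals; sampling rate, kernel, and regularization are assumptions.
import numpy as np

fs = 25e6                                   # assumed sampling rate (Hz)
n = 1024
t = np.arange(n) / fs
x = np.zeros(n); x[0] = 1.0                 # broadband input: a unit sample
h_true = np.exp(-t * 2e6) * np.sin(2 * np.pi * 1.5e6 * t)   # "system" kernel
y = np.convolve(x, h_true)[:n]              # simulated measured output

X, Y = np.fft.rfft(x), np.fft.rfft(y)
eps = 1e-3 * np.max(np.abs(X))              # guard against dividing by tiny spectra
H = Y * np.conj(X) / (np.abs(X) ** 2 + eps ** 2)   # regularized deconvolution
h_est = np.fft.irfft(H, n)                  # estimated unit-sample response
```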

  14. Power and sample size evaluation for the Cochran-Mantel-Haenszel mean score (Wilcoxon rank sum) test and the Cochran-Armitage test for trend.

    PubMed

    Lachin, John M

    2011-11-10

    The power of a chi-square test, and thus the required sample size, are a function of the noncentrality parameter that can be obtained as the limiting expectation of the test statistic under an alternative hypothesis specification. Herein, we apply this principle to derive simple expressions for two tests that are commonly applied to discrete ordinal data. The Wilcoxon rank sum test for the equality of distributions in two groups is algebraically equivalent to the Mann-Whitney test. The Kruskal-Wallis test applies to multiple groups. These tests are equivalent to a Cochran-Mantel-Haenszel mean score test using rank scores for a set of C-discrete categories. Although various authors have assessed the power function of the Wilcoxon and Mann-Whitney tests, herein it is shown that the power of these tests with discrete observations, that is, with tied ranks, is readily provided by the power function of the corresponding Cochran-Mantel-Haenszel mean score test for two and R > 2 groups. These expressions yield results virtually identical to those derived previously for rank scores and also apply to other score functions. The Cochran-Armitage test for trend assesses whether there is a monotonically increasing or decreasing trend in the proportions with a positive outcome or response over the C-ordered categories of an ordinal independent variable, for example, dose. Herein, it is shown that the power of the test is a function of the slope of the response probabilities over the ordinal scores assigned to the groups, which yields simple expressions for the power of the test. Copyright © 2011 John Wiley & Sons, Ltd.
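
    The principle in the first sentence can be shown directly: given a noncentrality parameter implied by the alternative, the power of a chi-square test follows from the noncentral chi-square distribution. The noncentrality value below is illustrative, not taken from the paper.

```python
# The noncentrality value is illustrative, not derived from the paper.
from scipy import stats

alpha, df = 0.05, 1
lam = 7.849                                  # assumed noncentrality under H1
crit = stats.chi2.ppf(1 - alpha, df)         # rejection threshold under H0
power = 1 - stats.ncx2.cdf(crit, df, lam)    # P(reject H0 | H1)
```

Sample size enters through lam, which grows linearly with n for a fixed alternative, so the required n is found by solving for the lam that reaches the target power.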

  15. Progressive sparse representation-based classification using local discrete cosine transform evaluation for image recognition

    NASA Astrophysics Data System (ADS)

    Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Yang, Xibei; Yang, Jingyu; Qi, Yunsong

    2015-09-01

    This paper proposes a progressive sparse representation-based classification algorithm using local discrete cosine transform (DCT) evaluation to perform face recognition. Specifically, the sum of the contributions of all training samples of each subject is first taken as the contribution of this subject, then the redundant subject with the smallest contribution to the test sample is iteratively eliminated. Second, the progressive method aims at representing the test sample as a linear combination of all the remaining training samples, by which the representation capability of each training sample is exploited to determine the optimal "nearest neighbors" for the test sample. Third, the transformed DCT evaluation is constructed to measure the similarity between the test sample and each local training sample using cosine distance metrics in the DCT domain. The final goal of the proposed method is to determine an optimal weighted sum of nearest neighbors that are obtained under the local correlative degree evaluation, which is approximately equal to the test sample, and we can use this weighted linear combination to perform robust classification. Experimental results conducted on the ORL database of faces (created by the Olivetti Research Laboratory in Cambridge), the FERET face database (managed by the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology), AR face database (created by Aleix Martinez and Robert Benavente in the Computer Vision Center at U.A.B), and USPS handwritten digit database (gathered at the Center of Excellence in Document Analysis and Recognition at SUNY Buffalo) demonstrate the effectiveness of the proposed method.
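
    The similarity step can be sketched in isolation (this is not the full progressive classification pipeline): cosine similarity between a test vector and a training vector in the DCT domain, on random stand-in data.

```python
# Random stand-in vectors; not the full progressive classification pipeline.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)
test_vec = rng.normal(size=64)                       # "test sample" features
train_vec = test_vec + 0.1 * rng.normal(size=64)     # a similar training sample

d_test = dct(test_vec, norm='ortho')                 # DCT-domain representations
d_train = dct(train_vec, norm='ortho')
cos_sim = (d_test @ d_train) / (np.linalg.norm(d_test) * np.linalg.norm(d_train))
```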

  16. Are Health State Valuations from the General Public Biased? A Test of Health State Reference Dependency Using Self-assessed Health and an Efficient Discrete Choice Experiment.

    PubMed

    Jonker, Marcel F; Attema, Arthur E; Donkers, Bas; Stolk, Elly A; Versteegh, Matthijs M

    2017-12-01

    Health state valuations of patients and non-patients are not the same, whereas health state values obtained from general population samples are a weighted average of both. The latter constitutes an often-overlooked source of bias. This study investigates the resulting bias and tests for the impact of reference dependency on health state valuations using an efficient discrete choice experiment administered to a Dutch nationally representative sample of 788 respondents. A Bayesian discrete choice experiment design consisting of eight sets of 24 (matched pairwise) choice tasks was developed, with each set providing full identification of the included parameters. Mixed logit models were used to estimate health state preferences with respondents' own health included as an additional predictor. Our results indicate that respondents with impaired health worse than or equal to the health state levels under evaluation have approximately 30% smaller health state decrements. This confirms that reference dependency can be observed in general population samples and affirms the relevance of prospect theory in health state valuations. At the same time, the limited number of respondents with severe health impairments does not appear to bias social tariffs as obtained from general population samples. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Design and Operation of a Borehole Straddle Packer for Ground-Water Sampling and Hydraulic Testing of Discrete Intervals at U.S. Air Force Plant 6, Marietta, Georgia

    USGS Publications Warehouse

    Holloway, Owen G.; Waddell, Jonathan P.

    2008-01-01

    A borehole straddle packer was developed and tested by the U.S. Geological Survey to characterize the vertical distribution of contaminants, head, and hydraulic properties in open-borehole wells as part of an ongoing investigation of ground-water contamination at U.S. Air Force Plant 6 (AFP6) in Marietta, Georgia. To better understand contaminant fate and transport in a crystalline bedrock setting and to support remedial activities at AFP6, numerous wells have been constructed that include long open-hole intervals in the crystalline bedrock. These wells can include several discontinuities that produce water, which may contain contaminants. Because of the complexity of ground-water flow and contaminant movement in the crystalline bedrock, it is important to characterize the hydraulic and water-quality characteristics of discrete intervals in these wells. The straddle packer facilitates ground-water sampling and hydraulic testing of discrete intervals, and delivery of fluids including tracer suites and remedial agents into these discontinuities. The straddle packer consists of two inflatable packers, a dual-pump system, a pressure-sensing system, and an aqueous injection system. Tests were conducted to assess the accuracy of the pressure-sensing systems, and water samples were collected for analysis of volatile organic compound (VOC) concentrations. Pressure-transducer readings matched computed water-column height, with a coefficient of determination of greater than 0.99. The straddle packer incorporates both an air-driven piston pump and a variable-frequency, electronic, submersible pump. Only slight differences were observed between VOC concentrations in samples collected using the two different types of sampling pumps during two sampling events in July and August 2005. A test conducted to assess the effect of stagnation on VOC concentrations in water trapped in the system's pump-tubing reel showed that concentrations were not affected.
A comparison was conducted to assess differences between three water-sampling methods: collecting samples from the well by pumping a packer-isolated zone using a submersible pump, by using a grab sampler, and by using a passive diffusion sampler. Concentrations of tetrachloroethylene, trichloroethylene, and 1,2-dichloropropane were greatest for samples collected using the submersible pump in the packer-isolated interval, suggesting that the straddle packer yielded the least dilute sample.

  18. Electromagnetic radiation screening of microcircuits for long life applications

    NASA Technical Reports Server (NTRS)

    Brammer, W. G.; Erickson, J. J.; Levy, M. E.

    1974-01-01

    The utility of X-rays as a stimulus for screening high reliability semiconductor microcircuits was studied. The theory of the interaction of X-rays with semiconductor materials and devices was considered. Experimental measurements of photovoltages, photocurrents, and effects on specified parameters were made on discrete devices and on microcircuits. The test specimens included discrete devices with certain types of identified flaws and symptoms of flaws, and microcircuits exhibiting deviant electrical behavior. With a necessarily limited sample of test specimens, no useful correlation could be found between the X-ray-induced electrical response and the known or suspected presence of flaws.

  19. 40 CFR 1033.515 - Discrete-mode steady-state emission tests of locomotives and locomotive engines.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the provisions of 40 CFR part 1065, subpart F for general pre-test procedures (including engine and... 1065. (b) Begin the test by operating the locomotive over the pre-test portion of the cycle specified... Sample averaging period for emissions 1 Pre-test idle Lowest idle setting 10 to 15 3 Not applicable A Low...

  20. 40 CFR 1033.515 - Discrete-mode steady-state emission tests of locomotives and locomotive engines.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the provisions of 40 CFR part 1065, subpart F for general pre-test procedures (including engine and... 1065. (b) Begin the test by operating the locomotive over the pre-test portion of the cycle specified... Sample averaging period for emissions 1 Pre-test idle Lowest idle setting 10 to 15 3 Not applicable A Low...

  1. 40 CFR 1033.515 - Discrete-mode steady-state emission tests of locomotives and locomotive engines.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the provisions of 40 CFR part 1065, subpart F for general pre-test procedures (including engine and... 1065. (b) Begin the test by operating the locomotive over the pre-test portion of the cycle specified... Sample averaging period for emissions 1 Pre-test idle Lowest idle setting 10 to 15 3 Not applicable A Low...

  2. 40 CFR 1033.515 - Discrete-mode steady-state emission tests of locomotives and locomotive engines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the provisions of 40 CFR part 1065, subpart F for general pre-test procedures (including engine and... 1065. (b) Begin the test by operating the locomotive over the pre-test portion of the cycle specified... Sample averaging period for emissions 1 Pre-test idle Lowest idle setting 10 to 15 3 Not applicable A Low...

  3. 40 CFR 1033.515 - Discrete-mode steady-state emission tests of locomotives and locomotive engines.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the provisions of 40 CFR part 1065, subpart F for general pre-test procedures (including engine and... 1065. (b) Begin the test by operating the locomotive over the pre-test portion of the cycle specified... Sample averaging period for emissions 1 Pre-test idle Lowest idle setting 10 to 15 3 Not applicable A Low...

  4. Reconstruction of the modified discrete Langevin equation from persistent time series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czechowski, Zbigniew

    A discrete Langevin-type equation, which can describe persistent processes, was introduced. A procedure for reconstructing the equation from time series was proposed and tested on synthetic data, with short- and long-tailed distributions, generated by different Langevin equations. Corrections due to the finite sampling rates were derived. For an exemplary meteorological time series, an appropriate Langevin equation, which constitutes a stochastic macroscopic model of the phenomenon, was reconstructed.
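
    A minimal sketch of reconstruction in this spirit (an assumption-laden demo, not the author's exact procedure): simulate a discrete Langevin-type series x_{t+1} = x_t + a(x_t) + noise with linear drift a(x) = -0.1 x, then recover the drift from binned conditional mean increments.

```python
# Simulated series; the linear drift a(x) = -0.1 x is an assumption for the demo.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
noise = rng.normal(size=n - 1)
x = np.empty(n); x[0] = 0.0
for i in range(n - 1):                      # x_{t+1} = x_t + a(x_t) + sigma * eps_t
    x[i + 1] = x[i] - 0.1 * x[i] + 0.2 * noise[i]

# Reconstruct the drift from binned conditional mean increments
dx = np.diff(x)
bins = np.linspace(-1.0, 1.0, 21)
idx = np.digitize(x[:-1], bins)
centers, drift = [], []
for b in range(1, len(bins)):
    mask = idx == b
    if mask.sum() > 100:                    # only well-populated bins
        centers.append(0.5 * (bins[b - 1] + bins[b]))
        drift.append(dx[mask].mean())

slope = np.polyfit(centers, drift, 1)[0]    # estimated drift slope, near -0.1
```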

  5. 40 CFR Appendix E to Part 403 - Sampling Procedures

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... done manually or automatically, and discretely or continuously. If discrete sampling is employed, at least 12 aliquots should be composited. Discrete sampling may be flow proportioned either by varying the...

  6. System for δ13C-CO2 and xCO2 analysis of discrete gas samples by cavity ring-down spectroscopy

    NASA Astrophysics Data System (ADS)

    Dickinson, Dane; Bodé, Samuel; Boeckx, Pascal

    2017-11-01

    A method was devised for analysing small discrete gas samples (50 mL syringe) by cavity ring-down spectroscopy (CRDS). Measurements were accomplished by inletting 50 mL syringed samples into an isotopic-CO2 CRDS analyser (Picarro G2131-i) between baseline readings of a reference air standard, which produced sharp peaks in the CRDS data feed. A custom software script was developed to manage the measurement process and aggregate sample data in real time. The method was successfully tested with CO2 mole fractions (xCO2) ranging from < 0.1 to > 20 000 ppm and δ13C-CO2 values from -100 up to +30 000 ‰ in comparison to VPDB (Vienna Pee Dee Belemnite). Throughput was typically 10 samples h-1, with 13 h-1 possible under ideal conditions. The measurement failure rate in routine use was ca. 1 %. Calibration to correct for memory effects was performed with gravimetric gas standards ranging from 0.05 to 2109 ppm xCO2 and δ13C-CO2 levels varying from -27.3 to +21 740 ‰. Repeatability tests demonstrated that method precision for 50 mL samples was ca. 0.05 % in xCO2 and 0.15 ‰ in δ13C-CO2 for CO2 compositions from 300 to 2000 ppm with natural abundance 13C. Long-term method consistency was tested over a 9-month period, with results showing no systematic measurement drift over time. Standardised analysis of discrete gas samples expands the scope of application for isotopic-CO2 CRDS and enhances its potential for replacing conventional isotope ratio measurement techniques. Our method involves minimal set-up costs and can be readily implemented in Picarro G2131-i and G2201-i analysers or tailored for use with other CRDS instruments and trace gases.

  7. 40 CFR 86.1363-2007 - Steady-state testing with a discrete-mode cycle.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... The percent torque is relative to the maximum torque at the commanded test speed. 3 Upon Administrator... ±50 rpm and the specified torque must be held to within plus or minus two percent of the maximum torque at the test speed. (d) One filter shall be used for sampling PM over the 13-mode test procedure...

  8. 40 CFR 86.1363-2007 - Steady-state testing with a discrete-mode cycle.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... The percent torque is relative to the maximum torque at the commanded test speed. 3 Upon Administrator... ±50 rpm and the specified torque must be held to within plus or minus two percent of the maximum torque at the test speed. (d) One filter shall be used for sampling PM over the 13-mode test procedure...

  9. 40 CFR 86.1363-2007 - Steady-state testing with a discrete-mode cycle.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... The percent torque is relative to the maximum torque at the commanded test speed. 3 Upon Administrator... ±50 rpm and the specified torque must be held to within plus or minus two percent of the maximum torque at the test speed. (d) One filter shall be used for sampling PM over the 13-mode test procedure...

  10. Transistor step stress program for JANTX2N4150

    NASA Technical Reports Server (NTRS)

    1979-01-01

Reliability analysis of the transistor JANTX2N4150 manufactured by General Semiconductor and Transitron is reported. The discrete devices were subjected to power and temperature step stress tests, with electrical tests performed after each power/temperature step stress point. Control sample units were maintained for verification of the electrical parametric testing. Results are presented.

  11. Evaluating sample allocation and effort in detecting population differentiation for discrete and continuously distributed individuals

    Treesearch

    Erin L. Landguth; Michael K. Schwartz

    2014-01-01

    One of the most pressing issues in spatial genetics concerns sampling. Traditionally, substructure and gene flow are estimated for individuals sampled within discrete populations. Because many species may be continuously distributed across a landscape without discrete boundaries, understanding sampling issues becomes paramount. Given large-scale, geographically broad...

  12. 49 CFR Appendix to Subpart G of... - Required Knowledge and Skills-Sample Guidelines

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... knowledge and skills tests that it administers to CDL applicants. This appendix closely follows the... discretion provided their CDL program tests for the general areas of knowledge and skill specified in §§ 383.111 and 383.113. Examples of specific knowledge elements (a) Safe operations regulations. Driver...

  13. 40 CFR 1042.515 - Test procedures related to not-to-exceed standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (g) For engines equipped with emission controls that include discrete regeneration events, if a regeneration event occurs during the NTE test, the averaging period must be at least as long as the time between the events multiplied by the number of full regeneration events within the sampling period. This...

  14. 40 CFR 1042.515 - Test procedures related to not-to-exceed standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (g) For engines equipped with emission controls that include discrete regeneration events, if a regeneration event occurs during the NTE test, the averaging period must be at least as long as the time between the events multiplied by the number of full regeneration events within the sampling period. This...

  15. 40 CFR 1042.515 - Test procedures related to not-to-exceed standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (g) For engines equipped with emission controls that include discrete regeneration events, if a regeneration event occurs during the NTE test, the averaging period must be at least as long as the time between the events multiplied by the number of full regeneration events within the sampling period. This...

  16. 40 CFR 1042.515 - Test procedures related to not-to-exceed standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (g) For engines equipped with emission controls that include discrete regeneration events, if a regeneration event occurs during the NTE test, the averaging period must be at least as long as the time between the events multiplied by the number of full regeneration events within the sampling period. This...

  17. Control of discrete time systems based on recurrent Super-Twisting-like algorithm.

    PubMed

    Salgado, I; Kamal, S; Bandyopadhyay, B; Chairez, I; Fridman, L

    2016-09-01

Most of the research in sliding mode theory has been carried out in continuous time to solve estimation and control problems. However, in discrete time, the results in high order sliding modes have been less developed. In this paper, a discrete time super-twisting-like algorithm (DSTA) is proposed to solve the problems of control and state estimation. The stability proof is developed in terms of the discrete time Lyapunov approach and linear matrix inequality theory. The system trajectories are ultimately bounded inside a small region dependent on the sampling period. Simulation results validated the DSTA, which was applied as a controller for a Furuta pendulum and for a DC motor supplied by a DSTA signal differentiator. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
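As a rough illustration of the idea, not the paper's algorithm, here is an Euler-discretized super-twisting-like update driving the sliding variable of a toy plant into a small band whose size depends on the sampling period; the gains, sampling period, and plant are all assumptions:

```python
import math

def dsta_step(s, v, k1, k2, T):
    """One update of a discrete super-twisting-like control law."""
    u = -k1 * math.sqrt(abs(s)) * math.copysign(1.0, s) + v
    v_next = v - k2 * T * math.copysign(1.0, s)
    return u, v_next

# toy double integrator with sliding variable s = c*x1 + x2, so s-dot = u
T, k1, k2, c = 0.001, 2.0, 1.5, 1.0
x1, x2, v = 1.0, 0.0, 0.0
for _ in range(20000):                 # 20 s of simulated time
    s = c * x1 + x2
    u, v = dsta_step(s, v, k1, k2, T)
    x1 += T * x2
    x2 += T * (u - c * x2)             # chosen so that s-dot equals u
s_final = abs(c * x1 + x2)
print(s_final < 0.1)
```

In continuous time the super-twisting law reaches s = 0 exactly; under sampling the trajectory chatters in a band around zero, consistent with the ultimate boundedness result described in the abstract.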

  18. Comparison of Inoculation with the InoqulA and WASP Automated Systems with Manual Inoculation

    PubMed Central

    Croxatto, Antony; Dijkstra, Klaas; Prod'hom, Guy

    2015-01-01

The quality of sample inoculation is critical for achieving an optimal yield of discrete colonies in both monomicrobial and polymicrobial samples to perform identification and antibiotic susceptibility testing. Consequently, we compared the performance between the InoqulA (BD Kiestra), the WASP (Copan), and manual inoculation methods. Defined mono- and polymicrobial samples of 4 bacterial species and cloudy urine specimens were inoculated on chromogenic agar by the InoqulA, the WASP, and manual methods. Images taken with ImagA (BD Kiestra) were analyzed with the VisionLab version 3.43 image analysis software to assess the quality of growth and to prevent subjective interpretation of the data. A 3- to 10-fold higher yield of discrete colonies was observed following automated inoculation with both the InoqulA and WASP systems than that with manual inoculation. The difference in performance between automated and manual inoculation was mainly observed at concentrations of >10⁶ bacteria/ml. Inoculation with the InoqulA system allowed us to obtain significantly more discrete colonies than the WASP system at concentrations of >10⁷ bacteria/ml. However, the level of difference observed was bacterial species dependent. Discrete colonies of bacteria present in 100- to 1,000-fold lower concentrations than the most concentrated populations in defined polymicrobial samples were not reproducibly recovered, even with the automated systems. The analysis of cloudy urine specimens showed that InoqulA inoculation provided a statistically significantly higher number of discrete colonies than that with WASP and manual inoculation. Consequently, the automated InoqulA inoculation greatly decreased the requirement for bacterial subculture and thus resulted in a significant reduction in the time to results, laboratory workload, and laboratory costs. PMID:25972424

  19. High density array fabrication and readout method for a fiber optic biosensor

    DOEpatents

    Pinkel, Daniel; Gray, Joe

    1997-01-01

    The invention relates to the fabrication and use of biosensors comprising a plurality of optical fibers each fiber having attached to its "sensor end" biological "binding partners" (molecules that specifically bind other molecules to form a binding complex such as antibody-antigen, lectin-carbohydrate, nucleic acid-nucleic acid, biotin-avidin, etc.). The biosensor preferably bears two or more different species of biological binding partner. The sensor is fabricated by providing a plurality of groups of optical fibers. Each group is treated as a batch to attach a different species of biological binding partner to the sensor ends of the fibers comprising that bundle. Each fiber, or group of fibers within a bundle, may be uniquely identified so that the fibers, or group of fibers, when later combined in an array of different fibers, can be discretely addressed. Fibers or groups of fibers are then selected and discretely separated from different bundles. The discretely separated fibers are then combined at their sensor ends to produce a high density sensor array of fibers capable of assaying simultaneously the binding of components of a test sample to the various binding partners on the different fibers of the sensor array. The transmission ends of the optical fibers are then discretely addressed to detectors--such as a multiplicity of optical sensors. An optical signal, produced by binding of the binding partner to its substrate to form a binding complex, is conducted through the optical fiber or group of fibers to a detector for each discrete test. By examining the addressed transmission ends of fibers, or groups of fibers, the addressed transmission ends can transmit unique patterns assisting in rapid sample identification by the sensor.

  20. High density array fabrication and readout method for a fiber optic biosensor

    DOEpatents

    Pinkel, Daniel; Gray, Joe; Albertson, Donna G.

    2000-01-01

    The invention relates to the fabrication and use of biosensors comprising a plurality of optical fibers each fiber having attached to its "sensor end" biological "binding partners" (molecules that specifically bind other molecules to form a binding complex such as antibody-antigen, lectin-carbohydrate, nucleic acid-nucleic acid, biotin-avidin, etc.). The biosensor preferably bears two or more different species of biological binding partner. The sensor is fabricated by providing a plurality of groups of optical fibers. Each group is treated as a batch to attach a different species of biological binding partner to the sensor ends of the fibers comprising that bundle. Each fiber, or group of fibers within a bundle, may be uniquely identified so that the fibers, or group of fibers, when later combined in an array of different fibers, can be discretely addressed. Fibers or groups of fibers are then selected and discretely separated from different bundles. The discretely separated fibers are then combined at their sensor ends to produce a high density sensor array of fibers capable of assaying simultaneously the binding of components of a test sample to the various binding partners on the different fibers of the sensor array. The transmission ends of the optical fibers are then discretely addressed to detectors--such as a multiplicity of optical sensors. An optical signal, produced by binding of the binding partner to its substrate to form a binding complex, is conducted through the optical fiber or group of fibers to a detector for each discrete test. By examining the addressed transmission ends of fibers, or groups of fibers, the addressed transmission ends can transmit unique patterns assisting in rapid sample identification by the sensor.

  1. High density array fabrication and readout method for a fiber optic biosensor

    DOEpatents

    Pinkel, Daniel; Gray, Joe; Albertson, Donna G.

    2002-01-01

    The invention relates to the fabrication and use of biosensors comprising a plurality of optical fibers each fiber having attached to its "sensor end" biological "binding partners" (molecules that specifically bind other molecules to form a binding complex such as antibody-antigen, lectin-carbohydrate, nucleic acid-nucleic acid, biotin-avidin, etc.). The biosensor preferably bears two or more different species of biological binding partner. The sensor is fabricated by providing a plurality of groups of optical fibers. Each group is treated as a batch to attach a different species of biological binding partner to the sensor ends of the fibers comprising that bundle. Each fiber, or group of fibers within a bundle, may be uniquely identified so that the fibers, or group of fibers, when later combined in an array of different fibers, can be discretely addressed. Fibers or groups of fibers are then selected and discretely separated from different bundles. The discretely separated fibers are then combined at their sensor ends to produce a high density sensor array of fibers capable of assaying simultaneously the binding of components of a test sample to the various binding partners on the different fibers of the sensor array. The transmission ends of the optical fibers are then discretely addressed to detectors--such as a multiplicity of optical sensors. An optical signal, produced by binding of the binding partner to its substrate to form a binding complex, is conducted through the optical fiber or group of fibers to a detector for each discrete test. By examining the addressed transmission ends of fibers, or groups of fibers, the addressed transmission ends can transmit unique patterns assisting in rapid sample identification by the sensor.

  2. High density array fabrication and readout method for a fiber optic biosensor

    DOEpatents

    Pinkel, D.; Gray, J.

    1997-11-25

The invention relates to the fabrication and use of biosensors comprising a plurality of optical fibers each fiber having attached to its "sensor end" biological "binding partners" (molecules that specifically bind other molecules to form a binding complex such as antibody-antigen, lectin-carbohydrate, nucleic acid-nucleic acid, biotin-avidin, etc.). The biosensor preferably bears two or more different species of biological binding partner. The sensor is fabricated by providing a plurality of groups of optical fibers. Each group is treated as a batch to attach a different species of biological binding partner to the sensor ends of the fibers comprising that bundle. Each fiber, or group of fibers within a bundle, may be uniquely identified so that the fibers, or group of fibers, when later combined in an array of different fibers, can be discretely addressed. Fibers or groups of fibers are then selected and discretely separated from different bundles. The discretely separated fibers are then combined at their sensor ends to produce a high density sensor array of fibers capable of assaying simultaneously the binding of components of a test sample to the various binding partners on the different fibers of the sensor array. The transmission ends of the optical fibers are then discretely addressed to detectors--such as a multiplicity of optical sensors. An optical signal, produced by binding of the binding partner to its substrate to form a binding complex, is conducted through the optical fiber or group of fibers to a detector for each discrete test. By examining the addressed transmission ends of fibers, or groups of fibers, the addressed transmission ends can transmit unique patterns assisting in rapid sample identification by the sensor. 9 figs.

  3. Noise deconvolution based on the L1-metric and decomposition of discrete distributions of postsynaptic responses.

    PubMed

    Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L

    1997-04-25

A statistical approach to analysis of amplitude fluctuations of postsynaptic responses is described. This includes (1) using an L1-metric in the space of distribution functions for minimisation with application of linear programming methods to decompose amplitude distributions into a convolution of Gaussian and discrete distributions; (2) deconvolution of the resulting discrete distribution with determination of the release probabilities and the quantal amplitude for cases with a small number (< 5) of discrete components. The methods were tested against simulated data over a range of sample sizes and signal-to-noise ratios which mimicked those observed in physiological experiments. In computer simulation experiments, comparisons were made with other methods of 'unconstrained' (generalized) and constrained reconstruction of discrete components from convolutions. The simulation results provided additional criteria for improving the solutions to overcome 'over-fitting phenomena' and to constrain the number of components with small probabilities. Application of the programme to recordings from hippocampal neurones demonstrated its usefulness for the analysis of amplitude distributions of postsynaptic responses.

  4. 40 CFR 1042.505 - Testing engines using discrete-mode or ramped-modal duty cycles.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Testing engines using discrete-mode or...-IGNITION ENGINES AND VESSELS Test Procedures § 1042.505 Testing engines using discrete-mode or ramped-modal... the Clean Air Act. (a) You may perform steady-state testing with either discrete-mode or ramped-modal...

  5. On-line analysis of algae in water by discrete three-dimensional fluorescence spectroscopy.

    PubMed

    Zhao, Nanjing; Zhang, Xiaoling; Yin, Gaofang; Yang, Ruifang; Hu, Li; Chen, Shuang; Liu, Jianguo; Liu, Wenqing

    2018-03-19

In view of the problem of on-line measurement of algae classification, a method of algae classification and concentration determination based on discrete three-dimensional fluorescence spectra was studied in this work. The discrete three-dimensional fluorescence spectra of twelve common species of algae belonging to five categories were analyzed, the discrete three-dimensional standard spectra of the five categories were built, and recognition, classification and concentration prediction of algae categories were realized by coupling the discrete three-dimensional fluorescence spectra with non-negative weighted least squares linear regression analysis. The results show that similarities between discrete three-dimensional standard spectra of different categories were reduced and the accuracies of recognition, classification and concentration prediction of the algae categories were significantly improved. Compared with the chlorophyll a fluorescence excitation spectra method, the recognition accuracy rate in pure samples by discrete three-dimensional fluorescence spectra is improved by 1.38%, and the recovery rate and classification accuracy in pure diatom samples by 34.1% and 46.8%, respectively; the recognition accuracy rate of mixed samples is enhanced by 26.1%, the recovery rate of mixed samples with Chlorophyta by 37.8%, and the classification accuracy of mixed samples with diatoms by 54.6%.
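The unmixing step can be illustrated as non-negative least squares against category standard spectra. The snippet below is a hedged stand-in: the paper uses non-negative *weighted* least squares on three-dimensional spectra, while here a plain projected-gradient NNLS on short synthetic spectra is shown.

```python
# Sketch of spectral unmixing: express a measured spectrum as a
# non-negative combination of category standard spectra. A real analysis
# would use a weighted NNLS solver; a tiny projected-gradient loop stands in.

def nnls_unmix(standards, measured, iters=5000, lr=0.01):
    """Fit measured ~= sum_j c_j * standards[j] with c_j >= 0."""
    n, m = len(standards), len(measured)
    c = [0.0] * n
    for _ in range(iters):
        fit = [sum(c[j] * standards[j][i] for j in range(n)) for i in range(m)]
        r = [fit[i] - measured[i] for i in range(m)]        # residual
        for j in range(n):
            grad = sum(r[i] * standards[j][i] for i in range(m))
            c[j] = max(0.0, c[j] - lr * grad)               # project onto c >= 0
    return c

# two synthetic "standard spectra" and a mixture 0.7*A + 0.2*B
A = [1.0, 0.5, 0.0, 0.2]
B = [0.0, 0.3, 1.0, 0.4]
y = [0.7 * a + 0.2 * b for a, b in zip(A, B)]
coeffs = nnls_unmix([A, B], y)
print([round(x, 2) for x in coeffs])
```

The recovered coefficients play the role of category concentrations; similarity between standard spectra (a near-singular Gram matrix) is exactly what degrades this fit, which is why the paper emphasizes reducing inter-category similarity.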

  6. Biochemical Characteristics, Adhesion, and Cytotoxicity of Environmental and Clinical Isolates of Herbaspirillum spp.

    PubMed Central

    Marques, Ana C. Q.; Paludo, Katia S.; Dallagassa, Cibelle B.; Surek, Monica; Pedrosa, Fábio O.; Souza, Emanuel M.; Cruz, Leonardo M.; LiPuma, John J.; Zanata, Sílvio M.; Rego, Fabiane G. M.

    2014-01-01

Herbaspirillum bacteria are best known as plant growth-promoting rhizobacteria but have also been recovered from clinical samples. Here, biochemical tests, matrix-assisted laser desorption ionization–time of flight (MALDI-TOF) mass spectrometry, adherence, and cytotoxicity to eukaryotic cells were used to compare clinical and environmental isolates of Herbaspirillum spp. Discrete biochemical differences were observed between human and environmental strains. All strains adhered to HeLa cells at low densities, and cytotoxic effects were discrete, supporting the view that Herbaspirillum bacteria are opportunists with low virulence potential. PMID:25355763

  7. Sample Design for Discrete Choice Analysis of Travel Behavior

    DOT National Transportation Integrated Search

    1978-07-01

    Discrete choice models represent the choices of individuals among alternatives such as modes of travel, auto types and destinations. This paper presents a review of the state-of-the-art in designing samples for discrete choice analysis of traveller b...

  8. Bell-Curve Genetic Algorithm for Mixed Continuous and Discrete Optimization Problems

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.; Griffith, Michelle; Sykes, Ruth; Sobieszczanski-Sobieski, Jaroslaw

    2002-01-01

In this manuscript we have examined an extension of BCB that encompasses a mix of continuous and quasi-discrete, as well as truly discrete, applications. We began by testing two refinements to the discrete version of BCB. The testing of midpoint versus fitness (Tables 1 and 2) proved inconclusive. The testing of discrete normal tails versus standard mutation was conclusive and demonstrated that the discrete normal tails are better. Next, we implemented these refinements in a combined continuous and discrete BCB and compared the performance of two discrete distance metrics on the hub problem. Here we found that when "order does matter" it pays to take it into account.

  9. Numerical sedimentation particle-size analysis using the Discrete Element Method

    NASA Astrophysics Data System (ADS)

    Bravo, R.; Pérez-Aparicio, J. L.; Gómez-Hernández, J. J.

    2015-12-01

Sedimentation tests are widely used to determine the particle-size distribution of a granular sample. In this work, the Discrete Element Method interacts with the simulation of flow using the well-known one-way-coupling method, a computationally affordable approach for the time-consuming numerical simulation of the hydrometer, buoyancy and pipette sedimentation tests. These tests are used in the laboratory to determine the particle-size distribution of fine-grained aggregates. Five samples with different particle-size distributions are modeled by about six million rigid spheres projected on two dimensions, with diameters ranging from 2.5 × 10⁻⁶ m to 70 × 10⁻⁶ m, forming a water suspension in a sedimentation cylinder. DEM simulates the particles' movement considering laminar flow interactions of buoyancy, drag and lubrication forces. The simulation provides the temporal/spatial distributions of densities and concentrations of the suspension. The numerical simulations cannot replace the laboratory tests, since they need the final granulometry as initial data, but, as the results show, these simulations can identify the strong and weak points of each method and eventually recommend useful variations and draw conclusions on their validity, aspects very difficult to achieve in the laboratory.

  10. Input-output characterization of an ultrasonic testing system by digital signal analysis

    NASA Technical Reports Server (NTRS)

    Karaguelle, H.; Lee, S. S.; Williams, J., Jr.

    1984-01-01

    The input/output characteristics of an ultrasonic testing system used for stress wave factor measurements were studied. The fundamentals of digital signal processing are summarized. The inputs and outputs are digitized and processed in a microcomputer using digital signal processing techniques. The entire ultrasonic test system, including transducers and all electronic components, is modeled as a discrete-time linear shift-invariant system. Then the impulse response and frequency response of the continuous time ultrasonic test system are estimated by interpolating the defining points in the unit sample response and frequency response of the discrete time system. It is found that the ultrasonic test system behaves as a linear phase bandpass filter. Good results were obtained for rectangular pulse inputs of various amplitudes and durations and for tone burst inputs whose center frequencies are within the passband of the test system and for single cycle inputs of various amplitudes. The input/output limits on the linearity of the system are determined.
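The discrete-time LSI model can be illustrated directly: once the unit sample (impulse) response h[n] is estimated, the frequency response follows from its discrete-time Fourier transform, H(e^{jω}) = Σₙ h[n] e^{−jωn}. A toy example with an illustrative impulse response (not data from the ultrasonic system):

```python
# Sketch of the modeling idea: treat a measured unit sample response h[n]
# as an FIR description of the discrete-time test system and evaluate its
# frequency response at any radian frequency w.

import cmath

def freq_response(h, w):
    """Frequency response of an LSI system with impulse response h at w."""
    return sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))

# toy bandpass-like impulse response (illustrative values)
h = [0.0, 0.5, 0.0, -0.5]
print(abs(freq_response(h, 0.0)))                      # DC is rejected
print(round(abs(freq_response(h, cmath.pi / 2)), 3))   # passband gain
```

Interpolating |H| between such sample points is the step the study uses to estimate the continuous-time system's bandpass characteristic from the discrete model.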

  11. Discrete Element Method Simulation of a Boulder Extraction From an Asteroid

    NASA Technical Reports Server (NTRS)

    Kulchitsky, Anton K.; Johnson, Jerome B.; Reeves, David M.; Wilkinson, Allen

    2014-01-01

    The force required to pull 7t and 40t polyhedral boulders from the surface of an asteroid is simulated using the discrete element method considering the effects of microgravity, regolith cohesion and boulder acceleration. The connection between particle surface energy and regolith cohesion is estimated by simulating a cohesion sample tearing test. An optimal constant acceleration is found where the peak net force from inertia and cohesion is a minimum. Peak pulling forces can be further reduced by using linear and quadratic acceleration functions with up to a 40% reduction in force for quadratic acceleration.

  12. Biochemical characteristics, adhesion, and cytotoxicity of environmental and clinical isolates of Herbaspirillum spp.

    PubMed

    Marques, Ana C Q; Paludo, Katia S; Dallagassa, Cibelle B; Surek, Monica; Pedrosa, Fábio O; Souza, Emanuel M; Cruz, Leonardo M; LiPuma, John J; Zanata, Sílvio M; Rego, Fabiane G M; Fadel-Picheth, Cyntia M T

    2015-01-01

Herbaspirillum bacteria are best known as plant growth-promoting rhizobacteria but have also been recovered from clinical samples. Here, biochemical tests, matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry, adherence, and cytotoxicity to eukaryotic cells were used to compare clinical and environmental isolates of Herbaspirillum spp. Discrete biochemical differences were observed between human and environmental strains. All strains adhered to HeLa cells at low densities, and cytotoxic effects were discrete, supporting the view that Herbaspirillum bacteria are opportunists with low virulence potential. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  13. Prediction of Fracture Behavior in Rock and Rock-like Materials Using Discrete Element Models

    NASA Astrophysics Data System (ADS)

    Katsaga, T.; Young, P.

    2009-05-01

The study of fracture initiation and propagation in heterogeneous materials such as rock and rock-like materials is of principal interest in the field of rock mechanics and rock engineering. It is crucial to study and investigate failure prediction and safety measures in civil and mining structures. Our work offers a practical approach to predicting fracture behaviour using discrete element models. In this approach, the microstructures of materials are represented through the combination of clusters of bonded particles with different inter-cluster particle and bond properties, and intra-cluster bond properties. The geometry of clusters is transferred from information available from thin sections, computed tomography (CT) images and other visual presentations of the modeled material using a customized AutoCAD built-in dialog-based Visual Basic Application. Exact microstructures of the tested sample, including fractures, faults, inclusions and void spaces, can be duplicated in the discrete element models. Although the microstructural fabrics of rocks and rock-like structures may have different scales, fracture formation and propagation through these materials are alike and follow similar mechanics. Synthetic material provides an excellent condition for validating the modelling approaches, as fracture behaviours are known along with the well-defined composite's properties. Calibration of the macro-properties of the matrix material and inclusions (aggregates) was followed by calibration of the overall mechanical material responses by adjusting the interfacial properties. The discrete element model predicted fracture propagation features and paths similar to those of the real sample material. The path of the fractures and matrix-inclusion interaction was compared using computed tomography images. Initiation and fracture formation in the model and real material were compared using Acoustic Emission data. Analysing the temporal and spatial evolution of AE events, collected during the sample testing, in relation to the CT images allows the precise reconstruction of the failure sequence. Our proposed modelling approach illustrates realistic fracture formation and growth predictions at different loading conditions.

  14. Using Electrical Resistivity Imaging to Evaluate Permanganate Performance During an In Situ Treatment of a RDX-Contaminated Aquifer

    DTIC Science & Technology

    2009-08-01

assess the performance of remedial efforts. These techniques are expensive and, by themselves, are effectively random samples guided by the training... technology should be further explored and developed for use in pre-amendment tracer tests and quantitative remedial assessments... and flow of injectate. Site assessment following groundwater remediation efforts typically involves discrete point sampling using wells or

  15. Investigation of a Hybrid Wafer Scale Integration Technique that Mounts Discrete Integrated Circuit Die in a Silicon Substrate.

    DTIC Science & Technology

    1988-03-01

Contents (table-of-contents fragment): Polyimides as Planarizing and Insulative Coatings; III. Experimental Procedure, Equipment, and Materials: Wet Orientation Dependent Etching Study; Die Bond Adhesives Study; Fabrication of Samples for Electrical Testing; Evaluation of the Final Samples; IV. Experimental Results and Discussion: Wet Orientation Dependent Etching Study Results; Die Attach Adhesives Study Results; Fabrication of Samples for Electrical...

  16. Estimating the proportion of true null hypotheses when the statistics are discrete.

    PubMed

    Dialsingh, Isaac; Austin, Stefanie R; Altman, Naomi S

    2015-07-15

In high-dimensional testing problems, π0, the proportion of null hypotheses that are true, is an important parameter. For discrete test statistics, the P values come from a discrete distribution with finite support, and the null distribution may depend on an ancillary statistic such as a table margin that varies among the test statistics. Methods for estimating π0 developed for continuous test statistics, which depend on a uniform or identical null distribution of P values, may not perform well when applied to discrete testing problems. This article introduces a number of π0 estimators, the regression and 'T' methods, that perform well with discrete test statistics and also assesses how well methods developed for or adapted from continuous tests perform with discrete tests. We demonstrate the usefulness of these estimators in the analysis of high-throughput biological RNA-seq and single-nucleotide polymorphism data. The methods are implemented in R. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
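For contrast with the discrete-statistics estimators the paper introduces, the classical continuous-case baseline is Storey's estimator, which presumes P values are uniform under the null, exactly the assumption that discrete test statistics violate. A minimal sketch with synthetic data:

```python
# Illustrative only: Storey-type estimate pi0 ~ #{p > lambda} / ((1 - lambda) * m).
# Valid when null p-values are uniform on (0,1); discrete tests break this,
# motivating the regression and 'T' methods described in the abstract.

def storey_pi0(pvalues, lam=0.5):
    """Estimate the proportion of true null hypotheses from p-values."""
    m = len(pvalues)
    tail = sum(1 for p in pvalues if p > lam)
    return min(1.0, tail / ((1.0 - lam) * m))

# 80 roughly uniform "null" p-values plus 20 tiny "signal" p-values
pvals = [(i + 0.5) / 80 for i in range(80)] + [1e-4] * 20
print(storey_pi0(pvals))
```

With discrete statistics the null p-value distribution piles mass on a few support points, so the tail count #{p > λ} no longer tracks (1 − λ)·π0·m, and this estimator can be badly biased.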

  17. Sampling trace organic compounds in water: a comparison of a continuous active sampler to continuous passive and discrete sampling methods

    USGS Publications Warehouse

    Coes, Alissa L.; Paretti, Nicholas V.; Foreman, William T.; Iverson, Jana L.; Alvarez, David A.

    2014-01-01

A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19–23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all of the three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method.

  18. Sampling trace organic compounds in water: a comparison of a continuous active sampler to continuous passive and discrete sampling methods.

    PubMed

    Coes, Alissa L; Paretti, Nicholas V; Foreman, William T; Iverson, Jana L; Alvarez, David A

    2014-03-01

    A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler, that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19-23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all of the three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method. Published by Elsevier B.V.

  19. Depth-dependent groundwater quality sampling at City of Tallahassee test well 32, Leon County, Florida, 2013

    USGS Publications Warehouse

    McBride, W. Scott; Wacker, Michael A.

    2015-01-01

    A test well was drilled by the City of Tallahassee to assess the suitability of the site for the installation of a new well for public water supply. The test well is in Leon County in north-central Florida. The U.S. Geological Survey delineated high-permeability zones in the Upper Floridan aquifer, using borehole-geophysical data collected from the open interval of the test well. A composite water sample was collected from the open interval during high-flow conditions, and three discrete water samples were collected from specified depth intervals within the test well during low-flow conditions. Water-quality, source tracer, and age-dating results indicate that the open interval of the test well produces water of consistently high quality throughout its length. The cavernous nature of the open interval makes it likely that the highly permeable zones are interconnected in the aquifer by secondary porosity features.

  20. Genomic testing to determine drug response: measuring preferences of the public and patients using Discrete Choice Experiment (DCE)

    PubMed Central

    2013-01-01

    Background The extent to which a genomic test will be used in practice is affected by factors such as ability of the test to correctly predict response to treatment (i.e. sensitivity and specificity of the test), invasiveness of the testing procedure, test cost, and the probability and severity of side effects associated with treatment. Methods Using discrete choice experimentation (DCE), we elicited preferences of the public (Sample 1, N = 533 and Sample 2, N = 525) and cancer patients (Sample 3, N = 38) for different attributes of a hypothetical genomic test for guiding cancer treatment. Samples 1 and 3 considered the test/treatment in the context of an aggressive curable cancer (scenario A) while the scenario for sample 2 was based on a non-aggressive incurable cancer (scenario B). Results In aggressive curable cancer (scenario A), everything else being equal, the odds ratio (OR) of choosing a test with 95% sensitivity was 1.41 (versus a test with 50% sensitivity) and willingness to pay (WTP) was $1331, on average, for this amount of improvement in test sensitivity. In this scenario, the OR of choosing a test with 95% specificity was 1.24 times that of a test with 50% specificity (WTP = $827). In non-aggressive incurable cancer (scenario B), the OR of choosing a test with 95% sensitivity was 1.65 (WTP = $1344), and the OR of choosing a test with 95% specificity was 1.50 (WTP = $1080). Reducing severity of treatment side effects from severe to mild was associated with large ORs in both scenarios (OR = 2.10 and 2.24 in scenario A and B, respectively). In contrast, patients had a very large preference for 95% sensitivity of the test (OR = 5.23). Conclusion The type and prognosis of cancer affected preferences for genomically-guided treatment. In aggressive curable cancer, individuals placed more emphasis on the sensitivity of the test than on its specificity.
In contrast, for a non-aggressive incurable cancer, individuals placed similar emphasis on the sensitivity and specificity of the test. While the public expressed a strong preference for lowering the severity of side effects, improving the sensitivity of the test had by far the largest influence on patients’ decision to use genomic testing. PMID:24176050
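In DCE models of this kind, the reported odds ratios and willingness-to-pay figures follow mechanically from the estimated utility coefficients: the OR for an attribute change is the exponential of its coefficient, and WTP is the ratio of the attribute coefficient to the negative of the cost coefficient. A minimal sketch, using hypothetical coefficient values rather than the study's estimates:

```python
import math

# Illustrative conditional-logit coefficients (hypothetical values,
# not the estimates reported in the study).
beta_sensitivity = 0.344   # utility gain from raising sensitivity 50% -> 95%
beta_cost = -0.00026       # disutility per dollar of test cost

# Odds ratio for choosing the more sensitive test, all else equal.
odds_ratio = math.exp(beta_sensitivity)

# Willingness to pay: marginal rate of substitution between the
# attribute improvement and cost (dollars per improvement).
wtp = -beta_sensitivity / beta_cost

print(round(odds_ratio, 2), round(wtp))
```

With these illustrative numbers the OR is about 1.41 and the WTP about $1,300, the same order as the figures quoted in the abstract.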

  1. Cast Stone Oxidation Front Evaluation: Preliminary Results For Samples Exposed To Moist Air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langton, C. A.; Almond, P. M.

    The rate of oxidation is important to the long-term performance of reducing salt waste forms because the solubility of some contaminants, e.g., technetium, is a function of oxidation state. TcO{sub 4}{sup -} in the salt solution is reduced to Tc(IV) and has been shown to react with ingredients in the waste form to precipitate low solubility sulfide and/or oxide phases. Upon exposure to oxygen, the compounds containing Tc(IV) oxidize to the pertechnetate ion, Tc(VII)O{sub 4}{sup -}, which is very soluble. Consequently, the rate of technetium oxidation front advancement into a monolith and the technetium leaching profile as a function of depth from an exposed surface are important to waste form performance and ground water concentration predictions. An approach for measuring contaminant oxidation rate (effective contaminant specific oxidation rate) based on leaching of select contaminants of concern is described in this report. In addition, the relationship between reduction capacity and contaminant oxidation is addressed. Chromate (Cr(VI)) was used as a non-radioactive surrogate for pertechnetate, Tc(VII), in Cast Stone samples prepared with 5 M Simulant. Cast Stone spiked with pertechnetate was also prepared and tested. Depth discrete subsamples spiked with Cr were cut from Cast Stone exposed to Savannah River Site (SRS) outdoor ambient temperature fluctuations and moist air. Depth discrete subsamples spiked with Tc-99 were cut from Cast Stone exposed to laboratory ambient temperature fluctuations and moist air. Similar conditions are expected to be encountered in the Cast Stone curing container. The leachability of Cr and Tc-99 and the reduction capacities, measured by the Angus-Glasser method, were determined for each subsample as a function of depth from the exposed surface.
The results obtained to date were focused on continued method development and are preliminary and apply to the sample composition and curing / exposure conditions described in this report. The Cr oxidation front (depth to which soluble Cr was detected) for the Cast Stone sample exposed for 68 days to ambient outdoor temperatures and humid air (total age of sample was 131 days) was determined to be about 35 mm below the exposed top surface of the sample. The Tc oxidation front (the depth at which Tc was insoluble) was not determined. Interpretation of the results indicates that the oxidation front is at least 38 mm below the exposed surface. The sample used for this measurement was exposed to ambient laboratory conditions and humid air for 50 days. The total age of the sample was 98 days. Technetium appears to be more easily oxidized than Cr in the Cast Stone matrix. The oxidized forms of Tc and Cr are soluble and therefore leachable. Longer exposure times are required for both the Cr and Tc spiked samples to better interpret the rate of oxidation. Tc spiked subsamples need to be taken further from the exposed surface to better define and interpret the leachable Tc profile. Finally, Tc(VII) reduction to Tc(IV) appears to occur relatively fast. Results demonstrated that about 95 percent of the Tc(VII) was reduced to Tc(IV) during setting and very early curing for a Cast Stone sample cured 10 days. Additional testing at longer curing times is required to determine whether additional time is required to reduce 100 percent of the Tc(VII) in Cast Stone or whether the Tc loading exceeded the ability of the waste form to reduce 100 percent of the Tc(VII). Additional testing is required for samples cured for longer times. Depth discrete subsampling in a nitrogen glove box is also required to determine whether the 5 percent Tc extracted from the subsamples was the result of the sampling process, which took place in air.
Reduction capacity measurements (per the Angus-Glasser method) performed on depth discrete samples could not be correlated with the amount of chromium or technetium leached from the depth discrete subsamples or with the oxidation front inferred from soluble chromium and technetium (i.e., effective Cr and Tc oxidation fronts). Residual reduction capacity in the oxidized region of the test samples indicates that the remaining reduction capacity is not effective in re-reducing Cr(VI) or Tc(VII) in the presence of oxygen. Depth discrete sampling and leaching is a useful approach for evaluating Cast Stone and other chemically reducing waste forms containing ground granulated blast furnace slag (GGBFS) or other reduction / sequestration reagents to control redox sensitive contaminant chemistry and leachability in the near surface disposal environment. Based on results presented in this report, reduction capacity measured by the Angus-Glasser Ce(IV) method is not an appropriate or meaningful parameter for determining or predicting Tc and Cr oxidation / retention, speciation, or solubility in cementitious materials such as Cast Stone. A model for predicting Tc(IV) oxidation to soluble Tc(VII) should consider the waste form porosity (pathway for oxygen ingress), oxygen source, and the contaminant specific oxidation rates and oxidation fronts. Depth discrete sampling of materials exposed to realistic conditions in combination with short term leaching of crushed samples has potential for advancing the understanding of factors influencing performance. This information can be used to support conceptual model development.

  2. 40 CFR 1051.505 - What special provisions apply for testing snowmobiles?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... provisions for testing snowmobiles: (a) You may perform steady-state testing with either discrete-mode or... both discrete-mode and ramped-modal testing (either in your original application or in an amendment to... as allowed by the Clean Air Act. Measure steady-state emissions as follows: (1) For discrete-mode...

  3. 40 CFR 86.1363-2007 - Steady-state testing with a discrete-mode cycle.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Steady-state testing with a discrete-mode cycle. 86.1363-2007 Section 86.1363-2007 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Exhaust Test Procedures § 86.1363-2007 Steady-state testing with a discrete-mode cycle. This section...

  4. Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Wang, Jian-Zhong

    1993-01-01

    We have designed a cubic spline wavelet decomposition for the Sobolev space H(sup 2)(sub 0)(I), where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This transform maps discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for the initial value boundary problem of nonlinear PDE's. Then, we test the efficiency of the DWT and apply the collocation method to solve linear and nonlinear PDE's.
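The pyramid structure behind such fast wavelet transforms can be illustrated with the simplest orthogonal wavelet. The sketch below substitutes the Haar wavelet for the paper's cubic spline basis (a simplifying assumption): each level splits the signal into smooth and detail coefficients, and orthogonality preserves the signal's energy.

```python
import numpy as np

def haar_dwt(x):
    """Full Haar wavelet decomposition of a length-2^k signal.

    Each pyramid level halves the signal, so the total work is
    O(N + N/2 + ...) = O(N); other fast wavelet schemes, like the
    one in the paper, run in O(N) to O(N log N).
    """
    x = np.asarray(x, dtype=float)
    coeffs = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # smooth part
        diff = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail (wavelet) part
        coeffs.append(diff)
        x = avg
    coeffs.append(x)  # coarsest scaling coefficient
    return coeffs

samples = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
for level in haar_dwt(samples):
    print(level)
```

Because the Haar basis is orthonormal, the sum of squared coefficients across all levels equals the sum of squared input samples, a quick correctness check for any implementation.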

  5. Fast transient digitizer

    DOEpatents

    Villa, Francesco

    1982-01-01

    Method and apparatus for sequentially scanning a plurality of target elements with an electron scanning beam modulated in accordance with variations in a high-frequency analog signal to provide discrete analog signal samples representative of successive portions of the analog signal; coupling the discrete analog signal samples from each of the target elements to a different one of a plurality of high speed storage devices; converting the discrete analog signal samples to equivalent digital signals; and storing the digital signals in a digital memory unit for subsequent measurement or display.

  6. Discrete Structure-Point Testing: Problems and Alternatives. TESL Reporter, Vol. 9, No. 4.

    ERIC Educational Resources Information Center

    Aitken, Kenneth G.

    This paper presents some reasons for reconsidering the use of discrete structure-point tests of language proficiency, and suggests an alternative basis for designing proficiency tests. Discrete point tests are one of the primary tools of the audio-lingual method of teaching a foreign language and are based on certain assumptions, including the…

  7. Beta oscillations define discrete perceptual cycles in the somatosensory domain.

    PubMed

    Baumgarten, Thomas J; Schnitzler, Alfons; Lange, Joachim

    2015-09-29

    Whether seeing a movie, listening to a song, or feeling a breeze on the skin, we coherently experience these stimuli as continuous, seamless percepts. However, there are rare perceptual phenomena that argue against continuous perception but, instead, suggest discrete processing of sensory input. Empirical evidence supporting such a discrete mechanism, however, remains scarce and comes entirely from the visual domain. Here, we demonstrate compelling evidence for discrete perceptual sampling in the somatosensory domain. Using magnetoencephalography (MEG) and a tactile temporal discrimination task in humans, we find that oscillatory alpha- and low beta-band (8-20 Hz) cycles in primary somatosensory cortex represent neurophysiological correlates of discrete perceptual cycles. Our results agree with several theoretical concepts of discrete perceptual sampling and empirical evidence of perceptual cycles in the visual domain. Critically, these results show that discrete perceptual cycles are not domain-specific, and thus restricted to the visual domain, but extend to the somatosensory domain.

  8. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  9. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.

    PubMed

    Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao

    2015-04-01

    Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnoses and classification. However, these data are usually redundant and noisy, and only a subset of them present distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography based optimization is proposed to select a good subset of informative genes relevant to the classification. In the proposed algorithm, first, the Fisher-Markov selector is used to choose a fixed number of genes. Second, to make biogeography based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance the exploration and exploitation ability. Discrete biogeography based optimization, termed DBBO, is then constructed by integrating the discrete migration model and the discrete mutation model. Finally, the DBBO method is used for feature selection, with three classifiers evaluated under 10-fold cross-validation. To show the effectiveness and efficiency of the algorithm, the proposed method is tested on four breast cancer benchmark datasets. Compared with the genetic algorithm, particle swarm optimization, the differential evolution algorithm, and hybrid biogeography based optimization, experimental results demonstrate that the proposed method is better than, or at least comparable with, previous methods from the literature when considering the quality of the solutions obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
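The test itself is a standard likelihood ratio comparison: because the MNL (or MDCEV) model is nested in the semi-nonparametric model, twice the gap in maximized log-likelihoods is asymptotically chi-squared with degrees of freedom equal to the number of extra parameters. A sketch with hypothetical log-likelihood values (not results from the paper):

```python
import math

# Hypothetical maximized log-likelihoods: a standard MNL model nested
# inside a semi-nonparametric alternative with 2 additional parameters.
llf_restricted = -1204.7   # standard Gumbel (MNL) specification
llf_general = -1198.2      # semi-nonparametric alternative

lr_stat = 2.0 * (llf_general - llf_restricted)

# For df = 2 the chi-squared survival function has the closed form
# exp(-x/2); for other df, use a chi-squared table or library routine.
p_value = math.exp(-lr_stat / 2.0)

# A small p-value rejects the standard Gumbel assumption.
print(round(lr_stat, 1), p_value < 0.05)
```

With these illustrative numbers the statistic is 13.0 on 2 degrees of freedom, which rejects the restricted specification at the 5% level.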

  11. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  12. 3D imaging of nanomaterials by discrete tomography.

    PubMed

    Batenburg, K J; Bals, S; Sijbers, J; Kübel, C; Midgley, P A; Hernandez, J C; Kaiser, U; Encina, E R; Coronado, E A; Van Tendeloo, G

    2009-05-01

    The field of discrete tomography focuses on the reconstruction of samples that consist of only a few different materials. Ideally, a three-dimensional (3D) reconstruction of such a sample should contain only one grey level for each of the compositions in the sample. By exploiting this property in the reconstruction algorithm, either the quality of the reconstruction can be improved significantly, or the number of required projection images can be reduced. The discrete reconstruction typically contains fewer artifacts and does not have to be segmented, as it already contains one grey level for each composition. Recently, a new algorithm, called discrete algebraic reconstruction technique (DART), has been proposed that can be used effectively on experimental electron tomography datasets. In this paper, we propose discrete tomography as a general reconstruction method for electron tomography in materials science. We describe the basic principles of DART and show that it can be applied successfully to three different types of samples, consisting of embedded ErSi(2) nanocrystals, a carbon nanotube grown from a catalyst particle and a single gold nanoparticle, respectively.

  13. An alternative to traditional goodness-of-fit tests for discretely measured continuous data

    Treesearch

    KaDonna C. Randolph; Bill Seaver

    2007-01-01

    Traditional goodness-of-fit tests such as the Kolmogorov-Smirnov and x2 tests are easily applied to data of the continuous or discrete type, respectively. Occasionally, however, the case arises when continuous data are recorded into discrete categories due to an imprecise measurement system. In this instance, the traditional goodness-of-fit...

  14. Discrete False-Discovery Rate Improves Identification of Differentially Abundant Microbes.

    PubMed

    Jiang, Lingjing; Amir, Amnon; Morton, James T; Heller, Ruth; Arias-Castro, Ery; Knight, Rob

    2017-01-01

    Differential abundance testing is a critical task in microbiome studies that is complicated by the sparsity of data matrices. Here we adapt for microbiome studies a solution from the field of gene expression analysis to produce a new method, discrete false-discovery rate (DS-FDR), that greatly improves the power to detect differential taxa by exploiting the discreteness of the data. Additionally, DS-FDR is relatively robust to the number of noninformative features, and thus removes the problem of filtering taxonomy tables by an arbitrary abundance threshold. We show by using a combination of simulations and reanalysis of nine real-world microbiome data sets that this new method outperforms existing methods at the differential abundance testing task, producing a false-discovery rate that is up to threefold more accurate, and halves the number of samples required to find a given difference (thus increasing the efficiency of microbiome experiments considerably). We therefore expect DS-FDR to be widely applied in microbiome studies. IMPORTANCE DS-FDR can achieve higher statistical power to detect significant findings in sparse and noisy microbiome data compared to the commonly used Benjamini-Hochberg procedure and other FDR-controlling procedures.
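For contrast, the Benjamini-Hochberg procedure that DS-FDR is benchmarked against fits in a few lines. This is the standard BH step-up rule, not the DS-FDR algorithm itself:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean rejection list controlling FDR at level alpha."""
    m = len(pvals)
    # Sort p-values while remembering their original positions.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k_max = rank
    # Reject every hypothesis whose rank is at or below k_max.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(benjamini_hochberg(pvals))
```

DS-FDR's advantage in the paper comes from replacing the continuous p-value null with a permutation-based null that respects the discreteness of sparse count data; the step-up logic above stays conceptually the same.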

  15. Stochastic Stability of Sampled Data Systems with a Jump Linear Controller

    NASA Technical Reports Server (NTRS)

    Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven

    2004-01-01

    In this paper an equivalence between the stochastic stability of a sampled-data system and its associated discrete-time representation is established. The sampled-data system consists of a deterministic, linear, time-invariant, continuous-time plant and a stochastic, linear, time-invariant, discrete-time, jump linear controller. The jump linear controller models computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. This paper shows that the known equivalence between the stability of a deterministic sampled-data system and the associated discrete-time representation holds even in a stochastic framework.
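The discrete-time representation referred to above comes from sampling the continuous-time plant under a zero-order hold. A scalar sketch (the plant coefficients and sample period are illustrative, not from the paper):

```python
import math

# Continuous-time scalar plant  x' = a*x + b*u,  sampled with period T
# under a zero-order hold (u held constant between samples).
a, b, T = -2.0, 1.0, 0.1

# Exact discrete-time representation  x[k+1] = ad*x[k] + bd*u[k].
ad = math.exp(a * T)
bd = (ad - 1.0) / a * b

# |ad| < 1 exactly when a < 0: the discretization preserves stability,
# mirroring the stability equivalence established in the paper.
print(ad, bd, abs(ad) < 1.0)
```

For matrix-valued plants the same construction uses the matrix exponential (e.g., scipy.linalg.expm); the jump linear controller then switches among discrete-time gains according to a Markov chain.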

  16. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
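The package is written in R, but the question it addresses is easy to probe by simulation. The sketch below (Python, with illustrative parameters) pools distances between consecutive successes over many fixed-length Bernoulli series and measures the share of short distances, the quantity the package's exact test compares against its null distribution:

```python
import random

def success_gaps(n, p, rng):
    """Distances between consecutive successes in n Bernoulli(p) trials."""
    positions = [i for i in range(n) if rng.random() < p]
    return [b - a for a, b in zip(positions, positions[1:])]

rng = random.Random(42)

# Pool gaps over many fixed-length series and compare the share of
# short distances to what pure randomness would produce (for p = 0.1,
# P(gap <= 2) is roughly 0.1 + 0.9 * 0.1 = 0.19).
gaps = []
for _ in range(2000):
    gaps.extend(success_gaps(n=100, p=0.1, rng=rng))

short = sum(1 for g in gaps if g <= 2) / len(gaps)
print(round(short, 3))
```

If successes tend to group together, as with clustered genes in a genome, this share rises well above the null value, which is what the package's over-representation test formalizes exactly for small samples.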

  17. Modeling of brittle-viscous flow using discrete particles

    NASA Astrophysics Data System (ADS)

    Thordén Haug, Øystein; Barabasch, Jessica; Virgo, Simon; Souche, Alban; Galland, Olivier; Mair, Karen; Abe, Steffen; Urai, Janos L.

    2017-04-01

    Many geological processes involve both viscous flow and brittle fractures, e.g. boudinage, folding and magmatic intrusions. Numerical modeling of such viscous-brittle materials poses challenges: one has to account for the discrete fracturing, the continuous viscous flow, the coupling between them, and potential pressure dependence of the flow. The Discrete Element Method (DEM) is a numerical technique, widely used for studying fracture of geomaterials. However, the implementation of viscous fluid flow in discrete element models is not trivial. In this study, we model quasi-viscous fluid flow behavior using Esys-Particle software (Abe et al., 2004). We build on the methodology of Abe and Urai (2012) where a combination of elastic repulsion and dashpot interactions between the discrete particles is implemented. Several benchmarks are presented to illustrate the material properties. Here, we present extensive, systematic material tests to characterize the rheology of quasi-viscous DEM particle packing. We present two tests: a simple shear test and a channel flow test, both in 2D and 3D. In the simple shear tests, simulations were performed in a box, where the upper wall is moved with a constant velocity in the x-direction, causing shear deformation of the particle assemblage. Here, the boundary conditions are periodic on the sides, with constant forces on the upper and lower walls. In the channel flow tests, a piston pushes a sample through a channel by Poiseuille flow. For both setups, we present the resulting stress-strain relationships over a range of material parameters, confining stress and strain rate. Results show power-law dependence between stress and strain rate, with a non-linear dependence on confining force. The material is strain softening under some conditions. Additionally, volumetric strain can be dilatant or compactant, depending on porosity, confining pressure and strain rate.
Constitutive relations are implemented in a way that limits the range of viscosities. For identical pressure and strain rate, an order of magnitude range in viscosity can be investigated. The extensive material testing indicates that DEM particles interacting by a combination of elastic repulsion and dashpots can be used to model viscous flows. This allows us to exploit the fracturing capabilities of the discrete element method and study systems that involve both viscous flow and brittle fracturing. However, the small viscosity range achievable using this approach does constrain the applicability for systems where larger viscosity ranges are required, such as folding of viscous layers of contrasting viscosities. References: Abe, S., Place, D., & Mora, P. (2004). A parallel implementation of the lattice solid model for the simulation of rock mechanics and earthquake dynamics. PAGEOPH, 161(11-12), 2265-2277. http://doi.org/10.1007/s00024-004-2562-x Abe, S., and J. L. Urai (2012), Discrete element modeling of boudinage: Insights on rock rheology, matrix flow, and evolution of geometry, JGR., 117, B01407, doi:10.1029/2011JB00855

  18. Diurnal variations in metal concentrations in the Alamosa River and Wightman Fork, southwestern Colorado, 1995-97

    USGS Publications Warehouse

    Ortiz, Roderick F.; Stogner, Sr., Robert W.

    2001-01-01

    A comprehensive sampling network was implemented in the Alamosa River Basin from 1995 to 1997 to address data gaps identified as part of the ecological risk assessment of the Summitville Superfund site. Aluminum, copper, iron, and zinc were identified as the constituents of concern for the risk assessment. Water-quality samples were collected at six sites on the Alamosa River and Wightman Fork by automatic samplers. Several discrete (instantaneous) samples were collected over 24 hours at each site during periods of high diurnal variations in streamflow (May through September). The discrete samples were analyzed individually and duplicate samples were composited to produce a single sample that represented the daily-mean concentration. The diurnal variations in concentration with respect to the theoretical daily-mean concentration (maximum minus minimum divided by daily mean) are presented. Diurnal metal concentrations were highly variable in the Alamosa River and Wightman Fork. The concentration of a metal at a single site could change by several hundred percent during one diurnal cycle. The largest percent change in metal concentrations was observed for aluminum and iron. Zinc concentrations varied the least of the four metals. No discernible or predictable pattern was indicated in the timing of the daily mean, maximum, or minimum concentrations. The percentage of discrete sample concentrations that varied from the daily-mean concentration by thresholds of plus or minus 10, 25, and 50 percent was evaluated. Between 50 and 75 percent of discrete-sample concentrations varied from the daily-mean concentration by more than plus or minus 10 percent. The percentage of samples exceeding given thresholds generally was smaller during the summer period than the snowmelt period. 
Sampling strategies are critical to accurately define variability in constituent concentration, and conversely, understanding constituent variability is important in determining appropriate sampling strategies. During nonsteady-state periods, considerable errors in estimates of daily-mean concentration are possible if based on one discrete sample. Flow-weighting multiple discrete samples collected over a diurnal cycle provides a better estimate of daily-mean concentrations during nonsteady-state periods.
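The flow-weighting recommended above amounts to weighting each discrete concentration by the streamflow concurrent with its collection, and the threshold analysis compares each discrete sample against the daily mean. A minimal sketch (concentration and flow values hypothetical):

```python
def flow_weighted_mean(concentrations, flows):
    """Daily-mean concentration estimated by weighting each discrete
    sample by the streamflow measured when it was collected."""
    load = sum(c * q for c, q in zip(concentrations, flows))
    return load / sum(flows)

def percent_departures(concentrations, daily_mean):
    """Percent departure of each discrete sample from the daily mean,
    for checking against thresholds such as +/-10, 25, or 50 percent."""
    return [100.0 * (c - daily_mean) / daily_mean for c in concentrations]

# Four discrete samples over one diurnal cycle (hypothetical values):
conc = [1.2, 3.0, 2.4, 0.9]      # metal concentration, mg/L
flow = [10.0, 40.0, 25.0, 8.0]   # streamflow at each sample time
daily_mean = flow_weighted_mean(conc, flow)
departures = percent_departures(conc, daily_mean)
```

In this toy cycle the first discrete sample departs from the flow-weighted daily mean by 50 percent, illustrating the error possible when a single sample is taken during a nonsteady-state period.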

  19. Blood Based Biomarkers of Early Onset Breast Cancer

    DTIC Science & Technology

    2016-12-01

    discretizes the data, and also using logistic elastic net – a form of linear regression - we were unable to build a classifier that could accurately...classifier for differentiating cases from controls off discretized data. The first pass analysis demonstrated a 35 gene signature that differentiated...to the discretized data for mRNA gene signature, the samples used to “train” were also included in the final samples used to “test” the algorithm

  20. Small-kernel, constrained least-squares restoration of sampled image data

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Park, Stephen K.

    1992-01-01

    Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.

  1. 40 CFR 1042.525 - How do I adjust emission levels to account for infrequently regenerating aftertreatment devices?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... this section for how to adjust discrete-mode testing. For this section, “regeneration” means an... ramped-modal cycle, or on average less than once per typical mode in a discrete-mode test. (a) Developing... modes of a discrete-mode steady-state test. You may use either of the following different approaches for...

  2. Reaction times to weak test lights. [psychophysics biological model

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.; Ahumada, P.; Welsh, D.

    1984-01-01

    Maloney and Wandell (1984) describe a model of the response of a single visual channel to weak test lights. The initial channel response is a linearly filtered version of the stimulus. The filter output is randomly sampled over time. Each time a sample occurs, there is some probability, increasing with the magnitude of the sampled response, that a discrete detection event is generated. Maloney and Wandell derive the statistics of the detection events. In this paper, a test is conducted of the hypothesis that reaction-time responses to the presence of a weak test light are initiated at the first detection event. This makes it possible to extend the application of the model to lights that are slightly above threshold but still within the linear operating range of the visual system. A parameter-free prediction of the Maloney and Wandell model for lights detected by this statistic is tested. The data are in agreement with the prediction.
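The first-detection-event hypothesis can be illustrated with a small simulation: sample the filtered channel response at a sequence of times and trigger a detection with a response-dependent probability; the reaction time is the time of the first detection. A toy sketch (all functions and values hypothetical, not the authors' model parameters):

```python
import random

def reaction_time(sample_times, response, detect_prob, rng):
    """Time of the first discrete detection event, or None if none occurs.

    sample_times : times at which the channel output is randomly sampled
    response     : t -> linearly filtered channel response at time t
    detect_prob  : maps a response magnitude to a detection probability
    """
    for t in sample_times:
        if rng.random() < detect_prob(response(t)):
            return t
    return None

# Deterministic demo: the response exceeds threshold from t = 0.3 on,
# so the first detection event (the reaction time) occurs at t = 0.3.
rng = random.Random(0)
rt = reaction_time(
    [0.1, 0.2, 0.3, 0.4],
    response=lambda t: 1.0 if t >= 0.3 else 0.0,
    detect_prob=lambda r: 1.0 if r >= 1.0 else 0.0,
    rng=rng,
)
```

With a graded (rather than step) detection probability, repeating the simulation yields a reaction-time distribution whose statistics can be compared against data.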

  3. Eliciting population preferences for mass colorectal cancer screening organization.

    PubMed

    Nayaradou, Maximilien; Berchi, Célia; Dejardin, Olivier; Launoy, Guy

    2010-01-01

    The implementation of mass colorectal cancer (CRC) screening is a public health priority. Population participation is fundamental for the success of CRC screening as for any cancer screening program. The preferences of the population may influence their likelihood of participation. The authors sought to elicit population preferences for CRC screening test characteristics to improve the design of CRC screening campaigns. A discrete choice experiment was used. Questionnaires were compiled with a set of pairs of hypothetical CRC screening scenarios. The survey was conducted by mail from June 2006 to October 2006 on a representative sample of 2000 inhabitants, aged 50 to 74 years from the northwest of France, who were randomly selected from electoral lists. Questionnaires were sent to 2000 individuals, each of whom made 3 or 4 discrete choices between hypothetical tests that differed in 7 attributes: how screening is offered, process, sensitivity, rate of unnecessary colonoscopy, expected mortality reduction, method of screening test result transmission, and cost. Complete responses were received from 656 individuals (32.8%). The attributes that influenced population preferences included expected mortality reduction, sensitivity, cost, and process. Participants from high social classes were particularly influenced by sensitivity. The results demonstrate that the discrete choice experiment provides information on patient preferences for CRC screening: improving screening program effectiveness, for instance, by improving test sensitivity (the most valued attribute) would increase satisfaction among the general population with regard to CRC screening programs. Additional studies are required to study how patient preferences actually affect adherence to regular screening programs.

  4. 40 CFR 1065.309 - Continuous gas analyzer system-response and updating-recording verification-for gas analyzers...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... discrete-mode testing. For this check we consider water vapor a gaseous constituent. This verification does... for water removed from the sample done in post-processing according to § 1065.659 and it does not... humidification vessel that contains water. You must humidify NO2 span gas with another moist gas stream. We...

  5. 40 CFR 1065.309 - Continuous gas analyzer system-response and updating-recording verification-for gas analyzers...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... discrete-mode testing. For this check we consider water vapor a gaseous constituent. This verification does... for water removed from the sample done in post-processing according to § 1065.659 (40 CFR 1066.620 for... contains water. You must humidify NO2 span gas with another moist gas stream. We recommend humidifying your...

  6. The effect of presenting information about invasive follow-up testing on individuals' noninvasive colorectal cancer screening participation decision: results from a discrete choice experiment.

    PubMed

    Benning, Tim M; Dellaert, Benedict G C; Severens, Johan L; Dirksen, Carmen D

    2014-07-01

    Many national colorectal cancer screening campaigns have a similar structure. First, individuals are invited to take a noninvasive screening test, and, second, in the case of a positive screening test result, they are advised to undergo a more invasive follow-up test. The objective of this study was to investigate how much individuals' participation decision in noninvasive screening is affected by the presence or absence of detailed information about invasive follow-up testing and how this effect varies over screening tests. We used a labeled discrete choice experiment of three noninvasive colorectal cancer screening types with two versions that did or did not present respondents with detailed information about the possible invasive follow-up test (i.e., colonoscopy) and its procedure. We used data from 631 Dutch respondents aged 55 to 75 years. Each respondent received only one of the two versions (N = 310 for the invasive follow-up test information specification version, and N = 321 for the no-information specification version). Mixed logit model results show that detailed information about the invasive follow-up test negatively affects screening participation decisions. This effect can be explained mainly by a decrease in choice shares for the most preferred screening test (a combined stool and blood sample test). Choice share simulations based on the discrete choice experiment indicated that presenting invasive follow-up test information decreases screening participation by 4.79%. Detailed information about the invasive follow-up test has a negative effect on individuals' screening participation decisions in noninvasive colorectal cancer screening campaigns. This result poses new challenges for policymakers who aim not only to increase uptake but also to provide full disclosure to potential screening participants. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
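The reported 4.79% participation drop comes from choice-share simulation under the fitted model; the mechanics can be sketched with a plain multinomial logit (the utilities and the information penalty below are hypothetical illustrations, not the study's estimates):

```python
import math

def choice_shares(utilities):
    """Multinomial-logit choice probabilities over the alternatives."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def participation(utilities_screen, utility_optout=0.0):
    """Share choosing any screening test rather than opting out."""
    shares = choice_shares(utilities_screen + [utility_optout])
    return 1.0 - shares[-1]

# Hypothetical utilities for (combined stool+blood test, stool test);
# presenting follow-up test information is modeled as a utility penalty.
base = participation([1.0, 0.5])
with_info = participation([1.0 - 0.4, 0.5 - 0.4])
drop = base - with_info
```

Lowering the screening alternatives' utilities shifts choice share toward the opt-out alternative, which is the mechanism behind the simulated decrease in participation.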

  7. Statistically optimal analysis of state-discretized trajectory data from multiple thermodynamic states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hao; Mey, Antonia S. J. S.; Noé, Frank

    2014-12-07

    We propose a discrete transition-based reweighting analysis method (dTRAM) for analyzing configuration-space-discretized simulation trajectories produced at different thermodynamic states (temperatures, Hamiltonians, etc.). dTRAM provides maximum-likelihood estimates of stationary quantities (probabilities, free energies, expectation values) at any thermodynamic state. In contrast to the weighted histogram analysis method (WHAM), dTRAM does not require data to be sampled from global equilibrium, and can thus produce superior estimates for enhanced sampling data such as parallel/simulated tempering, replica exchange, umbrella sampling, or metadynamics. In addition, dTRAM provides optimal estimates of Markov state models (MSMs) from the discretized state-space trajectories at all thermodynamic states. Under suitable conditions, these MSMs can be used to calculate kinetic quantities (e.g., rates, timescales). In the limit of a single thermodynamic state, dTRAM estimates a maximum-likelihood reversible MSM, while in the limit of uncorrelated sampling data, dTRAM is identical to WHAM. dTRAM is thus a generalization of both estimators.
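In the single-thermodynamic-state limit mentioned above, and leaving aside the reversibility constraint, the maximum-likelihood MSM reduces to row-normalizing a transition count matrix built from the discretized trajectory. A minimal sketch (trajectory values hypothetical; dTRAM itself additionally couples counts across thermodynamic states):

```python
def count_matrix(traj, n_states, lag=1):
    """Transition counts between discretized states at a given lag time."""
    counts = [[0] * n_states for _ in range(n_states)]
    for i, j in zip(traj[:-lag], traj[lag:]):
        counts[i][j] += 1
    return counts

def transition_matrix(counts):
    """Row-normalized maximum-likelihood MSM transition matrix."""
    T = []
    for row in counts:
        total = sum(row)
        T.append([c / total if total else 0.0 for c in row])
    return T

# A short two-state discretized trajectory (hypothetical):
traj = [0, 0, 1, 1, 0, 1, 1, 1, 0, 0]
T = transition_matrix(count_matrix(traj, n_states=2))
```

Each row of T is a probability distribution over successor states; eigenvalues of T at increasing lag times yield the relaxation timescales referred to in the abstract.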

  8. DISCRETE-LEVEL GROUND-WATER MONITORING SYSTEM FOR CONTAINMENT AND REMEDIAL PERFORMANCE ASSESSMENT OBJECTIVES

    EPA Science Inventory

    A passive discrete-level multilayer ground-water sampler was evaluated to determine its capability to obtain representative discrete-interval samples within the screen intervals of traditional monitoring wells without purging. Results indicate that the device is able to provide ...

  9. An interface for the direct coupling of small liquid samples to AMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ognibene, T. J.; Thomas, A. T.; Daley, P. F.

    We describe the moving wire interface attached to the 1-MV AMS system at LLNL’s Center for Accelerator Mass Spectrometry for the analysis of nonvolatile liquid samples, either as discrete drops or from the direct output of biochemical separatory instrumentation such as high-performance liquid chromatography (HPLC). Discrete samples containing at least a few tens of nanograms of carbon and as little as 50 zmol of 14C can be measured with 3–5% precision in a few minutes. The dynamic range of our system spans approximately 3 orders of magnitude. Sample-to-sample memory is minimized by the use of fresh targets for each discrete sample or by minimizing the amount of carbon present in an HPLC peak containing a significant amount of 14C. As a result, liquid sample AMS provides a new technology that expands our biomedical AMS program by enabling the measurement of low-level biochemicals in extremely small samples that would otherwise be inaccessible.

  10. An interface for the direct coupling of small liquid samples to AMS

    DOE PAGES

    Ognibene, T. J.; Thomas, A. T.; Daley, P. F.; ...

    2015-05-28

    We describe the moving wire interface attached to the 1-MV AMS system at LLNL’s Center for Accelerator Mass Spectrometry for the analysis of nonvolatile liquid samples, either as discrete drops or from the direct output of biochemical separatory instrumentation such as high-performance liquid chromatography (HPLC). Discrete samples containing at least a few tens of nanograms of carbon and as little as 50 zmol of 14C can be measured with 3–5% precision in a few minutes. The dynamic range of our system spans approximately 3 orders of magnitude. Sample-to-sample memory is minimized by the use of fresh targets for each discrete sample or by minimizing the amount of carbon present in an HPLC peak containing a significant amount of 14C. As a result, liquid sample AMS provides a new technology that expands our biomedical AMS program by enabling the measurement of low-level biochemicals in extremely small samples that would otherwise be inaccessible.

  11. Biological data extraction from imagery - How far can we go? A case study from the Mid-Atlantic Ridge.

    PubMed

    Cuvelier, Daphne; de Busserolles, Fanny; Lavaud, Romain; Floc'h, Estelle; Fabri, Marie-Claire; Sarradin, Pierre-Marie; Sarrazin, Jozée

    2012-12-01

    In the past few decades, hydrothermal vent research has progressed immensely, resulting in higher-quality samples and long-term studies. With time, scientists are becoming more aware of the impacts of sampling on the faunal communities and are looking for less invasive ways to investigate vent ecosystems. In this perspective, imagery analysis plays a very important role. With this study, we test which factors can be quantitatively and accurately assessed from imagery, through comparison with faunal sampling. Twelve instrumented chains were deployed on the Atlantic Eiffel Tower hydrothermal edifice and the corresponding study sites were subsequently sampled. Discrete, quantitative samples were compared to the imagery recorded during the experiment. An observer effect was tested by comparing imagery data gathered by different scientists. Most factors based on image analyses concerning Bathymodiolus azoricus mussels were shown to be valid representations of the corresponding samples. Additional ecological assets, based exclusively on imagery, were included. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. 40 CFR 1051.615 - What are the special provisions for certifying small recreational engines?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... dynamometer using the equipment and procedures of 40 CFR part 1065 with either discrete-mode or ramped-modal... discrete-mode and ramped-modal testing (either in your original application or in an amendment to your... by the Clean Air Act. Measure steady-state emissions as follows: (1) For discrete-mode testing...

  13. Physical and chemical properties of San Francisco Bay, California, 1980

    USGS Publications Warehouse

    Ota, Allan Y.; Schemel, L.E.; Hager, S.W.

    1989-01-01

    The U.S. Geological Survey conducted hydrologic investigations in both the deep-water channels and the shallow-water regions of the San Francisco Bay estuarine system during 1980. Cruises were conducted regularly, usually at two-week intervals. Physical and chemical properties presented in this report include temperature, salinity, suspended particulate matter, turbidity, extinction coefficient, partial pressure of CO2, partial pressure of oxygen, dissolved organic carbon, particulate organic carbon, discrete chlorophyll a, fluorescence of photosynthetic pigments, dissolved silica, dissolved phosphate, nitrate plus nitrite, nitrite, ammonium, dissolved inorganic nitrogen, dissolved nitrogen, dissolved phosphorus, total nitrogen, and total phosphorus. Analytical methods are described. The body of data contained in this report characterizes hydrologic conditions in San Francisco Bay during a year with an average rate of freshwater inflow to the estuary. Concentrations of dissolved silica (discrete-sample) ranged from 3.8 to 310 micro-M in the northern reach of the bay, whereas the range in the southern reach was limited to 63 to 150 micro-M. Concentrations of phosphate (discrete-sample) ranged from 1.3 to 4.4 micro-M in the northern reach, a narrow range in comparison with the 2.2 to 19.0 micro-M range in the southern reach. Concentrations of nitrate plus nitrite (discrete-sample) ranged from near zero to 53 micro-M in the northern reach, and from 2.3 to 64 micro-M in the southern reach. Concentrations of nitrite (discrete-sample) were low in both reaches, exhibiting a range from nearly zero to approximately 2.3 micro-M. Concentrations of ammonium (discrete-sample) ranged from near zero to 14.2 micro-M in the northern reach, and from near zero to 8.3 micro-M in the southern reach. (USGS)

  14. School Psychology Crossroads in America: Discrepancies between Actual and Preferred Discrete Practices and Barriers to Preferred Practice

    ERIC Educational Resources Information Center

    Filter, Kevin J.; Ebsen, Sara; Dibos, Rebecca

    2013-01-01

    A nationally representative sample of American school psychology practitioners were surveyed to analyze discrepancies that they experience between their actual discrete practices and their preferred discrete practices relative to several domains of practice including assessment, intervention, meetings, and continuing education. Discrepancies were…

  15. SEXUAL SPECIES ARE SEPARATED BY LARGER GENETIC GAPS THAN ASEXUAL SPECIES IN ROTIFERS

    PubMed Central

    Tang, Cuong Q; Obertegger, Ulrike; Fontaneto, Diego; Barraclough, Timothy G

    2014-01-01

    Why organisms diversify into discrete species instead of showing a continuum of genotypic and phenotypic forms is an important yet rarely studied question in speciation biology. Does species discreteness come from adaptation to fill discrete niches or from interspecific gaps generated by reproductive isolation? We investigate the importance of reproductive isolation by comparing genetic discreteness, in terms of intra- and interspecific variation, between facultatively sexual monogonont rotifers and obligately asexual bdelloid rotifers. We calculated the age (phylogenetic distance) and average pairwise genetic distance (raw distance) within and among evolutionarily significant units of diversity in six bdelloid clades and seven monogonont clades sampled for 4211 individuals in total. We find that monogonont species are more discrete than bdelloid species with respect to divergence between species but exhibit similar levels of intraspecific variation (species cohesiveness). This pattern arises because bdelloids have diversified into discrete genetic clusters at a faster net rate than monogononts. Although sampling biases or differences in ecology that are independent of sexuality might also affect these patterns, the results are consistent with the hypothesis that bdelloids diversified at a faster rate into less discrete species because their diversification does not depend on the evolution of reproductive isolation. PMID:24975991
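The intra- versus interspecific comparison described above reduces to averaging entries of a pairwise genetic distance matrix within and between units. A minimal sketch (the distance matrix and groupings are hypothetical):

```python
from itertools import combinations

def mean_within(dist, members):
    """Mean pairwise distance within one unit (intraspecific variation)."""
    pairs = list(combinations(members, 2))
    return sum(dist[i][j] for i, j in pairs) / len(pairs)

def mean_between(dist, group_a, group_b):
    """Mean pairwise distance between two units (interspecific gap)."""
    return sum(dist[i][j] for i in group_a for j in group_b) / (
        len(group_a) * len(group_b))

# Toy symmetric distance matrix for four individuals, two per unit
# (hypothetical values):
D = [
    [0.0, 0.1, 1.0, 1.1],
    [0.1, 0.0, 0.9, 1.0],
    [1.0, 0.9, 0.0, 0.2],
    [1.1, 1.0, 0.2, 0.0],
]
within_a = mean_within(D, [0, 1])
between = mean_between(D, [0, 1], [2, 3])
```

A unit is "discrete" in the sense used above when the between-unit mean greatly exceeds the within-unit mean, as it does in this toy matrix.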

  16. Effects of Individual Differences and Job Characteristics on the Psychological Health of Italian Nurses

    PubMed Central

    Zurlo, Maria Clelia; Vallone, Federica; Smith, Andrew P.

    2018-01-01

    The Demand Resources and Individual Effects Model (DRIVE Model) is a transactional model that integrates the Demand-Control-Support and Effort-Reward Imbalance models, emphasising the role of individual characteristics (Coping Strategies; Overcommitment) and job characteristics (Job Demands, Social Support, Decision Latitude, Skill Discretion, Effort, Rewards) in the work-related stress process. The present study aimed to test the DRIVE Model in a sample of 450 Italian nurses and to compare findings with those of a study conducted in a sample of UK nurses. A questionnaire composed of the Ways of Coping Checklist-Revised (WCCL-R); Job Content Questionnaire (JCQ); ERI Test; and Hospital Anxiety and Depression Scale (HADS) was used. Data supported the application of the DRIVE Model to the Italian context, showing significant associations of the individual characteristics of Problem-focused, Seek Advice and Wishful Thinking coping strategies and the job characteristics of Job Demands, Skill Discretion, Decision Latitude, and Effort with perceived levels of Anxiety and Depression. Effort represented the best predictor for psychological health conditions among Italian nurses, and Social Support significantly moderated the effects of Job Demands on perceived levels of Anxiety. The comparison study showed significant differences in the risk profiles of Italian and UK nurses. Findings were discussed in order to define focused interventions to promote nurses’ wellbeing.

  17. Morphological evidence for discrete stocks of yellow perch in Lake Erie

    USGS Publications Warehouse

    Kocovsky, Patrick M.; Knight, Carey T.

    2012-01-01

    Identification and management of unique stocks of exploited fish species are high-priority management goals in the Laurentian Great Lakes. We analyzed whole-body morphometrics of 1430 yellow perch Perca flavescens captured during 2007–2009 from seven known spawning areas in Lake Erie to determine if morphometrics vary among sites and management units to assist in identification of spawning stocks of this heavily exploited species. Truss-based morphometrics (n = 21 measurements) were analyzed using principal component analysis followed by ANOVA of the first three principal components to determine whether yellow perch from the several sampling sites varied morphometrically. Duncan's multiple range test was used to determine which sites differed from one another to test whether morphometrics varied at scales finer than management unit. Morphometrics varied significantly among sites and annually, but differences among sites were much greater. Sites within the same management unit typically differed significantly from one another, indicating morphometric variation at a scale finer than management unit. These results are largely congruent with recently-published studies on genetic variation of yellow perch from many of the same sampling sites. Thus, our results provide additional evidence that there are discrete stocks of yellow perch in Lake Erie and that management units likely comprise multiple stocks.
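The dimensionality reduction used above projects the 21 correlated truss measurements onto principal components before the ANOVA. For just two measurements the leading component has a closed form from the 2x2 covariance matrix, which makes the idea concrete. A toy sketch (function name and data hypothetical):

```python
import math

def principal_axis(xs, ys):
    """Leading principal-component direction of 2-D data, via the
    closed-form eigenvector angle of the 2x2 covariance matrix."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    # angle of the dominant eigenvector of [[sxx, sxy], [sxy, syy]]
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return math.cos(theta), math.sin(theta)

# Two perfectly correlated morphometric measurements (hypothetical):
dx, dy = principal_axis([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 3.0])
```

With 21 measurements the same idea applies via eigendecomposition of the full covariance matrix; the scores on the first few components are then compared among sites with ANOVA.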

  18. 40 CFR 1045.505 - How do I test engines using discrete-mode or ramped-modal duty cycles?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... your own testing. If you submit certification test data collected with both discrete-mode and ramped...-use operation. (d) For full-load operating modes, operate the engine at wide-open throttle. (e) See 40...

  19. Discrete Element Method (DEM) Simulations using PFC3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matt Evans

    Contains input scripts, background information, reduced data, and results associated with the discrete element method (DEM) simulations of interface shear tests, plate anchor pullout tests, and torpedo anchor installation and pullout tests, using the software PFC3D (v4.0).

  20. A Simulation of Readiness-Based Sparing Policies

    DTIC Science & Technology

    2017-06-01

    variant of a greedy heuristic algorithm to set stock levels and estimate overall WS availability. Our discrete event simulation is then used to test the...available in the optimization tools. 14. SUBJECT TERMS readiness-based sparing, discrete event simulation, optimization, multi-indenture...variant of a greedy heuristic algorithm to set stock levels and estimate overall WS availability. Our discrete event simulation is then used to test the

  1. Piagetian conservation of discrete quantities in bonobos (Pan paniscus), chimpanzees (Pan troglodytes), and orangutans (Pongo pygmaeus).

    PubMed

    Suda, Chikako; Call, Josep

    2005-10-01

    This study investigated whether physical discreteness helps apes to understand the concept of Piagetian conservation (i.e. the invariance of quantities). Subjects were four bonobos, three chimpanzees, and five orangutans. Apes were tested on their ability to conserve discrete/continuous quantities in an over-conservation procedure in which two unequal quantities of edible rewards underwent various transformations in front of subjects. Subjects were examined to determine whether they could track the larger quantity of reward after the transformation. Comparison between the two types of conservation revealed that tests with bonobos supported the discreteness hypothesis. Bonobos, but neither chimpanzees nor orangutans, performed significantly better with discrete quantities than with continuous ones. The results suggest that at least bonobos could benefit from the discreteness of stimuli in their acquisition of conservation skills.

  2. Population Fisher information matrix and optimal design of discrete data responses in population pharmacodynamic experiments.

    PubMed

    Ogungbenro, Kayode; Aarons, Leon

    2011-08-01

    In recent years, interest in the application of experimental design theory to population pharmacokinetic (PK) and pharmacodynamic (PD) experiments has increased. The aim is to improve the efficiency and the precision with which parameters are estimated during data analysis and sometimes to increase the power and reduce the sample size required for hypothesis testing. The population Fisher information matrix (PFIM) has been described for uniresponse and multiresponse population PK experiments for design evaluation and optimisation. Despite these developments and the availability of tools for optimal design of population PK and PD experiments, much of the effort has been focused on repeated continuous-variable measurements, with less work being done on repeated discrete-type measurements. Discrete data arise mainly in PDs, e.g. ordinal, nominal, dichotomous or count measurements. This paper implements expressions for the PFIM for repeated ordinal, dichotomous and count measurements based on analysis by a mixed-effects modelling technique. Three simulation studies were used to investigate the performance of the expressions. Example 1 is based on repeated dichotomous measurements, Example 2 is based on repeated count measurements and Example 3 is based on repeated ordinal measurements. Data simulated in MATLAB were analysed using NONMEM (Laplace method) and the glmmML package in R (Laplace and adaptive Gauss-Hermite quadrature methods). The results obtained for Examples 1 and 2 showed good agreement between the relative standard errors obtained using the PFIM and simulations. The results obtained for Example 3 showed the importance of sampling at the most informative time points. Implementation of these expressions will provide the opportunity for efficient design of population PD experiments that involve discrete-type data through design evaluation and optimisation.
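For a one-parameter fixed-effects analogue of the dichotomous case, each Bernoulli observation with success probability p(theta, t) contributes (dp/dtheta)^2 / (p(1-p)) to the Fisher information, which shows directly why some sampling times are more informative than others. A minimal sketch under a hypothetical logistic model (not the paper's mixed-effects examples):

```python
import math

def p(theta, t):
    """Hypothetical logistic response probability at sampling time t."""
    return 1.0 / (1.0 + math.exp(-(theta - t)))

def dp_dtheta(theta, t):
    """Derivative of the logistic p with respect to theta."""
    pt = p(theta, t)
    return pt * (1.0 - pt)

def fisher_info_bernoulli(theta, times):
    """Fisher information for independent dichotomous observations:
    I(theta) = sum over sampling times of (dp/dtheta)^2 / (p * (1 - p))."""
    total = 0.0
    for t in times:
        pt = p(theta, t)
        total += dp_dtheta(theta, t) ** 2 / (pt * (1.0 - pt))
    return total

# Sampling at t = theta (where p = 0.5) is the most informative point:
info_best = fisher_info_bernoulli(1.0, [1.0])
info_all = fisher_info_bernoulli(1.0, [0.0, 1.0, 2.0])
```

For this model the per-observation information reduces to p(1-p), which peaks at p = 0.5; design optimisation selects sampling times accordingly.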

  3. Steeply dipping heaving bedrock, Colorado: Part 2 - Mineralogical and engineering properties

    USGS Publications Warehouse

    Noe, D.C.; Higgins, J.D.; Olsen, H.W.

    2007-01-01

    This paper describes the mineralogical and engineering properties of steeply dipping, differentially heaving bedrock, which has caused severe damage near the Denver area. Several field sites in heave-prone areas have been characterized using high sample densities, numerous testing methodologies, and thousands of sample tests. Hydrometer testing shows that the strata range from siltstone to claystone (33 to 66 percent clay) with occasional bentonite seams (53 to 98 percent clay mixed with calcite). From X-ray diffraction analyses, the claystone contains varying proportions of illite-smectite and discrete (pure) smectite, and the bentonite contains discrete smectite. Accessory minerals include pyrite, gypsum, calcite, and oxidized iron compounds. The dominant exchangeable cation is Ca2+, except where gypsum is prevalent, and Mg2+ and Na+ are elevated. Scanning electron microscope analyses show that the clay fabric is deformed and porous and that pyrite is absent within the weathered zone. Unified Soil Classification for the claystone varies from CL to CH, and the bentonite is CH to MH. Average moisture content values are 17 percent for claystone and 32 percent for bentonite, and these are typically 0 to 5 percent lower than the plastic limit. Swell-consolidation and suction testing shows a full range of swelling potentials from low to very high. These findings confirm that type I (bed-parallel, symmetrical to asymmetrical) heave features are strongly associated with changes in bedrock composition and mineralogy. Composition changes are not necessarily a factor for type II (bed-parallel to bed-oblique, strongly asymmetrical) heave features, which are associated with movements along subsurface shear zones.

  4. Discrete modelling of front propagation in backward piping erosion

    NASA Astrophysics Data System (ADS)

    Tran, Duc-Kien; Prime, Noémie; Froiio, Francesco; Callari, Carlo; Vincens, Eric

    2017-06-01

    A preliminary discrete numerical model of a REV at the front region of an erosion pipe in a cohesive granular soil is briefly presented. The results reported herein refer to a simulation carried out by coupling the Discrete Element Method (DEM) with the Lattice Boltzmann Method (LBM) for the representation of the granular and fluid phases, respectively. The numerical specimen, consisting of bonded grains, is tested under fully-saturated conditions and increasing pressure difference between the inlet (confined) and the outlet (unconfined) flow regions. The key role of compression arches of force chains that transversely cross the sample and carry most of the hydrodynamic actions is pointed out. These arches partition the REV into an upstream region that remains almost intact and a downstream region that gradually degrades and is subsequently eroded in the form of a cluster. Eventually, the collapse of the compression arches causes the upstream region to be also eroded, abruptly, as a whole. A complete presentation of the numerical model and of the results of the simulation can be found in [12].

  5. Test Score Equating Using Discrete Anchor Items versus Passage-Based Anchor Items: A Case Study Using "SAT"® Data. Research Report. ETS RR-14-14

    ERIC Educational Resources Information Center

    Liu, Jinghua; Zu, Jiyun; Curley, Edward; Carey, Jill

    2014-01-01

    The purpose of this study is to investigate the impact of discrete anchor items versus passage-based anchor items on observed score equating using empirical data. This study compares an "SAT"® critical reading anchor that contains proportionally more discrete items, compared to the total tests to be equated, to another anchor that…

  6. Sexual species are separated by larger genetic gaps than asexual species in rotifers.

    PubMed

    Tang, Cuong Q; Obertegger, Ulrike; Fontaneto, Diego; Barraclough, Timothy G

    2014-10-01

    Why organisms diversify into discrete species instead of showing a continuum of genotypic and phenotypic forms is an important yet rarely studied question in speciation biology. Does species discreteness come from adaptation to fill discrete niches or from interspecific gaps generated by reproductive isolation? We investigate the importance of reproductive isolation by comparing genetic discreteness, in terms of intra- and interspecific variation, between facultatively sexual monogonont rotifers and obligately asexual bdelloid rotifers. We calculated the age (phylogenetic distance) and average pairwise genetic distance (raw distance) within and among evolutionarily significant units of diversity in six bdelloid clades and seven monogonont clades sampled for 4211 individuals in total. We find that monogonont species are more discrete than bdelloid species with respect to divergence between species but exhibit similar levels of intraspecific variation (species cohesiveness). This pattern arises because bdelloids have diversified into discrete genetic clusters at a faster net rate than monogononts. Although sampling biases or differences in ecology that are independent of sexuality might also affect these patterns, the results are consistent with the hypothesis that bdelloids diversified at a faster rate into less discrete species because their diversification does not depend on the evolution of reproductive isolation. © 2014 The Authors. Evolution published by Wiley Periodicals, Inc. on behalf of The Society for the Study of Evolution.

  7. Reliability of hybrid microcircuit discrete components

    NASA Technical Reports Server (NTRS)

    Allen, R. V.

    1972-01-01

    Data accumulated during 4 years of research and evaluation of ceramic chip capacitors, ceramic carrier mounted active devices, beam-lead transistors, and chip resistors are presented. Life and temperature coefficient test data, and optical and scanning electron microscope photographs of device failures are presented, and the failure modes are described. Particular attention is given to discrete component qualification, power burn-in, and procedures for testing and screening discrete components. Burn-in requirements and test data are given in support of the 100 percent burn-in policy on all NASA flight programs.

  8. A Nested Modeling Scheme for High-resolution Simulation of the Aquitard Compaction in a Regional Groundwater Extraction Field

    NASA Astrophysics Data System (ADS)

    Aichi, M.; Tokunaga, T.

    2006-12-01

    In fields that have experienced both significant drawdown/land subsidence and subsequent recovery of groundwater potential, the temporal change of the effective stress in the clayey layers is not simple. Conducting consolidation tests on core samples is a straightforward way to determine the pre-consolidation stress. However, especially in urban areas, the cost of boring and the limited availability of boring sites make it difficult to carry out a sufficient number of tests. Numerical simulation that reproduces the stress history can help in selecting boring sites and complement the results of the laboratory tests. To trace the effective stress profile in the clayey layers by numerical simulation, the discretization in those layers must be fine. At the same time, the modeled domain must be large enough to capture the effect of regional groundwater extraction. Here, we developed a new scheme, based on a domain decomposition technique, to reduce memory consumption. A finite element model of coupled groundwater flow and land subsidence is used for the local model, and a finite difference groundwater flow model is used for the regional model. The local model is discretized with a fine mesh in the clayey layers to reproduce the temporal change of pore pressure in the layers, while the regional model is discretized with a relatively coarse mesh to reproduce the effect of regional groundwater extraction on the groundwater flow. We tested this scheme by comparing its results with those from a finely gridded model of the entire calculation domain. The difference between the two models was small enough that the new scheme can be used for practical problems.

  9. Observability of discretized partial differential equations

    NASA Technical Reports Server (NTRS)

    Cohn, Stephen E.; Dee, Dick P.

    1988-01-01

    It is shown that complete observability of the discrete model used to assimilate data from a linear partial differential equation (PDE) system is necessary and sufficient for asymptotic stability of the data assimilation process. The observability theory for discrete systems is reviewed and applied to obtain simple observability tests for discretized constant-coefficient PDEs. Examples are used to show how numerical dispersion can result in discrete dynamics with multiple eigenvalues, thereby detracting from observability.
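
    The observability test referred to above is, for a linear discrete model, the classical Kalman rank condition. The sketch below is an illustration under simple assumptions (a small upwind advection discretization, a single-point observation), not the paper's specific tests; it also shows how a repeated eigenvalue destroys observability from one observation point, echoing the remark about numerical dispersion.

```python
import numpy as np

def observability_matrix(A, C):
    """Kalman observability matrix: stack C, CA, CA^2, ..., CA^(n-1)."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

def is_observable(A, C):
    return np.linalg.matrix_rank(observability_matrix(A, C)) == A.shape[0]

# Upwind discretization of 1-D periodic advection, observed at one grid point.
n, c = 5, 0.5                                   # grid size, Courant number
A = (1 - c) * np.eye(n) + c * np.roll(np.eye(n), 1, axis=0)
C = np.zeros((1, n)); C[0, 0] = 1.0
print(is_observable(A, C))      # distinct eigenvalues: observable -> True

# Dynamics with a repeated eigenvalue (identity as the extreme case): a
# single-point observation can no longer distinguish all modes.
print(is_observable(np.eye(n), C))              # -> False
```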

  10. Polarisation in spin-echo experiments: Multi-point and lock-in measurements

    NASA Astrophysics Data System (ADS)

    Tamtögl, Anton; Davey, Benjamin; Ward, David J.; Jardine, Andrew P.; Ellis, John; Allison, William

    2018-02-01

    Spin-echo instruments are typically used to measure diffusive processes and the dynamics and motion in samples on ps and ns time scales. A key aspect of the spin-echo technique is to determine the polarisation of a particle beam. We present two methods for measuring the spin polarisation in spin-echo experiments. The current method in use is based on taking a number of discrete readings. The implementation of a new method involves continuously rotating the spin and measuring its polarisation after being scattered from the sample. A control system running on a microcontroller is used to perform the spin rotation and to calculate the polarisation of the scattered beam based on a lock-in amplifier. First experimental tests of the method on a helium spin-echo spectrometer show that it works as intended and has advantages over the discrete approach; in particular, it can track changes in the beam properties throughout the experiment. Moreover, we show that real-time numerical simulations can perfectly describe a complex experiment and can be easily used to develop improved experimental methods prior to a first hardware implementation.
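
    The lock-in principle underlying the new method can be sketched in a few lines: demodulate a weak modulated signal against in-phase and quadrature references, then average to reject noise. All numbers below are illustrative stand-ins, not the spectrometer's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

fs, f_ref = 10_000.0, 37.0            # sample rate and modulation frequency (Hz)
t = np.arange(0, 10.0, 1 / fs)        # 10 s record = 370 reference periods
amp, phase = 0.02, 0.7                # weak "polarisation" signal to recover
sig = amp * np.cos(2 * np.pi * f_ref * t + phase) + rng.normal(0, 0.2, t.size)

# Mix with quadrature references; the mean acts as a low-pass filter that
# rejects noise and off-frequency components.
X = 2 * np.mean(sig * np.cos(2 * np.pi * f_ref * t))
Y = -2 * np.mean(sig * np.sin(2 * np.pi * f_ref * t))
amp_est, phase_est = np.hypot(X, Y), np.arctan2(Y, X)
print(abs(amp_est - amp) < 0.01, abs(phase_est - phase) < 0.3)
```

Averaging over many reference periods is what lets the continuous-rotation scheme pull a small polarisation signal out of noise far larger than the signal itself.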

  11. A robust nonparametric framework for reconstruction of stochastic differential equation models

    NASA Astrophysics Data System (ADS)

    Rajabzadeh, Yalda; Rezaie, Amir Hossein; Amindavar, Hamidreza

    2016-05-01

    In this paper, we employ a nonparametric framework to robustly estimate the functional forms of drift and diffusion terms from discrete stationary time series. The proposed method significantly improves the accuracy of the parameter estimation. In this framework, drift and diffusion coefficients are modeled through orthogonal Legendre polynomials. We employ the least squares regression approach along with the Euler-Maruyama approximation method to learn the coefficients of the stochastic model. Next, a numerical discrete construction of the mean squared prediction error (MSPE) is established to select the order of the Legendre polynomials in the drift and diffusion terms. We show numerically that the new method is robust against variation in sample size and sampling rate. The performance of our method in comparison with the kernel-based regression (KBR) method is demonstrated through simulation and real data. For real data, we test our method on discriminating healthy electroencephalogram (EEG) signals from epileptic ones. We also demonstrate the efficiency of the method through prediction in financial data. In both simulation and real data, our algorithm outperforms the KBR method.
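
    The core regression step can be illustrated with a toy stand-in (an Ornstein-Uhlenbeck process, not the paper's EEG or financial data): simulate the SDE with Euler-Maruyama, then least-squares fit the drift in a Legendre polynomial basis. This is a minimal sketch of the technique, not the authors' implementation.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
theta, sigma, dt, N = 1.0, 0.5, 0.01, 50_000

# Euler-Maruyama simulation of dX = -theta*X dt + sigma dW
x = np.empty(N)
x[0] = 0.0
noise = sigma * np.sqrt(dt) * rng.standard_normal(N - 1)
for k in range(N - 1):
    x[k + 1] = x[k] - theta * x[k] * dt + noise[k]

# Least-squares drift estimate: E[dX | X=x] ~ f(x) dt, with f expanded in
# Legendre polynomials on the rescaled sample range [-1, 1].
scale = np.max(np.abs(x))
V = legendre.legvander(x[:-1] / scale, 3)        # design matrix, degree 3
coef, *_ = np.linalg.lstsq(V, np.diff(x) / dt, rcond=None)

drift_at_1 = legendre.legval(1.0 / scale, coef)  # true drift at x=1 is -1
print(abs(drift_at_1 + 1.0) < 0.3)
```

The diffusion term can be fitted the same way by regressing the squared increments on the basis.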

  12. Determining the sample size for co-dominant molecular marker-assisted linkage detection for a monogenic qualitative trait by controlling the type-I and type-II errors in a segregating F2 population.

    PubMed

    Hühn, M; Piepho, H P

    2003-03-01

    Tests for linkage are usually performed using the lod score method. A critical question in linkage analyses is the choice of sample size. The appropriate sample size depends on the desired type-I error and power of the test. This paper investigates the exact type-I error and power of the lod score method in a segregating F(2) population with co-dominant markers and a qualitative monogenic dominant-recessive trait. For illustration, a disease-resistance trait is considered, where the susceptible allele is recessive. A procedure is suggested for finding the appropriate sample size. It is shown that recessive plants have about twice the information content of dominant plants, so the former should be preferred for linkage detection. In some cases the exact alpha-values for a given nominal alpha may be rather small due to the discrete nature of the sampling distribution in small samples. We show that a gain in power is possible by using exact methods.
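
    The discreteness effect described above (exact α falling below nominal α) is easy to demonstrate with a simpler exact test; the sketch below uses a plain binomial test on recombinant counts as an illustration, not the paper's lod-score derivation.

```python
from scipy.stats import binom

n = 30                # informative plants scored for recombination
r0, r1 = 0.5, 0.25    # recombination fraction under H0 (no linkage) and H1
alpha_nom = 0.05

# Largest critical value c with P(K <= c | r0) <= alpha_nom
c = 0
while binom.cdf(c + 1, n, r0) <= alpha_nom:
    c += 1

alpha_exact = binom.cdf(c, n, r0)   # attained type-I error (below nominal)
power = binom.cdf(c, n, r1)         # exact power against r1
print(c, alpha_exact < alpha_nom, power > 0.8)
```

Because the sampling distribution is discrete, the attained α (here about 0.049) sits strictly below the nominal 0.05, which is exactly the source of the conservatism, and potential power gain from exact methods, noted in the abstract.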

  13. Estimation in a discrete tail rate family of recapture sampling models

    NASA Technical Reports Server (NTRS)

    Gupta, Rajan; Lee, Larry D.

    1990-01-01

    In the context of recapture sampling design for debugging experiments the problem of estimating the error or hitting rate of the faults remaining in a system is considered. Moment estimators are derived for a family of models in which the rate parameters are assumed proportional to the tail probabilities of a discrete distribution on the positive integers. The estimators are shown to be asymptotically normal and fully efficient. Their fixed sample properties are compared, through simulation, with those of the conditional maximum likelihood estimators.

  14. Continuous- and discrete-time stimulus sequences for high stimulus rate paradigm in evoked potential studies.

    PubMed

    Wang, Tao; Huang, Jiang-hua; Lin, Lin; Zhan, Chang'an A

    2013-01-01

    To obtain reliable transient auditory evoked potentials (AEPs) from EEGs recorded using high stimulus rate (HSR) paradigm, it is critical to design the stimulus sequences of appropriate frequency properties. Traditionally, the individual stimulus events in a stimulus sequence occur only at discrete time points dependent on the sampling frequency of the recording system and the duration of stimulus sequence. This dependency likely causes the implementation of suboptimal stimulus sequences, sacrificing the reliability of resulting AEPs. In this paper, we explicate the use of continuous-time stimulus sequence for HSR paradigm, which is independent of the discrete electroencephalogram (EEG) recording system. We employ simulation studies to examine the applicability of the continuous-time stimulus sequences and the impacts of sampling frequency on AEPs in traditional studies using discrete-time design. Results from these studies show that the continuous-time sequences can offer better frequency properties and improve the reliability of recovered AEPs. Furthermore, we find that the errors in the recovered AEPs depend critically on the sampling frequencies of experimental systems, and their relationship can be fitted using a reciprocal function. As such, our study contributes to the literature by demonstrating the applicability and advantages of continuous-time stimulus sequences for HSR paradigm and by revealing the relationship between the reliability of AEPs and sampling frequencies of the experimental systems when discrete-time stimulus sequences are used in traditional manner for the HSR paradigm.

  15. An empirical test of Rogers' original and revised theory of correlates in adolescents.

    PubMed

    Yarcheski, A; Mahon, N E

    1991-12-01

    The purpose of this study was to examine Rogers' original and revised theory of correlates in adolescents. The correlates were measured by Perceived Field Motion, Human Field Rhythms, Creativity, Sentience, Fast Tempo, and Waking Periods. The original theory was tested with data obtained from samples of early (n = 116), middle (n = 116), and late (n = 116) adolescents. The revised theory was tested in a fourth selectively combined sample of adolescents, aged 12 to 21 (n = 89). Data were collected in classroom settings. Although the findings did not support either theory, they did indicate that: (1) four of the six correlates studied performed as correlates when examined in three discrete phases of adolescence, as determined by chronological age, (2) the means of the individual correlates increased slightly in frequency levels developmentally, and (3) the correlates emerged at different frequency levels when examined in adolescents, aged 12 to 21.

  16. Initial evaluation of discrete orthogonal basis reconstruction of ECT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, E.B.; Donohue, K.D.

    1996-12-31

    Discrete orthogonal basis restoration (DOBR) is a linear, non-iterative, and robust method for solving inverse problems for systems characterized by shift-variant transfer functions. This simulation study evaluates the feasibility of using DOBR for reconstructing emission computed tomographic (ECT) images. The imaging system model uses typical SPECT parameters and incorporates the effects of attenuation, spatially-variant PSF, and Poisson noise in the projection process. Sample reconstructions and statistical error analyses for a class of digital phantoms compare the DOBR performance for Hartley and Walsh basis functions. Test results confirm that DOBR with either basis set produces images with good statistical properties. No problems were encountered with reconstruction instability. The flexibility of the DOBR method and its consistent performance warrants further investigation of DOBR as a means of ECT image reconstruction.

  17. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  18. Commercial portion-controlled foods in research studies: how accurate are label weights?

    PubMed

    Conway, Joan M; Rhodes, Donna G; Rumpler, William V

    2004-09-01

    The purpose of this study was to evaluate the reliability of label weights as surrogates for actual weights in commercial portion-controlled foods used in a research setting. Actual weights of replicate samples of 82 portion-controlled food items and 17 discrete units of food from larger packaging were determined over time. Comparison was made to the package label weights for the portion-controlled food items and the per-serving weights for the discrete units. The study was conducted at the US Department of Agriculture's Beltsville Human Nutrition Research Center's Human Study Facility, which houses a metabolic kitchen and human nutrition research facility. The primary outcome measures were the actual and label weights of 99 food items consumed by human volunteers during controlled feeding studies. Statistical analyses performed: the difference between label and actual weights was tested by the paired t test for data that complied with the assumptions of normality; the Wilcoxon signed rank test was used for the remainder of the data. Compliance with federal guidelines for packaged weights was also assessed. Only 37 food items showed no statistical difference between actual and label weights. The actual weights of 15 portion-controlled food items were 1% or more below their label weights, making them potentially out of compliance with federal guidelines. With advance planning and continuous monitoring, well-controlled feeding studies can incorporate portion-controlled food items and discrete units, especially beverages and confectionery products. Dietetics professionals should encourage individuals with diabetes and others on strict dietary regimens to check actual weights of portion-controlled products carefully against package weights.
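
    With a fixed label weight, the paired comparison reduces to a one-sample test on the differences. A minimal scipy sketch with hypothetical weighings (the numbers are invented for illustration, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical replicate weighings (g) of one portion-controlled item whose
# label weight is 240 g but which runs about 2 g light.
label_weight = 240.0
actual = label_weight + rng.normal(-2.0, 1.5, size=20)
diff = actual - label_weight

t_p = stats.ttest_1samp(diff, 0.0).pvalue   # parametric, assumes normality
w_p = stats.wilcoxon(diff).pvalue           # nonparametric signed-rank
print(t_p < 0.05, w_p < 0.05)
```

As in the study, the Wilcoxon signed-rank test serves as the fallback when the differences fail the normality assumption behind the paired t test.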

  19. A Kolmogorov-Smirnov test for the molecular clock based on Bayesian ensembles of phylogenies

    PubMed Central

    Antoneli, Fernando; Passos, Fernando M.; Lopes, Luciano R.

    2018-01-01

    Divergence date estimates are central to understand evolutionary processes and depend, in the case of molecular phylogenies, on tests of molecular clocks. Here we propose two non-parametric tests of strict and relaxed molecular clocks built upon a framework that uses the empirical cumulative distribution (ECD) of branch lengths obtained from an ensemble of Bayesian trees and the well-known non-parametric (one-sample and two-sample) Kolmogorov-Smirnov (KS) goodness-of-fit tests. In the strict clock case, the method consists in using the one-sample KS test to directly test if the phylogeny is clock-like, in other words, if it follows a Poisson law. The ECD is computed from the discretized branch lengths and the parameter λ of the expected Poisson distribution is calculated as the average branch length over the ensemble of trees. To compensate for the auto-correlation in the ensemble of trees and pseudo-replication we take advantage of thinning and effective sample size, two features provided by Bayesian inference MCMC samplers. Finally, it is observed that tree topologies with very long or very short branches lead to Poisson mixtures and in this case we propose the use of the two-sample KS test with samples from two continuous branch length distributions, one obtained from an ensemble of clock-constrained trees and the other from an ensemble of unconstrained trees. Moreover, in this second form the test can also be applied to test for relaxed clock models. The use of a statistically equivalent ensemble of phylogenies to obtain the branch lengths ECD, instead of one consensus tree, yields considerable reduction of the effects of small sample size and provides a gain of power. PMID:29300759
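
    The two-sample form of the test can be sketched directly with scipy. The gamma draws below are synthetic stand-ins for branch lengths pooled from clock-constrained and unconstrained tree ensembles; they are not the paper's data.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

# Stand-ins for branch-length samples pooled from Bayesian tree ensembles
clock_a = rng.gamma(shape=2.0, scale=0.05, size=400)   # clock-constrained run
clock_b = rng.gamma(shape=2.0, scale=0.05, size=400)   # same model, rerun
relaxed = rng.gamma(shape=2.0, scale=0.08, size=400)   # rate-relaxed run

same = ks_2samp(clock_a, clock_b)     # same distribution: should not reject
diff = ks_2samp(clock_a, relaxed)     # different rates: should reject
print(diff.pvalue < 0.05, same.pvalue > diff.pvalue)
```

Using the whole ensemble of trees rather than one consensus tree is what supplies enough branch-length samples for the KS statistic to have useful power.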

  20. MATLAB software for viewing and processing u-channel and discrete sample paleomagnetic data: UPmag and DPmag

    NASA Astrophysics Data System (ADS)

    Xuan, C.; Channell, J. E.

    2009-12-01

    With the increasing efficiency of acquiring paleomagnetic data from u-channel or discrete samples, large volumes of data can be accumulated within a short time period. It is often critical to visualize and process these data in “real time” as measurements proceed, so that the measurement plan can be dictated accordingly. New MATLAB™ software, UPmag and DPmag, are introduced for easy and rapid analysis of natural remanent magnetization (NRM) and laboratory-induced remanent magnetization data for u-channel and discrete samples, respectively. UPmag comprises three MATLAB™ graphic user interfaces: UVIEW, UDIR, and UINT. UVIEW allows users to open and check through measurement data from the magnetometer as well as to correct detected flux-jumps in the data, and to export files for further treatment. UDIR reads the *.dir file generated by UVIEW, automatically calculates component directions using selectable demagnetization range(s) with anchored or free origin, and displays orthogonal projections and stepwise intensity plots for any position along the u-channel sample. UDIR can also display data on equal area stereographic projections and draw virtual geomagnetic poles (VGP) on various map projections. UINT provides a convenient platform to evaluate relative paleointensity estimates using the *.int files that can be exported from UVIEW. DPmag comprises two MATLAB™ graphic user interfaces: DDIR and DFISHER. DDIR reads output files from the discrete sample magnetometer measurement system. DDIR allows users to calculate component directions for each discrete sample, to plot the demagnetization data on orthogonal projections and equal area projections, as well as to show the stepwise intensity data. DFISHER reads the *.pca file exported from DDIR, calculates VGP and Fisher statistics for data from selected groups of samples, and plots the results on equal area projections and as VGPs on a range of map projections. Data and plots from UPmag and DPmag can be exported to various file formats.

  1. Modelling road accident blackspots data with the discrete generalized Pareto distribution.

    PubMed

    Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María

    2014-10-01

    This study shows how road traffic network events, in particular road accidents on blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities on Spanish blackspots in the period 2003-2007, from the Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and a particular case of the previous model). To that end, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two methods for estimating their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: the Chi-square test and the discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that those probabilistic models can be useful to describe the road accident blackspot datasets analyzed. Copyright © 2014 Elsevier Ltd. All rights reserved.
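
    The bootstrap-calibrated discrete KS test mentioned above can be sketched with a simpler discrete model. The Poisson fit below is a stand-in for the paper's discrete Pareto/Lomax fits, and the synthetic counts are illustrative, not the DGT data; the point is the mechanics of resampling the null distribution of the statistic when a parameter is estimated.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)

def ks_stat(data, cdf_fn):
    """Sup distance between the empirical CDF and a fitted discrete CDF."""
    xs = np.arange(data.max() + 1)
    ecdf = np.searchsorted(np.sort(data), xs, side="right") / len(data)
    return np.max(np.abs(ecdf - cdf_fn(xs)))

def poisson_ks_pvalue(data, n_boot=300):
    """Parametric-bootstrap p-value: the null distribution of the KS statistic
    must be resampled because a parameter is estimated from the data."""
    lam = data.mean()
    d_obs = ks_stat(data, lambda x: poisson.cdf(x, lam))
    d_boot = np.empty(n_boot)
    for i in range(n_boot):
        b = rng.poisson(lam, size=len(data))
        d_boot[i] = ks_stat(b, lambda x: poisson.cdf(x, b.mean()))
    return np.mean(d_boot >= d_obs)

# Overdispersed counts (shifted geometric) should be rejected as a Poisson fit
counts = rng.geometric(0.3, size=300) - 1
p_bad = poisson_ks_pvalue(counts)
print(p_bad < 0.05)
```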

  2. Taylor O(h³) Discretization of ZNN Models for Dynamic Equality-Constrained Quadratic Programming With Application to Manipulators.

    PubMed

    Liao, Bolin; Zhang, Yunong; Jin, Long

    2016-02-01

    In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN), and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed Taylor-type discrete-time ZNNK and Taylor-type discrete-time ZNNU models) are then proposed and discussed to perform online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called Euler-type discrete-time ZNNK and Euler-type discrete-time ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, Euler-type discrete-time ZNN models, and Newton iteration have the patterns of O(h³), O(h²), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including the application examples, are carried out, of which the results further substantiate the theoretical findings and the efficacy of Taylor-type discrete-time ZNN models. Finally, the comparisons with Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.
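
    The O(h), O(h²), O(h³) residual hierarchy comes from the truncation order of the underlying difference formulas. As a generic illustration (not the ZNN models themselves), the sketch below measures the empirical convergence order of an Euler-type forward difference against a three-point one-sided Taylor-type formula by halving the step size.

```python
import numpy as np

f, df_true = np.sin, np.cos(1.0)    # test function and exact derivative at x=1

def order(formula, h=1e-2):
    """Empirical order from halving the step: log2 of the error ratio."""
    return np.log2(abs(formula(h) - df_true) / abs(formula(h / 2) - df_true))

euler = lambda h: (f(1.0 + h) - f(1.0)) / h                              # O(h)
taylor3 = lambda h: (3*f(1.0) - 4*f(1.0 - h) + f(1.0 - 2*h)) / (2*h)     # O(h^2)

print(round(order(euler)), round(order(taylor3)))    # prints: 1 2
```

Each additional Taylor term raises the truncation order by one, which is why the Taylor-type ZNN discretization improves the residual pattern from O(h²) to O(h³).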

  3. 40 CFR 1065.525 - Engine starting, restarting, shutdown, and optional repeating of void discrete modes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., and optional repeating of void discrete modes. 1065.525 Section 1065.525 Protection of Environment... repeating of void discrete modes. (a) Start the engine using one of the following methods: (1) Start the... during one of the modes of a discrete-mode test, you may void the results only for that individual mode...

  4. Enzyme-linked immunosorbent assay and polymerase chain reaction performance using Mexican and Guatemalan discrete typing unit I strains of Trypanosoma cruzi.

    PubMed

    Ballinas-Verdugo, Martha; Reyes, Pedro Antonio; Mejia-Dominguez, Ana; López, Ruth; Matta, Vivian; Monteón, Victor M

    2011-12-01

    Thirteen Trypanosoma cruzi isolates from different geographic regions of Mexico and Guatemala belonging to discrete typing unit (DTU) I and a reference CL-Brener (DTU VI) strain were used to perform enzyme-linked immunosorbent assay (ELISA) and polymerase chain reaction (PCR). A panel of 57 Mexican serum samples of patients with chronic chagasic cardiopathy and asymptomatic infected subjects (blood bank donors) were used in this study. DNA from the above 14 T. cruzi strains were extracted and analyzed by PCR using different sets of primers designed from minicircle and satellite T. cruzi DNA. The chronic chagasic cardiopathy serum samples were easily recognized with ELISA regardless of the source of antigenic extract used, even with the CL-Brener TcVI, but positive serum samples from blood bank donors in some cases were not recognized by some Mexican antigenic extracts. On the other hand, PCR showed an excellent performance despite the set of primers used, since all Mexican and Guatemalan T. cruzi strains were correctly amplified. In general terms, Mexican, Guatemalan, and CL-Brener T. cruzi strains are equally good sources of antigen when using the ELISA test to detect Mexican serum samples. However, there are some strains with poor performance. The DTU I strains are easily detected using either kinetoplast or satellite DNA target designed from DTU VI strains.

  5. Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    A new computational analysis tool, the downscaling test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second and third order accuracies on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using downscaling tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to schemes of any order.

  6. Covariant information-density cutoff in curved space-time.

    PubMed

    Kempf, Achim

    2004-06-04

    In information theory, the link between continuous information and discrete information is established through well-known sampling theorems. Sampling theory explains, for example, how frequency-filtered music signals are reconstructible perfectly from discrete samples. In this Letter, sampling theory is generalized to pseudo-Riemannian manifolds. This provides a new set of mathematical tools for the study of space-time at the Planck scale: theories formulated on a differentiable space-time manifold can be equivalent to lattice theories. There is a close connection to generalized uncertainty relations which have appeared in string theory and other studies of quantum gravity.
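
    The classical Euclidean case that the Letter generalizes can be demonstrated numerically: a band-limited signal is perfectly recoverable from its discrete samples via the Whittaker-Shannon interpolation formula. The sketch below truncates the (infinite) series, so a small tolerance is used; all numbers are illustrative.

```python
import numpy as np

fs = 8.0                         # sampling rate (Hz); signal bandwidth 3 Hz < fs/2
n = np.arange(-256, 256)         # sample indices (truncated Shannon series)
x_n = np.cos(2*np.pi*1.0*n/fs) + 0.5*np.sin(2*np.pi*3.0*n/fs)

# Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc(fs*t - n)
t = np.linspace(-2.0, 2.0, 401)  # evaluate well inside the sampled window
x_rec = (x_n * np.sinc(fs * t[:, None] - n)).sum(axis=1)

x_true = np.cos(2*np.pi*1.0*t) + 0.5*np.sin(2*np.pi*3.0*t)
err = np.max(np.abs(x_rec - x_true))
print(err < 0.02)
```

This is the sense in which "frequency-filtered music signals are reconstructible perfectly from discrete samples"; the Letter's contribution is extending such sampling theorems to pseudo-Riemannian manifolds.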

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saboia, A.; Toscano, F.; Walborn, S. P.

    We derive a family of entanglement criteria for continuous-variable systems based on the Rényi entropy of complementary distributions. We show that these entanglement witnesses can be more sensitive than those based on second-order moments, as well as previous tests involving the Shannon entropy [Phys. Rev. Lett. 103, 160505 (2009)]. We extend our results to include the case of discrete sampling. We provide several numerical results which show that our criteria can be used to identify entanglement in a number of experimentally relevant quantum states.

  8. A Discrete Model for Color Naming

    NASA Astrophysics Data System (ADS)

    Menegaz, G.; Le Troter, A.; Sequeira, J.; Boi, J. M.

    2006-12-01

    The ability to associate labels to colors is very natural for human beings. However, this apparently simple task hides very complex and still unsolved problems, spreading over many different disciplines ranging from neurophysiology to psychology and imaging. In this paper, we propose a discrete model for computational color categorization and naming. Starting from the 424 color specimens of the OSA-UCS set, we propose a fuzzy partitioning of the color space. Each of the 11 basic color categories identified by Berlin and Kay is modeled as a fuzzy set whose membership function is implicitly defined by fitting the model to the results of an ad hoc psychophysical experiment (Experiment 1). Each OSA-UCS sample is represented by a feature vector whose components are the memberships to the different categories. The discrete model consists of a three-dimensional Delaunay triangulation of the CIELAB color space which associates each OSA-UCS sample to a vertex of a 3D tetrahedron. Linear interpolation is used to estimate the membership values of any other point in the color space. Model validation is performed both directly, through the comparison of the predicted membership values to the subjective counterparts, as evaluated via another psychophysical test (Experiment 2), and indirectly, through the investigation of its exploitability for image segmentation. The model has proved to be successful in both cases, providing an estimation of the membership values in good agreement with the subjective measures as well as a semantically meaningful color-based segmentation map.
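
    The interpolation scheme, linear (barycentric) interpolation of membership vectors inside a Delaunay tetrahedron, can be sketched with scipy. The random points and Dirichlet membership vectors below are toy stand-ins for the OSA-UCS specimens and the fitted memberships, not the paper's data.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(5)

# Toy stand-in for the OSA-UCS specimens: random points in a 3-D color
# space, each carrying a membership vector over the 11 basic categories.
pts = rng.uniform(0.0, 100.0, size=(40, 3))      # "CIELAB" coordinates
memberships = rng.dirichlet(np.ones(11), size=40)

tri = Delaunay(pts)

def interpolate_membership(q):
    """Linear (barycentric) interpolation inside the enclosing tetrahedron."""
    s = int(tri.find_simplex(q[None, :])[0])
    if s < 0:
        raise ValueError("query point lies outside the triangulation")
    T = tri.transform[s]                          # affine map to barycentric
    b = T[:3] @ (q - T[3])
    bary = np.append(b, 1.0 - b.sum())            # 4 weights, sum to 1
    return bary @ memberships[tri.simplices[s]]

m = interpolate_membership(pts.mean(axis=0))      # centroid lies inside the hull
print(abs(m.sum() - 1.0) < 1e-9, (m >= 0).all())
```

Because the weights are a convex combination, the interpolated vector is itself a valid fuzzy membership (non-negative, summing to one).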

  9. 49 CFR 227.5 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... this part. Exchange rate means the change in sound level, in decibels, which would require halving or... audiometric testing, showing the thresholds of hearing sensitivity measured at discrete frequencies, as well... discrete frequencies. Audiometry can also be referred to as audiometric testing. Baseline audiogram means...

  10. Modeling of light dynamic cone penetration test - Panda 3 ® in granular material by using 3D Discrete element method

    NASA Astrophysics Data System (ADS)

    Tran, Quoc Anh; Chevalier, Bastien; Benz, Miguel; Breul, Pierre; Gourvès, Roland

    2017-06-01

    Recent technological developments of the light dynamic penetration test Panda 3® provide a dynamic load-penetration curve σp - sp for each impact. This curve is influenced by the mechanical and physical properties of the investigated granular media. In order to analyze and exploit the load-penetration curve, a numerical model of the penetration test using the 3D Discrete Element Method is proposed for reproducing tests under dynamic conditions in granular media. All impact parameters used in this model were first calibrated to respect the mechanical and geometrical properties of the hammer and the rod. There is good agreement between experimental results and those obtained from simulations in 2D or 3D. After creating a sample, the Panda 3® test is simulated: it is possible to measure directly the dynamic load-penetration curve occurring at the tip for each impact. Using the force and acceleration measured in the top part of the rod, it is possible to separate the incident and reflected waves and then calculate the tip's load-penetration curve. The load-penetration curve obtained is qualitatively similar to that obtained in experimental tests, and the frequency analysis of the measured signals likewise agrees well with that measured in reality.

  11. Discrete Element Method and its application to materials failure problem on the example of Brazilian Test

    NASA Astrophysics Data System (ADS)

    Klejment, Piotr; Kosmala, Alicja; Foltyn, Natalia; Dębski, Wojciech

    2017-04-01

    The earthquake focus is the point where a rock under external stress starts to fracture. Understanding earthquake nucleation and dynamics thus requires understanding the fracturing of brittle materials. This, however, is a continuing problem and enduring challenge for geoscience. In spite of significant progress we still do not fully understand the failure of rock materials under extreme stress concentration in natural conditions. One reason for this situation is that information about natural or induced seismic events is still not sufficient for a precise description of the physical processes in seismic foci. One possibility for improving this situation is to use numerical simulations, a powerful tool of contemporary physics. For this reason we used an advanced implementation of the Discrete Element Method (DEM). DEM's main task is to calculate the physical properties of materials represented as an assembly of a great number of particles interacting with each other. We analyze the possibility of using DEM to describe materials during the so-called Brazilian Test, a testing method used to obtain the tensile strength of brittle materials. One of the primary reasons for conducting such simulations is to measure macroscopic parameters of the rock sample. We report our efforts to describe the fracturing process during the Brazilian Test from the microscopic point of view and to give insight into the physical processes preceding material failure.

  12. Comparison of vertical discretization techniques in finite-difference models of ground-water flow; example from a hypothetical New England setting

    USGS Publications Warehouse

    Harte, Philip T.

    1994-01-01

    Proper discretization of a ground-water-flow field is necessary for the accurate simulation of ground-water flow by models. Although discretization guidelines are available to ensure numerical stability, current guidelines are flexible enough (particularly in vertical discretization) to allow for some ambiguity of model results. Two common types of vertical-discretization schemes (the horizontal and nonhorizontal model-layer approaches) were tested by simulating sloping hydrogeologic units characteristic of New England. Differences between the results of model simulations using these two approaches are small. Numerical errors associated with the use of nonhorizontal model layers are small (4 percent), even though this discretization technique does not adhere to the strict formulation of the finite-difference method. It was concluded that vertical discretization by means of the nonhorizontal-layer approach has advantages in representing the hydrogeologic units tested and in simplicity of model-data input. In addition, vertical distortion of model cells by this approach may improve the representation of shallow flow processes.

  13. Effect Of Oxidation On Chromium Leaching And Redox Capacity Of Slag-Containing Waste Forms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almond, P. M.; Stefanko, D. B.; Langton, C. A.

    2013-03-01

    The rate of oxidation is important to the long-term performance of reducing salt waste forms because the solubility of some contaminants, e.g., technetium, is a function of oxidation state. TcO 4 - in the salt solution is reduced to Tc(IV) and has been shown to react with ingredients in the waste form to precipitate low-solubility sulfide and/or oxide phases [Shuh, et al., 1994, Shuh, et al., 2000, Shuh, et al., 2003]. Upon exposure to oxygen, the compounds containing Tc(IV) oxidize to the pertechnetate ion, Tc(VII)O 4 -, which is very soluble. Consequently, the rate of technetium oxidation front advancement into a monolith and the technetium leaching profile as a function of depth from an exposed surface are important to waste form performance and ground water concentration predictions. An approach for measuring contaminant oxidation rate (effective contaminant specific oxidation rate) based on leaching of select contaminants of concern is described in this report. In addition, the relationship between reduction capacity and contaminant oxidation is addressed. Chromate was used as a non-radioactive surrogate for pertechnetate in simulated waste form samples. Depth-discrete subsamples were cut from material exposed to Savannah River Site (SRS) field-cured conditions. The subsamples were prepared and analyzed for both reduction capacity and chromium leachability. Results from field-cured samples indicate that the depth at which leachable chromium was detected advanced further into the sample exposed for 302 days than in the sample exposed to air for 118 days (at least 50 mm compared to at least 20 mm). Data for only two exposure time intervals are currently available; data for additional exposure times are required to develop an equation for the oxidation front progression.
Reduction capacity measurements (per the Angus-Glasser method, which is a measurement of the ability of a material to chemically reduce Ce(IV) to Ce(III) in solution) performed on depth discrete samples could not be correlated with the amount of chromium leached from the depth discrete subsamples or with the oxidation front inferred from soluble chromium (i.e., effective Cr oxidation front). Exposure to oxygen (air or oxygen dissolved in water) results in the release of chromium through oxidation of Cr(III) to highly soluble chromate, Cr(VI). Residual reduction capacity in the oxidized region of the test samples indicates that the remaining reduction capacity is not effective in re-reducing Cr(VI) in the presence of oxygen. Consequently, this method for determining reduction capacity may not be a good indicator of the effective contaminant oxidation rate in a relatively porous solid (40 to 60 volume percent porosity). The chromium extracted in depth discrete samples ranged from a maximum of about 5.8 % at about 5 mm (118 day exposure) to about 4 % at about 10 mm (302 day exposure). The use of reduction capacity as an indicator of long-term performance requires further investigation. The carbonation front was also estimated to have advanced to at least 28 mm in 302 days based on visual observation of gas evolution during acid addition during the reduction capacity measurements. Depth discrete sampling of materials exposed to realistic conditions in combination with short term leaching of crushed samples has potential for advancing the understanding of factors influencing performance and will support conceptual model development.

  14. The multiscale expansions of difference equations in the small lattice spacing regime, and a vicinity and integrability test: I

    NASA Astrophysics Data System (ADS)

    Santini, Paolo Maria

    2010-01-01

    We propose an algorithmic procedure (i) to study the 'distance' between an integrable PDE and any discretization of it, in the small lattice spacing epsilon regime, and, at the same time, (ii) to test the (asymptotic) integrability properties of such discretization. This method should provide, in particular, useful and concrete information on the quality of any numerical scheme used to integrate a given integrable PDE. The procedure, illustrated on a fairly general ten-parameter family of discretizations of the nonlinear Schrödinger equation, consists of the following three steps: (i) the construction of the continuous multiscale expansion of a generic solution of the discrete system at all orders in epsilon, following Degasperis et al (1997 Physica D 100 187-211); (ii) the application, to such an expansion, of the Degasperis-Procesi (DP) integrability test (Degasperis A and Procesi M 1999 Asymptotic integrability Symmetry and Perturbation Theory, SPT98, ed A Degasperis and G Gaeta (Singapore: World Scientific) pp 23-37; Degasperis A 2001 Multiscale expansion and integrability of dispersive wave equations Lectures given at the Euro Summer School: 'What is integrability?' (Isaac Newton Institute, Cambridge, UK, 13-24 August); Integrability (Lecture Notes in Physics vol 767) ed A Mikhailov (Berlin: Springer)), to test the asymptotic integrability properties of the discrete system and its 'distance' from its continuous limit; (iii) the use of the main output of the DP test to construct infinitely many approximate symmetries and constants of motion of the discrete system, through novel and simple formulas.

  15. How Small the Number of Test Items Can Be for the Basis of Estimating the Operating Characteristics of the Discrete Responses to Unknown Test Items.

    ERIC Educational Resources Information Center

    Samejima, Fumiko; Changas, Paul S.

    Methods and approaches for estimating the operating characteristics of discrete item responses without assuming any mathematical form have been developed and expanded. It has been made possible that, even if the test information function of a given test is not constant over the ability interval of interest, the test can be used as the Old Test.…

  16. An algorithm for extraction of periodic signals from sparse, irregularly sampled data

    NASA Technical Reports Server (NTRS)

    Wilcox, J. Z.

    1994-01-01

    Temporal gaps in discrete sampling sequences produce spurious Fourier components at the intermodulation frequencies of an oscillatory signal and the temporal gaps, thus significantly complicating spectral analysis of such sparsely sampled data. A new fast Fourier transform (FFT)-based algorithm has been developed, suitable for spectral analysis of sparsely sampled data with a relatively small number of oscillatory components buried in background noise. The algorithm's principal idea has its origin in the so-called 'clean' algorithm used to sharpen images of scenes corrupted by atmospheric and sensor aperture effects. It identifies as the signal's 'true' frequency that oscillatory component which, when passed through the same sampling sequence as the original data, produces a Fourier image that is the best match to the original Fourier space. The algorithm has generally met with success in trials with simulated data with a low signal-to-noise ratio, including those of a type similar to hourly residuals for Earth orientation parameters extracted from VLBI data. For eight oscillatory components in the diurnal and semidiurnal bands, all components with an amplitude-noise ratio greater than 0.2 were successfully extracted for all sequences and duty cycles (greater than 0.1) tested; the amplitude-noise ratios of the extracted signals were as low as 0.05 for high duty cycles and long sampling sequences. When, in addition to these high frequencies, strong low-frequency components are present in the data, the low-frequency components are generally eliminated first, by employing a version of the algorithm that searches for non-integer multiples of the discrete FFT minimum frequency.
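    The core matching idea, passing each candidate oscillation through the same irregular sampling sequence and keeping the one that best reproduces the data, can be sketched as a least-squares frequency scan. This is an illustrative stand-in for the clean-style algorithm, not the authors' implementation; the time span, sample count, true frequency, and noise level are arbitrary choices:

    ```python
    import numpy as np

    def best_frequency(t, y, freqs):
        """For each trial frequency, least-squares fit a sinusoid sampled at
        the same irregular times t, and return the frequency whose fitted
        model best matches the data (smallest squared residual)."""
        best_f, best_res = None, np.inf
        for f in freqs:
            # Design matrix: cosine, sine, and a constant offset.
            A = np.column_stack([np.cos(2 * np.pi * f * t),
                                 np.sin(2 * np.pi * f * t),
                                 np.ones_like(t)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            res = np.sum((y - A @ coef) ** 2)
            if res < best_res:
                best_f, best_res = f, res
        return best_f

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 20, 120))       # sparse, irregular sampling
    y = np.sin(2 * np.pi * 1.3 * t) + 0.2 * rng.standard_normal(t.size)
    freqs = np.linspace(0.5, 2.5, 201)         # trial frequency grid
    print(best_frequency(t, y, freqs))         # recovers a value near 1.3
    ```

    After the strongest component is identified, a clean-style scheme would subtract its fitted contribution from the data and repeat the scan for the next component.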

  17. A novel condition for stable nonlinear sampled-data models using higher-order discretized approximations with zero dynamics.

    PubMed

    Zeng, Cheng; Liang, Shan; Xiang, Shuwen

    2017-05-01

    Continuous-time systems are usually modelled by ordinary differential equations arising from physical laws. However, to use these models in practice, and to analyze or transmit data from such systems, they must invariably first be discretized. More importantly, for digital control of a continuous-time nonlinear system, a good sampled-data model is required. This paper investigates a new consistency condition which is weaker than previously presented conditions. Moreover, given the stability of the high-order approximate model with stable zero dynamics, the novel condition presented stabilizes the exact sampled-data model of the nonlinear system for sufficiently small sampling periods. An insightful interpretation of the obtained results can be made in terms of the stable sampling zero dynamics, and the new consistency condition is, surprisingly, associated with the relative degree of the nonlinear continuous-time system. Our controller design, based on the higher-order approximate discretized model, extends existing methods, which mainly deal with the Euler approximation. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Comparison of soil sampling and analytical methods for asbestos at the Sumas Mountain Asbestos Site-Working towards a toolbox for better assessment.

    PubMed

    Wroble, Julie; Frederick, Timothy; Frame, Alicia; Vallero, Daniel

    2017-01-01

    Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. Environmental Protection Agency (EPA)'s Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than other methods. Samples were collected using both traditional discrete ("grab") samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods or a combination of these two methods. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations that were not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating the results may be more variable than other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos resulted in no clear conclusions regarding preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than discrete samples.
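    The finding that ISM results are less variable than discrete grab samples is what compositing theory predicts: averaging n increments shrinks the sampling standard deviation by roughly a factor of √n. A toy Monte Carlo (with an entirely hypothetical lognormal concentration field, not the Sumas Mountain data) illustrates the effect:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical heterogeneous site: lognormal point concentrations (mg/kg).
    site = rng.lognormal(mean=1.0, sigma=1.2, size=10_000)

    def discrete_sample():
        """One grab sample from a single random location."""
        return rng.choice(site)

    def ism_sample(n_increments=30):
        """Incremental sampling: composite (mean) of many random increments."""
        return rng.choice(site, size=n_increments).mean()

    grabs = np.array([discrete_sample() for _ in range(200)])
    isms = np.array([ism_sample() for _ in range(200)])
    print(f"discrete RSD: {grabs.std() / grabs.mean():.2f}")
    print(f"ISM RSD:      {isms.std() / isms.mean():.2f}")
    ```

    The relative standard deviation of the composited replicates comes out several times smaller than that of the grab samples, mirroring the metals results reported above.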

  19. Comparison of soil sampling and analytical methods for asbestos at the Sumas Mountain Asbestos Site—Working towards a toolbox for better assessment

    PubMed Central

    2017-01-01

    Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. Environmental Protection Agency (EPA)’s Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than other methods. Samples were collected using both traditional discrete (“grab”) samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods or a combination of these two methods. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations that were not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating the results may be more variable than other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos resulted in no clear conclusions regarding preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than discrete samples. PMID:28759607

  20. Discrete ellipsoidal statistical BGK model and Burnett equations

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Dong; Xu, Ai-Guo; Zhang, Guang-Cai; Chen, Zhi-Hua; Wang, Pei

    2018-06-01

    A new discrete Boltzmann model, the discrete ellipsoidal statistical Bhatnagar-Gross-Krook (ES-BGK) model, is proposed to simulate nonequilibrium compressible flows. Compared with the original discrete BGK model, the discrete ES-BGK model has a flexible Prandtl number. For the discrete ES-BGK model at the Burnett level, two kinds of discrete velocity models are introduced, and the relations between nonequilibrium quantities and the viscous stress and heat flux at the Burnett level are established. The model is verified via four benchmark tests. In addition, a new idea is introduced to recover the actual distribution function from the macroscopic quantities and their space derivatives. The recovery scheme works not only for discrete Boltzmann simulations but also for hydrodynamic ones, for example, those based on the Navier-Stokes or the Burnett equations.

  1. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness of fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However spikes have finite width and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868

  2. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.

  3. Chemical contaminants in water and sediment near fish nesting sites in the Potomac River basin: determining potential exposures to smallmouth bass (Micropterus dolomieu)

    USGS Publications Warehouse

    Kolpin, Dana W.; Blazer, Vicki; Gray, James L.; Focazio, Michael J.; Young, John A.; Alvarez, David A.; Iwanowicz, Luke R.; Foreman, William T.; Furlong, Edward T.; Speiran, Gary K.; Zaugg, Steven D.; Hubbard, Laura E.; Meyer, Michael T.; Sandstrom, Mark W.; Barber, Larry B.

    2013-01-01

    The Potomac River basin is an area where a high prevalence of abnormalities such as testicular oocytes (TO), skin lesions, and mortality has been observed in smallmouth bass (SMB, Micropterus dolomieu). Previous research documented a variety of chemicals in regional streams, implicating chemical exposure as one plausible explanation for these biological effects. Six stream sites in the Potomac basin (and one out-of-basin reference site) were sampled to provide an assessment of chemicals in these streams. Potential early life-stage exposure to chemicals detected was assessed by collecting samples in and around SMB nesting areas. Target chemicals included those known to be associated with important agricultural and municipal wastewater sources in the Potomac basin. The prevalence and severity of TO in SMB were also measured to determine potential relations between chemistry and biological effects. A total of 39 chemicals were detected at least once in the discrete-water samples, with atrazine, caffeine, deethylatrazine, simazine, and iso-chlorotetracycline being most frequently detected. Of the most frequently detected chemicals, only caffeine was detected in water from the reference site. No biogenic hormones/sterols were detected in the discrete-water samples. In contrast, 100 chemicals (including six biogenic hormones/sterols) were found in at least one passive-water sample, with 25 detected in all such samples. In addition, 46 chemicals (including seven biogenic hormones/sterols) were found in the bed-sediment samples, with caffeine, cholesterol, indole, para-cresol, and sitosterol detected in all such samples. The number of herbicides detected in discrete-water samples per site had a significant positive relation to TOrank (a nonparametric indicator of TO), with significant positive relations between TOrank and atrazine concentrations in discrete-water samples and between TOrank and total hormone/sterol concentrations in bed-sediment samples.
Such significant correlations do not necessarily imply causation, as these chemical compositions and concentrations likely do not adequately reflect total SMB exposure history, particularly during critical life stages.

  4. Inhomogeneous point-process entropy: An instantaneous measure of complexity in discrete systems

    NASA Astrophysics Data System (ADS)

    Valenza, Gaetano; Citi, Luca; Scilingo, Enzo Pasquale; Barbieri, Riccardo

    2014-05-01

    Measures of entropy have been widely used to characterize complexity, particularly in physiological dynamical systems modeled in discrete time. Current approaches associate these measures to finite single values within an observation window, thus not being able to characterize the system evolution at each moment in time. Here, we propose a new definition of approximate and sample entropy based on the inhomogeneous point-process theory. The discrete time series is modeled through probability density functions, which characterize and predict the time until the next event occurs as a function of the past history. Laguerre expansions of the Wiener-Volterra autoregressive terms account for the long-term nonlinear information. As the proposed measures of entropy are instantaneously defined through probability functions, the novel indices are able to provide instantaneous tracking of the system complexity. The new measures are tested on synthetic data, as well as on real data gathered from heartbeat dynamics of healthy subjects and patients with cardiac heart failure and gait recordings from short walks of young and elderly subjects. Results show that instantaneous complexity is able to effectively track the system dynamics and is not affected by statistical noise properties.
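    For reference, the windowed sample entropy that the instantaneous indices generalize can be sketched in a few lines. This is a simplified textbook version, not the authors' point-process formulation; the tolerance r = 0.2·SD and template length m = 2 are conventional choices:

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=None):
        """Windowed sample entropy: -log of the ratio of (m+1)-point to
        m-point template matches within Chebyshev tolerance r."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()
        def count_matches(length):
            # All overlapping templates of the given length.
            templ = np.lib.stride_tricks.sliding_window_view(x, length)
            count = 0
            for i in range(templ.shape[0] - 1):
                # Chebyshev distance from template i to all later templates.
                d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
                count += np.count_nonzero(d <= r)
            return count
        b, a = count_matches(m), count_matches(m + 1)
        return -np.log(a / b)

    rng = np.random.default_rng(3)
    regular = np.sin(np.linspace(0, 20 * np.pi, 500))   # predictable signal
    noisy = rng.standard_normal(500)                    # unpredictable signal
    print(sample_entropy(regular), sample_entropy(noisy))
    ```

    A regular signal yields many (m+1)-point matches per m-point match and hence low entropy; white noise yields few and hence high entropy. The point-process indices in the paper replace this single windowed value with an instantaneously defined counterpart.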

  5. Bone scaffolds with homogeneous and discrete gradient mechanical properties.

    PubMed

    Jelen, C; Mattei, G; Montemurro, F; De Maria, C; Mattioli-Belmonte, M; Vozzi, G

    2013-01-01

    Bone tissue engineering (TE) uses a scaffold either to induce bone formation from surrounding tissue or to act as a carrier or template for implanted bone cells or other agents. We prepared different bone tissue constructs based on collagen, gelatin and hydroxyapatite using genipin as cross-linking agent. The fabricated constructs released neither collagen nor genipin above toxic levels into the surrounding aqueous environment. Each scaffold was mechanically characterized with compression, swelling, and creep tests, and the respective viscoelastic mechanical models were derived. Mechanical characterization showed a practically elastic behavior for all samples and that the compressive elastic modulus increases with hydroxyapatite (HA) content and is strongly dependent on porosity and water content. Moreover, considering that gradients in cellular and extracellular architecture as well as in mechanical properties are readily apparent in native tissues, we developed discrete functionally graded scaffolds (discrete FGSs) in order to mimic the graded structure of bone tissue. These new structures were mechanically characterized, showing a marked anisotropy like that of native bone tissue. The results obtained show that FGSs could represent valid bone substitutes. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. A novel method to fabricate discrete porous carbon hemispheres and their electrochemical properties as supercapacitors.

    PubMed

    Chen, Jiafu; Lang, Zhanlin; Xu, Qun; Zhang, Jianan; Fu, Jianwei; Chen, Zhimin

    2013-11-07

    A simple and efficient method to produce discrete, hierarchical porous carbon hemispheres (CHs) with high uniformity has been successfully developed by constructing nanoreactors and using low-crosslinked poly(styrene-co-divinylbenzene) (P(St-co-DVB)) capsules as precursors. The samples are characterized by scanning and transmission electron microscopy, Fourier transform infrared and Raman spectroscopy, X-ray diffraction, and N2 adsorption and desorption. With a view to applications, cyclic voltammetry and electrochemical impedance spectroscopy characterizations were also performed. The experimental results show that obtaining discrete, well-formed carbon hemispheres depends on a proper amount of DVB in the P(St-co-DVB) capsules, which provides the shells with adequate thickness and mechanical strength. When the amount of DVB is 35 wt% in the precursors, a high Brunauer-Emmett-Teller surface area of 676 m(2) g(-1) can be obtained for the carbon hemispheres, and an extremely large pore volume of 2.63 cm(3) g(-1) can be achieved at the same time. The electrochemical tests show that the carbon hemispheres have a higher specific capacitance, ca. 83 F g(-1) at 10 mV s(-1), than comparable carbon materials. This method thus provides a platform for extending the fabrication of carbon materials and opens opportunities for applications of carbon materials, including carbon hemispheres, which are important components and substrates for supercapacitors.

  7. Analytical chemistry of PCBs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, M.D.

    Analytical Chemistry of PCBs offers a review of physical, chemical, commercial, environmental and biological properties of PCBs. It also defines and discusses six discrete steps of analysis: sampling, extraction, cleanup, determination, data reduction, and quality assurance. The final chapter provides a discussion on collaborative testing, the ultimate step in method evaluation. Dr. Erickson also provides a bibliography of over 1200 references, critical reviews of primary literature, and five appendices which present ancillary material on PCB nomenclature, physical properties, composition of commercial mixtures, mass spectra characteristics, and PGC/ECD chromatograms.

  8. Valuations of genetic test information for treatable conditions: the case of colorectal cancer screening.

    PubMed

    Kilambi, Vikram; Johnson, F Reed; González, Juan Marcos; Mohamed, Ateesha F

    2014-12-01

    The value of the information that genetic testing services provide can be questioned for insurance-based health systems. The results of genetic tests often do not lead to well-defined clinical interventions; however, Lynch syndrome, a genetic mutation that places carriers at increased risk for colorectal cancer, can be identified through genetic testing, and meaningful health interventions are available via increased colonoscopic surveillance. Valuations of test information for such conditions ought to account for the full impact of interventions and contingent outcomes. The objective of this study was to conduct a discrete-choice experiment to elicit individuals' preferences for genetic test information. A Web-enabled discrete-choice experiment survey was administered to a representative sample of US residents aged 50 years and older. In addition to specifying expenditures on colonoscopies, respondents were asked to make a series of nine selections between two hypothetical genetic tests or a no-test option under the premise that a relative had Lynch syndrome. The hypothetical genetic tests were defined by the probability of developing colorectal cancer, the probability of a false-negative test result, privacy of the result, and out-of-pocket cost. A model specification identifying necessary interactions was derived from assumptions of risk behavior and the decision context and was estimated using random-parameters logit. A total of 650 respondents were contacted, and 385 completed the survey. The monetary equivalent of test information was approximately $1800. Expenditures on colonoscopies to reduce mortality risks affected valuations. Respondents with lower income or who reported being employed valued genetic tests significantly more. Genetic testing may confer benefits through the impact of subsequent interventions on private individuals. Copyright © 2014. Published by Elsevier Inc.
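
    The core logic of such a valuation can be sketched numerically. Below is a minimal, hypothetical illustration of a multinomial (conditional) logit choice model and the resulting monetary equivalent of an attribute change; the coefficients and attribute levels are invented for illustration and are not the estimates reported in the study.

```python
import math

# Hypothetical part-worth utilities from a conditional logit fit
# (illustrative values only, not the study's estimated coefficients).
beta_cost = -0.002   # utility per dollar of out-of-pocket cost
beta_risk = -0.05    # utility per percentage point of colorectal-cancer risk

def choice_probabilities(utilities):
    """Multinomial logit choice probabilities over the offered alternatives."""
    expu = [math.exp(u) for u in utilities]
    total = sum(expu)
    return [e / total for e in expu]

# Two hypothetical tests plus a no-test option, scored on risk and cost.
u_test_a = beta_risk * 20 + beta_cost * 400   # 20% risk, $400 out of pocket
u_test_b = beta_risk * 30 + beta_cost * 150   # 30% risk, $150 out of pocket
u_no_test = beta_risk * 40                    # 40% risk, no cost
probs = choice_probabilities([u_test_a, u_test_b, u_no_test])

# Monetary equivalent of a 10-point risk reduction:
# the utility gain divided by the absolute cost coefficient.
wtp = (-beta_risk * 10) / (-beta_cost)
```

    Under a linear utility specification, the monetary equivalent of any attribute change is the ratio of its utility effect to the cost coefficient, which is how a figure like the study's approximately $1800 is derived from fitted coefficients.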

  9. Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.

    ERIC Educational Resources Information Center

    Holland, Paul W.; Thayer, Dorothy T.

    2000-01-01

    Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…
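
    The kind of loglinear smoothing described above can be sketched directly: model the log of the expected count as a low-degree polynomial of the score and fit by Newton's method for the Poisson likelihood. This is a generic sketch of the technique with invented data, not the authors' code.

```python
import numpy as np

# Observed frequencies of integer test scores 0..10 (illustrative data).
scores = np.arange(11, dtype=float)
counts = np.array([2, 5, 12, 20, 30, 35, 28, 18, 9, 4, 1], dtype=float)

# Loglinear model: log(expected count) is a quadratic polynomial in the score.
X = np.column_stack([np.ones_like(scores), scores, scores ** 2])

# Newton's method for the Poisson log-likelihood, started from a
# least-squares fit to the log counts for numerical stability.
beta = np.linalg.lstsq(X, np.log(counts + 0.5), rcond=None)[0]
for _ in range(50):
    mu = np.exp(X @ beta)
    step = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (counts - mu))
    beta += step
    if np.abs(step).max() < 1e-10:
        break

fitted = np.exp(X @ beta)        # smoothed score frequencies
probs = fitted / fitted.sum()    # fitted score distribution
```

    A property worth noting: the likelihood equations force the fitted distribution to match the sample size and the first two moments of the observed scores, which is what makes such fits useful for equating and smoothing.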

  10. Computer generated hologram from point cloud using graphics processor.

    PubMed

    Chen, Rick H-Y; Wilkinson, Timothy D

    2009-12-20

    Computer generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique.

  11. Numerical Method for Darcy Flow Derived Using Discrete Exterior Calculus

    NASA Astrophysics Data System (ADS)

    Hirani, A. N.; Nakshatrala, K. B.; Chaudhry, J. H.

    2015-05-01

    We derive a numerical method for Darcy flow, and also for Poisson's equation in mixed (first order) form, based on discrete exterior calculus (DEC). Exterior calculus is a generalization of vector calculus to smooth manifolds and DEC is one of its discretizations on simplicial complexes such as triangle and tetrahedral meshes. DEC is a coordinate invariant discretization, in that it does not depend on the embedding of the simplices or the whole mesh. We start by rewriting the governing equations of Darcy flow using the language of exterior calculus. This yields a formulation in terms of flux differential form and pressure. The numerical method is then derived by using the framework provided by DEC for discretizing differential forms and operators that act on forms. We also develop a discretization for a spatially dependent Hodge star that varies with the permeability of the medium. This also allows us to address discontinuous permeability. The matrix representation for our discrete non-homogeneous Hodge star is diagonal, with positive diagonal entries. The resulting linear system of equations for flux and pressure are saddle type, with a diagonal matrix as the top left block. The performance of the proposed numerical method is illustrated on many standard test problems. These include patch tests in two and three dimensions, comparison with analytically known solutions in two dimensions, layered medium with alternating permeability values, and a test with a change in permeability along the flow direction. We also show numerical evidence of convergence of the flux and the pressure. A convergence experiment is included for Darcy flow on a surface. A short introduction to the relevant parts of smooth and discrete exterior calculus is included in this article. We also include a discussion of the boundary condition in terms of exterior calculus.
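
    The structure of the resulting saddle-point system can be illustrated in one dimension. The sketch below is my own minimal construction under the assumptions of a uniform grid and scalar permeability, not the authors' implementation; it exhibits the diagonal, positive Hodge-star-type block in the top-left corner and is checked against an analytic solution.

```python
import numpy as np

n = 50                         # number of cells on [0, 1]
h = 1.0 / n
centers = (np.arange(n) + 0.5) * h
K = 1.0                        # permeability (scalar, homogeneous here)

# G maps cell pressures to pressure jumps across the n+1 faces
# (pressure is 0 outside the domain: Dirichlet boundary conditions).
G = np.zeros((n + 1, n))
for j in range(n + 1):
    if j < n:
        G[j, j] = 1.0
    if j > 0:
        G[j, j - 1] = -1.0

# Diagonal "Hodge star": face-wise resistances (half distance at the boundary).
dist = np.full(n + 1, h)
dist[0] = dist[-1] = h / 2.0
M = np.diag(dist / K)

# Saddle system  [M, G; G^T, 0] [q; p] = [0; -h f]
# (Darcy's law M q + G p = 0, plus mass balance q_{j+1} - q_j = h f_i).
f = np.pi ** 2 * np.sin(np.pi * centers)   # source chosen so p = sin(pi x)
A = np.block([[M, G], [G.T, np.zeros((n, n))]])
rhs = np.concatenate([np.zeros(n + 1), -h * f])
sol = np.linalg.solve(A, rhs)
q, p = sol[:n + 1], sol[n + 1:]
```

    As in the paper's formulation, the top-left block is diagonal with positive entries, and eliminating the flux yields a symmetric positive-definite pressure system.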

  12. Sample size determination for a three-arm equivalence trial of Poisson and negative binomial responses.

    PubMed

    Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen

    2017-01-01

    Assessing equivalence or similarity has drawn much attention recently as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that the required sample size depends heavily on the dispersion parameter. Therefore, mis-specifying a Poisson model for negative binomial data can reduce power by up to 20%, depending on the value of the dispersion parameter.
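
    The dependence of sample size on the dispersion parameter can be sketched with a simple normal-approximation TOST (two one-sided tests) calculation for two arms. This is an illustrative simplification, not the authors' three-arm power function; the rates, margin, and dispersion value are invented.

```python
import math
from statistics import NormalDist

def per_arm_n(var_sum, margin, alpha=0.05, power=0.80):
    """Per-arm sample size for a TOST equivalence test of two means under a
    normal approximation, assuming equal means under the alternative.
    var_sum is Var(Y_T) + Var(Y_R) for one observation from each arm."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(1 - (1 - power) / 2)
    return math.ceil((z_a + z_b) ** 2 * var_sum / margin ** 2)

lam = 2.0       # common event rate under the alternative
margin = 0.5    # equivalence margin on the rate difference
k = 4.0         # negative binomial dispersion: Var = lam + lam**2 / k

n_poisson = per_arm_n(lam + lam, margin)                     # Poisson: Var = lam
n_negbin = per_arm_n(2 * (lam + lam ** 2 / k), margin)       # overdispersed
```

    Because the negative binomial variance exceeds the Poisson variance by lam²/k per arm, the required sample size inflates accordingly, which is the qualitative point the numerical studies make.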

  13. Hydrogeology and water quality of the Floridan aquifer system and effect of Lower Floridan aquifer withdrawals on the Upper Floridan aquifer at Barbour Pointe Community, Chatham County, Georgia, 2013

    USGS Publications Warehouse

    Gonthier, Gerard; Clarke, John S.

    2016-06-02

    Two test wells were completed at the Barbour Pointe community in western Chatham County, near Savannah, Georgia, in 2013 to investigate the potential of using the Lower Floridan aquifer as a source of municipal water supply. One well was completed in the Lower Floridan aquifer at a depth of 1,080 feet (ft) below land surface; the other well was completed in the Upper Floridan aquifer at a depth of 440 ft below land surface. At the Barbour Pointe test site, the U.S. Geological Survey completed electromagnetic (EM) flowmeter surveys, collected and analyzed water samples from discrete depths, and completed a 72-hour aquifer test of the Floridan aquifer system withdrawing from the Lower Floridan aquifer. Based on drill cuttings, geophysical logs, and borehole EM flowmeter surveys collected at the Barbour Pointe test site, the Upper Floridan aquifer extends 369 to 567 ft below land surface, the middle semiconfining unit, separating the two aquifers, extends 567 to 714 ft below land surface, and the Lower Floridan aquifer extends 714 to 1,056 ft below land surface. A borehole EM flowmeter survey indicates that the Upper Floridan and Lower Floridan aquifers each contain four water-bearing zones. The EM flowmeter logs of the test hole open to the entire Floridan aquifer system indicated that the Upper Floridan aquifer contributed 91 percent of the total flow rate of 1,000 gallons per minute; the Lower Floridan aquifer contributed about 8 percent. Based on the transmissivity of the middle semiconfining unit and the Floridan aquifer system, the middle semiconfining unit probably contributed on the order of 1 percent of the total flow. Hydraulic properties of the Upper Floridan and Lower Floridan aquifers were estimated based on results of the EM flowmeter survey and a 72-hour aquifer test completed in Lower Floridan aquifer well 36Q398. 
The EM flowmeter data were analyzed using an AnalyzeHOLE-generated model to simulate upward borehole flow and determine the transmissivity of water-bearing zones. Aquifer-test data were analyzed with a two-dimensional, axisymmetric, radial, transient, groundwater-flow model using MODFLOW–2005. The flowmeter-survey and aquifer-test simulations provided an estimated transmissivity of about 60,000 square feet per day for the Upper Floridan aquifer and about 5,000 square feet per day for the Lower Floridan aquifer. Water in discrete-depth samples collected from the Upper Floridan aquifer, middle semiconfining unit, and Lower Floridan aquifer during the EM flowmeter survey in August 2013 was low in dissolved solids. Tested constituents were in concentrations within established U.S. Environmental Protection Agency drinking water-quality criteria. Concentrations of measured constituents in water samples from Lower Floridan aquifer well 36Q398 collected at the end of the 72-hour aquifer test in November 2013 were generally higher than in the discrete-depth samples collected during EM flowmeter testing in August 2013 but remained within established drinking water-quality criteria. Water-level data for the aquifer test were filtered for external influences such as barometric pressure, earth-tide effects, and long-term trends to enable detection of small (less than 1 ft) water-level responses to aquifer-test withdrawal. During the 72-hour aquifer test, the Lower Floridan aquifer was pumped at a rate of 750 gallons per minute resulting in a drawdown response of 35.5 ft in the pumped well; 1.6 ft in the Lower Floridan aquifer observation well located about 6,000 ft west of the pumped well; and responses of 0.7, 0.6, and 0.4 ft in the Upper Floridan aquifer observation wells located about 36 ft, 6,000 ft, and 6,800 ft from the pumped well, respectively.

  14. Evaluation of Pleistocene groundwater flow through fractured tuffs using a U-series disequilibrium approach, Pahute Mesa, Nevada, USA

    USGS Publications Warehouse

    Paces, James B.; Nichols, Paul J.; Neymark, Leonid A.; Rajaram, Harihar

    2013-01-01

    Groundwater flow through fractured felsic tuffs and lavas at the Nevada National Security Site represents the most likely mechanism for transport of radionuclides away from underground nuclear tests at Pahute Mesa. To help evaluate fracture flow and matrix–water exchange, we have determined U-series isotopic compositions on more than 40 drill core samples from 5 boreholes that represent discrete fracture surfaces, breccia zones, and interiors of unfractured core. The U-series approach relies on the disruption of radioactive secular equilibrium between isotopes in the uranium-series decay chain due to preferential mobilization of 234U relative to 238U, and U relative to Th. Samples from discrete fractures were obtained by milling fracture surfaces containing thin secondary mineral coatings of clays, silica, Fe–Mn oxyhydroxides, and zeolite. Intact core interiors and breccia fragments were sampled in bulk. In addition, profiles of rock matrix extending 15 to 44 mm away from several fractures that show evidence of recent flow were analyzed to investigate the extent of fracture/matrix water exchange. Samples of rock matrix have 234U/238U and 230Th/238U activity ratios (AR) closest to radioactive secular equilibrium indicating only small amounts of groundwater penetrated unfractured matrix. Greater U mobility was observed in welded-tuff matrix with elevated porosity and in zeolitized bedded tuff. Samples of brecciated core were also in secular equilibrium implying a lack of long-range hydraulic connectivity in these cases. Samples of discrete fracture surfaces typically, but not always, were in radioactive disequilibrium. Many fractures had isotopic compositions plotting near the 230Th-234U 1:1 line indicating a steady-state balance between U input and removal along with radioactive decay. Numerical simulations of U-series isotope evolution indicate that 0.5 to 1 million years are required to reach steady-state compositions. 
Once attained, disequilibrium 234U/238U and 230Th/238U AR values can be maintained indefinitely as long as hydrological and geochemical processes remain stable. Therefore, many Pahute Mesa fractures represent stable hydrologic pathways over million-year timescales. A smaller number of samples have non-steady-state compositions indicating transient conditions in the last several hundred thousand years. In these cases, U mobility is dominated by overall gains rather than losses of U.
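
    The million-year timescale quoted above follows from the 234U half-life. A minimal closed-system sketch of the relaxation of the 234U/238U activity ratio toward secular equilibrium (ignoring the flow and recoil terms that the full numerical model includes):

```python
import math

HALF_LIFE_U234 = 245_500.0               # years
LAM_234 = math.log(2.0) / HALF_LIFE_U234

def activity_ratio(ar0, t_years):
    """234U/238U activity ratio in a closed system relaxing toward secular
    equilibrium (AR = 1); 238U activity is effectively constant on these
    timescales because of its 4.47-billion-year half-life."""
    return 1.0 + (ar0 - 1.0) * math.exp(-LAM_234 * t_years)

# A surface that starts at AR = 2 has decayed most of the way back to
# equilibrium after roughly a million years (about four half-lives of 234U).
ar_1myr = activity_ratio(2.0, 1.0e6)
```

    The same exponential governs why steady-state disequilibrium requires a sustained balance of U input and removal: without continued water-rock exchange, any initial excess decays away on this timescale.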

  15. Mouth asymmetry in the textbook example of scale-eating cichlid fish is not a discrete dimorphism after all

    PubMed Central

    Kusche, Henrik; Lee, Hyuk Je; Meyer, Axel

    2012-01-01

    Individuals of the scale-eating cichlid fish, Perissodus microlepis, from Lake Tanganyika tend to have remarkably asymmetric heads that are either left-bending or right-bending. The ‘left’ morph opens its mouth markedly towards the left and preferentially feeds on the scales from the right-hand side of its victim fish, and the ‘right’ morph bites scales from the victims’ left-hand side. This striking dimorphism made these fish a textbook example of an astonishing degree of ecological specialization and one of the few known instances of negative frequency-dependent selection acting on an asymmetric morphological trait, where left and right forms are equally frequent within a species. We investigated the degree and the shape of the frequency distribution of head asymmetry in P. microlepis to test whether the variation conforms to a discrete dimorphism, as generally assumed. In both adult and juvenile fish, mouth asymmetry appeared to be continuously and unimodally distributed with no clear evidence for a discrete dimorphism. Mixture analyses did not reveal evidence of a discrete or even strong dimorphism. These results raise doubts about previous claims, as reported in textbooks, that head variation in P. microlepis represents a discrete dimorphism of left- and right-bending forms. Based on extensive field sampling that excluded ambiguous (i.e. symmetric or weakly asymmetric) individual adults, we found that left and right morphs occur in equal abundance in five populations. Moreover, mate pairing for 51 wild-caught pairs was random with regard to head laterality, calling into question reports that this laterality is maintained through disassortative mating. PMID:23055070

  16. Numerical Evaluation of the "Dual-Kernel Counter-flow" Matric Convolution Integral that Arises in Discrete/Continuous (D/C) Control Theory

    NASA Technical Reports Server (NTRS)

    Nixon, Douglas D.

    2009-01-01

    Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.
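
    The numerical evaluation at the heart of the method can be illustrated in the scalar (1x1) case, where the matrix exponentials reduce to ordinary exponentials and a closed form is available for comparison. This is a generic quadrature sketch, not one of the three algorithms derived in the paper.

```python
import math

def dual_kernel_integral(a, b, f, T, n=2000):
    """Trapezoid-rule evaluation of the scalar dual-kernel convolution
    integral  I = int_0^T exp(a*(T - tau)) * b * exp(f*tau) dtau,
    the 1x1 analogue of the matrix integral arising in D/C control."""
    h = T / n
    total = 0.0
    for i in range(n + 1):
        tau = i * h
        val = math.exp(a * (T - tau)) * b * math.exp(f * tau)
        total += val if 0 < i < n else val / 2.0  # half-weight endpoints
    return total * h

# Closed form for a != f:  I = b * (exp(f*T) - exp(a*T)) / (f - a)
a, b, f, T = -1.0, 2.0, -3.0, 0.5
exact = b * (math.exp(f * T) - math.exp(a * T)) / (f - a)
approx = dual_kernel_integral(a, b, f, T)
```

    In the matrix constant-coefficient case the same integrand becomes exp(A(T-tau)) B exp(F tau), and the paper's point is that the two kernels "flow" in opposite directions in tau, which is what makes specialized evaluation methods worthwhile.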

  17. Gender moderates the associations between attachment and discrete emotions in late middle age and later life.

    PubMed

    Consedine, Nathan S; Fiori, Katherine L

    2009-11-01

    Although patterns of attachment have been linked to patterns of emotional experience, studies in developmentally diverse samples are few and have not yet examined possible gender differences in attachment or their implications for emotional wellbeing. This article describes patterns of attachment in a diverse sample of 616 men and women from middle age and later life, examines the relations between attachment and nine discrete emotions, and tests the thesis that gender moderates these associations. Convenience sampling was used to derive a sample of 616 ethnically diverse men and women from seven ethnic groups. Multiple regressions controlling for demographics found no gender differences in attachment categorizations although men reported greater dimensional fearful avoidance. Security predicted greater joy and interest whereas dismissingness was associated with lower shame and fear and with greater interest. Both preoccupation and fearful avoidance predicted most negative emotions but were not associated with positive emotions. Finally, gender moderated these associations such that (a) attachment security was more closely related to interest and, marginally, joy, among men; (b) fearful avoidance was more closely related to fear and contempt among men; and (c) preoccupation was associated with greater interest among men, whereas fear and contempt were associated with preoccupation among women only. Interpreted in the context of theories of emotions, the social origins of emotional experience, and the different roles that social relationships have for aging men and women, our data imply that attachment styles may differentially predict male emotions because of their less diverse networks.

  18. Does the Cambridge Automated Neuropsychological Test Battery (CANTAB) Distinguish Between Cognitive Domains in Healthy Older Adults?

    PubMed

    Lenehan, Megan E; Summers, Mathew J; Saunders, Nichole L; Summers, Jeffery J; Vickers, James C

    2016-04-01

    The Cambridge Neuropsychological Test Automated Battery (CANTAB) is a semiautomated computer interface for assessing cognitive function. We examined whether CANTAB tests measured specific cognitive functions, using established neuropsychological tests as a reference point. A sample of 500 healthy older participants (M = 60.28 years, SD = 6.75) in the Tasmanian Healthy Brain Project completed a battery of CANTAB subtests and standard paper-based neuropsychological tests. Confirmatory factor analysis identified four factors: processing speed, verbal ability, episodic memory, and working memory. However, CANTAB tests did not consistently load onto the cognitive domain factors derived from traditional measures of the same function. These results indicate that five of the six CANTAB subtests examined did not load onto single cognitive functions. These CANTAB tests may lack the sensitivity to measure discrete cognitive functions in healthy populations or may measure other cognitive domains not included in the traditional neuropsychological battery. © The Author(s) 2015.

  19. Exponential synchronization of neural networks with discrete and distributed delays under time-varying sampling.

    PubMed

    Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian

    2012-09-01

    This paper investigates the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper bound on the sampling intervals. An improved method is proposed, which captures the characteristics of sampled-data systems. Some delay-dependent criteria are derived to ensure the exponential stability of the error systems, and thus the master systems synchronize with the slave systems. The desired sampled-data controller can be achieved by solving a set of linear matrix inequalities, which depend upon the maximum sampling interval and the decay rate. The obtained conditions are not only less conservative but also involve fewer decision variables than existing results. Simulation results are given to show the effectiveness and benefits of the proposed methods.

  20. Method for testing earth samples for contamination by organic contaminants

    DOEpatents

    Schabron, John F.

    1996-01-01

    Provided is a method for testing earth samples for contamination by organic contaminants, and particularly for aromatic compounds such as those found in diesel fuel and other heavy fuel oils, kerosene, creosote, coal oil, tars and asphalts. A drying step is provided in which a drying agent is contacted with either the earth sample or a liquid extract phase to reduce the possibility of false indications of contamination that could occur when humic material is present in the earth sample. This is particularly a problem when using relatively safe, non-toxic and inexpensive polar solvents such as isopropyl alcohol, since the humic material tends to be very soluble in those solvents when water is present. Also provided is an ultraviolet spectroscopic measuring technique for obtaining an indication as to whether a liquid extract phase contains aromatic organic contaminants. In one embodiment, the liquid extract phase is subjected to a narrow and discrete band of radiation including a desired wavelength, and the ability of the liquid extract phase to absorb that wavelength of ultraviolet radiation is measured to provide an indication of the presence of aromatic organic contaminants.

  1. Discrete retardance second harmonic generation ellipsometry.

    PubMed

    Dehen, Christopher J; Everly, R Michael; Plocinik, Ryan M; Hedderich, Hartmut G; Simpson, Garth J

    2007-01-01

    A new instrument was constructed to perform discrete retardance nonlinear optical ellipsometry (DR-NOE). The focus of the design was to perform second harmonic generation NOE while maximizing sample and application flexibility and minimizing data acquisition time. The discrete retardance configuration results in relatively simple computational algorithms for performing nonlinear optical ellipsometric analysis. NOE analysis of a disperse red 19 monolayer yielded results that were consistent with previously reported values for the same surface system, but with significantly reduced acquisition times.

  2. Modeling and Control of the Cobelli Model as a Personalized Prescriptive Tool for Diabetes Treatment

    DTIC Science & Technology

    2016-11-05

    …dynamics within the body allow for a more quantified approach in medicine prescription as well as a deeper understanding of the discrete operations of… (discrete value) of the desired output (healthy blood glucose concentration in this project), yi is the ith sample of the measured output, ui is…

  3. Robust inference in discrete hazard models for randomized clinical trials.

    PubMed

    Nguyen, Vinh Q; Gillen, Daniel L

    2012-10-01

    Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
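
    For grouped failure times like these, the discrete hazard and the implied survival curve can be sketched with a simple life-table calculation. This illustrates the discrete-hazard setting only; it is not the censoring-robust marginal estimator proposed in the manuscript, and the data are invented.

```python
import numpy as np

# Events and numbers at risk at each scheduled assessment (illustrative data,
# e.g. periodic radiographic scans in an oncology trial).
events = np.array([5, 8, 6, 3], dtype=float)
at_risk = np.array([100, 90, 75, 60], dtype=float)

hazard = events / at_risk             # discrete hazard at each assessment time
survival = np.cumprod(1.0 - hazard)   # probability of surviving past each time
```

    The discrete-time proportional hazards and continuation ratio models mentioned in the abstract regress transformations of these per-interval hazards on covariates; the issue the paper addresses is what the resulting treatment-effect estimand means when proportionality fails.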

  4. Water-quality characteristics, including sodium-adsorption ratios, for four sites in the Powder River drainage basin, Wyoming and Montana, water years 2001-2004

    USGS Publications Warehouse

    Clark, Melanie L.; Mason, Jon P.

    2006-01-01

    The U.S. Geological Survey, in cooperation with the Wyoming Department of Environmental Quality, monitors streams throughout the Powder River structural basin in Wyoming and parts of Montana for potential effects of coalbed natural gas development. Specific conductance and sodium-adsorption ratios may be larger in coalbed waters than in stream waters that may receive the discharge waters. Therefore, continuous water-quality instruments for specific conductance were installed and discrete water-quality samples were collected to characterize water quality during water years 2001-2004 at four sites in the Powder River drainage basin: Powder River at Sussex, Wyoming; Crazy Woman Creek near Arvada, Wyoming; Clear Creek near Arvada, Wyoming; and Powder River at Moorhead, Montana. During water years 2001-2004, the median specific conductance of 2,270 microsiemens per centimeter at 25 degrees Celsius (µS/cm) in discrete samples from the Powder River at Sussex, Wyoming, was larger than the median specific conductance of 1,930 µS/cm in discrete samples collected downstream from the Powder River at Moorhead, Montana. The median specific conductance was smallest in discrete samples from Clear Creek (1,180 µS/cm), which has a dilution effect on the specific conductance for the Powder River at Moorhead, Montana. The daily mean specific conductance from continuous water-quality instruments during the irrigation season showed the same spatial pattern as specific conductance values for the discrete samples. Dissolved sodium, sodium-adsorption ratios, and dissolved solids generally showed the same spatial pattern as specific conductance. The largest median sodium concentration (274 milligrams per liter) and the largest range of sodium-adsorption ratios (3.7 to 21) were measured in discrete samples from the Powder River at Sussex, Wyoming. 
Median concentrations of sodium and sodium-adsorption ratios were substantially smaller in Crazy Woman Creek and Clear Creek, which tend to decrease sodium concentrations and sodium-adsorption ratios at the Powder River at Moorhead, Montana. Dissolved-solids concentrations in discrete samples were closely correlated with specific conductance values; Pearson's correlation coefficients were 0.98 or greater for all four sites. Regression equations for discrete values of specific conductance and sodium-adsorption ratios were statistically significant (p-values <0.001) at all four sites. The strongest relation (R2=0.92) was at the Powder River at Sussex, Wyoming. Relations on Crazy Woman Creek (R2=0.91) and Clear Creek (R2=0.83) also were strong. The relation between specific conductance and sodium-adsorption ratios was weakest (R2=0.65) at the Powder River at Moorhead, Montana; however, the relation was still significant. These data indicate that values of specific conductance are useful for estimating sodium-adsorption ratios. A regression model called LOADEST was used to estimate dissolved-solids loads for the four sites. The average daily mean dissolved-solids loads varied among the sites during water year 2004. The largest average daily mean dissolved-solids load was calculated for the Powder River at Moorhead, Montana. Although the smallest concentrations of dissolved solids were in samples from Clear Creek, the smallest average daily mean dissolved-solids load was calculated for Crazy Woman Creek. The largest loads occurred during spring runoff, and the smallest loads occurred in late summer, when streamflows typically were smallest. Dissolved-solids loads may be smaller than average during water years 2001-2004 because of smaller than average streamflow as a result of drought conditions.
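
    The sodium-adsorption ratio itself is a simple function of the major-cation concentrations expressed in milliequivalents per liter. A sketch using the standard definition (the example concentrations are invented, not values from the report):

```python
import math

# Equivalent weights in mg per milliequivalent (atomic weight / ionic charge).
EQ_WT = {"Na": 22.99, "Ca": 40.08 / 2, "Mg": 24.31 / 2}

def sodium_adsorption_ratio(na_mgL, ca_mgL, mg_mgL):
    """Sodium-adsorption ratio, SAR = Na / sqrt((Ca + Mg) / 2), from
    dissolved concentrations in mg/L converted to milliequivalents/L."""
    na = na_mgL / EQ_WT["Na"]
    ca = ca_mgL / EQ_WT["Ca"]
    mg = mg_mgL / EQ_WT["Mg"]
    return na / math.sqrt((ca + mg) / 2.0)

# Hypothetical sample: sodium-rich water with modest calcium and magnesium.
example_sar = sodium_adsorption_ratio(230.0, 40.0, 24.0)
```

    Because both SAR and specific conductance rise with total dissolved cations, a strong regression between the two, as reported for these sites, is physically plausible.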

  5. Results of a collaborative study on DNA identification of aged bone samples

    PubMed Central

    Vanek, Daniel; Budowle, Bruce; Dubska-Votrubova, Jitka; Ambers, Angie; Frolik, Jan; Pospisek, Martin; Al Afeefi, Ahmed Anwar; Al Hosani, Khalid Ismaeil; Allen, Marie; Al Naimi, Khudooma Saeed; Al Salafi, Dina; Al Tayyari, Wafa Ali Rashid; Arguetaa, Wendy; Bottinelli, Michel; Bus, Magdalena M.; Cemper-Kiesslich, Jan; Cepil, Olivier; De Cock, Greet; Desmyter, Stijn; El Amri, Hamid; El Ossmani, Hicham; Galdies, Ruth; Grün, Sebastian; Guidet, Francois; Hoefges, Anna; Iancu, Cristian Bogdan; Lotz, Petra; Maresca, Alessandro; Nagy, Marion; Novotny, Jindrich; Rachid, Hajar; Rothe, Jessica; Stenersen, Marguerethe; Stephenson, Mishel; Stevanovitch, Alain; Strien, Juliane; Sumita, Denilce R.; Vella, Joanna; Zander, Judith

    2017-01-01

    Aim: A collaborative exercise with several institutes was organized by the Forensic DNA Service (FDNAS) and the Institute of Legal Medicine, 2nd Faculty of Medicine, Charles University in Prague, Czech Republic, with the aim of testing the performance of different laboratories carrying out DNA analysis of relatively old bone samples. Methods: Eighteen laboratories participating in the collaborative exercise were asked to perform DNA typing of two samples of bone powder. The two bone samples, provided by the National Museum and the Institute of Archaeology in Prague, Czech Republic, came from archeological excavations and were estimated to be approximately 150 and 400 years old. The methods of genetic characterization, including autosomal, gonosomal, and mitochondrial markers, were selected solely at the discretion of the participating laboratory. Results: Although the participating laboratories used different extraction and amplification strategies, concordant results were obtained from the relatively intact 150-year-old bone sample. Typing was more problematic for the 400-year-old bone sample due to its poorer quality. Conclusion: Laboratories performing identification DNA analysis of bone and teeth samples should regularly test their ability to correctly perform DNA-based identification on bone samples containing degraded DNA and potential inhibitors, and should demonstrate that the risk of contamination is minimized. PMID:28613037

  6. Selection of sampling rate for digital control of aircrafts

    NASA Technical Reports Server (NTRS)

    Katz, P.; Powell, J. D.

    1974-01-01

    The considerations in selecting the sample rates for digital control of aircraft are identified and evaluated using the optimal discrete method. A high-performance aircraft model which includes a bending mode and wind gusts was studied. The following factors which influence the selection of the sampling rates were identified: (1) the time and roughness of the response to control inputs; (2) the response to external disturbances; and (3) the sensitivity to variations of parameters. It was found that the time response to a control input and the response to external disturbances limit the selection of the sampling rate. The optimal discrete regulator, the steady-state Kalman filter, and the mean response to external disturbances are calculated.

  7. Sampling Versus Filtering in Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Debliquy, O.; Knaepen, B.; Carati, D.; Wray, A. A.

    2004-01-01

    An LES formalism in which the filter operator is replaced by a sampling operator is proposed. The unknown quantities that appear in the LES equations originate only from inadequate resolution (discretization errors). The resulting viewpoint seems to make a link between finite difference approaches and finite element methods. Sampling operators are shown to commute with nonlinearities and to be purely projective. Moreover, their use allows an unambiguous definition of the LES numerical grid. The price to pay is that sampling never commutes with spatial derivatives and the commutation errors must be modeled. It is shown that models for the discretization errors may be treated using the dynamic procedure. Preliminary results, using the Smagorinsky model, are very encouraging.

  8. Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR

    PubMed Central

    Mobli, Mehdi; Hoch, Jeffrey C.

    2017-01-01

    Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. PMID:25456315
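    The uniform-sampling setup described above can be sketched numerically: sample a model FID at the Nyquist-dictated dwell time and recover its frequency with the DFT. The spectral width, signal frequency, and decay rate below are illustrative assumptions, not values from the review:

    ```python
    import numpy as np

    # Uniform (Nyquist) sampling of a model FID: a single decaying complex sinusoid.
    sw = 1000.0      # spectral width in Hz; complex sampling dwell time dt = 1/sw
    dt = 1.0 / sw
    n = 512
    t = np.arange(n) * dt
    f0, r2 = 150.0, 10.0  # illustrative signal frequency (Hz) and decay rate (1/s)
    fid = np.exp(2j * np.pi * f0 * t) * np.exp(-r2 * t)

    # DFT spectrum estimate; the peak bin should land within one bin width of f0.
    spectrum = np.fft.fft(fid)
    freqs = np.fft.fftfreq(n, d=dt)
    peak = freqs[np.argmax(np.abs(spectrum))]
    ```

    The resolution of this estimate is limited to the bin width sw/n, which is exactly the measurement-time constraint on the indirect dimensions that nonuniform sampling schemes try to escape.
    
    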

  9. Field comparison of analytical results from discrete-depth ground water samplers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemo, D.A.; Delfino, T.A.; Gallinatti, J.D.

    1995-07-01

    Discrete-depth ground water samplers are used during environmental screening investigations to collect ground water samples in lieu of installing and sampling monitoring wells. Two of the most commonly used samplers are the BAT Enviroprobe and the QED HydroPunch I, which rely on differing sample collection mechanics. Although these devices have been on the market for several years, it was unknown what, if any, effect the differences would have on analytical results for ground water samples containing low to moderate concentrations of chlorinated volatile organic compounds (VOCs). This study investigated whether the discrete-depth ground water sampler used introduces statistically significant differences in analytical results. The goal was to provide a technical basis for allowing the two devices to be used interchangeably during screening investigations. Because this study was based on field samples, it included several sources of potential variability. It was necessary to separate differences due to sampler type from variability due to sampling location, sample handling, and laboratory analytical error. To statistically evaluate these sources of variability, the experiment was arranged in a nested design. Sixteen ground water samples were collected from eight random locations within a 15-foot by 15-foot grid. The grid was located in an area where shallow ground water was believed to be uniformly affected by VOCs. The data were evaluated using analysis of variance.

  10. A COMPARISON OF INTERCELL METRICS ON DISCRETE GLOBAL GRID SYSTEMS

    EPA Science Inventory

    A discrete global grid system (DGGS) is a spatial data model that aids in global research by serving as a framework for environmental modeling, monitoring and sampling across the earth at multiple spatial scales. Topological and geometric criteria have been proposed to evaluate a...

  11. A Bell-Curved Based Algorithm for Mixed Continuous and Discrete Structural Optimization

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.; Weber, Michael; Sobieszczanski-Sobieski, Jaroslaw

    2001-01-01

    An evolutionary based strategy utilizing two normal distributions to generate children is developed to solve mixed integer nonlinear programming problems. This Bell-Curve Based (BCB) evolutionary algorithm is similar in spirit to (mu + mu) evolutionary strategies and evolutionary programs but with fewer parameters to adjust and no mechanism for self adaptation. First, a new version of BCB to solve purely discrete optimization problems is described and its performance tested against a tabu search code for an actuator placement problem. Next, the performance of a combined version of discrete and continuous BCB is tested on 2-dimensional shape problems and on a minimum weight hub design problem. In the latter case the discrete portion is the choice of the underlying beam shape (I, triangular, circular, rectangular, or U).

  12. Initial Data of Digital Correlation ECE with a Giga Hertz Sampling Digitizer

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Hayato; Inagaki, Shigeru; Tokuzawa, Tokihiko; Nagayama, Yoshio

    2015-03-01

    The proposed Digital Correlation ECE (DCECE) technique is applied in the Large Helical Device (LHD). DCECE is realized by the use of a gigahertz-sampling digitizer. The waveform of the intermediate frequency (IF) band of the ECE, whose frequency is several gigahertz, can be discretized and saved directly. The discretized IF data can be used for the analysis of correlation ECE with arbitrary parameters of spatial and temporal resolution. In this paper, the characteristics of DCECE and initial data from LHD are introduced.

  13. Discrete Ramanujan transform for distinguishing the protein coding regions from other regions.

    PubMed

    Hua, Wei; Wang, Jiasong; Zhao, Jian

    2014-01-01

    Based on the study of the Ramanujan sum and Ramanujan coefficient, this paper suggests the concepts of the discrete Ramanujan transform and spectrum. Using the Voss numerical representation, one maps a symbolic DNA strand to a numerical DNA sequence and deduces the discrete Ramanujan spectrum of the numerical DNA sequence. It is well known that the discrete Fourier power spectrum of a protein coding sequence has an important feature of 3-base periodicity, which is widely used for DNA sequence analysis by the technique of the discrete Fourier transform. The analysis is performed by testing the signal-to-noise ratio at frequency N/3 as a criterion, where N is the length of the sequence. The results presented in this paper show that the property of 3-base periodicity can be identified as a prominent spike of the discrete Ramanujan spectrum at period 3 for protein coding regions only. The signal-to-noise ratio for the discrete Ramanujan spectrum is defined for numerical measurement. Therefore, the discrete Ramanujan spectrum and the signal-to-noise ratio of a DNA sequence can be used for distinguishing protein coding regions from noncoding regions. All the exon and intron sequences in whole chromosomes 1, 2, 3 and 4 of Caenorhabditis elegans have been tested, and the histograms and tables from the computational results illustrate the reliability of our method. In addition, we have shown theoretically that the algorithm for calculating the discrete Ramanujan spectrum has lower computational complexity and higher computational accuracy. The computational experiments show that classifying DNA sequences using the discrete Ramanujan spectrum is a fast and effective method. Copyright © 2014 Elsevier Ltd. All rights reserved.
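    The baseline 3-base periodicity test that the paper compares against can be sketched as follows: build the four Voss indicator sequences and compute the DFT power-spectrum signal-to-noise ratio at frequency N/3. This illustrates the standard DFT criterion, not the Ramanujan transform itself; the function names are ours:

    ```python
    import numpy as np

    def voss_indicators(seq):
        # Voss representation: one binary indicator sequence per base (A, C, G, T).
        return {b: np.array([1.0 if s == b else 0.0 for s in seq]) for b in "ACGT"}

    def snr_at_third(seq):
        # Total DFT power spectrum summed over the four indicator sequences;
        # SNR = power at frequency k = N/3 divided by the mean power (DC excluded).
        n = len(seq)
        power = np.zeros(n)
        for u in voss_indicators(seq).values():
            power += np.abs(np.fft.fft(u)) ** 2
        return power[n // 3] / power[1:].mean()
    ```

    A perfectly 3-periodic toy sequence such as `"ATG" * 100` produces a large spike at N/3, while sequences with a different periodicity do not, which is the contrast the N/3 criterion exploits.
    
    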

  14. A U-statistics based approach to sample size planning of two-arm trials with discrete outcome criterion aiming to establish either superiority or noninferiority.

    PubMed

    Wellek, Stefan

    2017-02-28

    In current practice, the most frequently applied approach to the handling of ties in the Mann-Whitney-Wilcoxon (MWW) test is based on the conditional distribution of the sum of mid-ranks, given the observed pattern of ties. Starting from this conditional version of the testing procedure, a sample size formula was derived and investigated by Zhao et al. (Stat Med 2008). In contrast, the approach we pursue here is a nonconditional one exploiting explicit representations for the variances of and the covariance between the two U-statistics estimators involved in the Mann-Whitney form of the test statistic. The accuracy of both ways of approximating the sample sizes required for attaining a prespecified level of power in the MWW test for superiority with arbitrarily tied data is comparatively evaluated by means of simulation. The key qualitative conclusions to be drawn from these numerical comparisons are as follows: With the sample sizes calculated by means of the respective formula, both versions of the test maintain the level and the prespecified power with about the same degree of accuracy. Despite the equivalence in terms of accuracy, the sample size estimates obtained by means of the new formula are in many cases markedly lower than that calculated for the conditional test. Perhaps, a still more important advantage of the nonconditional approach based on U-statistics is that it can be also adopted for noninferiority trials. Copyright © 2016 John Wiley & Sons, Ltd.

  15. High-Grading Lunar Samples

    NASA Technical Reports Server (NTRS)

    Allen, Carlton; Sellar, Glenn; Nunez, Jorge; Mosie, Andrea; Schwarz, Carol; Parker, Terry; Winterhalter, Daniel; Farmer, Jack

    2009-01-01

    Astronauts on long-duration lunar missions will need the capability to high-grade their samples to select the highest value samples for transport to Earth and to leave others on the Moon. We are supporting studies to define the necessary and sufficient measurements and techniques for high-grading samples at a lunar outpost. A glovebox, dedicated to testing instruments and techniques for high-grading samples, is in operation at the JSC Lunar Experiment Laboratory. A reference suite of lunar rocks and soils, spanning the full compositional range found in the Apollo collection, is available for testing in this laboratory. Thin sections of these samples are available for direct comparison. The Lunar Sample Compendium, on-line at http://www-curator.jsc.nasa.gov/lunar/compendium.cfm, summarizes previous analyses of these samples. The laboratory, sample suite, and Compendium are available to the lunar research and exploration community. In the first test of possible instruments for lunar sample high-grading, we imaged 18 lunar rocks and four soils from the reference suite using the Multispectral Microscopic Imager (MMI) developed by Arizona State University and JPL (see Farmer et al. abstract). The MMI is a fixed-focus digital imaging system with a resolution of 62.5 microns/pixel, a field size of 40 x 32 mm, and a depth-of-field of approximately 5 mm. Samples are illuminated sequentially by 21 light emitting diodes in discrete wavelengths spanning the visible to shortwave infrared. Measurements of reflectance standards and background allow calibration to absolute reflectance. ENVI-based software is used to produce spectra for specific minerals as well as multi-spectral images of rock textures.

  16. Donders revisited: Discrete or continuous temporal processing underlying reaction time distributions?

    PubMed

    Bao, Yan; Yang, Taoxi; Lin, Xiaoxiong; Pöppel, Ernst

    2016-09-01

    Differences of reaction times to specific stimulus configurations are used as indicators of cognitive processing stages. In this classical experimental paradigm, continuous temporal processing is implicitly assumed. Multimodal response distributions indicate, however, discrete time sampling, which is often masked by experimental conditions. Differences in reaction times reflect discrete temporal mechanisms that are pre-semantically implemented and suggested to be based on entrained neural oscillations. © 2016 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  17. An improved switching converter model using discrete and average techniques

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.; Lee, F. C.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters has been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate to the theoretical limit.
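    The averaging technique mentioned above can be illustrated with a state-space-averaged model of an ideal buck converter; the topology and component values are illustrative assumptions, not the converter studied in the paper:

    ```python
    import numpy as np

    # Hedged sketch of state-space averaging for an ideal buck converter.
    # State x = [iL, vC]; the input matrix is weighted by the duty ratio D.
    L, C, R = 100e-6, 100e-6, 10.0   # illustrative inductor, capacitor, load values
    Vin, D = 12.0, 0.5               # input voltage and duty ratio

    # For a buck converter the state matrix is the same in both switch intervals;
    # only the input matrix changes between the on and off intervals.
    A = np.array([[0.0, -1.0 / L],
                  [1.0 / C, -1.0 / (R * C)]])
    B_on = np.array([1.0 / L, 0.0])
    B_off = np.array([0.0, 0.0])

    # Averaged model: x' = A x + (D * B_on + (1 - D) * B_off) * Vin.
    B_avg = D * B_on + (1 - D) * B_off

    # Steady state: A x + B_avg * Vin = 0, so the averaged model predicts vC = D * Vin.
    iL, vC = np.linalg.solve(A, -B_avg * Vin)
    ```

    This duty-ratio-weighted model is the "simple but inaccurate at high modulation frequency" end of the trade-off the abstract describes; the discrete technique tracks the switching instants exactly instead.
    
    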

  18. 40 CFR 1045.505 - How do I test engines using discrete-mode or ramped-modal duty cycles?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false How do I test engines using discrete-mode or ramped-modal duty cycles? 1045.505 Section 1045.505 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION...

  19. 40 CFR 1045.505 - How do I test engines using discrete-mode or ramped-modal duty cycles?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false How do I test engines using discrete-mode or ramped-modal duty cycles? 1045.505 Section 1045.505 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION...

  20. 40 CFR 1045.505 - How do I test engines using discrete-mode or ramped-modal duty cycles?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false How do I test engines using discrete-mode or ramped-modal duty cycles? 1045.505 Section 1045.505 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM SPARK-IGNITION PROPULSION...

  1. Discrete Semiconductor Device Reliability

    DTIC Science & Technology

    1988-03-25

    array or alphanumeric display; "--" indicates unknown diode count. Voc: open-circuit voltage for photovoltaic modules; "--" indicates unknown. Isc: short-circuit current for photovoltaic modules; "--" indicates unknown. Number Tested: quantity of parts under the described test or field conditions for that ... information pertaining to electronic systems and parts used therein. The present scope includes integrated circuits, hybrids, and discrete semiconductors

  2. An extended sequential goodness-of-fit multiple testing method for discrete data.

    PubMed

    Castro-Conde, Irene; Döhler, Sebastian; de Uña-Álvarez, Jacobo

    2017-10-01

    The sequential goodness-of-fit (SGoF) multiple testing method has recently been proposed as an alternative to the familywise error rate- and the false discovery rate-controlling procedures in high-dimensional problems. For discrete data, the SGoF method may be very conservative. In this paper, we introduce an alternative SGoF-type procedure that takes into account the discreteness of the test statistics. Like the original SGoF, our new method provides weak control of the false discovery rate/familywise error rate but attains false discovery rate levels closer to the desired nominal level, and thus it is more powerful. We study the performance of this method in a simulation study and illustrate its application to a real pharmacovigilance data set.

  3. Setting up virgin stress conditions in discrete element models.

    PubMed

    Rojek, J; Karlis, G F; Malinowski, L J; Beer, G

    2013-03-01

    In the present work, a methodology for setting up virgin stress conditions in discrete element models is proposed. The developed algorithm is applicable to discrete or coupled discrete/continuum modeling of underground excavation employing the discrete element method (DEM). Since the DEM works with contact forces rather than stresses there is a need for the conversion of pre-excavation stresses to contact forces for the DEM model. Different possibilities of setting up virgin stress conditions in the DEM model are reviewed and critically assessed. Finally, a new method to obtain a discrete element model with contact forces equivalent to given macroscopic virgin stresses is proposed. The test examples presented show that good results may be obtained regardless of the shape of the DEM domain.

  4. Setting up virgin stress conditions in discrete element models

    PubMed Central

    Rojek, J.; Karlis, G.F.; Malinowski, L.J.; Beer, G.

    2013-01-01

    In the present work, a methodology for setting up virgin stress conditions in discrete element models is proposed. The developed algorithm is applicable to discrete or coupled discrete/continuum modeling of underground excavation employing the discrete element method (DEM). Since the DEM works with contact forces rather than stresses there is a need for the conversion of pre-excavation stresses to contact forces for the DEM model. Different possibilities of setting up virgin stress conditions in the DEM model are reviewed and critically assessed. Finally, a new method to obtain a discrete element model with contact forces equivalent to given macroscopic virgin stresses is proposed. The test examples presented show that good results may be obtained regardless of the shape of the DEM domain. PMID:27087731

  5. Chemical contaminants in water and sediment near fish nesting sites in the Potomac River basin: determining potential exposures to smallmouth bass (Micropterus dolomieu).

    PubMed

    Kolpin, Dana W; Blazer, Vicki S; Gray, James L; Focazio, Michael J; Young, John A; Alvarez, David A; Iwanowicz, Luke R; Foreman, William T; Furlong, Edward T; Speiran, Gary K; Zaugg, Steven D; Hubbard, Laura E; Meyer, Michael T; Sandstrom, Mark W; Barber, Larry B

    2013-01-15

    The Potomac River basin is an area where a high prevalence of abnormalities such as testicular oocytes (TO), skin lesions, and mortality has been observed in smallmouth bass (SMB, Micropterus dolomieu). Previous research documented a variety of chemicals in regional streams, implicating chemical exposure as one plausible explanation for these biological effects. Six stream sites in the Potomac basin (and one out-of-basin reference site) were sampled to provide an assessment of chemicals in these streams. Potential early life-stage exposure to chemicals detected was assessed by collecting samples in and around SMB nesting areas. Target chemicals included those known to be associated with important agricultural and municipal wastewater sources in the Potomac basin. The prevalence and severity of TO in SMB were also measured to determine potential relations between chemistry and biological effects. A total of 39 chemicals were detected at least once in the discrete-water samples, with atrazine, caffeine, deethylatrazine, simazine, and iso-chlorotetracycline being most frequently detected. Of the most frequently detected chemicals, only caffeine was detected in water from the reference site. No biogenic hormones/sterols were detected in the discrete-water samples. In contrast, 100 chemicals (including six biogenic hormones/sterols) were found in at least one passive-water sample, with 25 being detected in all such samples. In addition, 46 chemicals (including seven biogenic hormones/sterols) were found in the bed-sediment samples, with caffeine, cholesterol, indole, para-cresol, and sitosterol detected in all such samples. The number of herbicides detected in discrete-water samples per site had a significant positive relation to TO(rank) (a nonparametric indicator of TO), with significant positive relations between TO(rank) and atrazine concentrations in discrete-water samples and between TO(rank) and total hormone/sterol concentration in bed-sediment samples. 
Such significant correlations do not necessarily imply causation, as these chemical compositions and concentrations likely do not adequately reflect total SMB exposure history, particularly during critical life stages. Published by Elsevier B.V.

  6. Vectorized Rebinning Algorithm for Fast Data Down-Sampling

    NASA Technical Reports Server (NTRS)

    Dean, Bruce; Aronstein, David; Smith, Jeffrey

    2013-01-01

    A vectorized rebinning (down-sampling) algorithm, applicable to N-dimensional data sets, has been developed that offers a significant reduction in computer run time when compared to conventional rebinning algorithms. For clarity, a two-dimensional version of the algorithm is discussed to illustrate some specific details of the algorithm content, and using the language of image processing, 2D data will be referred to as "images," and each value in an image as a "pixel." The new approach is fully vectorized, i.e., the down-sampling procedure is done as a single step over all image rows, and then as a single step over all image columns. Data rebinning (or down-sampling) is a procedure that uses a discretely sampled N-dimensional data set to create a representation of the same data, but with fewer discrete samples. Such data down-sampling is fundamental to digital signal processing, e.g., for data compression applications.
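    A minimal NumPy sketch of the two-pass, fully vectorized idea, assuming image dimensions divide evenly by the binning factor (this is an illustrative re-implementation, not the authors' code):

    ```python
    import numpy as np

    def rebin2d(img, f):
        """Down-sample a 2-D image by block-averaging with binning factor f.

        Rows are reduced in one vectorized step, then columns in a second
        vectorized step, in the spirit of the algorithm described above.
        Assumes both dimensions are divisible by f.
        """
        h, w = img.shape
        img = img.reshape(h // f, f, w).mean(axis=1)       # single pass over rows
        return img.reshape(h // f, w // f, f).mean(axis=2)  # single pass over columns

    # Example: a 4x4 image rebinned to 2x2 by averaging 2x2 pixel blocks.
    image = np.arange(16, dtype=float).reshape(4, 4)
    small = rebin2d(image, 2)
    ```

    Because each pass is a single reshape plus an axis-wise mean, there is no per-pixel Python loop, which is where the run-time advantage over a conventional nested-loop rebinner comes from.
    
    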

  7. Incubation of extinction responding and cue-induced reinstatement, but not context- or drug priming-induced reinstatement, after withdrawal from methamphetamine.

    PubMed

    Adhikary, Sweta; Caprioli, Daniele; Venniro, Marco; Kallenberger, Paige; Shaham, Yavin; Bossert, Jennifer M

    2017-07-01

    In rats trained to self-administer methamphetamine, extinction responding in the presence of drug-associated contextual and discrete cues progressively increases after withdrawal (incubation of methamphetamine craving). The conditioning factors underlying this incubation are unknown. Here, we studied incubation of methamphetamine craving under different experimental conditions to identify factors contributing to this incubation. We also determined whether the rats' response to methamphetamine priming incubates after withdrawal. We trained rats to self-administer methamphetamine in a distinct context (context A) for 14 days (6 hours/day). Lever presses were paired with a discrete light cue. We then tested groups of rats in context A or a different non-drug context (context B) after 1 day, 1 week or 1 month for extinction responding with or without the discrete cue. Subsequently, we tested the rats for reinstatement of drug seeking induced by exposure to contextual, discrete cue, or drug priming (0, 0.25 and 0.5 mg/kg). Operant responding in the extinction sessions in contexts A or B was higher after 1 week and 1 month of withdrawal than after 1 day; this effect was context-independent. Independent of the withdrawal period, operant responding in the extinction sessions was higher when responding led to contingent delivery of the discrete cue. After extinction, discrete cue-induced reinstatement, but not context- or drug priming-induced reinstatement, progressively increased after withdrawal. Together, incubation of methamphetamine craving, as assessed in extinction tests, is primarily mediated by time-dependent increases in non-reinforced operant responding, and this effect is potentiated by exposure to discrete, but not contextual, cues. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

  8. Incubation of extinction responding and cue-induced reinstatement, but not context- or drug priming-induced reinstatement, after withdrawal from methamphetamine

    PubMed Central

    Adhikary, Sweta; Caprioli, Daniele; Venniro, Marco; Kallenberger, Paige; Shaham, Yavin; Bossert, Jennifer M.

    2016-01-01

    In rats trained to self-administer methamphetamine, extinction responding in the presence of drug-associated contextual and discrete cues progressively increases after withdrawal (incubation of methamphetamine craving). The conditioning factors underlying this incubation are unknown. Here, we studied incubation of methamphetamine craving under different experimental conditions to identify factors contributing to this incubation. We also determined whether the rats’ response to methamphetamine priming incubates after withdrawal. We trained rats to self-administer methamphetamine in a distinct context (context A) for 14 days (6-h/day). Lever presses were paired with a discrete light cue. We then tested groups of rats in context A or a different non-drug context (context B) after 1 day, 1 week, or 1 month for extinction responding with or without the discrete cue. Subsequently, we tested the rats for reinstatement of drug seeking induced by exposure to contextual, discrete cue, or drug priming (0, 0.25, and 0.5 mg/kg). Operant responding in the extinction sessions in contexts A or B was higher after 1 week and 1 month of withdrawal than after 1 day; this effect was context-independent. Independent of the withdrawal period, operant responding in the extinction sessions was higher when responding led to contingent delivery of the discrete cue. After extinction, discrete cue-induced reinstatement, but not context- or drug priming-induced reinstatement, progressively increased after withdrawal. Together, incubation of methamphetamine craving, as assessed in extinction tests, is primarily mediated by time-dependent increases in non-reinforced operant responding, and this effect is potentiated by exposure to discrete, but not contextual, cues. PMID:26989042

  9. Electromagnetic Scattering by Fully Ordered and Quasi-Random Rigid Particulate Samples

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.

    2016-01-01

    In this paper we have analyzed circumstances under which a rigid particulate sample can behave optically as a true discrete random medium consisting of particles randomly moving relative to each other during measurement. To this end, we applied the numerically exact superposition T-matrix method to model far-field scattering characteristics of fully ordered and quasi-randomly arranged rigid multiparticle groups in fixed and random orientations. We have shown that, in and of itself, averaging optical observables over movements of a rigid sample as a whole is insufficient unless it is combined with a quasi-random arrangement of the constituent particles in the sample. Otherwise, certain scattering effects typical of discrete random media (including some manifestations of coherent backscattering) may not be accurately replicated.

  10. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
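    The kernel estimator discussed above can be sketched with a Gaussian kernel and an explicit scaling factor h; the normal-reference bandwidth used in the demo is a common textbook choice, not the interactive algorithm the paper proposes:

    ```python
    import numpy as np

    def kde(sample, x, h):
        # Gaussian kernel density estimate evaluated at points x with scaling factor h.
        u = (x[:, None] - sample[None, :]) / h
        return np.exp(-0.5 * u**2).sum(axis=1) / (len(sample) * h * np.sqrt(2.0 * np.pi))

    # Demo on a standard-normal sample; h follows the normal-reference rule
    # h = 1.06 * sigma * n^(-1/5) (an illustrative automatic choice).
    rng = np.random.default_rng(0)
    sample = rng.standard_normal(500)
    h = 1.06 * sample.std() * len(sample) ** -0.2
    x = np.linspace(-5.0, 5.0, 1001)
    density = kde(sample, x, h)
    ```

    The estimate integrates to one by construction; the open question the paper addresses is how to pick h from the sample alone, since too small a value produces a noisy estimate and too large a value oversmooths.
    
    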

  11. 40 CFR 1065.650 - Emission calculations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... from a changing flow rate or a constant flow rate (including discrete-mode steady-state testing), as ... e NOx = 64.975/25.783 = 2.520 g/(kW·hr). (2) For discrete-mode steady-state testing, you may ... method not be used if there are any work flow paths described in § 1065.210 that cross the system ...

  12. Discretization of Continuous Time Discrete Scale Invariant Processes: Estimation and Spectra

    NASA Astrophysics Data System (ADS)

    Rezakhah, Saeid; Maleki, Yasaman

    2016-07-01

    By imposing a flexible sampling scheme, we provide a discretization of continuous time discrete scale invariant (DSI) processes, which yields a subsidiary discrete time DSI process. Then, by introducing a simple random measure, we provide a second continuous time DSI process which gives a proper approximation of the first one. This enables us to establish a bilateral relation between the covariance functions of the subsidiary process and the new continuous time process. The time varying spectral representation of such a continuous time DSI process is characterized, and its spectrum is estimated. Also, a new method for estimating the time-dependent Hurst parameter of such processes is provided, which gives a more accurate estimation. The performance of this estimation method is studied via simulation. Finally, this method is applied to real data of the S&P 500 and Dow Jones indices for some special periods.

  13. Physical properties of the Nankai inner accretionary prism at Site C0002, IODP Expedition 348

    NASA Astrophysics Data System (ADS)

    Kitamura, Manami; Kitajima, Hiroko; Henry, Pierre; Valdez, Robert; Josh, Matthew

    2014-05-01

    Integrated Ocean Drilling Program (IODP) Nankai Trough Seismogenic Zone Experiment (NanTroSEIZE) Expedition 348 focused on deepening the existing riser hole at Site C0002 to ~3000 meters below seafloor (mbsf) to access the deep interior of the Miocene inner accretionary prism. This unique tectonic environment, which has never before been sampled in situ by ocean drilling, was characterized through riser drilling, logging while drilling (LWD), mud gas monitoring and sampling, and cuttings and core analysis. Shipboard physical properties measurements including moisture and density (MAD), electrical conductivity, P-wave, natural gamma ray, and magnetic susceptibility measurements were performed mainly on cuttings samples from 870.5 to 3058.5 mbsf, but also on core samples from 2163 and 2204 mbsf. MAD measurements were conducted on seawater-washed cuttings ("bulk cuttings") in two size fractions of >4 mm and 1-4 mm from 870.5 to 3058.5 mbsf, and hand-picked intact cuttings from the >4 mm size fractions within 1222.5-3058.5 mbsf interval. The bulk cuttings show grain density of 2.68 g/cm3 and 2.72 g/cm3, bulk density of 1.9 g/cm3 to 2.2 g/cm3, and porosity of 50% to 32%. Compared to the values on bulk cuttings, the intact cuttings show almost the same grain density (2.66-2.70 g/cm3), but higher bulk density (2.05-2.41 g/cm3) and lower porosity (37-18%), respectively. The grain density agreement suggests that the measurements on both bulk cuttings and intact cuttings are of good quality, and the differences in porosity and density are real, but the values from the bulk cuttings are affected strongly by artifacts of the drilling process. Thus, the bulk density and porosity data on handpicked cuttings are better representative of formation properties. Combined with the MAD measurements on hand-picked intact cuttings and discrete core samples from previous expeditions, porosity generally decreases from ~60% to ~20% from the seafloor to 3000 mbsf at Site C0002. 
Electrical conductivity and P-wave velocity on discrete samples, which were prepared from both cuttings and core samples in the depth interval of 1745.5-3058.5 mbsf, range from 0.15 to 0.9 S/m and from 1.7 to 4.5 km/s, respectively. The electrical resistivity (the reciprocal of conductivity) on discrete samples is generally higher than the LWD resistivity data, but the overall depth trends are similar. On the other hand, the P-wave velocity on discrete samples is lower than the LWD P-wave velocity between 2200 mbsf and 2600 mbsf, while the two are in closer agreement below 2600 mbsf. The electrical conductivity and P-wave velocity on discrete samples, corrected for in-situ pressure and temperature, will be presented. The shipboard physical properties measurements on cuttings are very limited but can be useful with careful treatment and observation.

  14. Optimal Discrete Event Supervisory Control of Aircraft Gas Turbine Engines

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan (Technical Monitor); Ray, Asok

    2004-01-01

    This report presents an application of the recently developed theory of optimal Discrete Event Supervisory (DES) control that is based on a signed real measure of regular languages. The DES control techniques are validated on an aircraft gas turbine engine simulation test bed. The test bed is implemented on a networked computer system in which two computers operate in the client-server mode. Several DES controllers have been tested for engine performance and reliability.

  15. Discrete exterior calculus discretization of incompressible Navier-Stokes equations over surface simplicial meshes

    NASA Astrophysics Data System (ADS)

    Mohamed, Mamdouh S.; Hirani, Anil N.; Samtaney, Ravi

    2016-05-01

    A conservative discretization of incompressible Navier-Stokes equations is developed based on discrete exterior calculus (DEC). A distinguishing feature of our method is the use of an algebraic discretization of the interior product operator and a combinatorial discretization of the wedge product. The governing equations are first rewritten using exterior calculus notation, replacing vector calculus differential operators by the exterior derivative, Hodge star and wedge product operators. The discretization is then carried out by substituting the corresponding discrete operators based on the DEC framework. Numerical experiments for flows over surfaces reveal second order accuracy for the developed scheme on structured-triangular meshes, and first order accuracy on unstructured meshes. By construction, the method is conservative in that both mass and vorticity are conserved up to machine precision. The relative error in kinetic energy for inviscid flow test cases converges in a second order fashion with both the mesh size and the time step.

  16. Continuous and discrete water-quality data collected at five sites on Lake Houston near Houston, Texas, 2006-08

    USGS Publications Warehouse

    Beussink, Amy M.; Burnich, Michael R.

    2009-01-01

    Lake Houston, a reservoir impounded in 1954 by the City of Houston, Texas, is a primary source of drinking water for Houston and surrounding areas. The U.S. Geological Survey, in cooperation with the City of Houston, developed a continuous water-quality monitoring network to track daily changes in water quality in the southwestern quadrant of Lake Houston beginning in 2006. Continuous water-quality data (the physiochemical properties water temperature, specific conductance, pH, dissolved oxygen concentration, and turbidity) were collected from Lake Houston to characterize the in-lake processes that affect water quality. Continuous data were collected hourly from mobile, multi-depth monitoring stations developed and constructed by the U.S. Geological Survey. Multi-depth monitoring stations were installed at five sites in three general locations in the southwestern quadrant of the lake. Discrete water-quality data (samples) were collected routinely (once or twice each month) at all sites to characterize the chemical and biological (phytoplankton and bacteria) response to changes in the continuous water-quality properties. Physiochemical properties (the five continuously monitored plus transparency) were measured in the field when samples were collected. In addition to the routine samples, discrete water-quality samples were collected synoptically (one or two times during the study period) at all sites to determine the presence and levels of selected constituents not analyzed in routine samples. 
Routine samples were measured or analyzed for acid neutralizing capacity; selected major ions and trace elements (calcium, silica, and manganese); nutrients (filtered and total ammonia nitrogen, filtered nitrate plus nitrite nitrogen, total nitrate nitrogen, filtered and total nitrite nitrogen, filtered and total orthophosphate phosphorus, total phosphorus, total nitrogen, total organic carbon); fecal indicator bacteria (total coliform and Escherichia coli); sediment (suspended-sediment concentration and loss-on-ignition); actinomycetes bacteria; taste-and-odor-causing compounds (2-methylisoborneol and geosmin); cyanobacterial toxins (total microcystins); and phytoplankton abundance, biovolume, and community composition (taxonomic identification to genus). Synoptic samples were analyzed for major ions, trace elements, wastewater indicators, pesticides, volatile organic compounds, and carbon. The analytical data are presented in tables by type (continuous, discrete routine, discrete synoptic) and listed by station number. Continuously monitored properties (except pH) also are displayed graphically.

  17. A Simple Approach to Fourier Aliasing

    ERIC Educational Resources Information Center

    Foadi, James

    2007-01-01

    In the context of discrete Fourier transforms the idea of aliasing as due to approximation errors in the integral defining Fourier coefficients is introduced and explained. This has the positive pedagogical effect of getting to the heart of sampling and the discrete Fourier transform without having to delve into effective, but otherwise long and…
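The aliasing the article teaches can be shown in a few lines: two sinusoids whose frequencies differ by exactly the sampling rate are indistinguishable once sampled. A minimal sketch (frequencies and rate are illustrative):

```python
import math

fs = 8.0                        # sampling rate (Hz)
N = 16                          # number of samples
f_low, f_high = 2.0, 2.0 + fs   # frequencies differing by exactly fs

samples_low  = [math.sin(2 * math.pi * f_low  * n / fs) for n in range(N)]
samples_high = [math.sin(2 * math.pi * f_high * n / fs) for n in range(N)]

# The two sampled sequences coincide sample-for-sample: aliasing.
max_diff = max(abs(a - b) for a, b in zip(samples_low, samples_high))
print(max_diff < 1e-9)  # True
```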

  18. Taxometric Investigation of PTSD: Data from Two Nationally Representative Samples

    ERIC Educational Resources Information Center

    Broman-Fulks, Joshua J.; Ruggiero, Kenneth J.; Green, Bradley A.; Kilpatrick, Dean G.; Danielson, Carla Kmett; Resnick, Heidi S.; Saunders, Benjamin E.

    2006-01-01

    Current psychiatric nosology depicts posttraumatic stress disorder (PTSD) as a discrete diagnostic category. However, only one study has examined the latent structure of PTSD, and this study suggested that PTSD may be more accurately conceptualized as an extreme reaction to traumatic life events rather than a discrete clinical syndrome. To build…

  19. Formation Flying Control Implementation in Highly Elliptical Orbits

    NASA Technical Reports Server (NTRS)

    Capo-Lugo, Pedro A.; Bainum, Peter M.

    2009-01-01

    The Tschauner-Hempel equations are widely used to correct separation distance drifts between a pair of satellites within a constellation in highly elliptical orbits [1]. This set of equations was discretized in the true anomaly angle [1] for use in a digital steady-state hierarchical controller [2], which performed the drift correction between a pair of satellites within the constellation. The objective of discretization is to obtain a simple algorithm that can be implemented in the computer onboard the satellite. The main advantage of discrete systems is that the computational time can be reduced by selecting a suitable sampling interval. For this digital system, the amount of data depends on the sampling interval in the true anomaly angle [3]. The purpose of this paper is to implement the discrete Tschauner-Hempel equations and the steady-state hierarchical controller in the computer onboard the satellite. This set of equations is expressed in the true anomaly angle, for which a relation is formulated between the time and true anomaly angle domains.
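As a sketch of the onboard iteration such a discretization enables, the generic update x_{k+1} = A x_k + B u_k can be stepped in the independent variable (here the true anomaly). The 2x2 matrices below are illustrative placeholders, not the actual discretized Tschauner-Hempel matrices:

```python
# Hedged sketch of a discretized linear relative-motion update,
# x_{k+1} = A x_k + B u_k, stepped once per sampling interval in true anomaly.
# A and B are illustrative placeholders, not the real T-H matrices.
def propagate(x, A, B, u):
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) +
            sum(B[i][j] * u[j] for j in range(len(u))) for i in range(n)]

A = [[1.0, 0.1],
     [0.0, 1.0]]          # placeholder state-transition matrix per step
B = [[0.005], [0.1]]      # placeholder input matrix
x = [1.0, 0.1]            # toy relative separation and its rate
for _ in range(10):       # ten sampling intervals, no control applied
    x = propagate(x, A, B, [0.0])
print(x)
```

With zero control input the toy state simply drifts, which is exactly the kind of separation drift the controller in the record is meant to correct.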

  20. Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR.

    PubMed

    Mobli, Mehdi; Hoch, Jeffrey C

    2014-11-01

    Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. Copyright © 2014 Elsevier B.V. All rights reserved.
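The uniform-sampling baseline the review starts from can be sketched directly: a decaying complex sinusoid (a toy FID) sampled above the Nyquist rate and processed with a plain DFT recovers the signal frequency at the nearest bin. All values are illustrative:

```python
import cmath
import math

# Toy FID: decaying complex sinusoid with frequency f0 and decay rate r.
f0, r = 3.0, 0.2
fs, N = 16.0, 64            # fs > 2*f0, so the Nyquist condition holds
fid = [cmath.exp((2j * math.pi * f0 - r) * n / fs) for n in range(N)]

# Plain DFT of the sampled record.
spec = [sum(fid[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
        for k in range(N)]
k_peak = max(range(N), key=lambda k: abs(spec[k]))
f_est = k_peak * fs / N     # bin index back to frequency
print(f_est)  # 3.0
```

The resolution of this estimate is limited to fs/N, which is exactly the short-record limitation that motivates the non-Fourier methods the review surveys.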

  1. SIMULATION FROM ENDPOINT-CONDITIONED, CONTINUOUS-TIME MARKOV CHAINS ON A FINITE STATE SPACE, WITH APPLICATIONS TO MOLECULAR EVOLUTION.

    PubMed

    Hobolth, Asger; Stone, Eric A

    2009-09-01

    Analyses of serially-sampled data often begin with the assumption that the observations represent discrete samples from a latent continuous-time stochastic process. The continuous-time Markov chain (CTMC) is one such generative model whose popularity extends to a variety of disciplines ranging from computational finance to human genetics and genomics. A common theme among these diverse applications is the need to simulate sample paths of a CTMC conditional on realized data that is discretely observed. Here we present a general solution to this sampling problem when the CTMC is defined on a discrete and finite state space. Specifically, we consider the generation of sample paths, including intermediate states and times of transition, from a CTMC whose beginning and ending states are known across a time interval of length T. We first unify the literature through a discussion of the three predominant approaches: (1) modified rejection sampling, (2) direct sampling, and (3) uniformization. We then give analytical results for the complexity and efficiency of each method in terms of the instantaneous transition rate matrix Q of the CTMC, its beginning and ending states, and the length of sampling time T. In doing so, we show that no method dominates the others across all model specifications, and we give explicit proof of which method prevails for any given Q, T, and endpoints. Finally, we introduce and compare three applications of CTMCs to demonstrate the pitfalls of choosing an inefficient sampler.
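Of the three approaches the authors unify, plain rejection sampling is the simplest to sketch: forward-simulate unconditional paths and keep only those that hit the required ending state. A toy two-state example with illustrative rates (not from the paper):

```python
import random

random.seed(1)

# Two-state CTMC with an illustrative rate matrix Q.
Q = [[-1.0, 1.0],
     [ 2.0, -2.0]]

def sample_path(start, T):
    """Forward-simulate (Gillespie) a CTMC path up to time T."""
    t, state, path = 0.0, start, [(0.0, start)]
    while True:
        t += random.expovariate(-Q[state][state])
        if t >= T:
            return path, state
        state = 1 - state          # two states: jump to the other one
        path.append((t, state))

def rejection_sample(start, end, T):
    """Naive rejection: resimulate until the realized endpoint matches."""
    while True:
        path, final = sample_path(start, T)
        if final == end:
            return path

path = rejection_sample(0, 1, 1.0)
print(path)  # jump times and states, conditioned to end in state 1
```

As the paper shows, this sampler degrades badly when the endpoint is unlikely, which is where direct sampling and uniformization become preferable.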

  2. On the discretization and control of an SEIR epidemic model with a periodic impulsive vaccination

    NASA Astrophysics Data System (ADS)

    Alonso-Quesada, S.; De la Sen, M.; Ibeas, A.

    2017-01-01

    This paper deals with the discretization and control of an SEIR epidemic model. Such a model describes the transmission of an infectious disease among a time-varying host population. The model assumes mortality from causes related to the disease. Our study proposes a discretization method including a free-design parameter to be adjusted for guaranteeing the positivity of the resulting discrete-time model. Such a method provides a discrete-time model close to the continuous-time one without requiring the sampling period to be as small as other commonly used discretization methods do. This makes it possible to design impulsive vaccination control strategies with a lighter burden of measurements and related computations than other discretization methods would require. The proposed discretization method and the impulsive vaccination strategy designed on the resulting discretized model are the main novelties of the paper. The paper includes (i) the analysis of the positivity of the obtained discrete-time SEIR model, (ii) the study of stability of the disease-free equilibrium point of a normalized version of such a discrete-time model and (iii) the existence and attractivity of a globally asymptotically stable disease-free periodic solution under a periodic impulsive vaccination. Concretely, the exposed and infectious subpopulations asymptotically converge to zero as time tends to infinity, while the normalized subpopulations of susceptible and immunization-recovered individuals oscillate in the context of such a solution. Finally, a numerical example illustrates the theoretical results.
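The positivity property at the heart of the record can be illustrated with a generic nonstandard-style discretization of a basic SEIR model, in which each compartment's loss terms are treated implicitly so every subpopulation stays nonnegative regardless of the step size. This is a hedged sketch with illustrative parameters, not the authors' scheme:

```python
# Hedged sketch (not the paper's method): a positivity-preserving discrete
# SEIR step. Loss terms appear in the denominator, so S, E, I, R >= 0 holds
# for any step size h. Parameter values are illustrative.
beta, sigma, gamma, h = 0.5, 0.2, 0.1, 0.5

def step(S, E, I, R):
    N = S + E + I + R
    S1 = S / (1 + h * beta * I / N)                    # infection drain
    E1 = (E + h * beta * S1 * I / N) / (1 + h * sigma) # incubation drain
    I1 = (I + h * sigma * E1) / (1 + h * gamma)        # recovery drain
    R1 = R + h * gamma * I1
    return S1, E1, I1, R1

state = (990.0, 5.0, 5.0, 0.0)
for _ in range(100):
    state = step(*state)
print(state)  # every compartment remains nonnegative
```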

  3. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    PubMed Central

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
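The IG template the paper applies (greedy construction, then repeated destruction and greedy reconstruction) can be sketched on a toy discrete DBAP instance. The instance, objective simplifications, and parameters below are illustrative, not the paper's benchmark sets:

```python
import random

random.seed(0)

# Toy instance: 8 ships, all available at time 0, 2 berths. A ship's service
# time is its completion time; the objective is the total over all ships.
times = [3, 5, 2, 7, 4, 6, 1, 8]
BERTHS = 2

def total_service(order):
    """List-schedule ships in the given order onto the earliest-free berth."""
    free = [0] * BERTHS
    total = 0
    for ship in order:
        b = min(range(BERTHS), key=lambda i: free[i])
        free[b] += times[ship]
        total += free[b]
    return total

def greedy_insert(order, ship):
    """Insert one ship at the position minimizing total service time."""
    return min((order[:i] + [ship] + order[i:] for i in range(len(order) + 1)),
               key=total_service)

def iterated_greedy(iters=200, d=2):
    sol = []
    for s in range(len(times)):              # greedy construction
        sol = greedy_insert(sol, s)
    best = sol
    for _ in range(iters):
        partial = list(sol)
        for s in random.sample(sol, d):      # destruction: drop d ships
            partial.remove(s)
        for s in sorted(set(sol) - set(partial)):
            partial = greedy_insert(partial, s)  # greedy reconstruction
        if total_service(partial) <= total_service(sol):
            sol = partial
        if total_service(sol) < total_service(best):
            best = sol
    return best

best = iterated_greedy()
print(best, total_service(best))
```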

  4. Sampling rare fluctuations of discrete-time Markov chains

    NASA Astrophysics Data System (ADS)

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
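The large-deviation quantities mentioned can be computed exactly for small chains: the scaled cumulant generating function (SCGF) of a time-extensive observable is the logarithm of the dominant eigenvalue of a tilted transition matrix. A sketch for an illustrative two-state chain (not the paper's active-matter model):

```python
import math

# Illustrative two-state chain and observable (time spent in state 1).
P = [[0.9, 0.1],
     [0.2, 0.8]]
g = [0.0, 1.0]

def scgf(s, iters=2000):
    """SCGF via power iteration on the tilted matrix W[i][j] = P[i][j]e^{s g_j}."""
    W = [[P[i][j] * math.exp(s * g[j]) for j in range(2)] for i in range(2)]
    v = [1.0, 1.0]
    for _ in range(iters):
        v = [W[0][0] * v[0] + W[1][0] * v[1],   # left-multiply: v <- v W
             W[0][1] * v[0] + W[1][1] * v[1]]
        norm = v[0] + v[1]
        v = [x / norm for x in v]
    w = [W[0][0] * v[0] + W[1][0] * v[1],
         W[0][1] * v[0] + W[1][1] * v[1]]
    return math.log((w[0] + w[1]) / (v[0] + v[1]))

print(scgf(0.0))  # ~0: at s = 0 the tilted matrix is just the stochastic P
```

The Legendre transform of this SCGF gives the large-deviation rate function whose bounds the abstract refers to.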

  5. Sampling rare fluctuations of discrete-time Markov chains.

    PubMed

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.

  6. The effect of traffic lights and regulatory statements on the choice between complementary and conventional medicines in Australia: results from a discrete choice experiment.

    PubMed

    Spinks, Jean; Mortimer, Duncan

    2015-01-01

    It has been suggested that complementary medicines are currently 'under-regulated' in some countries, given their potential for harm as a direct result of side effects or interactions, from delaying more effective care, or from the economic cost of purchasing an ineffective or inappropriate treatment. The requirement of additional labelling on complementary medicine products has been suggested in Australia and may provide additional information to consumers at the point of purchase. This paper details a unique way of testing the potential effects on consumer behaviour of including either a traffic light logo or a regulatory statement on labels. Using a discrete choice experiment, data were collected in 2012 from a sample of 521 Australians with either type 2 diabetes or cardiovascular disease. We find that additional labelling can affect consumer behaviour, but in unpredictable ways. The results of this experiment are informative for furthering the dialogue concerning possible regulatory mechanisms. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Do job demands and job control affect problem-solving?

    PubMed

    Bergman, Peter N; Ahlberg, Gunnel; Johansson, Gun; Stoetzer, Ulrich; Aborg, Carl; Hallsten, Lennart; Lundberg, Ingvar

    2012-01-01

    The Job Demand Control model describes combinations of working conditions that may facilitate learning (the active learning hypothesis) or have detrimental effects on health (the strain hypothesis). To test the active learning hypothesis, this study analysed the effects of job demands and job control on general problem-solving strategies. A population-based sample of 4,636 individuals (55% women, 45% men) with the same job characteristics measured at two time points with a three-year time lag was used. Main effects of demands, skill discretion, task authority and control, and the combined effects of demands and control, were analysed in logistic regressions on four outcomes representing general problem-solving strategies. Those reporting high skill discretion, task authority and control, as well as those reporting high demand/high control and low demand/high control job characteristics, were more likely to report using problem-solving strategies. Results suggest that working conditions including high levels of control may affect how individuals cope with problems and that workplace characteristics may affect behaviour in the non-work domain.

  8. Discrete Element Modeling of Micro-scratch Tests: Investigation of Mechanisms of CO2 Alteration in Reservoir Rocks

    NASA Astrophysics Data System (ADS)

    Sun, Zhuang; Espinoza, D. Nicolas; Balhoff, Matthew T.; Dewers, Thomas A.

    2017-12-01

    The injection of CO2 into geological formations leads to geochemical re-equilibrium between the pore fluid and rock minerals. Mineral-brine-CO2 reactions can induce alteration of mechanical properties and affect the structural integrity of the storage formation. The location of alterable mineral phases within the rock skeleton is important to assess the potential effects of mineral dissolution on bulk geomechanical properties. Hence, although often disregarded, an understanding of the particle-scale mechanisms responsible for alteration is necessary to predict the extent of geomechanical alteration as a function of dissolved mineral amounts. This study investigates CO2-related rock chemo-mechanical alteration through numerical modeling and matching of naturally altered rocks probed with micro-scratch tests. We use a model that couples the discrete element method (DEM) and the bonded particle model (BPM) to perform simulations of micro-scratch tests on synthetic rocks that mimic Entrada sandstone. Experimental results serve to calibrate numerical scratch tests with DEM-BPM parameters. Sensitivity analyses indicate that cement size and bond shear strength are the most sensitive microscopic parameters governing the CO2-induced alteration in Entrada sandstone. Reductions in cement size lead to a decrease in scratch toughness and an increase in ductility in the rock samples. This work demonstrates how small variations of microscopic bond properties in cemented sandstone can lead to significant changes in macroscopic large-strain mechanical properties.

  9. Method and apparatus for generating motor current spectra to enhance motor system fault detection

    DOEpatents

    Linehan, Daniel J.; Bunch, Stanley L.; Lyster, Carl T.

    1995-01-01

    A method and circuitry for sampling periodic amplitude modulations in a nonstationary periodic carrier wave to determine frequencies in the amplitude modulations. The method and circuit are described in terms of an improved motor current signature analysis. The method ensures that the sampled data set contains an exact whole number of carrier wave cycles by defining the rate at which samples of motor current data are collected. The circuitry ensures that a sampled data set containing stationary carrier waves is recreated from the analog motor current signal containing nonstationary carrier waves by conditioning the actual sampling rate to adjust with the frequency variations in the carrier wave. After the sampled data is transformed to the frequency domain via the Discrete Fourier Transform, the frequency distribution in the discrete spectra of those components due to the carrier wave and its harmonics will be minimized so that signals of interest are more easily analyzed.
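The requirement that the sampled set contain an exact whole number of carrier cycles is the classic spectral-leakage condition, which a short DFT experiment makes concrete (values illustrative):

```python
import cmath
import math

# Sampling an exact whole number of carrier cycles concentrates the carrier
# in a single DFT bin pair; a fractional cycle count smears it across bins.
N = 64

def dft_mag(x):
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N)]

whole = [math.cos(2 * math.pi * 4.0 * n / N) for n in range(N)]  # 4 cycles
frac  = [math.cos(2 * math.pi * 4.5 * n / N) for n in range(N)]  # 4.5 cycles

mw, mf = dft_mag(whole), dft_mag(frac)
# In the whole-cycle case, all energy sits in bins 4 and N-4.
leak_whole = sum(m for k, m in enumerate(mw) if k not in (4, N - 4))
leak_frac  = sum(m for k, m in enumerate(mf) if k not in (4, N - 4))
print(leak_whole < 1e-6, leak_frac > 1.0)
```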

  10. Physical properties of the Nankai inner accretionary prism sediments at Site C0002, IODP Expedition 348.

    NASA Astrophysics Data System (ADS)

    Kitamura, M.; Kitajima, H.; Henry, P.; Valdez, R. D., II; Josh, M.; Tobin, H. J.; Saffer, D. M.; Hirose, T.; Toczko, S.; Maeda, L.

    2014-12-01

    Integrated Ocean Drilling Program (IODP) Nankai Trough Seismogenic Zone Experiment (NanTroSEIZE) Expedition 348 focused on deepening the existing riser hole at Site C0002 to ~3000 meters below seafloor (mbsf) to access the deep interior of the Miocene inner accretionary prism. This unique tectonic environment, which has never before been sampled in situ by ocean drilling, was characterized through riser drilling, logging while drilling (LWD), mud gas monitoring and sampling, and cuttings and core analysis. Shipboard physical properties measurements, including moisture and density (MAD), electrical conductivity, P-wave, natural gamma ray, and magnetic susceptibility measurements, were performed mainly on cuttings samples from 870.5 to 3058.5 mbsf, but also on core samples from 2163 and 2204 mbsf. MAD measurements were conducted on seawater-washed cuttings ("bulk cuttings") in two size fractions of >4 mm and 1-4 mm from 870.5 to 3058.5 mbsf, and on hand-picked intact cuttings from the >4 mm size fraction within the 1222.5-3058.5 mbsf interval. The bulk cuttings show grain density of ~2.7 g/cm3, bulk density of 1.9 g/cm3 to 2.2 g/cm3, and porosity of 50% to 32%. Compared to the values on bulk cuttings, the intact cuttings show almost the same grain density, but higher bulk density and lower porosity. Combined with the MAD measurements on hand-picked intact cuttings and discrete core samples from previous expeditions, porosity generally decreases from ~60% to ~20% from the seafloor to 3000 mbsf at Site C0002. Electrical conductivity and P-wave velocity on discrete samples, which were prepared from both cuttings and core samples in the depth interval of 1745.5-3058.5 mbsf, range from 0.15 to 0.9 S/m and from 1.7 to 4.5 km/s, respectively. The electrical resistivity on discrete samples is higher than the LWD resistivity data, but the overall depth trends are similar.
The electrical conductivity and P-wave velocity on discrete samples corrected for in-situ pressure and temperature will be presented. The shipboard physical properties measurements on cuttings are very limited but can be useful with careful treatment and observation.

  11. Incentives for Blood Donation: A Discrete Choice Experiment to Analyze Extrinsic Motivation.

    PubMed

    Sadler, Andrew; Shi, Ling; Bethge, Susanne; Mühlbacher, Axel

    2018-04-01

    Background: Demographic trends affect the size and age structure of populations. One consequence will be an increasing need for blood products to treat age-related diseases. Donation services rely on voluntariness and charitable motivation, and it is questionable whether voluntary donation alone will yield a sufficient blood supply. The present study focused on eliciting preferences for incentives and aimed to contribute to the discussion on how to increase donation rates. Methods: A self-administered discrete choice experiment (DCE) was applied. Respondents were repeatedly asked to choose between hypothetical blood donation centers. To accommodate reluctance to receive incentives, a none-option was included. Random parameter logit (RPL) and latent class models (LCM) were used for analysis. Results: The study sample included 416 college students from the US and Germany. Choice decisions were significantly influenced by the characteristics of the donation centers in the DCE. The incentives most preferred were monetary compensation, paid leave, and a blood screening test. The LCM identified subgroups with preference heterogeneity; small subgroups indicated moderate to strong aversion to incentives. Conclusion: The majority of the sample responded positively to incentives and indicated a willingness to accept them. In the face of future challenges, the judicious use of incentives might be an option to motivate potential donors and should be open to discussion.

  12. A Method for Continuous (239)Pu Determinations in Arctic and Antarctic Ice Cores.

    PubMed

    Arienzo, M M; McConnell, J R; Chellman, N; Criscitiello, A S; Curran, M; Fritzsche, D; Kipfstuhl, S; Mulvaney, R; Nolan, M; Opel, T; Sigl, M; Steffensen, J P

    2016-07-05

    Atmospheric nuclear weapons testing (NWT) resulted in the injection of plutonium (Pu) into the atmosphere and subsequent global deposition. We present a new method for continuous semiquantitative measurement of (239)Pu in ice cores, which was used to develop annual records of fallout from NWT in ten ice cores from Greenland and Antarctica. The (239)Pu was measured directly using an inductively coupled plasma-sector field mass spectrometer, thereby reducing analysis time and increasing depth-resolution with respect to previous methods. To validate this method, we compared our one year averaged results to published (239)Pu records and other records of NWT. The (239)Pu profiles from the Arctic ice cores reflected global trends in NWT and were in agreement with discrete Pu profiles from lower latitude ice cores. The (239)Pu measurements in the Antarctic ice cores tracked low latitude NWT, consistent with previously published discrete records from Antarctica. Advantages of the continuous (239)Pu measurement method are (1) reduced sample preparation and analysis time; (2) no requirement for additional ice samples for NWT fallout determinations; (3) measurements are exactly coregistered with all other chemical, elemental, isotopic, and gas measurements from the continuous analytical system; and (4) the long half-life means the (239)Pu record is stable through time.

  13. A passivity criterion for sampled-data bilateral teleoperation systems.

    PubMed

    Jazayeri, Ali; Tavakoli, Mahdi

    2013-01-01

    A teleoperation system consists of a teleoperator, a human operator, and a remote environment. Conditions involving system and controller parameters that ensure teleoperator passivity can serve as control design guidelines to attain maximum teleoperation transparency while maintaining system stability. In this paper, sufficient conditions for teleoperator passivity are derived for the case in which position error-based controllers are implemented in discrete time. This new analysis is necessary because discretization causes energy leaks and does not necessarily preserve the passivity of the system. The proposed criterion for sampled-data teleoperator passivity imposes lower bounds on the dampings of the teleoperator's robots, an upper bound on the sampling time, and bounds on the control gains. The criterion is verified through simulations and experiments.

  14. On the Total Variation of High-Order Semi-Discrete Central Schemes for Conservation Laws

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron

    2004-01-01

    We discuss a new fifth-order, semi-discrete, central-upwind scheme for solving one-dimensional systems of conservation laws. This scheme combines a fifth-order WENO reconstruction, a semi-discrete central-upwind numerical flux, and a strong stability preserving Runge-Kutta method. We test our method with various examples, and give particular attention to the evolution of the total variation of the approximations.
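The quantity the authors track is the discrete total variation, TV(u) = sum over j of |u_{j+1} - u_j|; a minimal helper, with illustrative data:

```python
# Discrete total variation of a grid function, the quantity whose evolution
# the abstract studies for central-upwind approximations.
def total_variation(u):
    return sum(abs(u[j + 1] - u[j]) for j in range(len(u) - 1))

u = [0.0, 0.5, 1.0, 1.0, 0.25, 0.0]   # illustrative discrete profile
print(total_variation(u))  # 2.0
```

A scheme is total-variation diminishing if this quantity never grows from one time step to the next; high-order schemes such as the one described typically bound rather than strictly diminish it.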

  15. Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach

    NASA Astrophysics Data System (ADS)

    Tsai, Bi-Huei; Chang, Chih-Huei

    2009-08-01

    Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distressed cut-off indicator must shift with economic prosperity rather than remain fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based on a two-stage method. First, the study employs firm-specific financial ratios and market factors to measure the probability of financial distress based on discrete-time hazard models. Second, it incorporates macroeconomic factors and applies a rating transition matrix approach to determine the distressed cut-off indicator. The prediction models are developed using a training sample from 1987 to 2004, and their levels of accuracy are compared on a test sample from 2005 to 2007. For the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than the model without them, suggesting that accuracy is not improved for one-stage models that pool firm-specific and macroeconomic factors together. As for the two-stage models, the negative credit cycle index implies worse economic conditions during the test period, so the distressed cut-off point is adjusted upward based on this negative credit cycle index. When the two-stage models use the adjusted cut-off point to discriminate distressed firms from non-distressed ones, their misclassification error is lower than that of the one-stage models. The two-stage models presented in this paper thus have incremental usefulness in predicting financial distress.

  16. Stochastic Stability of Nonlinear Sampled Data Systems with a Jump Linear Controller

    NASA Technical Reports Server (NTRS)

    Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven

    2004-01-01

    This paper analyzes the stability of a sampled-data system consisting of a deterministic, nonlinear, time-invariant, continuous-time plant and a stochastic, discrete-time, jump linear controller. The jump linear controller models, for example, computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. To analyze stability, appropriate topologies are introduced for the signal spaces of the sampled-data system. With these topologies, the ideal sampling and zero-order-hold operators are shown to be measurable maps. This paper shows that the known equivalence between the stability of a deterministic, linear sampled-data system and its associated discrete-time representation as well as between a nonlinear sampled-data system and a linearized representation holds even in a stochastic framework.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altazi, B; Fernandez, D; Zhang, G

    Purpose: Site-specific investigations of the role of Radiomics in cancer diagnosis and therapy are needed. We report on the reproducibility of quantitative image features over different discrete voxel levels in PET/CT images of cervical cancer. Methods: Our dataset consisted of the pretreatment PET/CT scans from a cohort of 76 patients diagnosed with cervical cancer, FIGO stage IB-IVA, age range 31–76 years, treated with external beam radiation therapy to a dose range between 45–50.4 Gy (median dose: 45 Gy), concurrent cisplatin chemotherapy and MRI-based brachytherapy to a dose of 20–30 Gy (median total dose: 28 Gy). Two board-certified radiation oncologists delineated the Metabolic Tumor Volume (MTV) for each patient. Radiomics features were extracted based on 32, 64, 128 and 256 discretization levels (DL); the 64 level was chosen as the reference DL. Features were calculated based on Co-occurrence (COM), Gray Level Size Zone (GLSZM) and Run-Length (RLM) matrices. Mean Percentage Differences (Δ) of features between discrete levels were determined. Normality of the distribution of Δ was tested using the Kolmogorov-Smirnov test. The Bland-Altman test was used to investigate differences between feature values measured at different DL. The mean, standard deviation and upper/lower limits for each pair of DL were calculated. Intraclass Correlation Coefficient (ICC) analysis was performed to examine the reliability of repeated measures within the context of the test-retest format. Results: 3 global and 5 regional features out of 48 showed distributions not significantly different from normal. The reproducible features passed the normality test, but only 5 reproducible results were reliable (ICC range 0.7–0.99). Conclusion: Most of the radiomics features tested showed sensitivity to voxel discretization level between 32 and 256. Only 4 GLSZM, 3 COM and 1 RLM features showed insensitivity toward the mentioned discrete levels.

  18. A Discretization Algorithm for Meteorological Data and its Parallelization Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Liu, Chao; Jin, Wen; Yu, Yuting; Qiu, Taorong; Bai, Xiaoming; Zou, Shuilong

    2017-10-01

    Meteorological observation data are voluminous, have many attributes whose values are continuous, and contain correlations between elements that applications of meteorological data need to exploit. This paper addresses how to better discretize large meteorological data sets so that the knowledge hidden in them can be mined more effectively, and studies improvements to discretization algorithms for large-scale data. To support subsequent knowledge extraction from large meteorological data, a discretization algorithm based on the information entropy and inconsistency of meteorological attributes is proposed, and the algorithm is parallelized on the Hadoop platform. Finally, a comparison test validates the effectiveness of the proposed algorithm for discretization in the area of large meteorological data.
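    The entropy part of such a discretization can be illustrated with a minimal serial, single-attribute sketch that picks the cut point minimizing weighted class entropy. The Hadoop parallelization and the inconsistency measure from the abstract are omitted, and the toy data are invented.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_cut(values, labels):
    """Return the boundary minimizing the weighted class entropy of the two halves."""
    pairs = sorted(zip(values, labels))
    best = (float("inf"), None)
    for i in range(1, len(pairs)):
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        w = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2.0
        if w < best[0]:
            best = (w, cut)
    return best[1]

# Toy "temperature vs. rain" attribute: a clean class split exists around 20
temps = [5, 8, 12, 18, 22, 25, 30]
rain  = ["y", "y", "y", "y", "n", "n", "n"]
print(best_cut(temps, rain))  # → 20.0
```

    A MapReduce version would evaluate candidate cuts for disjoint attribute ranges in parallel and reduce to the global minimum-entropy cut.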

  19. Wavelet data processing of micro-Raman spectra of biological samples

    NASA Astrophysics Data System (ADS)

    Camerlingo, C.; Zenone, F.; Gaeta, G. M.; Riccio, R.; Lepore, M.

    2006-02-01

    A wavelet multi-component decomposition algorithm is proposed for processing data from micro-Raman spectroscopy (μ-RS) of biological tissue. The μ-RS has been recently recognized as a promising tool for the biopsy test and in vivo diagnosis of degenerative human tissue pathologies, due to the high chemical and structural information contents of this spectroscopic technique. However, measurements of biological tissues are usually hampered by typically low-level signals and by the presence of noise and background components caused by light diffusion or fluorescence processes. In order to overcome these problems, a numerical method based on discrete wavelet transform is used for the analysis of data from μ-RS measurements performed in vitro on animal (pig and chicken) tissue samples and, in a preliminary form, on human skin and oral tissue biopsy from normal subjects. Visible light μ-RS was performed using a He-Ne laser and a monochromator with a liquid nitrogen cooled charge coupled device equipped with a grating of 1800 grooves mm-1. The validity of the proposed data procedure has been tested on the well-characterized Raman spectra of reference acetylsalicylic acid samples.
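    The idea of separating a low-level Raman signal from noise via the discrete wavelet transform can be illustrated with a one-level Haar DWT and hard thresholding of the detail coefficients. This is a generic sketch, not the paper's multi-component decomposition algorithm, and the signal values are made up.

```python
def haar_step(signal):
    """One level of the Haar DWT: approximation (low-pass) and detail (high-pass) halves."""
    a = [(signal[2 * i] + signal[2 * i + 1]) / 2 ** 0.5 for i in range(len(signal) // 2)]
    d = [(signal[2 * i] - signal[2 * i + 1]) / 2 ** 0.5 for i in range(len(signal) // 2)]
    return a, d

def haar_inverse(a, d):
    """Exact inverse of haar_step."""
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / 2 ** 0.5, (ai - di) / 2 ** 0.5]
    return out

def denoise(signal, threshold):
    """Zero small detail coefficients (noise) and reconstruct the smooth component."""
    a, d = haar_step(signal)
    d = [x if abs(x) > threshold else 0.0 for x in d]
    return haar_inverse(a, d)

noisy = [1.0, 1.1, 4.0, 4.1, 1.0, 0.9, 0.0, 0.1]
print(denoise(noisy, 0.5))
```

    A multi-level decomposition (repeating `haar_step` on the approximation) is what allows broad fluorescence background and narrow Raman peaks to be isolated in different scales.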

  20. Method for testing earth samples for contamination by organic contaminants

    DOEpatents

    Schabron, J.F.

    1996-10-01

    Provided is a method for testing earth samples for contamination by organic contaminants, and particularly for aromatic compounds such as those found in diesel fuel and other heavy fuel oils, kerosene, creosote, coal oil, tars and asphalts. A drying step is provided in which a drying agent is contacted with either the earth sample or a liquid extract phase to reduce the possibility of false indications of contamination that could occur when humic material is present in the earth sample. This is particularly a problem when using relatively safe, non-toxic and inexpensive polar solvents such as isopropyl alcohol, since the humic material tends to be very soluble in those solvents when water is present. Also provided is an ultraviolet spectroscopic measuring technique for obtaining an indication as to whether a liquid extract phase contains aromatic organic contaminants. In one embodiment, the liquid extract phase is subjected to a narrow and discrete band of radiation including a desired wavelength, and the ability of the liquid extract phase to absorb that wavelength of ultraviolet radiation is measured to provide an indication of the presence of aromatic organic contaminants. 2 figs.

  1. Characterization of Pump-Induced Acoustics in Space Launch System Main Propulsion System Liquid Hydrogen Feedline Using Airflow Test Data

    NASA Technical Reports Server (NTRS)

    Eberhart, C. J.; Snellgrove, L. M.; Zoladz, T. F.

    2015-01-01

    High intensity acoustic edgetones located upstream of the RS-25 Low Pressure Fuel Turbo Pump (LPFTP) were previously observed during Space Transportation System (STS) airflow testing of a model Main Propulsion System (MPS) liquid hydrogen (LH2) feedline mated to a modified LPFTP. MPS hardware has been adapted to mitigate the problematic edgetones as part of the Space Launch System (SLS) program. A follow-on airflow test campaign has subjected the adapted hardware to tests mimicking STS-era airflow conditions, and this manuscript describes the acoustic environment identification and characterization born from the latest test results. Fluid dynamics responsible for driving discrete excitations were well reproduced using legacy hardware. The modified design was found insensitive to high intensity edgetone-like discretes over the bandwidth of interest to SLS MPS unsteady environments. Rather, the natural acoustics of the test article were observed to respond in a narrowband-random/mixed discrete manner to broadband noise thought to be generated by the flow field. The intensity of these responses was several orders of magnitude lower than that of the responses driven by edgetones.

  2. Cluster analysis of European Y-chromosomal STR haplotypes using the discrete Laplace method.

    PubMed

    Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels

    2014-07-01

    The European Y-chromosomal short tandem repeat (STR) haplotype distribution has previously been analysed in various ways. Here, we introduce a new way of analysing population substructure using a new method based on clustering within the discrete Laplace exponential family that models the probability distribution of the Y-STR haplotypes. Creating a consistent statistical model of the haplotypes enables us to perform a wide range of analyses. Previously, haplotype frequency estimation using the discrete Laplace method has been validated. In this paper we investigate how the discrete Laplace method can be used for cluster analysis to further validate the discrete Laplace method. A very important practical fact is that the calculations can be performed on a normal computer. We identified two sub-clusters of the Eastern and Western European Y-STR haplotypes similar to results of previous studies. We also compared pairwise distances (between geographically separated samples) with those obtained using the AMOVA method and found good agreement. Further analyses that are impossible with AMOVA were made using the discrete Laplace method: analysis of the homogeneity in two different ways and calculating marginal STR distributions. We found that the Y-STR haplotypes from e.g. Finland were relatively homogeneous as opposed to the relatively heterogeneous Y-STR haplotypes from e.g. Lublin, Eastern Poland and Berlin, Germany. We demonstrated that the observed distributions of alleles at each locus were similar to the expected ones. We also compared pairwise distances between geographically separated samples from Africa with those obtained using the AMOVA method and found good agreement. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
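    The discrete Laplace distribution at the core of the method assigns, per locus, probability mass P(Y = y) = ((1-p)/(1+p))·p^|y-μ| to integer allele values y. A minimal sketch with hypothetical parameters (the paper's full model additionally mixes such distributions over clusters and multiplies across loci):

```python
def discrete_laplace_pmf(y, mu, p):
    """P(Y = y) for a discrete Laplace distribution centred at integer mu, 0 < p < 1."""
    return (1.0 - p) / (1.0 + p) * p ** abs(y - mu)

# Probabilities over a window far wider than the spread sum to (nearly) 1
total = sum(discrete_laplace_pmf(y, 0, 0.3) for y in range(-20, 21))
print(total)
```

    The geometric decay in |y-μ| is what lets the model assign non-zero (but small) frequencies to Y-STR haplotypes never observed in the reference sample.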

  3. Fast and Accurate Learning When Making Discrete Numerical Estimates.

    PubMed

    Sanborn, Adam N; Beierholm, Ulrik R

    2016-04-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates.
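    The two decision functions contrasted above, drawing a sample from the posterior versus taking its maximum, can be sketched for a hypothetical discrete posterior over numerical responses:

```python
import random

posterior = {3: 0.1, 4: 0.2, 5: 0.5, 6: 0.2}  # hypothetical discrete posterior

def respond_map(post):
    """'Maximum of the posterior' rule: always report the modal value."""
    return max(post, key=post.get)

def respond_sample(post, rng):
    """'Sample from the posterior' rule: report values in proportion to their probability."""
    r, acc = rng.random(), 0.0
    for value, prob in post.items():
        acc += prob
        if r < acc:
            return value
    return value

rng = random.Random(0)
draws = [respond_sample(posterior, rng) for _ in range(10000)]
print(respond_map(posterior), draws.count(5) / len(draws))
```

    Under the sampling rule the modal value is reported only about half the time here, whereas the maximization rule reports it always; response variability across repeated trials is what lets the two be distinguished experimentally.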

  4. Fast and Accurate Learning When Making Discrete Numerical Estimates

    PubMed Central

    Sanborn, Adam N.; Beierholm, Ulrik R.

    2016-01-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155

  5. Performance on perceptual word identification is mediated by discrete states.

    PubMed

    Swagman, April R; Province, Jordan M; Rouder, Jeffrey N

    2015-02-01

    We contrast predictions from discrete-state models of all-or-none information loss with signal-detection models of graded strength for the identification of briefly flashed English words. Previous assessments have focused on whether ROC curves are straight or not, which is a test of a discrete-state model where detection leads to the highest confidence response with certainty. We, along with many others, argue that this certainty assumption is too constraining and, consequently, that the straight-line ROC test is too stringent. Instead, we assess a core property of discrete-state models, conditional independence, where the pattern of responses depends only on which state is entered. The conditional independence property implies that confidence ratings are a mixture of detect-state and guess-state responses, and that stimulus strength factors, the duration of the flashed word in this report, affect only the probability of entering a state and not responses conditional on a state. To assess this mixture property, 50 participants saw words presented briefly on a computer screen at three variable flash durations followed by either a two-alternative confidence ratings task or a yes-no confidence ratings task. Comparable discrete-state and signal-detection models were fit to the data for each participant and task. The discrete-state models outperformed the signal-detection models for 90% of participants in the two-alternative task and for 68% of participants in the yes-no task. We conclude that discrete-state models are viable for predicting performance across stimulus conditions in a perceptual word identification task.
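    The conditional-independence mixture can be sketched directly: stimulus duration moves only the probability of entering the detect state, while the state-conditional confidence distributions stay fixed. All distributions below are hypothetical.

```python
def predicted_ratings(p_detect, detect_dist, guess_dist):
    """Mixture prediction of the discrete-state model: duration changes p_detect
    only, never the response distributions conditional on a state."""
    return [p_detect * d + (1 - p_detect) * g
            for d, g in zip(detect_dist, guess_dist)]

# Hypothetical 3-level confidence distributions for each latent state
detect = [0.05, 0.15, 0.80]   # detect state: mostly high confidence
guess  = [0.40, 0.40, 0.20]   # guess state: near flat

short_flash = predicted_ratings(0.2, detect, guess)
long_flash  = predicted_ratings(0.8, detect, guess)
print(short_flash, long_flash)
```

    The testable constraint is that every duration condition must lie on the line segment between the same two state distributions, which a graded-strength model need not satisfy.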

  6. Development and testing of a portable wind sensitive directional air sampler

    NASA Technical Reports Server (NTRS)

    Deyo, J.; Toma, J.; King, R. B.

    1975-01-01

    A portable wind sensitive directional air sampler was developed as part of an air pollution source identification system. The system is designed to identify sources of air pollution based on the directional collection of field air samples and their analysis for TSP and trace element characteristics. Sources can be identified by analyzing the data on the basis of pattern recognition concepts. The unit, designated Air Scout, receives wind direction signals from an associated wind vane. Air samples are collected on filter slides using a standard high volume air sampler drawing air through a porting arrangement which tracks the wind direction and permits collection of discrete samples. A preset timer controls the length of time each filter is in the sampling position. At the conclusion of the sampling period a new filter is automatically moved into sampling position displacing the previous filter to a storage compartment. Thus the Air Scout may be set up at a field location, loaded with up to 12 filter slides, and left to acquire air samples automatically, according to the wind, at any timer interval desired from 1 to 30 hours.

  7. Prediction of Flutter Boundary Using Flutter Margin for The Discrete-Time System

    NASA Astrophysics Data System (ADS)

    Dwi Saputra, Angga; Wibawa Purabaya, R.

    2018-04-01

    Flutter testing in a wind tunnel is generally conducted at subcritical speeds to avoid damage. The flutter speed therefore has to be predicted from the behavior of stability criteria estimated against the dynamic pressure or flight speed, so a reliable prediction method for estimating the flutter boundary is quite important. This paper summarizes the flutter testing of a cantilever wing model in a subsonic wind tunnel. The model has two degrees of freedom: a bending mode and a torsion mode. The dynamic responses were measured by two accelerometers mounted on the leading edge and the center of the wing tip, and the measurement was repeated as the wind speed increased. The dynamic responses were used to determine the flutter margin parameter for the discrete-time system, and the flutter boundary of the model was estimated by extrapolating this parameter against the dynamic pressure. The flutter margin parameter for the discrete-time system performs better for flutter prediction than the modal parameters: for a model with two degrees of freedom experiencing classical flutter, it gives a satisfactory prediction of the flutter boundary in the subsonic wind tunnel test.

  8. Identification of Trypanosoma cruzi Discrete Typing Units (DTUs) in Latin-American migrants in Barcelona (Spain).

    PubMed

    Abras, Alba; Gállego, Montserrat; Muñoz, Carmen; Juiz, Natalia A; Ramírez, Juan Carlos; Cura, Carolina I; Tebar, Silvia; Fernández-Arévalo, Anna; Pinazo, María-Jesús; de la Torre, Leonardo; Posada, Elizabeth; Navarro, Ferran; Espinal, Paula; Ballart, Cristina; Portús, Montserrat; Gascón, Joaquim; Schijman, Alejandro G

    2017-04-01

    Trypanosoma cruzi, the causative agent of Chagas disease, is divided into six Discrete Typing Units (DTUs): TcI-TcVI. We aimed to identify T. cruzi DTUs in Latin-American migrants in the Barcelona area (Spain) and to assess different molecular typing approaches for the characterization of T. cruzi genotypes. Seventy-five peripheral blood samples were analyzed by two real-time PCR methods (qPCR) based on satellite DNA (SatDNA) and kinetoplastid DNA (kDNA). The 20 samples testing positive in both methods, all belonging to Bolivian individuals, were submitted to DTU characterization using two PCR-based flowcharts: multiplex qPCR using TaqMan probes (MTq-PCR), and conventional PCR. These samples were also studied by sequencing the SatDNA and classified as type I (TcI/III), type II (TcII/IV) and type I/II hybrid (TcV/VI). Ten out of the 20 samples gave positive results in the flowcharts: TcV (5 samples), TcII/V/VI (3) and mixed infections by TcV plus TcII (1) and TcV plus TcII/VI (1). By SatDNA sequencing, we classified the 20 samples, 19 as type I/II and one as type I. The most frequent DTU identified by both flowcharts, and suggested by SatDNA sequencing in the remaining samples with low parasitic loads, TcV, is common in Bolivia and predominant in peripheral blood. The mixed infection by TcV-TcII was detected for the first time simultaneously in Bolivian migrants. PCR-based flowcharts are very useful to characterize DTUs during acute infection. SatDNA sequence analysis cannot discriminate T. cruzi populations at the level of a single DTU but it enabled us to increase the number of characterized cases in chronically infected patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. Temperature-dependent plastic hysteresis in highly confined polycrystalline Nb films

    NASA Astrophysics Data System (ADS)

    Waheed, S.; Hao, R.; Zheng, Z.; Wheeler, J. M.; Michler, J.; Balint, D. S.; Giuliani, F.

    2018-02-01

    In this study, the effect of temperature on the cyclic deformation behaviour of a confined polycrystalline Nb film is investigated. Micropillars encapsulating a thin niobium interlayer are deformed under cyclic axial compression at different test temperatures. A distinct plastic hysteresis is observed for samples tested at elevated temperatures, whereas negligible plastic hysteresis is observed for samples tested at room temperature. These results are interpreted using planar discrete dislocation plasticity incorporating slip transmission across grain boundaries. The effect of temperature-dependent grain boundary energy and dislocation mobility on dislocation penetration and, consequently, the size of plastic hysteresis is simulated to correlate with the experimental results. It is found that the decrease in grain boundary energy barrier caused by the increase in temperature does not lead to any appreciable change in the cyclic response. However, dislocation mobility significantly affects the size of plastic hysteresis, with high mobilities leading to a larger hysteresis. Therefore, it is postulated that the experimental observations are predominantly caused by an increase in dislocation mobility as the temperature is increased above the critical temperature of body-centred cubic niobium.

  10. Evidence against the temporal subsampling account of illusory motion reversal

    PubMed Central

    Kline, Keith A.; Eagleman, David M.

    2010-01-01

    An illusion of reversed motion may occur sporadically while viewing continuous smooth motion. This has been suggested as evidence of discrete temporal sampling by the visual system, in analogy to the sampling that generates the wagon-wheel effect on film. In an alternative theory, the illusion is not the result of discrete sampling but instead of perceptual rivalry between appropriately activated and spuriously activated motion detectors. Results of the current study demonstrate that illusory reversals of two spatially overlapping and orthogonal motions often occur separately, providing evidence against the possibility that illusory motion reversal (IMR) is caused by temporal sampling within a visual region. Further, we find that IMR occurs with non-uniform and non-periodic stimuli, an observation that is not accounted for by the temporal sampling hypothesis. We propose that a motion aftereffect is superimposed on the moving stimulus, sporadically allowing motion detectors for the reverse direction to dominate perception. PMID:18484852

  11. Method for utilizing properties of the sinc(x) function for phase retrieval on nyquist-under-sampled data

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H. (Inventor); Smith, Jeffrey Scott (Inventor); Aronstein, David L. (Inventor)

    2012-01-01

    Disclosed herein are systems, methods, and non-transitory computer-readable storage media for simulating propagation of an electromagnetic field, performing phase retrieval, or sampling a band-limited function. A system practicing the method generates transformed data using a discrete Fourier transform which samples a band-limited function f(x) without interpolating or modifying received data associated with the function f(x), wherein an interval between repeated copies in a periodic extension of the function f(x) obtained from the discrete Fourier transform is associated with a sampling ratio Q, defined as a ratio of a sampling frequency to a band-limited frequency, and wherein Q is assigned a value between 1 and 2 such that substantially no aliasing occurs in the transformed data, and retrieves a phase in the received data based on the transformed data, wherein the phase is used as feedback to an optical system.

  12. Isotachophoresis system having larger-diameter channels flowing into channels with reduced diameter and with selectable counter-flow

    DOEpatents

    Mariella, Jr., Raymond P.

    2018-03-06

    An isotachophoresis system for separating a sample containing particles into discrete packets including a flow channel, the flow channel having a large diameter section and a small diameter section; a negative electrode operably connected to the flow channel; a positive electrode operably connected to the flow channel; a leading carrier fluid in the flow channel; a trailing carrier fluid in the flow channel; and a control for separating the particles in the sample into discrete packets using the leading carrier fluid, the trailing carrier fluid, the large diameter section, and the small diameter section.

  13. Acoustic measurements on aerofoils moving in a circle at high speed

    NASA Technical Reports Server (NTRS)

    Wright, S. E.; Crosby, W.; Lee, D. L.

    1982-01-01

    Features of the test apparatus, research objectives and sample test results at the Stanford University rotor aerodynamics and noise facility are described. A steel frame equipped to receive lead shot for damping vibrations supports the drive shaft for rotor blade elements. Sleeve bearings are employed to assure quietness, and a variable speed ac motor produces the rotations. The test stand can be configured for horizontal or vertical orientation of the drive shaft. The entire assembly is housed in an acoustically sealed room. Rotation conditions for hover and large angles of attack can be studied, together with rotational and blade element noises. Research is possible on broad band, discrete frequency, and high speed noise, with measurements taken 3 m from the center of the rotor. Acoustic signatures from Mach 0.3-0.93 trials with a NACA 0012 airfoil are provided.

  14. Rheology of U-Shaped Granular Particles

    NASA Astrophysics Data System (ADS)

    Hill, Matthew; Franklin, Scott

    We study the response of cylindrical samples of U-shaped granular particles (staples) to extensional loads. Samples elongate in discrete bursts (events) corresponding to particles rearranging and re-entangling. Previous research on samples of constant cross-sectional area found that a Weibullian weakest-link theory could explain the distribution of yield points. We now vary the cross-sectional area, and find that the maximum yield pressure (force/area) is a function of particle number density and independent of area. The probability distribution functions of important event characteristics, the stress increase before an event and the stress released during an event, both fall off inversely with magnitude, reminiscent of avalanche dynamics. Fourier transforms of the fluctuating force (or stress) scale inversely with frequency, suggesting dry friction plays a role in the rearrangements. Finally, there is some evidence that the dynamics are sensitive to the stiffness of the tensile testing machine, although an explanation for this behavior is unknown.
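    The Weibullian weakest-link picture mentioned above treats a sample as a chain of independent links whose survival probabilities multiply, so longer samples yield at lower stress. A sketch with made-up parameters:

```python
import math

def weakest_link_failure(stress, n_links, sigma0, m):
    """Weibull weakest-link: the chain fails if any of its n links fails,
    so P(fail) = 1 - [exp(-(stress/sigma0)^m)]^n."""
    link_survival = math.exp(-((stress / sigma0) ** m))
    return 1.0 - link_survival ** n_links

# Hypothetical scale sigma0 and shape m: more links means failure at lower stress
print(weakest_link_failure(50.0, 10, 100.0, 5.0))
print(weakest_link_failure(50.0, 100, 100.0, 5.0))
```

    Fitting the measured yield-point distribution to this form is how the earlier constant-area study tested the weakest-link hypothesis.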

  15. ONERA-NASA Cooperative Effort on Liner Impedance Eduction

    NASA Technical Reports Server (NTRS)

    Primus, Julien; Piot, Estelle; Simon, Frank; Jones, Michael G.; Watson, Willie R.

    2013-01-01

    As part of a cooperation between ONERA and NASA, the liner impedance eduction methods developed by the two research centers are compared. The NASA technique relies on an objective function built on acoustic pressure measurements located on the wall opposite the test liner, and the propagation code solves the convected Helmholtz equation in uniform flow using a finite element method that implements a continuous Galerkin discretization. The ONERA method uses an objective function based either on wall acoustic pressure or on acoustic velocity acquired above the liner by Laser Doppler Anemometry, and the propagation code solves the linearized Euler equations by a discontinuous Galerkin discretization. Two acoustic liners are tested in both ONERA and NASA flow ducts and the measured data are treated with the corresponding impedance eduction method. The first liner is a wire mesh facesheet mounted onto a honeycomb core, designed to be linear with respect to incident sound pressure level and to grazing flow velocity. The second one is a conventional, nonlinear, perforate-over-honeycomb single layer liner. Configurations without and with flow are considered. For the nonlinear liner, the comparison of liner impedance educed by NASA and ONERA shows a sensitivity to the experimental conditions, namely to the nature of the source and to the sample width.

  16. Plasma plume oscillations monitoring during laser welding of stainless steel by discrete wavelet transform application.

    PubMed

    Sibillano, Teresa; Ancona, Antonio; Rizzi, Domenico; Lupo, Valentina; Tricarico, Luigi; Lugarà, Pietro Mario

    2010-01-01

    The plasma optical radiation emitted during CO2 laser welding of stainless steel samples has been detected with a Si-PIN photodiode and analyzed under different process conditions. The discrete wavelet transform (DWT) has been used to decompose the optical signal into various discrete series of sequences over different frequency bands. The results show that changes of the process settings may yield different signal features in the range of frequencies between 200 Hz and 30 kHz. Potential applications of this method to monitor in real time the laser welding processes are also discussed.

  17. Cognitive Representations of Peer Relationships: Linkages with Discrete Social Cognition and Social Behavior

    ERIC Educational Resources Information Center

    Meece, Darrell; Mize, Jacquelyn

    2009-01-01

    Two aspects of young children's cognitive representations of peer relationships-peer affiliative motivation and feelings and beliefs about the self and peers-were assessed among a sample of 75 children (37 girls), who ranged in age from 32 to 76 months (M = 58.2 months). Measures of three aspects of discrete social cognition, encoding of social…

  18. Compensatory neurofuzzy model for discrete data classification in biomedical

    NASA Astrophysics Data System (ADS)

    Ceylan, Rahime

    2015-03-01

    Biomedical data fall into two main categories: signals and discrete data, so studies in this area concern either biomedical signal classification or biomedical discrete data classification. Artificial intelligence models exist for the classification of ECG, EMG or EEG signals; likewise, the literature contains many models for classifying discrete data such as the values obtained from blood analyses or biopsies in medical practice. No single algorithm has achieved a high accuracy rate on both signal and discrete data classification. In this study, a compensatory neurofuzzy network model is presented for the classification of discrete data in the biomedical pattern recognition area. The compensatory neurofuzzy network is a hybrid, binary classifier in which the parameters of the fuzzy systems are updated by the backpropagation algorithm. The realized classifier model was applied to two benchmark datasets (the Wisconsin Breast Cancer dataset and the Pima Indian Diabetes dataset). Experimental studies show that the compensatory neurofuzzy network model achieved a 96.11% accuracy rate in classification of the breast cancer dataset, and a 69.08% accuracy rate was obtained in experiments on the diabetes dataset with only 10 iterations.

  19. Dynamic measurements of CO diffusing capacity using discrete samples of alveolar gas.

    PubMed

    Graham, B L; Mink, J T; Cotton, D J

    1983-01-01

    It has been shown that measurements of the diffusing capacity of the lung for CO made during a slow exhalation [DLCO(exhaled)] yield information about the distribution of the diffusing capacity in the lung that is not available from the commonly measured single-breath diffusing capacity [DLCO(SB)]. Current techniques of measuring DLCO(exhaled) require the use of a rapid-responding (less than 240 ms, 10-90%) CO meter to measure the CO concentration in the exhaled gas continuously during exhalation. DLCO(exhaled) is then calculated using two sample points in the CO signal. Because DLCO(exhaled) calculations are highly affected by small amounts of noise in the CO signal, filtering techniques have been used to reduce noise. However, these techniques reduce the response time of the system and may introduce other errors into the signal. We have developed an alternate technique in which DLCO(exhaled) can be calculated using the concentration of CO in large discrete samples of the exhaled gas, thus eliminating the requirement of a rapid response time in the CO analyzer. We show theoretically that this method is as accurate as other DLCO(exhaled) methods but is less affected by noise. These findings are verified in comparisons of the discrete-sample method of calculating DLCO(exhaled) to point-sample methods in normal subjects, patients with emphysema, and patients with asthma.
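    Assuming the standard Krogh-type exponential disappearance of alveolar CO, F(t) = F(0)·exp(-DL·(Pb-47)·t/VA), a diffusing capacity can be estimated from the CO fractions of two discrete samples. This is a generic textbook form with illustrative numbers, not the authors' exact formulation or correction factors.

```python
import math

def dlco_two_samples(f1, f2, t1, t2, va_ml, pb_mmhg=760.0):
    """Estimate DL (ml/min/mmHg) from CO fractions f1, f2 of two discrete alveolar
    samples taken at times t1, t2 (minutes), given alveolar volume VA (ml STPD),
    assuming Krogh-type exponential CO disappearance."""
    return va_ml * math.log(f1 / f2) / ((pb_mmhg - 47.0) * (t2 - t1))

# Illustrative values only: 5 L alveolar volume, CO fraction falling over 10 s
print(dlco_two_samples(0.0010, 0.0007, 0.0, 0.1667, 5000.0))
```

    Using large discrete samples rather than a continuous CO trace, as the abstract argues, makes the two fractions f1 and f2 far less sensitive to analyzer noise and response time.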

  20. SEM evaluation of metallization on semiconductors. [Scanning Electron Microscope

    NASA Technical Reports Server (NTRS)

    Fresh, D. L.; Adolphsen, J. W.

    1974-01-01

    A test method for the evaluation of metallization on semiconductors is presented and discussed. The method has been prepared in MIL-STD format for submittal as a proposed addition to MIL-STD-883. It is applicable to discrete devices and to integrated circuits and specifically addresses batch-process oriented defects. Quantitative accept/reject criteria are given for contact windows, other oxide steps, and general interconnecting metallization. Figures are provided that illustrate typical types of defects. Apparatus specifications, sampling plans, and specimen preparation and examination requirements are described. Procedures for glassivated devices and for multi-metal interconnection systems are included.

  1. Aircraft digital control design methods

    NASA Technical Reports Server (NTRS)

    Powell, J. D.; Parsons, E.; Tashker, M. G.

    1976-01-01

    Variations in design methods for aircraft digital flight control are evaluated and compared. The methods fall into two categories: those where the design is done in the continuous domain (or s plane) and those where the design is done in the discrete domain (or z plane). Design method fidelity is evaluated by examining closed-loop root movement and the frequency response of the discretely controlled continuous aircraft. It was found that all methods provided acceptable performance for sample rates greater than 10 cps, except the uncompensated s-plane design method, which was acceptable above 20 cps. A design procedure based on optimal control methods was proposed that provided the best fidelity at very slow sample rates and required no design iterations for changing sample rates.
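
    The fidelity comparison rests on how continuous-domain poles map into the discrete domain at a given sample rate, via z = exp(sT). A brief sketch with a hypothetical pole pair:

```python
import cmath
import math

def discrete_pole(s_pole, T):
    """Map a continuous-domain (s-plane) pole to the discrete domain
    (z-plane) for sample period T via z = exp(s*T)."""
    return cmath.exp(s_pole * T)

# Hypothetical damped oscillatory aircraft mode at s = -2 + 4j rad/s.
s = complex(-2.0, 4.0)

# Radius of the discrete pole at several sample rates (samples/s);
# slower sampling moves |z| farther from 1, shifting the closed-loop roots.
radius = {rate: abs(discrete_pole(s, 1.0 / rate)) for rate in (5.0, 10.0, 20.0)}
```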

  2. A Flexible Approach for Assessing Functional Landscape Connectivity, with Application to Greater Sage-Grouse (Centrocercus urophasianus)

    PubMed Central

    Harju, Seth M.; Olson, Chad V.; Dzialak, Matthew R.; Mudd, James P.; Winstead, Jeff B.

    2013-01-01

    Connectivity of animal populations is an increasingly prominent concern in fragmented landscapes, yet existing methodological and conceptual approaches implicitly assume the presence of, or need for, discrete corridors. We tested this assumption by developing a flexible conceptual approach that does not assume, but allows for, the presence of discrete movement corridors. We quantified functional connectivity habitat for greater sage-grouse (Centrocercus urophasianus) across a large landscape in central western North America. We assigned sample locations to a movement state (encamped, traveling and relocating), and used Global Positioning System (GPS) location data and conditional logistic regression to estimate state-specific resource selection functions. Patterns of resource selection during different movement states reflected selection for sagebrush and general avoidance of rough topography and anthropogenic features. Distinct connectivity corridors were not common in the 5,625 km2 study area. Rather, broad areas functioned as generally high or low quality connectivity habitat. A comprehensive map predicting the quality of connectivity habitat across the study area validated well based on a set of GPS locations from independent greater sage-grouse. The functional relationship between greater sage-grouse and the landscape did not always conform to the idea of a discrete corridor. A more flexible consideration of landscape connectivity may improve the efficacy of management actions by aligning those actions with the spatial patterns by which animals interact with the landscape. PMID:24349241

  3. A flexible approach for assessing functional landscape connectivity, with application to greater sage-grouse (Centrocercus urophasianus).

    PubMed

    Harju, Seth M; Olson, Chad V; Dzialak, Matthew R; Mudd, James P; Winstead, Jeff B

    2013-01-01

    Connectivity of animal populations is an increasingly prominent concern in fragmented landscapes, yet existing methodological and conceptual approaches implicitly assume the presence of, or need for, discrete corridors. We tested this assumption by developing a flexible conceptual approach that does not assume, but allows for, the presence of discrete movement corridors. We quantified functional connectivity habitat for greater sage-grouse (Centrocercus urophasianus) across a large landscape in central western North America. We assigned sample locations to a movement state (encamped, traveling and relocating), and used Global Positioning System (GPS) location data and conditional logistic regression to estimate state-specific resource selection functions. Patterns of resource selection during different movement states reflected selection for sagebrush and general avoidance of rough topography and anthropogenic features. Distinct connectivity corridors were not common in the 5,625 km(2) study area. Rather, broad areas functioned as generally high or low quality connectivity habitat. A comprehensive map predicting the quality of connectivity habitat across the study area validated well based on a set of GPS locations from independent greater sage-grouse. The functional relationship between greater sage-grouse and the landscape did not always conform to the idea of a discrete corridor. A more flexible consideration of landscape connectivity may improve the efficacy of management actions by aligning those actions with the spatial patterns by which animals interact with the landscape.

  4. An enhanced cluster analysis program with bootstrap significance testing for ecological community analysis

    USGS Publications Warehouse

    McKenna, J.E.

    2003-01-01

    The biosphere is filled with complex living patterns and important questions about biodiversity and community and ecosystem ecology are concerned with structure and function of multispecies systems that are responsible for those patterns. Cluster analysis identifies discrete groups within multivariate data and is an effective method of coping with these complexities, but often suffers from subjective identification of groups. The bootstrap testing method greatly improves objective significance determination for cluster analysis. The BOOTCLUS program makes cluster analysis that reliably identifies real patterns within a data set more accessible and easier to use than previously available programs. A variety of analysis options and rapid re-analysis provide a means to quickly evaluate several aspects of a data set. Interpretation is influenced by sampling design and a priori designation of samples into replicate groups, and ultimately relies on the researcher's knowledge of the organisms and their environment. However, the BOOTCLUS program provides reliable, objectively determined groupings of multivariate data.
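
    The bootstrap significance idea behind BOOTCLUS can be illustrated in miniature: compare an observed cluster-separation statistic against its distribution under resampling of the pooled data. The statistic and data below are simplified stand-ins for the program's actual procedure:

```python
import random
import statistics

random.seed(1)

def separation(a, b):
    """Toy cluster-separation statistic: absolute difference of group means."""
    return abs(statistics.mean(a) - statistics.mean(b))

# Hypothetical univariate community scores for two candidate clusters.
g1 = [random.gauss(10.0, 1.0) for _ in range(20)]
g2 = [random.gauss(14.0, 1.0) for _ in range(20)]
observed = separation(g1, g2)

# Bootstrap null distribution: resample the pooled data with replacement,
# split at random, and ask how often chance produces separation this large.
pooled = g1 + g2
B, exceed = 999, 0
for _ in range(B):
    boot = [random.choice(pooled) for _ in range(len(pooled))]
    random.shuffle(boot)
    if separation(boot[:20], boot[20:]) >= observed:
        exceed += 1
p_value = (exceed + 1) / (B + 1)   # small p => the groups are real clusters
```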

  5. Variational Algorithms for Test Particle Trajectories

    NASA Astrophysics Data System (ADS)

    Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.

    2015-11-01

    The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.
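
    The benefit of retaining conservation laws in the numerical time advance can be seen with the simplest symplectic scheme, Störmer-Verlet, on a harmonic test system; a non-symplectic explicit Euler step is included for contrast. This is an illustrative sketch, not the guiding-center integrators of the abstract:

```python
# Harmonic test system: H = (p**2 + q**2) / 2, force f(q) = -q.
def energy(q, p):
    return 0.5 * (q * q + p * p)

def verlet_step(q, p, dt):
    """One step of the symplectic Stormer-Verlet scheme."""
    p_half = p - 0.5 * dt * q
    q_new = q + dt * p_half
    p_new = p_half - 0.5 * dt * q_new
    return q_new, p_new

def euler_step(q, p, dt):
    """One step of non-symplectic explicit Euler, for contrast."""
    return q + dt * p, p - dt * q

dt, n = 0.1, 10000
qv, pv = 1.0, 0.0          # Verlet trajectory
qe, pe = 1.0, 0.0          # Euler trajectory
for _ in range(n):
    qv, pv = verlet_step(qv, pv, dt)
    qe, pe = euler_step(qe, pe, dt)

# The symplectic method keeps the energy error bounded over long times,
# capturing the correct qualitative long-time behavior; the
# non-symplectic one drifts without bound.
drift_verlet = abs(energy(qv, pv) - 0.5)
drift_euler = abs(energy(qe, pe) - 0.5)
```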

  6. Research and implementation of simulation for TDICCD remote sensing in vibration of optical axis

    NASA Astrophysics Data System (ADS)

    Liu, Zhi-hong; Kang, Xiao-jun; Lin, Zhe; Song, Li

    2013-12-01

    During the exposure time of a space-borne TDICCD push-broom camera, the charge-transfer speed in the push-broom direction and the line-by-line scanning speed of the sensor must match each other strictly. However, because satellite attitude disturbance and camera vibration are inevitable, the speed mismatch cannot be eliminated; signals from different targets overlay each other and image resolution declines. Simulating the image-quality degradation caused by vibration of the optical axis allows the effects of velocity mismatch to be observed and analyzed visually, which is significant for image-quality evaluation and for the design of image-restoration algorithms. The first problem to be solved is how to model the imaging process in the time and space domains. The vibration information used for simulation is usually given as a continuous curve, whereas the pixels of the original image matrix and the sensor matrix are discrete, so the two cannot always be matched exactly; discrete sampling over the integration time also influences the simulation. An appropriate discrete modeling and simulation method is therefore essential for improving simulation accuracy and efficiency. This paper analyzes discretization schemes in the time and space domains and, based on the principle of the TDICCD sensor, presents a method for simulating the image quality of the optical system under vibration of the line of sight. The gray value of each pixel in the sensor matrix is obtained by a weighted arithmetic that solves the pixel-mismatch problem. Comparison with hardware test results indicates that the simulation system performs well in accuracy and reliability.

  7. Alcohol impairment of performance on steering and discrete tasks in a driving simulator

    DOT National Transportation Integrated Search

    1974-12-01

    In this program a simplified laboratory simulator was developed to test two types of tasks used in driving on the open road: a continuous "steering task" to regulate against gust induced disturbances and an intermittent "discrete response task" requi...

  8. Method and apparatus for generating motor current spectra to enhance motor system fault detection

    DOEpatents

    Linehan, D.J.; Bunch, S.L.; Lyster, C.T.

    1995-10-24

    A method and circuitry are disclosed for sampling periodic amplitude modulations in a nonstationary periodic carrier wave to determine frequencies in the amplitude modulations. The method and circuit are described in terms of an improved motor current signature analysis. The method insures that the sampled data set contains an exact whole number of carrier wave cycles by defining the rate at which samples of motor current data are collected. The circuitry insures that a sampled data set containing stationary carrier waves is recreated from the analog motor current signal containing nonstationary carrier waves by conditioning the actual sampling rate to adjust with the frequency variations in the carrier wave. After the sampled data is transformed to the frequency domain via the Discrete Fourier Transform, the frequency distribution in the discrete spectra of those components due to the carrier wave and its harmonics will be minimized so that signals of interest are more easily analyzed. 29 figs.
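
    The key idea, sampling an exact whole number of carrier cycles so that carrier energy stays in a single DFT bin, can be demonstrated directly. This is a generic spectral-leakage sketch, not the patented circuitry:

```python
import cmath
import math

def dft_mag(x):
    """Magnitudes of the Discrete Fourier Transform of a real sequence."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N)]

N = 64

def carrier(cycles):
    """A sinusoidal stand-in for the motor current carrier, with the
    given number of cycles in the N-sample record."""
    return [math.sin(2 * math.pi * cycles * n / N) for n in range(N)]

# Exactly 8 cycles in the record: energy lands in a single bin pair.
exact = dft_mag(carrier(8))
# 8.5 cycles (non-integer): energy smears across neighboring bins,
# obscuring the small amplitude-modulation sidebands of interest.
off = dft_mag(carrier(8.5))

leak_exact = sum(m for k, m in enumerate(exact) if k not in (8, N - 8))
leak_off = sum(m for k, m in enumerate(off) if k not in (8, 9, N - 8, N - 9))
```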

  9. Implementation and testing of the on-the-fly thermal scattering Monte Carlo sampling method for graphite and light water in MCNP6

    DOE PAGES

    Pavlou, Andrew T.; Ji, Wei; Brown, Forrest B.

    2016-01-23

    Here, a proper treatment of thermal neutron scattering requires accounting for chemical binding through a scattering law S(α,β,T). Monte Carlo codes sample the secondary neutron energy and angle after a thermal scattering event from probability tables generated from S(α,β,T) tables at discrete temperatures, requiring a large amount of data for multiscale and multiphysics problems with detailed temperature gradients. We have previously developed a method to handle this temperature dependence on-the-fly during the Monte Carlo random walk using polynomial expansions in 1/T to directly sample the secondary energy and angle. In this paper, the on-the-fly method is implemented into MCNP6 and tested in both graphite-moderated and light water-moderated systems. The on-the-fly method is compared with the thermal ACE libraries that come standard with MCNP6, yielding good agreement with integral reactor quantities like k-eigenvalue and differential quantities like single-scatter secondary energy and angle distributions. The simulation runtimes are comparable between the two methods (on the order of 5–15% difference for the problems tested) and the on-the-fly fit coefficients only require 5–15 MB of total data storage.
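
    The interpolation step, expanding a tabulated quantity in powers of 1/T and evaluating it at an off-grid temperature during the random walk, can be sketched with a polynomial through hypothetical table entries. The real method fits coefficients to full S(α,β,T)-derived probability tables; the values below are illustrative only:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical tabulated quantity at discrete temperatures (K), standing
# in for probability-table entries derived from S(alpha, beta, T).
temps = [293.6, 400.0, 600.0]
values = [1.20, 0.95, 0.70]

# Expand in powers of 1/T and evaluate at an off-grid temperature,
# as done on-the-fly during the Monte Carlo random walk.
inv_t = [1.0 / t for t in temps]
val_500 = lagrange_eval(inv_t, values, 1.0 / 500.0)
```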

  10. Phase computations and phase models for discrete molecular oscillators.

    PubMed

    Suvak, Onder; Demir, Alper

    2012-06-11

    Biochemical oscillators perform crucial functions in cells, e.g., they set up circadian clocks. The dynamical behavior of oscillators is best described and analyzed in terms of the scalar quantity, phase. A rigorous and useful definition for phase is based on the so-called isochrons of oscillators. Phase computation techniques for continuous oscillators that are based on isochrons have been used for characterizing the behavior of various types of oscillators under the influence of perturbations such as noise. In this article, we extend the applicability of these phase computation methods to biochemical oscillators as discrete molecular systems, upon the information obtained from a continuous-state approximation of such oscillators. In particular, we describe techniques for computing the instantaneous phase of discrete, molecular oscillators for stochastic simulation algorithm generated sample paths. We comment on the accuracies and derive certain measures for assessing the feasibilities of the proposed phase computation methods. Phase computation experiments on the sample paths of well-known biological oscillators validate our analyses. The impact of noise that arises from the discrete and random nature of the mechanisms that make up molecular oscillators can be characterized based on the phase computation techniques proposed in this article. The concept of isochrons is the natural choice upon which the phase notion of oscillators can be founded. The isochron-theoretic phase computation methods that we propose can be applied to discrete molecular oscillators of any dimension, provided that the oscillatory behavior observed in discrete-state does not vanish in a continuous-state approximation. Analysis of the full versatility of phase noise phenomena in molecular oscillators will be possible if a proper phase model theory is developed, without resorting to such approximations.
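
    As a crude stand-in for isochron-based phase computation, the angle around a limit cycle's center already defines a usable phase for a circular, uniformly rotating oscillator; for general limit cycles the isochron-theoretic phase differs. The sample path below is a hypothetical noisy rotation, not a stochastic-simulation-algorithm trajectory:

```python
import math
import random

random.seed(2)

def phase(x, y, cx=0.0, cy=0.0):
    """Angle around the limit cycle's center, in [0, 2*pi)."""
    return math.atan2(y - cy, x - cx) % (2 * math.pi)

# Hypothetical stochastic sample path: uniform rotation plus small noise,
# a stand-in for a discrete molecular oscillator's trajectory.
omega, dt = 2 * math.pi, 0.001        # one cycle per time unit
path = []
for n in range(1000):
    theta = omega * n * dt
    path.append((math.cos(theta) + random.gauss(0.0, 0.01),
                 math.sin(theta) + random.gauss(0.0, 0.01)))

est = [phase(x, y) for x, y in path]
# The unwrapped phase should advance by about 2*pi over one full cycle.
advance = sum(((b - a + math.pi) % (2 * math.pi)) - math.pi
              for a, b in zip(est, est[1:]))
```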

  11. Phase computations and phase models for discrete molecular oscillators

    PubMed Central

    2012-01-01

    Background Biochemical oscillators perform crucial functions in cells, e.g., they set up circadian clocks. The dynamical behavior of oscillators is best described and analyzed in terms of the scalar quantity, phase. A rigorous and useful definition for phase is based on the so-called isochrons of oscillators. Phase computation techniques for continuous oscillators that are based on isochrons have been used for characterizing the behavior of various types of oscillators under the influence of perturbations such as noise. Results In this article, we extend the applicability of these phase computation methods to biochemical oscillators as discrete molecular systems, upon the information obtained from a continuous-state approximation of such oscillators. In particular, we describe techniques for computing the instantaneous phase of discrete, molecular oscillators for stochastic simulation algorithm generated sample paths. We comment on the accuracies and derive certain measures for assessing the feasibilities of the proposed phase computation methods. Phase computation experiments on the sample paths of well-known biological oscillators validate our analyses. Conclusions The impact of noise that arises from the discrete and random nature of the mechanisms that make up molecular oscillators can be characterized based on the phase computation techniques proposed in this article. The concept of isochrons is the natural choice upon which the phase notion of oscillators can be founded. The isochron-theoretic phase computation methods that we propose can be applied to discrete molecular oscillators of any dimension, provided that the oscillatory behavior observed in discrete-state does not vanish in a continuous-state approximation. Analysis of the full versatility of phase noise phenomena in molecular oscillators will be possible if a proper phase model theory is developed, without resorting to such approximations. PMID:22687330

  12. Evaluation of borehole geophysical logging, aquifer-isolation tests, distribution of contaminants, and water-level measurements at the North Penn Area 5 Superfund Site, Bucks and Montgomery counties, Pennsylvania

    USGS Publications Warehouse

    Bird, Philip H.; Conger, Randall W.

    2002-01-01

    Borehole geophysical logging and aquifer-isolation (packer) tests were conducted at the North Penn Area 5 Superfund site in Bucks and Montgomery Counties, Pa. Caliper, natural-gamma, single-point-resistance, fluid-temperature, fluid-resistivity, heatpulse-flowmeter, and digital acoustic-televiewer logs and borehole television surveys were collected in 32 new and previously drilled wells that ranged in depth from 68 to 302 feet. Vertical borehole-fluid movement direction and rate were measured with a high-resolution heatpulse flowmeter under nonpumping conditions. The suite of logs was used to locate water-bearing fractures, determine zones of vertical borehole-fluid movement, select depths to set packers, and locate appropriate screen intervals for reconstructing new wells as monitoring wells. Aquifer-isolation tests were conducted in four wells to sample discrete intervals and to determine specific capacities of discrete water-bearing zones. Specific capacities of isolated zones during packer testing ranged from 0.12 to 15.30 gallons per minute per foot. Most fractures identified by borehole geophysical methods as water-producing or water-receiving zones produced water when isolated and pumped. The acoustic-televiewer logs define two basic fracture sets: bedding-plane partings with a mean strike of N. 62° E. and a mean dip of 27° NW., and high-angle fractures with a mean strike of N. 58° E. and a mean dip of 72° SE. Correlation of heatpulse-flowmeter data and acoustic-televiewer logs showed 83 percent of identified water-bearing fractures were high-angle fractures.

  13. Corrective Action Investigation Plan for Corrective Action Unit 428: Area 3 Septic Waste Systems 1 and 5, Tonopah Test Range, Nevada, REVISION 0, march 1999

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ITLV.

    1999-03-01

    The Corrective Action Investigation Plan for Corrective Action Unit 428, Area 3 Septic Waste Systems 1 and 5, has been developed in accordance with the Federal Facility Agreement and Consent Order that was agreed to by the U.S. Department of Energy, Nevada Operations Office; the State of Nevada Division of Environmental Protection; and the U.S. Department of Defense. Corrective Action Unit 428 consists of Corrective Action Sites 03-05-002-SW01 and 03-05-002-SW05, respectively known as Area 3 Septic Waste System 1 and Septic Waste System 5. This Corrective Action Investigation Plan is used in combination with the Work Plan for Leachfield Corrective Action Units: Nevada Test Site and Tonopah Test Range, Nevada, Rev. 1 (DOE/NV, 1998c). The Leachfield Work Plan was developed to streamline investigations at leachfield Corrective Action Units by incorporating management, technical, quality assurance, health and safety, public involvement, field sampling, and waste management information common to a set of Corrective Action Units with similar site histories and characteristics into a single document that can be referenced. This Corrective Action Investigation Plan provides investigative details specific to Corrective Action Unit 428. A system of leachfields and associated collection systems was used for wastewater disposal at Area 3 of the Tonopah Test Range until a consolidated sewer system was installed in 1990 to replace the discrete septic waste systems. Operations within various buildings at Area 3 generated sanitary and industrial wastewaters potentially contaminated with contaminants of potential concern and disposed of in septic tanks and leachfields. Corrective Action Unit 428 is composed of two leachfield systems in the northern portion of Area 3.
    Based on site history collected to support the Data Quality Objectives process, contaminants of potential concern for the site include oil/diesel-range total petroleum hydrocarbons and Resource Conservation and Recovery Act characteristic volatile organic compounds, semivolatile organic compounds, and metals. A limited number of samples will be analyzed for gamma-emitting radionuclides and isotopic uranium from four of the septic tanks and if radiological field-screening levels are exceeded. Additional samples will be analyzed for geotechnical and hydrological properties, and a bioassessment may be performed. The technical approach for investigating this Corrective Action Unit consists of the following activities: perform video surveys of the discharge and outfall lines; collect samples of material in the septic tanks; conduct exploratory trenching to locate and inspect subsurface components; collect subsurface soil samples in areas of the collection system, including the septic tanks and the outfall end of distribution boxes; collect subsurface soil samples underlying the leachfield distribution pipes via trenching; collect surface and near-surface samples near potential locations of the Acid Sewer Outfall if the Septic Waste System 5 Leachfield cannot be located; field screen samples for volatile organic compounds, total petroleum hydrocarbons, and radiological activity; drill boreholes and collect subsurface soil samples if required; analyze samples for total volatile organic compounds, total semivolatile organic compounds, total Resource Conservation and Recovery Act metals, and total petroleum hydrocarbons (oil/diesel-range organics); analyze a limited number of samples from particular septic tanks for gamma-emitting radionuclides and isotopic uranium, and additional samples if radiological field-screening levels are exceeded; collect samples from native soils beneath the distribution system and analyze them for geotechnical/hydrologic parameters; and collect and analyze bioassessment samples at the discretion of the Site Supervisor if total petroleum hydrocarbons exceed field-screening levels.

  14. Clustering and variable selection in the presence of mixed variable types and missing data.

    PubMed

    Storlie, C B; Myers, S M; Katusic, S K; Weaver, A L; Voigt, R G; Croarkin, P E; Stoeckel, R E; Port, J D

    2018-05-17

    We consider the problem of model-based clustering in the presence of many correlated, mixed continuous, and discrete variables, some of which may have missing values. Discrete variables are treated with a latent continuous variable approach, and the Dirichlet process is used to construct a mixture model with an unknown number of components. Variable selection is also performed to identify the variables that are most influential for determining cluster membership. The work is motivated by the need to cluster patients thought to potentially have autism spectrum disorder on the basis of many cognitive and/or behavioral test scores. There are a modest number of patients (486) in the data set along with many (55) test score variables (many of which are discrete valued and/or missing). The goal of the work is to (1) cluster these patients into similar groups to help identify those with similar clinical presentation and (2) identify a sparse subset of tests that inform the clusters in order to eliminate unnecessary testing. The proposed approach compares very favorably with other methods via simulation of problems of this type. The results of the autism spectrum disorder analysis suggested 3 clusters to be most likely, while only 4 test scores had high (>0.5) posterior probability of being informative. This will result in much more efficient and informative testing. The need to cluster observations on the basis of many correlated, continuous/discrete variables with missing values is a common problem in the health sciences as well as in many other disciplines. Copyright © 2018 John Wiley & Sons, Ltd.
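
    The Dirichlet-process prior over an unknown number of clusters can be sampled through its predictive form, the Chinese restaurant process. A minimal sketch of that one ingredient; the paper's full model additionally handles mixed variable types, variable selection, and missing data:

```python
import random

random.seed(3)

def crp_partition(n, alpha):
    """Draw a partition of n items from the Chinese restaurant process,
    the predictive form of a Dirichlet-process mixture with an unknown
    number of components (concentration parameter alpha)."""
    counts = []                    # items per occupied cluster
    assign = []
    for i in range(n):
        # P(join cluster k) is proportional to counts[k];
        # P(open a new cluster) is proportional to alpha.
        r = random.uniform(0.0, i + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[k] += 1
                assign.append(k)
                break
        else:
            assign.append(len(counts))
            counts.append(1)
    return assign, counts

# 486 "patients", matching the cohort size in the abstract.
assign, counts = crp_partition(486, alpha=1.0)
```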

  15. Mutual Information between Discrete Variables with Many Categories using Recursive Adaptive Partitioning

    PubMed Central

    Seok, Junhee; Seon Kang, Yeong

    2015-01-01

    Mutual information, a general measure of the relatedness between two random variables, has been actively used in the analysis of biomedical data. The mutual information between two discrete variables is conventionally calculated from their joint probabilities estimated from the frequency of observed samples in each combination of variable categories. However, this conventional approach is no longer efficient for discrete variables with many categories, which can easily be found in large-scale biomedical data such as diagnosis codes, drug compounds, and genotypes. Here, we propose a method to provide stable estimations for the mutual information between discrete variables with many categories. Simulation studies showed that the proposed method reduced the estimation errors 45-fold and improved the correlation coefficients with true values 99-fold, compared with the conventional calculation of mutual information. The proposed method was also demonstrated through a case study for diagnostic data in electronic health records. This method is expected to be useful in the analysis of various biomedical data with discrete variables. PMID:26046461
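
    The conventional plug-in calculation that the proposed method improves upon estimates joint probabilities from observed frequencies. A minimal version, applied to two exactly dependent and two exactly independent toy variables:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in (frequency-based) mutual information estimate in bits,
    the conventional approach described in the abstract."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

# Perfectly dependent variables: MI equals the entropy, log2(4) = 2 bits.
xs = [0, 1, 2, 3] * 100
mi_dep = mutual_information(xs, xs)

# Exactly independent variables: MI is 0 here; with finite samples and
# many categories the plug-in estimate becomes biased upward, which is
# the regime the proposed method targets.
ys = [0] * 200 + [1] * 200
mi_ind = mutual_information(xs, ys)
```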

  16. Finite Elements Analysis of a Composite Semi-Span Test Article With and Without Discrete Damage

    NASA Technical Reports Server (NTRS)

    Lovejoy, Andrew E.; Jegley, Dawn C. (Technical Monitor)

    2000-01-01

    AS&M Inc. performed finite element analysis, with and without discrete damage, of a composite semi-span test article that represents the Boeing 220-passenger transport aircraft composite semi-span test article. A NASTRAN bulk data file and drawings of the test mount fixtures and semi-span components were utilized to generate the baseline finite element model. In this model, the stringer blades are represented by shell elements, and the stringer flanges are combined with the skin. Numerous modeling modifications and discrete-source damage scenarios were applied to the test article model throughout the course of the study. This report details the analysis method and results obtained from the composite semi-span study. Analyses were carried out for three load cases: Braked Roll, 1.0G Down-Bending, and 2.5G Up-Bending. These analyses included linear and nonlinear static response, as well as linear and nonlinear buckling response. Results are presented in the form of stress and strain plots, factors of safety for failed elements, buckling loads and modes, deflection prediction tables and plots, and strain gage prediction tables and plots. The collected results are presented within this report for comparison to test results.

  17. Test Operations Procedure (TOP) 02-2-546 Teleoperated Unmanned Ground Vehicle (UGV) Latency Measurements

    DTIC Science & Technology

    2017-01-11

    discrete system components or measurements of latency in autonomous systems. Subject terms: Unmanned Ground Vehicles, Basic Video Latency, End-to... 1.1 Basic Video Latency. Teleoperation latency, or lag, describes

  18. SIGMA--A Graphical Approach to Teaching Simulation.

    ERIC Educational Resources Information Center

    Schruben, Lee W.

    1992-01-01

    SIGMA (Simulation Graphical Modeling and Analysis) is a computer graphics environment for building, testing, and experimenting with discrete event simulation models on personal computers. It uses symbolic representations (computer animation) to depict the logic of large, complex discrete event systems for easier understanding and has proven itself…

  19. Multilevel discretized random field models with 'spin' correlations for the simulation of environmental spatial data

    NASA Astrophysics Data System (ADS)

    Žukovič, Milan; Hristopulos, Dionissios T.

    2009-02-01

    A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). 
We discuss the impact of relevant simulation parameters, such as the domain size, the number of discretization levels, and the initial conditions.
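
    The conditional-simulation idea, Metropolis updates of binary 'spins' that leave the sample values fixed, can be sketched on a small grid. The clamped sites, interaction strength, and grid size below are illustrative, and the sketch omits the paper's cost-function constraint on the sample statistics:

```python
import math
import random

random.seed(4)

L = 16
# Hypothetical binary (+/-1) discretization of an environmental variable
# on a regular grid; a few "sample" sites have known, fixed values.
samples = {(2, 3): 1, (8, 8): 1, (12, 4): -1, (5, 12): -1}
grid = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
for (i, j), s in samples.items():
    grid[i][j] = s

def neighbors(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j),
            (i, (j + 1) % L), (i, (j - 1) % L)]

def corr_energy():
    """Normalized nearest-neighbor correlation (ferromagnetic Ising)."""
    e = sum(grid[i][j] * grid[x][y]
            for i in range(L) for j in range(L) for x, y in neighbors(i, j))
    return e / (4 * L * L)

beta = 1.0                         # illustrative interaction strength
e_before = corr_energy()
for _ in range(20000):             # Metropolis single-spin updates
    i, j = random.randrange(L), random.randrange(L)
    if (i, j) in samples:
        continue                   # conditional simulation: respect samples
    dE = 2 * grid[i][j] * sum(grid[x][y] for x, y in neighbors(i, j))
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        grid[i][j] = -grid[i][j]
e_after = corr_energy()
```

    The short-range interactions drive up the nearest-neighbor correlation while the clamped sample sites stay at their observed values.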

  20. Applications of QCL mid-IR imaging to the advancement of pathology

    NASA Astrophysics Data System (ADS)

    Sreedhar, Hari; Varma, Vishal K.; Bird, Benjamin; Guzman, Grace; Walsh, Michael J.

    2017-03-01

    Quantum Cascade Laser (QCL) spectroscopic imaging is a novel technique with many potential applications to histopathology. Like traditional Fourier Transform Infrared (FT-IR) imaging, QCL spectroscopic imaging derives biochemical data coupled to the spatial information of a tissue sample, and can be used to improve the diagnostic and prognostic value of assessment of a tissue biopsy. This technique also offers advantages over traditional FT-IR imaging, specifically the capacity for discrete frequency and real-time imaging. In this work we present applications of QCL spectroscopic imaging to tissue samples, including discrete frequency imaging, to compare with FT-IR and its potential value to pathology.

  1. Mathematical construction and perturbation analysis of Zernike discrete orthogonal points.

    PubMed

    Shi, Zhenguang; Sui, Yongxin; Liu, Zhenyu; Peng, Ji; Yang, Huaijiang

    2012-06-20

Zernike functions are orthogonal within the unit circle, but not over discrete point sets such as CCD arrays or finite element grids; this loss of orthogonality results in reconstruction errors. By using the roots of Legendre polynomials, a set of points within the unit circle can be constructed over which the Zernike functions are discretely orthogonal. In addition, the location tolerances of the points are studied by perturbation analysis, showing that the positioning-precision requirements are not very strict. Computer simulations show that this approach provides a very accurate wavefront reconstruction with the proposed sampling set.
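The role of the Legendre roots can be illustrated with a small sketch: mapping Gauss-Legendre nodes to radii via u = 2r^2 - 1 makes the radial Zernike polynomials discretely orthogonal under the quadrature weights (an illustrative construction consistent with, but not taken verbatim from, the paper; the node count and the two test polynomials are arbitrary choices):

```python
import numpy as np

# Gauss-Legendre nodes/weights on [-1, 1], mapped to radii in (0, 1)
u, w = np.polynomial.legendre.leggauss(5)
r = np.sqrt((u + 1) / 2)                 # inverse of u = 2r^2 - 1

R2 = 2 * r**2 - 1                        # Zernike radial polynomial R_2^0
R4 = 6 * r**4 - 6 * r**2 + 1             # Zernike radial polynomial R_4^0

# With this substitution, integral_0^1 f(r) r dr = (1/4) sum_k w_k f(r_k),
# exact here because both integrands are low-order polynomials in r^2.
cross = 0.25 * np.sum(w * R2 * R4)       # discrete orthogonality: ~0
norm2 = 0.25 * np.sum(w * R2 * R2)       # ~1/(2(n+1)) = 1/6 for n = 2
```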

  2. Uncertainties in stormwater runoff data collection from a small urban catchment, Southeast China.

    PubMed

    Huang, Jinliang; Tu, Zhenshun; Du, Pengfei; Lin, Jie; Li, Qingsheng

    2010-01-01

Monitoring data are often used to identify stormwater runoff characteristics and in stormwater runoff modelling without consideration of their inherent uncertainties. Integrating discrete sample analysis with error propagation analysis, this study attempted to quantify the uncertainties of discrete chemical oxygen demand (COD), total suspended solids (TSS) concentration, stormwater flowrate, stormwater event volumes, COD event mean concentration (EMC), and COD event loads in terms of flow measurement, sample collection, storage, and laboratory analysis. The results showed that the uncertainties due to sample collection, storage, and laboratory analysis of COD from stormwater runoff were 13.99%, 19.48%, and 12.28%, respectively. Meanwhile, the flow measurement uncertainty was 12.82%, and the sample collection uncertainty of TSS from stormwater runoff was 31.63%. Based on the law of propagation of uncertainties, the uncertainties of event flow volume, COD EMC, and COD event loads were quantified as 7.03%, 10.26%, and 18.47%, respectively.
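For independent relative uncertainties, the law of propagation of uncertainty reduces to a root-sum-square combination; a minimal sketch (how the individual percentages combine depends on the measurement equation, so the function below is illustrative, not a reproduction of the paper's calculation):

```python
import math

def combined_relative_uncertainty(*components_pct):
    """Root-sum-square combination of independent relative uncertainties (%),
    valid for product/quotient measurement models."""
    return math.sqrt(sum(c * c for c in components_pct))

# e.g. combining the COD collection, storage, and analysis components
u_cod = combined_relative_uncertainty(13.99, 19.48, 12.28)  # ~26.9%
```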

  3. Serving Real-Time Point Observation Data in netCDF using Climate and Forecasting Discrete Sampling Geometry Conventions

    NASA Astrophysics Data System (ADS)

    Ward-Garrison, C.; May, R.; Davis, E.; Arms, S. C.

    2016-12-01

NetCDF is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. The Climate and Forecasting (CF) metadata conventions for netCDF foster the ability to work with netCDF files in general and useful ways. These conventions include metadata attributes for physical units, standard names, and spatial coordinate systems. While these conventions have eased working with netCDF-formatted output from climate and forecast models, their use for point-based observation data has been less successful. Unidata has prototyped using the discrete sampling geometry (DSG) CF conventions to serve, via the THREDDS Data Server, the real-time point observation data flowing across the Internet Data Distribution (IDD). These data originate as text-format reports for individual stations (e.g., METAR surface data or TEMP upper-air data) and are converted and stored in netCDF files in real time. This work discusses the experiences and challenges of using the current CF DSG conventions for storing such real-time data. We also test how parts of netCDF's extended data model can address these challenges, in order to inform decisions for a future version of CF (CF 2.0) that would take advantage of features of the netCDF enhanced data model.
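A minimal CF DSG file for station time series might look like the following CDL sketch (an indexed ragged-array layout per the CF-1.x DSG chapter; the variable names and sizes are assumptions, not Unidata's actual schema):

```
netcdf metar_obs {
dimensions:
  station = 2 ;
  obs = UNLIMITED ;
  name_strlen = 16 ;
variables:
  char station_name(station, name_strlen) ;
    station_name:cf_role = "timeseries_id" ;
  float lat(station) ;
    lat:units = "degrees_north" ;
  float lon(station) ;
    lon:units = "degrees_east" ;
  double time(obs) ;
    time:units = "hours since 2016-01-01" ;
  int stationIndex(obs) ;
    stationIndex:instance_dimension = "station" ;  // ragged-array index
  float air_temperature(obs) ;
    air_temperature:standard_name = "air_temperature" ;
    air_temperature:units = "K" ;
    air_temperature:coordinates = "time lat lon" ;

// global attributes:
  :Conventions = "CF-1.7" ;
  :featureType = "timeSeries" ;
}
```

The `featureType` and `cf_role` attributes are what let generic CF clients recognize the file as a collection of station time series.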

  4. Conservative DEC Discretization of Incompressible Navier-Stokes Equations on Arbitrary Surface Simplicial Meshes

    NASA Astrophysics Data System (ADS)

    Mohamed, Mamdouh; Hirani, Anil; Samtaney, Ravi

    2017-11-01

A conservative discretization of incompressible Navier-Stokes equations over surfaces is developed using discrete exterior calculus (DEC). The mimetic character of many of the DEC operators provides exact conservation of both mass and vorticity, in addition to superior kinetic energy conservation. The employment of signed diagonal Hodge star operators, while using the circumcentric dual defined on arbitrary meshes, is shown to produce correct solutions even when many non-Delaunay triangle pairs exist. This allows the DEC discretization to admit arbitrary surface simplicial meshes, in contrast to the previously held notion that DEC was limited only to Delaunay meshes. The discretization scheme is presented along with several numerical test cases demonstrating its numerical convergence and conservation properties. Recent developments regarding the extension to conservative higher-order methods are also presented. Supported by KAUST Baseline Research Funds of R. Samtaney.

  5. Adaptive Discrete Hypergraph Matching.

    PubMed

    Yan, Junchi; Li, Changsheng; Li, Yin; Cao, Guitao

    2018-02-01

This paper addresses the problem of hypergraph matching using higher-order affinity information. We propose a solver that iteratively updates the solution in the discrete domain by linear assignment approximation. The proposed method is guaranteed to converge to a stationary discrete solution and avoids the annealing procedure and ad-hoc post-binarization step that are required in several previous methods. Specifically, we start with a simple iterative discrete gradient assignment solver. Under moderate conditions, this solver can be trapped in a cyclic sequence of solutions whose period equals the order of the graph matching problem. We then devise an adaptive relaxation mechanism to jump out of this degenerate case and show that the resulting new path will converge to a fixed solution in the discrete domain. The proposed method is tested on both synthetic and real-world benchmarks. The experimental results corroborate the efficacy of our method.

  6. On stability of discrete composite systems.

    NASA Technical Reports Server (NTRS)

    Grujic, L. T.; Siljak, D. D.

    1973-01-01

Conditions are developed under which exponential stability of a composite discrete system is implied by exponential stability of its subsystems and the nature of their interactions. Stability of the system is determined by testing the positive definiteness of a real symmetric matrix whose dimension equals the number of subsystems.
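The stability test above reduces to checking positive definiteness of one small aggregate matrix, which is cheap to do via a Cholesky factorization; a sketch (the 3-subsystem matrix `W` is a made-up example, not from the paper):

```python
import numpy as np

def is_positive_definite(M):
    """Test positive definiteness of a real symmetric matrix via Cholesky,
    which succeeds if and only if the matrix is positive definite."""
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

# Hypothetical aggregate matrix for a 3-subsystem composite system
# (diagonally dominant, so the stability test passes)
W = np.array([[ 2.0, -0.5, -0.3],
              [-0.5,  1.5, -0.4],
              [-0.3, -0.4,  1.0]])
```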

  7. Modeling of the WSTF frictional heating apparatus in high pressure systems

    NASA Technical Reports Server (NTRS)

    Skowlund, Christopher T.

    1992-01-01

In order to develop a computer program able to model the frictional heating of metals in high pressure oxygen or nitrogen, a number of additions have been made to the frictional heating model originally developed for tests in low pressure helium. These additions include: (1) a physical property package for the gases to account for departures from the ideal gas state; (2) two methods for spatial discretization (finite differences with quadratic interpolation or orthogonal collocation on finite elements) which substantially reduce the computer time required to solve the transient heat balance; (3) more efficient programs for the integration of the ordinary differential equations resulting from the discretization of the partial differential equations; and (4) two methods for determining the best-fit parameters via minimization of the mean square error (either a direct-search multivariable simplex method or a modified Levenberg-Marquardt algorithm). The resulting computer program has been shown to be accurate, efficient, and robust for determining the heat flux or friction coefficient vs. time at the interface of the stationary and rotating samples.

  8. A low-dispersion, exactly energy-charge-conserving semi-implicit relativistic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

Chen, Guangye; Chacon, Luis; Bird, Robert; Stark, David; Yin, Lin; Albright, Brian

    2017-10-01

Leap-frog based explicit algorithms, either "energy-conserving" or "momentum-conserving", do not conserve energy discretely. Time-centered fully implicit algorithms can conserve discrete energy exactly, but introduce large dispersion errors in the light-wave modes, regardless of timestep size. This can lead to intolerable simulation errors where highly accurate light propagation is needed (e.g., laser-plasma interactions, LPI). In this study, we selectively combine the leap-frog and Crank-Nicolson methods to produce a low-dispersion, exactly energy- and charge-conserving PIC algorithm. Specifically, we employ the leap-frog method for the Maxwell equations and the Crank-Nicolson method for the particle equations. Such an algorithm admits exact global energy conservation, exact local charge conservation, and preserves the dispersion properties of the leap-frog method for the light wave. The algorithm has been implemented in a code named iVPIC, based on the VPIC code developed at LANL. We will present numerical results that demonstrate the properties of the scheme with sample test problems (e.g., a Weibel instability run for 10^7 timesteps, and LPI applications).
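The key property borrowed from Crank-Nicolson can be seen on a linear test oscillator: for x' = v, v' = -x the time-centered update is a Cayley transform of a skew-symmetric matrix, so the discrete energy x^2 + v^2 is conserved to round-off for any time step (an illustration of the conservation mechanism, not the PIC scheme itself):

```python
import numpy as np

dt = 0.5                                    # deliberately large time step
A = np.array([[0.0, 1.0], [-1.0, 0.0]])     # skew-symmetric system matrix
I = np.eye(2)

# Crank-Nicolson update matrix: (I - dt/2 A)^-1 (I + dt/2 A), an orthogonal
# (Cayley) transform, hence exactly energy-conserving for this system
update = np.linalg.solve(I - 0.5 * dt * A, I + 0.5 * dt * A)

state = np.array([1.0, 0.0])
e0 = state @ state                          # initial discrete energy
for _ in range(1000):
    state = update @ state
drift = abs(state @ state - e0)             # round-off only, no secular drift
```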

  9. A Random Forest approach to predict the spatial distribution of sediment pollution in an estuarine system

    PubMed Central

    Kreakie, Betty J.; Cantwell, Mark G.; Nacci, Diane

    2017-01-01

    Modeling the magnitude and distribution of sediment-bound pollutants in estuaries is often limited by incomplete knowledge of the site and inadequate sample density. To address these modeling limitations, a decision-support tool framework was conceived that predicts sediment contamination from the sub-estuary to broader estuary extent. For this study, a Random Forest (RF) model was implemented to predict the distribution of a model contaminant, triclosan (5-chloro-2-(2,4-dichlorophenoxy)phenol) (TCS), in Narragansett Bay, Rhode Island, USA. TCS is an unregulated contaminant used in many personal care products. The RF explanatory variables were associated with TCS transport and fate (proxies) and direct and indirect environmental entry. The continuous RF TCS concentration predictions were discretized into three levels of contamination (low, medium, and high) for three different quantile thresholds. The RF model explained 63% of the variance with a minimum number of variables. Total organic carbon (TOC) (transport and fate proxy) was a strong predictor of TCS contamination causing a mean squared error increase of 59% when compared to permutations of randomized values of TOC. Additionally, combined sewer overflow discharge (environmental entry) and sand (transport and fate proxy) were strong predictors. The discretization models identified a TCS area of greatest concern in the northern reach of Narragansett Bay (Providence River sub-estuary), which was validated with independent test samples. This decision-support tool performed well at the sub-estuary extent and provided the means to identify areas of concern and prioritize bay-wide sampling. PMID:28738089
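The quantile-threshold discretization step can be sketched directly (the tercile thresholds and label names below are assumptions for illustration; the paper evaluated several quantile choices):

```python
import numpy as np

def discretize_by_quantiles(pred, q_low=1 / 3, q_high=2 / 3):
    """Bin continuous contamination predictions into low/medium/high
    classes using quantile thresholds of the prediction distribution."""
    lo, hi = np.quantile(pred, [q_low, q_high])
    return np.where(pred < lo, "low", np.where(pred < hi, "medium", "high"))
```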

  10. Comparison of two Galerkin quadrature methods

    DOE PAGES

    Morel, Jim E.; Warsa, James; Franke, Brian C.; ...

    2017-02-21

Here, we compare two methods for generating Galerkin quadratures. In method 1, the standard S_N method is used to generate the moment-to-discrete matrix, and the discrete-to-moment matrix is generated by inverting the moment-to-discrete matrix. This is a particular form of the original Galerkin quadrature method. In method 2, which we introduce here, the standard S_N method is used to generate the discrete-to-moment matrix, and the moment-to-discrete matrix is generated by inverting the discrete-to-moment matrix. With an N-point quadrature, method 1 has the advantage that it preserves N eigenvalues and N eigenvectors of the scattering operator in a pointwise sense. With an N-point quadrature, method 2 has the advantage that it generates consistent angular moment equations from the corresponding S_N equations while preserving N eigenvalues of the scattering operator. Our computational results indicate that these two methods are quite comparable for the test problem considered.
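Method 2's construction can be sketched in a 1-D analog: build the discrete-to-moment matrix from an S_N Gauss-Legendre quadrature and obtain the moment-to-discrete matrix by inversion, which makes the two matrices mutually consistent by construction (moment normalization factors are simplified here; this is an illustration, not the authors' code):

```python
import numpy as np

N = 4
mu, w = np.polynomial.legendre.leggauss(N)   # S_4 ordinates and weights

# Discrete-to-moment matrix D[l, i] = w_i * P_l(mu_i): quadrature evaluation
# of the l-th Legendre moment from the discrete angular fluxes
P = np.array([np.polynomial.legendre.Legendre.basis(l)(mu) for l in range(N)])
D = P * w

# Method 2: moment-to-discrete matrix as the inverse of D, so that taking
# moments of the discrete equations reproduces the moment equations exactly
M = np.linalg.inv(D)
```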

  11. Comparison of two Galerkin quadrature methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morel, Jim E.; Warsa, James; Franke, Brian C.

Here, we compare two methods for generating Galerkin quadratures. In method 1, the standard S_N method is used to generate the moment-to-discrete matrix, and the discrete-to-moment matrix is generated by inverting the moment-to-discrete matrix. This is a particular form of the original Galerkin quadrature method. In method 2, which we introduce here, the standard S_N method is used to generate the discrete-to-moment matrix, and the moment-to-discrete matrix is generated by inverting the discrete-to-moment matrix. With an N-point quadrature, method 1 has the advantage that it preserves N eigenvalues and N eigenvectors of the scattering operator in a pointwise sense. With an N-point quadrature, method 2 has the advantage that it generates consistent angular moment equations from the corresponding S_N equations while preserving N eigenvalues of the scattering operator. Our computational results indicate that these two methods are quite comparable for the test problem considered.

  12. Simulations of incompressible Navier Stokes equations on curved surfaces using discrete exterior calculus

    NASA Astrophysics Data System (ADS)

    Samtaney, Ravi; Mohamed, Mamdouh; Hirani, Anil

    2015-11-01

    We present examples of numerical solutions of incompressible flow on 2D curved domains. The Navier-Stokes equations are first rewritten using the exterior calculus notation, replacing vector calculus differential operators by the exterior derivative, Hodge star and wedge product operators. A conservative discretization of Navier-Stokes equations on simplicial meshes is developed based on discrete exterior calculus (DEC). The discretization is then carried out by substituting the corresponding discrete operators based on the DEC framework. By construction, the method is conservative in that both the discrete divergence and circulation are conserved up to machine precision. The relative error in kinetic energy for inviscid flow test cases converges in a second order fashion with both the mesh size and the time step. Numerical examples include Taylor vortices on a sphere, Stuart vortices on a sphere, and flow past a cylinder on domains with varying curvature. Supported by the KAUST Office of Competitive Research Funds under Award No. URF/1/1401-01.

  13. Systematic sampling of discrete and continuous populations: sample selection and the choice of estimator

    Treesearch

    Harry T. Valentine; David L. R. Affleck; Timothy G. Gregoire

    2009-01-01

    Systematic sampling is easy, efficient, and widely used, though it is not generally recognized that a systematic sample may be drawn from the population of interest with or without restrictions on randomization. The restrictions or the lack of them determine which estimators are unbiased, when using the sampling design as the basis for inference. We describe the...

  14. Discrete quantum-dot-like emitters in monolayer MoSe2: Spatial mapping, magneto-optics, and charge tuning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Branny, Artur; Kumar, Santosh; Gerardot, Brian D., E-mail: b.d.gerardot@hw.ac.uk

Transition metal dichalcogenide monolayers such as MoSe2, MoS2, and WSe2 are direct bandgap semiconductors with original optoelectronic and spin-valley properties. Here we report on spectrally sharp, spatially localized emission in monolayer MoSe2. We find this quantum dot-like emission in samples exfoliated onto gold substrates and also in suspended flakes. Spatial mapping shows a correlation between the location of emitters and the existence of wrinkles (strained regions) in the flake. We tune the emission properties in magnetic and electric fields applied perpendicular to the monolayer plane. We extract an exciton g-factor of the discrete emitters close to −4, as for 2D excitons in this material. In a charge-tunable sample, we record discrete jumps on the meV scale as charges are added to the emitter when changing the applied voltage.

  15. A VLF-based technique in applications to digital control of nonlinear hybrid multirate systems

    NASA Astrophysics Data System (ADS)

    Vassilyev, Stanislav; Ulyanov, Sergey; Maksimkin, Nikolay

    2017-01-01

In this paper, a technique for rigorous analysis and design of nonlinear multirate digital control systems is proposed on the basis of the reduction method and sublinear vector Lyapunov functions. The control system model under consideration incorporates continuous-time dynamics of the plant and discrete-time dynamics of the controller, and takes into account uncertainties of the plant, bounded disturbances, and nonlinear characteristics of sensors and actuators. We consider a class of multirate systems where the control update rate is slower than the measurement sampling rates and periodic non-uniform sampling is admitted. The proposed technique does not require a preliminary discretization of the system and hence eliminates the associated discretization errors, improving the accuracy of analysis. The technique is applied to the synthesis of a digital controller for a flexible spacecraft in the fine stabilization mode and of a decentralized controller for a formation of autonomous underwater vehicles. Simulation results are provided to validate the good performance of the designed controllers.

  16. Methotrimeprazine-induced Corneal Deposits and Cataract Revealed by Urine Drug Profiling Test

    PubMed Central

    Kim, Seong Taeck; Kim, Joon Mo; Kim, Won Young; Choi, Gwang Ju

    2010-01-01

    Two schizophrenic patients who had been taking medication for a long period presented with visual disturbance of 6-month duration. Slit-lamp examination revealed fine, discrete, and brownish deposits on the posterior cornea. In addition, bilateral star-shaped anterior subcapsular lens opacities, which were dense, dust-like granular deposits, were noted. Although we strongly suspected that the patient might have taken one of the drugs of the phenothiazine family, we were unable to obtain a history of medications other than haloperidol and risperidone, which were taken for 3 yr. We performed a drug profiling test using urine samples and detected methotrimeprazine. The patient underwent surgery for anterior subcapsular lens opacities. Visual acuity improved in both eyes, but the corneal deposits remained. We report an unusual case of methotrimeprazine-induced corneal deposits and cataract in a patient with psychosis, identified by using the urine drug profiling test. PMID:21060765

  17. GY SAMPLING THEORY AND GEOSTATISTICS: ALTERNATE MODELS OF VARIABILITY IN CONTINUOUS MEDIA

    EPA Science Inventory



    In the sampling theory developed by Pierre Gy, sample variability is modeled as the sum of a set of seven discrete error components. The variogram used in geostatisties provides an alternate model in which several of Gy's error components are combined in a continuous mode...

  18. (PRESENTED NAQC SAN FRANCISCO, CA) COARSE PM METHODS STUDY: STUDY DESIGN AND RESULTS

    EPA Science Inventory

    Comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 in ambient air. Five separate sampling approaches were evaluated at each of three sampling sites. As the primary basis of comparison, a discrete ...

  19. MULTI-SITE PERFORMANCE EVALUATIONS OF CANDIDATE METHODOLOGIES FOR DETERMINING COARSE PARTICULATE MATTER (PMC) CONCENTRATIONS

    EPA Science Inventory

    Comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 in ambient air. Five separate sampling approaches were evaluated at each of three sampling sites. As the primary basis of comparison, a discret...

  20. MULTI-SITE EVALUATIONS OF CANDIDATE METHODOLOGIES FOR DETERMINING COARSE PARTICULATE MATTER (PMC) CONCENTRATIONS

    EPA Science Inventory

    Comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 in ambient air. Five separate sampling approaches were evaluated at each of three sampling sites. As the primary basis of comparison, a discret...

  1. MULTI-SITE EVALUATIONS OF CANDIDATE METHODOLOGIES FOR DETERMINING COARSE PARTICULATE MATTER (PMC) CONCENTRATIONS

    EPA Science Inventory

    Comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 in ambient air. Five separate sampling approaches were evaluated at each of three sampling sites. As the primary basis of comparison, a discrete ...

  2. IceBreaker: Mars Drill and Sample Delivery System

    NASA Astrophysics Data System (ADS)

    Mellerowicz, B. L.; Paulsen, G. L.; Zacny, K.; McKay, C.; Glass, B. J.; Dave, A.; Davila, A. F.; Marinova, M.

    2012-12-01

We report on the development and testing of a one-meter-class prototype Mars drill and cuttings sample delivery system. The IceBreaker drill consists of a rotary-percussive drill head, a sampling auger with a bit at the end having an integrated temperature sensor, a Z-stage for advancing the auger into the ground, and a sampling station for moving the augered ice shavings or soil cuttings into a sample cup. The drill is deployed from a 3 Degree of Freedom (DOF) robotic arm. The drill demonstrated drilling in ice-cemented ground, ice, and rocks at the 1-1-100-100 level; that is, the drill reached 1 meter in 1 hour with 100 Watts of power and 100 Newton Weight on Bit. This corresponds to an average energy of 100 Whr. The drill has been extensively tested in the Mars chamber to a depth of 1 meter, as well as in the Antarctic and the Arctic Mars analog sites. We also tested three sample delivery systems: 1) a 4 DOF arm with a custom soil scoop at the end; 2) pneumatic based; and 3) drill based, enabled by the 3 DOF drill deployment boom. In all approaches there is an air gap between the sterilized drill (which penetrates the subsurface) and the sample transfer hardware (which is not going to be sterilized). The air gap satisfies the planetary protection requirements. The scoop acquires the cuttings sample once it is augered to the surface and drops it into an instrument inlet port. The system has been tested in the Mars chamber and in the Arctic. The pneumatic sample delivery system uses compressed gas to move the sample, captured inside a small chamber integrated with the auger, directly into the instrument. The system was tested in the Mars chamber. In the third approach the drill auger captures the sample on its flutes, the 3 DOF boom positions the tip of the auger above the instrument, and then the auger discharges the sample into the instrument. This approach was tested in the laboratory (at STP). 
The above drilling and sample delivery tests have shown that drilling and sample transfer on Mars, in ice-cemented ground with limited power, energy, and Weight on Bit, and collecting samples in discrete depth intervals, is possible within the given mass, power, and energy levels of a Phoenix-size lander and within the duration of a Phoenix-like mission.

  3. Discrete-Slots Models of Visual Working-Memory Response Times

    PubMed Central

    Donkin, Christopher; Nosofsky, Robert M.; Gold, Jason M.; Shiffrin, Richard M.

    2014-01-01

    Much recent research has aimed to establish whether visual working memory (WM) is better characterized by a limited number of discrete all-or-none slots or by a continuous sharing of memory resources. To date, however, researchers have not considered the response-time (RT) predictions of discrete-slots versus shared-resources models. To complement the past research in this field, we formalize a family of mixed-state, discrete-slots models for explaining choice and RTs in tasks of visual WM change detection. In the tasks under investigation, a small set of visual items is presented, followed by a test item in 1 of the studied positions for which a change judgment must be made. According to the models, if the studied item in that position is retained in 1 of the discrete slots, then a memory-based evidence-accumulation process determines the choice and the RT; if the studied item in that position is missing, then a guessing-based accumulation process operates. Observed RT distributions are therefore theorized to arise as probabilistic mixtures of the memory-based and guessing distributions. We formalize an analogous set of continuous shared-resources models. The model classes are tested on individual subjects with both qualitative contrasts and quantitative fits to RT-distribution data. The discrete-slots models provide much better qualitative and quantitative accounts of the RT and choice data than do the shared-resources models, although there is some evidence for “slots plus resources” when memory set size is very small. PMID:24015956
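The mixed-state RT model described above amounts to sampling from a probabilistic mixture of a memory-based and a guessing-based response process; a toy sketch (the slot count, distributions, and parameters below are invented for illustration, not the fitted values from the paper):

```python
import random

rng = random.Random(42)

def sample_rt(set_size, slots=3):
    """Draw one change-detection RT from a mixed-state discrete-slots sketch:
    with probability min(slots/set_size, 1) the probed item occupies a slot
    and a faster memory-based process responds; otherwise a slower
    guessing-based process responds. Lognormal parameters are illustrative."""
    p_mem = min(slots / set_size, 1.0)
    if rng.random() < p_mem:
        return rng.lognormvariate(-0.6, 0.25)   # memory-based: faster RTs (s)
    return rng.lognormvariate(-0.2, 0.35)       # guessing: slower RTs (s)

rts = [sample_rt(set_size=6) for _ in range(10_000)]
```

The observed RT distribution is then a mixture whose guessing weight grows with set size, which is the qualitative signature the model comparison exploits.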

  4. Estimation of rates-across-sites distributions in phylogenetic substitution models.

    PubMed

    Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J

    2003-10-01

    Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
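The equal-probability discrete gamma idea can be illustrated with a stdlib-only Monte Carlo sketch: draw mean-one gamma rates, split them into k equal-probability bins, and use each bin's mean as the category rate (an approximation of the usual quantile-based construction; the shape parameter and k below are arbitrary):

```python
import random
import statistics

def equal_probability_discrete_gamma(alpha, k, n=200_000, seed=1):
    """Approximate the k equal-probability discrete gamma rate categories:
    draw n mean-one gamma(alpha) rates, sort them, and average each of the
    k equal-probability bins (Monte Carlo stand-in for quantile integration)."""
    rng = random.Random(seed)
    draws = sorted(rng.gammavariate(alpha, 1.0 / alpha) for _ in range(n))
    size = n // k
    return [statistics.fmean(draws[i * size:(i + 1) * size]) for i in range(k)]

rates = equal_probability_discrete_gamma(alpha=0.5, k=4)
```

With small k the top category's mean sits well below the distribution's extreme tail, which is the underestimation of the largest rates noted in the abstract.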

  5. Occurrence and distribution of microbiological indicators in groundwater and stream water

    USGS Publications Warehouse

    Francy, D.S.; Helsel, D.R.; Nally, R.A.

    2000-01-01

    A total of 136 stream water and 143 groundwater samples collected in five important hydrologic systems of the United States were analyzed for microbiological indicators to test monitoring concepts in a nationally consistent program. Total coliforms were found in 99%, Escherichia coli in 97%, and Clostridium perfringens in 73% of stream water samples analyzed for each bacterium. Total coliforms were found in 20%, E. coli in less than 1%, and C. perfringens in none of the groundwater samples analyzed for each bacterium. Although coliphage analyses were performed on many of the samples, contamination in the laboratory and problems discerning discrete plaques precluded quantification. Land use was found to have the most significant effect on concentrations of bacterial indicators in stream water. Presence of septic systems on the property near the sampling site and well depth were found to be related to detection of coliforms in groundwater, although these relationships were not statistically significant. A greater diversity of sites, more detailed information about some factors, and a larger dataset may provide further insight to factors that affect microbiological indicators.

  6. Testing of the Prototype Mars Drill and Sample Acquisition System in the Mars Analog Site of the Antarctica's Dry Valleys

    NASA Astrophysics Data System (ADS)

    Zacny, K.; Paulsen, G.; McKay, C.; Glass, B. J.; Marinova, M.; Davila, A. F.; Pollard, W. H.; Jackson, A.

    2011-12-01

We report on the testing of the one-meter-class prototype Mars drill and cuttings sampling system, called the IceBreaker, in the Dry Valleys of Antarctica. The drill consists of a rotary-percussive drill head, a sampling auger with a bit at the end having an integrated temperature sensor, a Z-stage for advancing the auger into the ground, and a sampling station for moving the augered ice shavings or soil cuttings into a sample cup. In November/December of 2010, the IceBreaker drill was tested in University Valley (within the Beacon Valley region of the Antarctic Dry Valleys). University Valley is a good analog to the northern polar regions of Mars because a layer of dry soil lies on top of either ice-cemented ground or massive ice (depending on the location within the valley). That is exactly what the 2007 Phoenix mission discovered on Mars. The drill demonstrated drilling in ice-cemented ground and in massive ice at the 1-1-100-100 level; that is, the drill reached 1 meter in 1 hour with 100 Watts of power and 100 Newton Weight on Bit. This corresponds to an average energy of 100 Whr. At the same time, the bit temperature measured by the bit thermocouple did not exceed the formation temperature by more than 10 °C. The temperature also never exceeded freezing, which minimizes the chances of getting stuck and of altering the materials being sampled and analyzed. The samples, in the form of cuttings, were acquired at 10 cm intervals into sterile bags. These tests have shown that drilling on Mars, in ice-cemented ground with limited power, energy, and Weight on Bit, and collecting samples in discrete depth intervals, is possible within the given mass, power, and energy levels of a Phoenix-size lander and within the duration of a Phoenix-like mission.

  7. An adaptive discretization of incompressible flow using a multitude of moving Cartesian grids

    NASA Astrophysics Data System (ADS)

    English, R. Elliot; Qiu, Linhai; Yu, Yue; Fedkiw, Ronald

    2013-12-01

We present a novel method for discretizing the incompressible Navier-Stokes equations on a multitude of moving and overlapping Cartesian grids each with an independently chosen cell size to address adaptivity. Advection is handled with first and second order accurate semi-Lagrangian schemes in order to alleviate any time step restriction associated with small grid cell sizes. Likewise, an implicit temporal discretization is used for the parabolic terms including Navier-Stokes viscosity which we address separately through the development of a method for solving the heat diffusion equations. The most intricate aspect of any such discretization is the method used in order to solve the elliptic equation for the Navier-Stokes pressure or that resulting from the temporal discretization of parabolic terms. We address this by first removing any degrees of freedom which duplicately cover spatial regions due to overlapping grids, and then providing a discretization for the remaining degrees of freedom adjacent to these regions. We observe that a robust, second-order accurate, symmetric positive definite, and readily preconditioned discretization can be obtained by constructing a local Voronoi region on the fly for each degree of freedom in question in order to obtain both its stencil (logically connected neighbors) and stencil weights. Internal curved boundaries such as at solid interfaces are handled using a simple immersed boundary approach which is directly applied to the Voronoi mesh in both the viscosity and pressure solves. We independently demonstrate each aspect of our approach on test problems in order to show efficacy and convergence before finally addressing a number of common test cases for incompressible flow with stationary and moving solid bodies.

  8. [Laser Raman spectral investigations of the carbon structure of LiFePO4/C cathode material].

    PubMed

    Yang, Chao; Li, Yong-Mei; Zhao, Quan-Feng; Gan, Xiang-Kun; Yao, Yao-Chun

    2013-10-01

In the present paper, laser Raman spectroscopy was used to study the carbon structure of LiFePO4/C positive-electrode material. The samples were also characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), selected-area electron diffraction (SAED) and resistivity testing. The results indicated that, compared with the sp2/sp3 peak-area ratios, the I(D)/I(G) ratios were not only more uniform but also followed similar trends. However, both the I(D)/I(G) ratios and the sp2/sp3 peak-area ratios differed among different points within the same sample. Compared with the samples using citric acid or sucrose alone as the carbon source, the sample synthesized with a mixed carbon source (citric acid and sucrose) exhibited higher I(D)/I(G) ratios and sp2/sp3 peak-area ratios; its point-to-point variation in both ratios was also smaller than that of the single-carbon-source samples. The SEM and transmission electron microscopy (TEM) images showed uneven distributions of the carbon coating on the primary and secondary particles, which may be the main reason for the non-uniform data within a single sample. This obvious scatter will affect the normal use of Raman spectroscopy in these tests.
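An I(D)/I(G) peak-area ratio of the kind discussed above can be computed by integrating a baseline-corrected spectrum over the two band windows. The window limits below are common conventions for the D (~1350 cm⁻¹) and G (~1580 cm⁻¹) bands, not values from this paper, and the spectrum is synthetic:

```python
import numpy as np

# Assumed integration windows (cm^-1) for the D and G bands.
D_BAND = (1250.0, 1450.0)
G_BAND = (1500.0, 1700.0)

def band_area(shift, intensity, lo, hi):
    """Trapezoidal area of a baseline-corrected band between lo and hi."""
    m = (shift >= lo) & (shift <= hi)
    x, y = shift[m], intensity[m]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def id_ig_ratio(shift, intensity):
    return (band_area(shift, intensity, *D_BAND)
            / band_area(shift, intensity, *G_BAND))

# Synthetic spectrum: Gaussian D and G bands with known areas 2.0 and 4.0.
shift = np.linspace(1100.0, 1800.0, 3001)
gauss = lambda c, s, a: a / (s * np.sqrt(2 * np.pi)) * np.exp(
    -0.5 * ((shift - c) / s) ** 2)
spectrum = gauss(1350.0, 20.0, 2.0) + gauss(1580.0, 20.0, 4.0)
ratio = id_ig_ratio(shift, spectrum)   # close to 2.0 / 4.0 = 0.5
```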

  9. 40 CFR 141.132 - Monitoring requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... discretion. (2) Chlorite. Community and nontransient noncommunity water systems using chlorine dioxide, for... samples. (ii) Reduced monitoring. Monitoring may not be reduced. (2) Chlorine dioxide—(i) Routine... three chlorine dioxide distribution system samples. If chlorine dioxide or chloramines are used to...

  10. Biomechanical symmetry in elite rugby union players during dynamic tasks: an investigation using discrete and continuous data analysis techniques.

    PubMed

    Marshall, Brendan; Franklyn-Miller, Andrew; Moran, Kieran; King, Enda; Richter, Chris; Gore, Shane; Strike, Siobhán; Falvey, Éanna

    2015-01-01

While measures of asymmetry may provide a means of identifying individuals predisposed to injury, normative asymmetry values for challenging sport-specific movements in elite athletes are currently lacking in the literature. In addition, previous studies have typically investigated symmetry using discrete point analyses alone. This study examined biomechanical symmetry in elite rugby union players using both discrete point and continuous data analysis techniques. Twenty elite injury-free international rugby union players (mean ± SD: age 20.4 ± 1.0 years; height 1.86 ± 0.08 m; mass 98.4 ± 9.9 kg) underwent biomechanical assessment. A single-leg drop landing, a single-leg hurdle hop, and a running cut were analysed. Peak joint angles and moments were examined in the discrete point analysis, while analysis of characterising phases (ACP) techniques were used to examine the continuous data. The dominant side was compared to the non-dominant side using dependent t-tests for normally distributed data or the Wilcoxon signed-rank test for non-normally distributed data. The significance level was set at α = 0.05. The majority of variables were found to be symmetrical, with 57/60 variables displaying symmetry in the discrete point analysis and 55/60 in the ACP. The five asymmetrical variables were hip abductor moment in the drop landing (p = 0.02), pelvis lift/drop in the drop landing (p = 0.04) and hurdle hop (p = 0.02), ankle internal rotation moment in the cut (p = 0.04) and ankle dorsiflexion angle, also in the cut (p = 0.01). The ACP identified two additional asymmetries not identified in the discrete point analysis. Elite injury-free rugby union players tended to exhibit bilateral symmetry across a range of biomechanical variables in a drop landing, hurdle hop and cut. This study provides useful normative values for inter-limb symmetry in these movement tests. When examining symmetry, it is recommended to incorporate continuous data analysis techniques rather than a discrete point analysis alone; a discrete point analysis was unable to detect two of the five asymmetries identified.
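The discrete-point side of the comparison above (dependent t-test for normal differences, Wilcoxon otherwise) can be sketched as follows. The data are synthetic with a deliberate offset; the normality gate via Shapiro-Wilk is a common convention and is assumed here, not taken from the paper:

```python
import numpy as np
from scipy import stats

def compare_sides(dominant, non_dominant, alpha=0.05):
    """Paired dominant vs. non-dominant comparison: dependent t-test when
    the paired differences pass Shapiro-Wilk, Wilcoxon signed-rank otherwise."""
    diff = np.asarray(dominant) - np.asarray(non_dominant)
    if stats.shapiro(diff)[1] > alpha:          # differences look normal
        name, p = "paired t-test", stats.ttest_rel(dominant, non_dominant)[1]
    else:
        name, p = "Wilcoxon signed-rank", stats.wilcoxon(dominant, non_dominant)[1]
    return name, p, p < alpha

# Synthetic peak hip-abductor moments (Nm/kg) with a deliberate 0.5 offset
# between sides, so an asymmetry should be flagged.
rng = np.random.default_rng(0)
base = rng.normal(1.5, 0.2, 20)
name, p, asym = compare_sides(base + 0.5, base + rng.normal(0.0, 0.05, 20))
```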

  11. sGD: software for estimating spatially explicit indices of genetic diversity.

    PubMed

    Shirk, A J; Cushman, S A

    2011-09-01

    Anthropogenic landscape changes have greatly reduced the population size, range and migration rates of many terrestrial species. The small local effective population size of remnant populations favours loss of genetic diversity leading to reduced fitness and adaptive potential, and thus ultimately greater extinction risk. Accurately quantifying genetic diversity is therefore crucial to assessing the viability of small populations. Diversity indices are typically calculated from the multilocus genotypes of all individuals sampled within discretely defined habitat patches or larger regional extents. Importantly, discrete population approaches do not capture the clinal nature of populations genetically isolated by distance or landscape resistance. Here, we introduce spatial Genetic Diversity (sGD), a new spatially explicit tool to estimate genetic diversity based on grouping individuals into potentially overlapping genetic neighbourhoods that match the population structure, whether discrete or clinal. We compared the estimates and patterns of genetic diversity using patch or regional sampling and sGD on both simulated and empirical populations. When the population did not meet the assumptions of an island model, we found that patch and regional sampling generally overestimated local heterozygosity, inbreeding and allelic diversity. Moreover, sGD revealed fine-scale spatial heterogeneity in genetic diversity that was not evident with patch or regional sampling. These advantages should provide a more robust means to evaluate the potential for genetic factors to influence the viability of clinal populations and guide appropriate conservation plans. © 2011 Blackwell Publishing Ltd.
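One of the diversity indices that tools like sGD report for a genetic neighbourhood is expected heterozygosity. As a grounding example (the allele data below are invented, and this is a single-locus sketch, not sGD's implementation):

```python
from collections import Counter

def expected_heterozygosity(alleles):
    """Nei's expected heterozygosity at one locus:
    He = 1 - sum_i p_i**2 over the allele frequencies p_i
    of all alleles pooled from the individuals in a neighbourhood."""
    counts = Counter(alleles)
    total = sum(counts.values())
    return 1.0 - sum((c / total) ** 2 for c in counts.values())

# Alleles pooled from individuals inside one hypothetical neighbourhood:
# frequencies are A: 3/8, B: 2/8, C: 3/8.
he = expected_heterozygosity(["A", "A", "B", "B", "C", "C", "C", "A"])
```

Computing this per overlapping neighbourhood, rather than per discrete patch, is what lets a spatially explicit approach expose the fine-scale heterogeneity the abstract describes.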

  12. Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds.

    PubMed

    Uher, Vojtěch; Gajdoš, Petr; Radecký, Michal; Snášel, Václav

    2016-01-01

The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. This algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has been mostly used for combinatorial problems in a set of enumerative variants. However, the DE has great potential in spatial data analysis and pattern recognition. This paper formulates the problem as a search for a combination of distinct vertices which meet the specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE), applying the principle of the discrete-coded DE in discrete point clouds (PCs). The paper examines the local search abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot simply be ordered to obtain a convenient traversal of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on space-filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimization in multidimensional point clouds.
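The space-filling-curve ordering mentioned above can be illustrated with a Z-order (Morton) key, one common locality-preserving linearization of multidimensional integer coordinates (the paper does not specify which curve it uses; this is a generic sketch):

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of integer coordinates (x, y) into a Z-order
    (Morton) key: x occupies the even bit positions, y the odd ones.
    Sorting by this key gives a locality-preserving linear ordering."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

# Order a small point cloud along the Z-order curve.
points = [(2, 3), (0, 0), (1, 1), (3, 3), (0, 1)]
ordered = sorted(points, key=lambda p: morton2d(*p))
```

Nearby points tend to receive nearby keys, which is exactly the property a mutation operator needs when it perturbs positions along the linearized point cloud.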

  13. Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds

    PubMed Central

    Radecký, Michal; Snášel, Václav

    2016-01-01

The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. This algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has been mostly used for combinatorial problems in a set of enumerative variants. However, the DE has great potential in spatial data analysis and pattern recognition. This paper formulates the problem as a search for a combination of distinct vertices which meet the specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE), applying the principle of the discrete-coded DE in discrete point clouds (PCs). The paper examines the local search abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot simply be ordered to obtain a convenient traversal of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on space-filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimization in multidimensional point clouds. PMID:27974884

  14. Front-End Board with Cyclone V as a Test High-Resolution Platform for the Auger_Beyond_2015 Front End Electronics

    NASA Astrophysics Data System (ADS)

    Szadkowski, Zbigniew

    2015-06-01

The surface detector (SD) array of the Pierre Auger Observatory needs an upgrade to allow for more complex triggers with higher bandwidth and greater dynamic range. To this end, this paper presents a front-end board (FEB) built around the largest Cyclone V E FPGA, the 5CEFA9F31I7N. It supports eight channels sampled at up to 250 MSps with 14-bit resolution. The sampling rate under consideration for the SD is 120 MSps; however, the FEB has been developed with external anti-aliasing filters to retain maximal flexibility. Six channels are targeted at the SD; two are reserved for other experiments such as the Auger Engineering Radio Array and additional muon counters. The FEB is an intermediate design plugged into a unified board communicating with a micro-controller at 40 MHz; however, it provides 250 MSps sampling with an 18-bit dynamic range, is equipped with a virtual NIOS processor, supports 256 MB of SDRAM, and implements a spectral trigger based on the discrete cosine transform for detection of very inclined “old” showers. The FEB can also support neural network development for detection of “young” showers, potentially generated by neutrinos. A single FEB was already tested in the Auger surface detector in Malargüe (Argentina) at 120 and 160 MSps. Preliminary tests showed perfect stability of data acquisition at sampling frequencies three or four times greater than the current 40 MSps. They allowed optimization of the design before deployment of seven or eight FEBs for several months of continuous tests in the engineering array.
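The discrete cosine transform behind the spectral trigger can be written out directly. This is a naive DCT-II on a short synthetic ADC trace, purely to show what a frequency-domain trigger would threshold on; the trace and window length are invented:

```python
import numpy as np

def dct2(x):
    """Naive DCT-II: X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k)."""
    n = np.arange(x.size)
    return np.array([np.sum(x * np.cos(np.pi / x.size * (n + 0.5) * k))
                     for k in range(x.size)])

# A slowly varying trace sampled at 120 MSps concentrates its energy in the
# low DCT bins, which is what a spectral trigger can threshold on.
fs = 120e6                                   # 120 MSps, as in the field tests
t = np.arange(32) / fs
trace = np.sin(2 * np.pi * 1e6 * t)          # slow 1 MHz component
coeffs = dct2(trace)
```

A production trigger would use an FPGA-friendly fixed-point DCT rather than this O(N²) reference form, but the coefficient pattern it inspects is the same.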

  15. Determining solid-fluid interface temperature distribution during phase change of cryogenic propellants using transient thermal modeling

    NASA Astrophysics Data System (ADS)

    Bellur, K.; Médici, E. F.; Hermanson, J. C.; Choi, C. K.; Allen, J. S.

    2018-04-01

Control of boil-off of cryogenic propellants is a continuing technical challenge for long duration space missions. Predicting phase change rates of cryogenic liquids requires an accurate estimation of solid-fluid interface temperature distributions in regions where a contact line or a thin liquid film exists. This paper describes a methodology to predict inner wall temperature gradients with and without evaporation using discrete temperature measurements on the outer wall of a container. Phase change experiments with liquid hydrogen and methane in cylindrical test cells of various materials and sizes were conducted at the Neutron Imaging Facility at the National Institute of Standards and Technology. Two types of tests were conducted: the first involved thermal cycling of an evacuated cell (dry), and the second controlled phase change with cryogenic liquids (wet). During both types of tests, temperatures were measured using Si-diode sensors mounted on the exterior surface of the test cells. Heat is transferred to the test cell by conduction through a helium exchange gas and through the cryostat sample holder. Thermal conduction through the sample holder is shown to be the dominant mode, with the rate of heat transfer limited by six independent contact resistances. An iterative methodology is employed to determine the contact resistances between the various components of the cryostat stick insert, test cell and lid using the dry test data. After the contact resistances are established, inner wall temperature distributions during wet tests are calculated.
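The role the contact resistances play can be seen in a steady-state series resistance network: once the resistances are known, every intermediate temperature follows from the common heat flow. The boundary temperatures and resistance values below are invented, and this three-element sketch stands in for the paper's six-resistance, iteratively fitted model:

```python
def interface_temperatures(t_hot, t_cold, resistances):
    """Node temperatures of a series thermal resistance network.

    The heat flow Q = (T_hot - T_cold) / sum(R) is common to every element,
    so each resistance R_i drops Q * R_i from the node upstream of it.
    """
    q = (t_hot - t_cold) / sum(resistances)
    temps = [t_hot]
    for r in resistances:
        temps.append(temps[-1] - q * r)
    return q, temps

# Hypothetical contact resistances (K/W) between holder, cell wall and lid.
q, temps = interface_temperatures(300.0, 100.0, [1.0, 2.0, 3.0])
```

Fitting works in the opposite direction: measured outer-wall temperatures pin down some nodes, and the resistances are adjusted until the network reproduces them.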

  16. Rule learning in autism: the role of reward type and social context.

    PubMed

    Jones, E J H; Webb, S J; Estes, A; Dawson, G

    2013-01-01

    Learning abstract rules is central to social and cognitive development. Across two experiments, we used Delayed Non-Matching to Sample tasks to characterize the longitudinal development and nature of rule-learning impairments in children with Autism Spectrum Disorder (ASD). Results showed that children with ASD consistently experienced more difficulty learning an abstract rule from a discrete physical reward than children with DD. Rule learning was facilitated by the provision of more concrete reinforcement, suggesting an underlying difficulty in forming conceptual connections. Learning abstract rules about social stimuli remained challenging through late childhood, indicating the importance of testing executive functions in both social and non-social contexts.

  17. Voltage-Induced Nonlinear Conduction Properties of Epoxy Resin/Micron-Silver Particles Composites

    NASA Astrophysics Data System (ADS)

    Qu, Zhaoming; Lu, Pin; Yuan, Yang; Wang, Qingguo

    2018-01-01

The nonlinear conduction properties of epoxy resin (ER)/micron-silver particle (MP) composites were investigated. Under a sufficiently high applied constant voltage, obvious nonlinear conduction was found in the samples with a volume fraction of 25%. As the voltage increased, a conductive switching effect was observed. The nonlinear conduction mechanism of the ER/MP composites under high applied voltages can be attributed to electrical current conducted via discrete paths of conductive particles induced by the electric field. The test results show that ER/MP composites with nonlinear conduction properties have great potential for application in the electromagnetic protection of electronic devices and systems.

  18. Adaptive Event-Triggered Control Based on Heuristic Dynamic Programming for Nonlinear Discrete-Time Systems.

    PubMed

    Dong, Lu; Zhong, Xiangnan; Sun, Changyin; He, Haibo

    2017-07-01

    This paper presents the design of a novel adaptive event-triggered control method based on the heuristic dynamic programming (HDP) technique for nonlinear discrete-time systems with unknown system dynamics. In the proposed method, the control law is only updated when the event-triggered condition is violated. Compared with the periodic updates in the traditional adaptive dynamic programming (ADP) control, the proposed method can reduce the computation and transmission cost. An actor-critic framework is used to learn the optimal event-triggered control law and the value function. Furthermore, a model network is designed to estimate the system state vector. The main contribution of this paper is to design a new trigger threshold for discrete-time systems. A detailed Lyapunov stability analysis shows that our proposed event-triggered controller can asymptotically stabilize the discrete-time systems. Finally, we test our method on two different discrete-time systems, and the simulation results are included.
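The core of the event-triggered idea (recompute the control only when the state has drifted beyond a threshold from the last transmitted value) can be shown on a scalar toy plant. The dynamics, gain and threshold below are illustrative numbers, not from the paper, and a fixed linear feedback stands in for the learned HDP controller:

```python
# Illustrative scalar plant x[k+1] = a*x[k] + b*u[k]; the gain and the
# trigger threshold are made-up values, not taken from the paper.
a, b, gain, threshold = 1.1, 1.0, 0.6, 0.2

x, x_held = 1.0, 1.0              # state and last transmitted state
updates, trajectory = 0, []
for _ in range(50):
    if abs(x - x_held) > threshold:   # event-triggered condition violated
        x_held = x                    # transmit the state, update the control
        updates += 1
    u = -gain * x_held                # control law uses the held state
    x = a * x + b * u
    trajectory.append(x)
```

Because many steps fall inside the threshold, the number of control updates stays well below the number of time steps, which is exactly the transmission saving the abstract claims over periodic ADP updates.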

  19. Achievement Goals and Discrete Achievement Emotions: A Theoretical Model and Prospective Test

    ERIC Educational Resources Information Center

    Pekrun, Reinhard; Elliot, Andrew J.; Maier, Markus A.

    2006-01-01

    A theoretical model linking achievement goals to discrete achievement emotions is proposed. The model posits relations between the goals of the trichotomous achievement goal framework and 8 commonly experienced achievement emotions organized in a 2 (activity/outcome focus) x 2 (positive/negative valence) taxonomy. Two prospective studies tested…

  20. Discrete-State and Continuous Models of Recognition Memory: Testing Core Properties under Minimal Assumptions

    ERIC Educational Resources Information Center

    Kellen, David; Klauer, Karl Christoph

    2014-01-01

    A classic discussion in the recognition-memory literature concerns the question of whether recognition judgments are better described by continuous or discrete processes. These two hypotheses are instantiated by the signal detection theory model (SDT) and the 2-high-threshold model, respectively. Their comparison has almost invariably relied on…

  1. System for Automatic Generation of Examination Papers in Discrete Mathematics

    ERIC Educational Resources Information Center

    Fridenfalk, Mikael

    2013-01-01

    A system was developed for automatic generation of problems and solutions for examinations in a university distance course in discrete mathematics and tested in a pilot experiment involving 200 students. Considering the success of such systems in the past, particularly including automatic assessment, it should not take long before such systems are…

  2. Actinide migration in Johnston Atoll soil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, S. F.; Bates, J. K.; Buck, E. C.

    1997-02-01

Characterization of the actinide content of a sample of contaminated coral soil from Johnston Atoll, the site of three non-nuclear destructs of nuclear warhead-carrying THOR missiles in 1962, revealed that >99% of the total actinide content is associated with discrete bomb fragments. After removal of these fragments, there was an inverse correlation between actinide content and soil particle size in particles from 43 to 0.4 µm diameter. Detailed analyses of this remaining soil revealed no discrete actinide phase in these soil particles, despite measurable actinide content. Observations indicate that exposure to the environment has caused the conversion of relatively insoluble actinide oxides to the more soluble actinyl oxides and actinyl carbonate coordinated complexes. This process has led to dissolution of actinides from discrete particles and migration to the surrounding soil surfaces, resulting in a dispersion greater than would be expected by physical transport of discrete particles alone.

  3. Development and Flight Testing of an Adaptable Vehicle Health-Monitoring Architecture

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Coffey, Neil C.; Gonzalez, Guillermo A.; Woodman, Keith L.; Weathered, Brenton W.; Rollins, Courtney H.; Taylor, B. Douglas; Brett, Rube R.

    2003-01-01

Development and testing of an adaptable wireless health-monitoring architecture for a vehicle fleet is presented. It has three operational levels: one or more remote data acquisition units located throughout the vehicle; a command and control unit located within the vehicle; and a terminal collection unit to collect analysis results from all vehicles. Each level is capable of performing autonomous analysis with a trained adaptable expert system. The remote data acquisition unit has an eight-channel programmable digital interface that allows the user discretion in choosing the type of sensors, number of sensors, sensor sampling rate, and sampling duration for each sensor. The architecture provides a framework for tributary analysis. All measurements at the lowest operational level are reduced to provide analysis results necessary to gauge changes from established baselines. These are then collected at the next level to identify any global trends or common features from the prior level. This process is repeated until the results are reduced at the highest operational level. In the framework, only analysis results are forwarded to the next level to reduce telemetry congestion. The system's remote data acquisition hardware and non-analysis software have been flight tested on the NASA Langley B757's main landing gear.

  4. Spatial interpolation techniques using R

    EPA Science Inventory

    Interpolation techniques are used to predict the cell values of a raster based on sample data points. For example, interpolation can be used to predict the distribution of sediment particle size throughout an estuary based on discrete sediment samples. We demonstrate some inter...

  5. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost of computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques.
The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, robust estimation of Representative Elementary Volume size for arbitrary physics.
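The telescoping structure of a multilevel Monte Carlo estimator can be shown on a deliberately trivial toy model (not the pore-scale solvers of the paper): a level-l "solution" equal to a random variable plus a known discretization bias h_l = 2⁻ˡ. Coarse levels absorb the sampling variance cheaply, and fine levels only estimate small corrections:

```python
import numpy as np

def mlmc_estimate(levels, n_samples, rng):
    """Telescoping MLMC estimate of E[P]:
    E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}],
    with each correction estimated from coupled samples (same Z at both
    levels) so its variance is small and few fine-level samples are needed."""
    mu = 3.0
    h = lambda l: 2.0 ** -l                # toy level-l discretization bias
    p = lambda l, z: z + h(l)              # toy level-l "solver"
    est = 0.0
    for l in range(levels + 1):
        z = rng.normal(mu, 1.0, n_samples[l])
        if l == 0:
            est += np.mean(p(0, z))
        else:
            est += np.mean(p(l, z) - p(l - 1, z))   # coupled correction
    return est

rng = np.random.default_rng(42)
est = mlmc_estimate(levels=4, n_samples=[4000, 2000, 1000, 500, 250], rng=rng)
# est is close to mu + h(4) = 3.0625, up to Monte Carlo error
```

In a real pore-scale workflow, `p(l, z)` would be a flow solve on the level-l mesh of the random packing `z`, which is where the cost savings of putting most samples on coarse meshes become substantial.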

  6. Numerical integration techniques for curved-element discretizations of molecule-solvent interfaces.

    PubMed

    Bardhan, Jaydeep P; Altman, Michael D; Willis, David J; Lippow, Shaun M; Tidor, Bruce; White, Jacob K

    2007-07-07

    Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, here methods were developed to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work [J. Comput.-Aided Mol. Des. 9, 149 (1995)], two classes of curved elements were defined that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. Numerical integration techniques are presented that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, a set of calculations are presented that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. 
The extra accuracy is attributed to the exact representation of the solute-solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that the methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online as supplemental material.

  7. Combination probes for stagnation pressure and temperature measurements in gas turbine engines

    NASA Astrophysics Data System (ADS)

    Bonham, C.; Thorpe, S. J.; Erlund, M. N.; Stevenson, R. J.

    2018-01-01

During gas turbine engine testing, steady-state gas-path stagnation pressures and temperatures are measured in order to calculate the efficiencies of the main components of turbomachinery. These measurements are acquired using fixed intrusive probes, which are installed at the inlet and outlet of each component at discrete point locations across the gas-path. The overall uncertainty in calculated component efficiency is sensitive to the accuracy of discrete point pressures and temperatures, as well as the spatial sampling across the gas-path. Both of these aspects of the measurement system must be considered if more accurate component efficiencies are to be determined. High accuracy has become increasingly important as engine manufacturers have begun to pursue small gains in component performance, which require efficiencies to be resolved to within less than ±1%. This article reports on three new probe designs that have been developed in response to this demand. The probes adopt a compact combination arrangement that facilitates up to twice the spatial coverage compared to individual stagnation pressure and temperature probes. The probes also utilise novel temperature sensors and high recovery factor shield designs that facilitate improvements in point measurement accuracy compared to standard Kiel probes used in engine testing. These changes allow efficiencies to be resolved to within ±1% over a wider range of conditions than is currently achievable with Kiel probes.

  8. A labelled discrete choice experiment adds realism to the choices presented: preferences for surveillance tests for Barrett esophagus

    PubMed Central

    2009-01-01

    Background Discrete choice experiments (DCEs) allow systematic assessment of preferences by asking respondents to choose between scenarios. We conducted a labelled discrete choice experiment with realistic choices to investigate patients' trade-offs between the expected health gains and the burden of testing in surveillance of Barrett esophagus (BE). Methods Fifteen choice scenarios were selected based on 2 attributes: 1) type of test (endoscopy and two less burdensome fictitious tests), 2) frequency of surveillance. Each test-frequency combination was associated with its own realistic decrease in risk of dying from esophageal adenocarcinoma. A conditional logit model was fitted. Results Of 297 eligible patients (155 BE and 142 with non-specific upper GI symptoms), 247 completed the questionnaire (84%). Patients preferred surveillance to no surveillance. Current surveillance schemes of once every 1–2 years were amongst the most preferred alternatives. Higher health gains were preferred over those with lower health gains, except when test frequencies exceeded once a year. For similar health gains, patients preferred video-capsule over saliva swab and least preferred endoscopy. Conclusion This first example of a labelled DCE using realistic scenarios in a healthcare context shows that such experiments are feasible. A comparison of labelled and unlabelled designs taking into account setting and research question is recommended. PMID:19454022
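Choice probabilities in a conditional logit model of the kind fitted above follow from a softmax over alternative utilities. The coefficients and attribute values below are entirely hypothetical; their signs and ordering only loosely mirror the study's finding that patients preferred video-capsule over saliva swab and least preferred endoscopy:

```python
import numpy as np

# Hypothetical utility coefficients: patients value risk reduction and
# dislike test burden and test frequency.
beta_risk, beta_burden, beta_freq = 0.8, -1.0, -0.3

# Alternatives: (risk reduction, burden score, tests per year) -- invented.
alternatives = {
    "endoscopy, every 2 years": (4.0, 3.0, 0.5),
    "video-capsule, yearly":    (5.0, 1.5, 1.0),
    "saliva swab, yearly":      (4.0, 1.0, 1.0),
}

v = np.array([beta_risk * r + beta_burden * b + beta_freq * f
              for r, b, f in alternatives.values()])
# Conditional logit: P(i) = exp(v_i) / sum_j exp(v_j), shifted for stability.
probs = np.exp(v - v.max()) / np.exp(v - v.max()).sum()
```

Fitting reverses this: the betas are chosen to maximize the likelihood of the choices respondents actually made across the fifteen scenarios.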

  9. Is Knowledge Random? Introducing Sampling and Bias through Outdoor Inquiry

    ERIC Educational Resources Information Center

    Stier, Sam

    2010-01-01

    Sampling, very generally, is the process of learning about something by selecting and assessing representative parts of that population or object. In the inquiry activity described here, students learned about sampling techniques as they estimated the number of trees greater than 12 cm dbh (diameter at breast height) in a wooded, discrete area…
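The estimation step in such a plot-sampling activity is simple expansion from sampled area to total area. All numbers below are invented for illustration:

```python
# Estimate the number of trees >12 cm dbh in a 5.0 ha stand from counts
# on ten 0.05 ha sample plots (all values hypothetical).
plot_area_ha = 0.05
stand_area_ha = 5.0
plot_counts = [3, 5, 2, 4, 6, 3, 4, 5, 2, 4]

density = sum(plot_counts) / (len(plot_counts) * plot_area_ha)  # trees per ha
estimate = density * stand_area_ha
```

The bias discussion in the activity comes in through how the plots are placed: convenient (e.g., near-trail) plots can make this expansion systematically wrong, while random placement keeps it unbiased.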

  10. Analysis of Iron in Lawn Fertilizer: A Sampling Study

    ERIC Educational Resources Information Center

    Jeannot, Michael A.

    2006-01-01

    An experiment is described which uses a real-world sample of lawn fertilizer in a simple exercise to illustrate problems associated with the sampling step of a chemical analysis. A mixed-particle fertilizer containing discrete particles of iron oxide (magnetite, Fe[subscript 3]O[subscript 4]) mixed with other particles provides an excellent…

  11. Contamination of successive samples in portable pumping systems

    Treesearch

    Robert B. Thomas; Rand E. Eads

    1983-01-01

    Automatic discrete sample pumping systems used to monitor water quality should deliver to storage all materials pumped in a given cycle. If they do not, successive samples will be contaminated, a severe problem with highly variable suspended sediment concentrations in small streams. The cross-contamination characteristics of two small commonly used portable pumping...

  12. ODM2 (Observation Data Model): The EarthChem Use Case

    NASA Astrophysics Data System (ADS)

    Lehnert, Kerstin; Song, Lulin; Hsu, Leslie; Horsburgh, Jeffrey S.; Aufdenkampe, Anthony K.; Mayorga, Emilio; Tarboton, David; Zaslavsky, Ilya

    2014-05-01

PetDB is an online data system that was created in the late 1990s to serve online a synthesis of published geochemical and petrological data of igneous and metamorphic rocks. PetDB has today reached a volume of 2.5 million analytical values for nearly 70,000 rock samples. PetDB's data model (Lehnert et al., G-Cubed 2000) was designed to store sample-based observational data generated by the analysis of rocks, together with a wide range of metadata documenting provenance of the samples, analytical procedures, data quality, and data source. Attempts to store additional types of geochemical data such as time-series data of seafloor hydrothermal springs and volcanic gases, depth-series data for marine sediments and soils, and mineral or mineral inclusion data revealed the limitations of the schema: the inability to properly record sample hierarchies (for example, a garnet that is included in a diamond that is included in a xenolith that is included in a kimberlite rock sample), inability to properly store time-series data, inability to accommodate classification schemes other than rock lithologies, and deficiencies in identifying and documenting datasets that are not part of publications. In order to overcome these deficiencies, PetDB has been developing a new data schema using the ODM2 information model (ODM = Observation Data Model). The development of ODM2 is a collaborative project that leverages the experience of several existing information representations, including PetDB and EarthChem, and the CUAHSI HIS Observations Data Model (ODM), as well as the general specification for encoding observational data called Observations and Measurements (O&M), to develop a uniform information model that seamlessly manages spatially discrete, feature-based earth observations from environmental samples and sample fractions as well as in-situ sensors, and to test its initial implementation in a variety of user scenarios.
The O&M model, adopted as an international standard by the Open Geospatial Consortium, and later by ISO, is the foundation of several domain markup languages such as OGC WaterML 2, used for exchanging hydrologic time series. O&M profiles for samples and sample fractions have not been standardized yet, and there is a significant variety in sample data representations used across agencies and academic projects. The intent of the ODM2 project is to create a unified relational representation for different types of spatially discrete observational data, ensuring that the data can be efficiently stored, transferred, catalogued and queried within a variety of earth science applications. We will report on the initial design and implementation of the new model for PetDB, and results of testing the model against a set of common queries. We have explored several aspects of the model, including: semantic consistency, validation and integrity checking, portability and maintainability, query efficiency, and scalability. The sample datasets from PetDB have been loaded in the initial physical implementation for testing. The results of the experiments point to both benefits and challenges of the initial design, and illustrate the key trade-off between the generality of design, ease of interpretation, and query efficiency, especially as the system needs to scale to millions of records.

  13. VARIANCE ESTIMATION FOR SPATIALLY BALANCED SAMPLES OF ENVIRONMENTAL RESOURCES

    EPA Science Inventory

    The spatial distribution of a natural resource is an important consideration in designing an efficient survey or monitoring program for the resource. We review a unified strategy for designing probability samples of discrete, finite resource populations, such as lakes within som...

  14. Effects of sequential and discrete rapid naming on reading in Japanese children with reading difficulty.

    PubMed

    Wakamiya, Eiji; Okumura, Tomohito; Nakanishi, Makoto; Takeshita, Takashi; Mizuta, Mekumi; Kurimoto, Naoko; Tamai, Hiroshi

    2011-06-01

    To clarify whether rapid naming ability itself is a main underpinning factor of rapid automatized naming tests (RAN) and how deep an influence the discrete decoding process has on reading, we performed discrete naming tasks and discrete hiragana reading tasks as well as sequential naming tasks and sequential hiragana reading tasks with 38 Japanese schoolchildren with reading difficulty. There were high correlations between both discrete and sequential hiragana reading and sentence reading, suggesting that some mechanism which automatizes hiragana reading makes sentence reading fluent. In object and color tasks, there were moderate correlations between sentence reading and sequential naming, and between sequential naming and discrete naming. But no correlation was found between reading tasks and discrete naming tasks. The influence of the rapid naming ability of objects and colors upon reading seemed relatively small, and multi-item processing may work in relation to these. In contrast, in the digit naming task there was moderate correlation between sentence reading and discrete naming, while no correlation was seen between sequential naming and discrete naming. There was moderate correlation between reading tasks and sequential digit naming tasks. Digit rapid naming ability has a more direct effect on reading, while its effect on RAN is relatively limited. The ratio of how rapid naming ability influences RAN and reading seems to vary according to the kind of stimuli used. An assumption about components in RAN which influence reading is discussed in the context of both sequential processing and discrete naming speed. Copyright © 2010 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  15. Multivariate localization methods for ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-05-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
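The single-variable Schur-product localization described above can be sketched in a few lines. The Gaspari-Cohn fifth-order piecewise-rational function is a standard choice for the distance-dependent correlation; the 1-D grid, ensemble size, and half-width used here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn 5th-order piecewise-rational correlation function.
    r is distance divided by the half-width c; compactly supported: zero for r >= 2."""
    r = np.abs(np.asarray(r, dtype=float))
    f = np.zeros_like(r)
    near = r <= 1.0
    far = (r > 1.0) & (r < 2.0)
    x = r[near]
    f[near] = -0.25 * x**5 + 0.5 * x**4 + 0.625 * x**3 - (5 / 3) * x**2 + 1.0
    x = r[far]
    f[far] = (x**5 / 12 - 0.5 * x**4 + 0.625 * x**3 + (5 / 3) * x**2
              - 5.0 * x + 4.0 - (2 / 3) / x)
    return f

def localized_covariance(ensemble, c):
    """Schur (entry-wise) product of the ensemble sample covariance with a
    distance-dependent correlation matrix, for one variable on a 1-D grid.
    ensemble: (n_members, n_state); c: localization half-width in grid units."""
    n = ensemble.shape[1]
    P = np.cov(ensemble, rowvar=False)                     # sample covariance
    dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return gaspari_cohn(dist / c) * P                      # localized covariance
```

Because the localization function equals one at zero distance, the ensemble variances (the diagonal) are untouched, while spurious long-range covariances beyond twice the half-width are zeroed exactly.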

  16. Synchronization of generalized reaction-diffusion neural networks with time-varying delays based on general integral inequalities and sampled-data control approach.

    PubMed

    Dharani, S; Rakkiyappan, R; Cao, Jinde; Alsaedi, Ahmed

    2017-08-01

    This paper explores the problem of synchronization of a class of generalized reaction-diffusion neural networks with mixed time-varying delays. The mixed time-varying delays under consideration comprise both discrete and distributed delays. Due to the development and merits of digital controllers, sampled-data control is a natural choice to establish synchronization in continuous-time systems. Using a newly introduced integral inequality, less conservative synchronization criteria that assure the global asymptotic synchronization of the considered generalized reaction-diffusion neural network with mixed delays are established in terms of linear matrix inequalities (LMIs). The obtained easy-to-test LMI-based synchronization criteria depend on the delay bounds in addition to the reaction-diffusion terms, which is more practicable. Upon solving these LMIs using the Matlab LMI control toolbox, a desired sampled-data controller gain can be acquired without difficulty. Finally, numerical examples are exploited to demonstrate the validity of the derived LMI-based synchronization criteria.

  17. HIV prevalence among people who inject drugs in Greater Kuala Lumpur recruited using respondent-driven sampling

    PubMed Central

    Bazazi, Alexander R.; Crawford, Forrest; Zelenev, Alexei; Heimer, Robert; Kamarulzaman, Adeeba; Altice, Frederick L.

    2016-01-01

    The HIV epidemic in Malaysia is concentrated among people who inject drugs (PWID). Accurate estimates of HIV prevalence are critical for developing appropriate treatment and prevention interventions for PWID in Malaysia. In 2010, 461 PWID were recruited using respondent-driven sampling in Greater Kuala Lumpur, Malaysia. Participants completed rapid HIV testing and behavioral assessments. Estimates of HIV prevalence were computed for each of the three recruitment sites and the overall sample. HIV prevalence was 15.8% (95% CI: 12.5-19.2%) overall but varied widely by location: 37.0% (28.6-45.4%) in Kampung Baru, 10.3% (5.0-15.6%) in Kajang, and 6.3% (3.0-9.5%) in Shah Alam. Recruitment extended to locations far from initial interview sites but was concentrated around discrete geographic regions. We document the high prevalence of HIV among PWID in Greater Kuala Lumpur. Sustained support for community surveillance and HIV prevention interventions is needed to stem the HIV epidemic among PWID in Malaysia. PMID:26358544

  18. Discontinuous Finite Element Quasidiffusion Methods

    DOE PAGES

    Anistratov, Dmitriy Yurievich; Warsa, James S.

    2018-05-21

    In this paper, two-level methods for solving transport problems in one-dimensional slab geometry based on the quasi-diffusion (QD) method are developed. A linear discontinuous finite element method (LDFEM) is derived for the spatial discretization of the low-order QD (LOQD) equations. It involves special interface conditions at the cell edges based on the idea of QD boundary conditions (BCs). We consider different kinds of QD BCs to formulate the necessary cell-interface conditions. We develop two-level methods with independent discretization of the high-order transport equation and LOQD equations, where the transport equation is discretized using the method of characteristics and the LDFEM is applied to the LOQD equations. We also formulate closures that lead to a discretization consistent with a LDFEM discretization of the transport equation. The proposed methods are studied by means of test problems formulated with the method of manufactured solutions. Numerical experiments are presented demonstrating the performance of the proposed methods. Lastly, we also show that the method with independent discretization has the asymptotic diffusion limit.

  19. Discontinuous Finite Element Quasidiffusion Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anistratov, Dmitriy Yurievich; Warsa, James S.

    In this paper, two-level methods for solving transport problems in one-dimensional slab geometry based on the quasi-diffusion (QD) method are developed. A linear discontinuous finite element method (LDFEM) is derived for the spatial discretization of the low-order QD (LOQD) equations. It involves special interface conditions at the cell edges based on the idea of QD boundary conditions (BCs). We consider different kinds of QD BCs to formulate the necessary cell-interface conditions. We develop two-level methods with independent discretization of the high-order transport equation and LOQD equations, where the transport equation is discretized using the method of characteristics and the LDFEM is applied to the LOQD equations. We also formulate closures that lead to a discretization consistent with a LDFEM discretization of the transport equation. The proposed methods are studied by means of test problems formulated with the method of manufactured solutions. Numerical experiments are presented demonstrating the performance of the proposed methods. Lastly, we also show that the method with independent discretization has the asymptotic diffusion limit.

  20. Rhythmic arm movements are less affected than discrete ones after a stroke.

    PubMed

    Leconte, Patricia; Orban de Xivry, Jean-Jacques; Stoquart, Gaëtan; Lejeune, Thierry; Ronsse, Renaud

    2016-06-01

    Recent reports indicate that rhythmic and discrete upper-limb movements are two different motor primitives which recruit, at least partially, distinct neural circuitries. In particular, rhythmic movements recruit a smaller cortical network than discrete movements. The goal of this paper is to compare the levels of disability in performing rhythmic and discrete movements after a stroke. More precisely, we tested the hypothesis that rhythmic movements should be less affected than discrete ones, because they recruit neural circuitries that are less likely to be damaged by the stroke. Eleven stroke patients and eleven age-matched control subjects performed discrete and rhythmic movements using an end-effector robot (REAplan). The rhythmic movement condition was performed with and without visual targets to further decrease cortical recruitment. Movement kinematics was analyzed through specific metrics, capturing the degree of smoothness and harmonicity. We reported three main observations: (1) the movement smoothness of the paretic arm was more severely degraded for discrete movements than rhythmic movements; (2) most of the patients performed rhythmic movements with a lower harmonicity than controls; and (3) visually guided rhythmic movements were more altered than non-visually guided rhythmic movements. These results suggest a hierarchy in the levels of impairment: Discrete movements are more affected than rhythmic ones, which are more affected if they are visually guided. These results are a new illustration that discrete and rhythmic movements are two fundamental primitives in upper-limb movements. Moreover, this hierarchy of impairment opens new post-stroke rehabilitation perspectives.

  1. A comparison of two nano-sized particle air filtration tests in the diameter range of 10 to 400 nanometers

    NASA Astrophysics Data System (ADS)

    Japuntich, Daniel A.; Franklin, Luke M.; Pui, David Y.; Kuehn, Thomas H.; Kim, Seong Chan; Viner, Andrew S.

    2007-01-01

    Two different air filter test methodologies are discussed and compared for challenges in the nano-sized particle range of 10-400 nm. Included in the discussion are test procedure development, factors affecting variability and comparisons between results from the tests. One test system which gives a discrete penetration for a given particle size is the TSI 8160 Automated Filter tester (updated and commercially available now as the TSI 3160) manufactured by the TSI, Inc., Shoreview, MN. Another filter test system was developed utilizing a Scanning Mobility Particle Sizer (SMPS) to sample the particle size distributions downstream and upstream of an air filter to obtain a continuous percent filter penetration versus particle size curve. Filtration test results are shown for fiberglass filter paper of intermediate filtration efficiency. Test variables affecting the results of the TSI 8160 for NaCl and dioctyl phthalate (DOP) particles are discussed, including condensation particle counter stability and the sizing of the selected particle challenges. Filter testing using a TSI 3936 SMPS sampling upstream and downstream of a filter is also shown with a discussion of test variables and the need for proper SMPS volume purging and filter penetration correction procedure. For both tests, the penetration versus particle size curves for the filter media studied follow the theoretical Brownian capture model of decreasing penetration with decreasing particle diameter down to 10 nm with no deviation. From these findings, the authors can say with reasonable confidence that there is no evidence of particle thermal rebound in the size range.

  2. Curvature and tangential deflection of discrete arcs: a theory based on the commutator of scatter matrix pairs and its application to vertex detection in planar shape data.

    PubMed

    Anderson, I M; Bezdek, J C

    1984-01-01

    This paper introduces a new theory for the tangential deflection and curvature of plane discrete curves. Our theory applies to discrete data in either rectangular boundary-coordinate or chain-coded formats; its rationale is drawn from the statistical and geometric properties associated with the eigenvalue-eigenvector structure of sample covariance matrices. Specifically, we prove that the nonzero entry of the commutator of a pair of scatter matrices constructed from discrete arcs is related to the angle between their eigenspaces. Further, we show that this entry is, in certain limiting cases, also proportional to the analytical curvature of the plane curve from which the discrete data are drawn. These results lend a sound theoretical basis to the notions of discrete curvature and tangential deflection; moreover, they provide a means for computationally efficient implementation of algorithms which use these ideas in various image processing contexts. As a concrete example, we develop the commutator vertex detection (CVD) algorithm, which identifies the location of vertices in shape data based on excessive cumulative tangential deflection, and we compare its performance to several well-established corner detectors that utilize the alternative strategy of finding (approximate) curvature extrema.
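A minimal sketch of the scatter-matrix commutator idea (the function names and toy arcs are illustrative; the paper's full CVD algorithm involves considerably more bookkeeping). For symmetric 2x2 scatter matrices the commutator is antisymmetric with zero diagonal, so a single off-diagonal entry carries the deflection information: it vanishes when the two arcs share eigenspaces (collinear segments) and grows with the angle between them.

```python
import numpy as np

def scatter_matrix(arc):
    """Scatter (sample covariance) matrix of a 2-D discrete arc of shape (m, 2)."""
    d = arc - arc.mean(axis=0)
    return d.T @ d / len(arc)

def commutator_entry(arc1, arc2):
    """Off-diagonal entry of the commutator [S1, S2] = S1 S2 - S2 S1.
    For symmetric 2x2 matrices the commutator is antisymmetric, so this
    single entry measures the misalignment of the two eigenspaces."""
    S1, S2 = scatter_matrix(arc1), scatter_matrix(arc2)
    C = S1 @ S2 - S2 @ S1
    return C[0, 1]

t = np.linspace(0.0, 1.0, 50)[:, None]
arc_x  = np.hstack([t, np.zeros_like(t)])          # segment along the x-axis
arc_x2 = np.hstack([t + 2.0, np.zeros_like(t)])    # collinear continuation
arc_45 = np.hstack([t, t])                         # segment deflected by 45 degrees

# Collinear arcs share eigenvectors, so their scatter matrices commute and the
# entry is (numerically) zero; the deflected arc yields a clearly nonzero entry.
```

A vertex detector built on this idea would accumulate such entries along the boundary and flag locations where the cumulative deflection becomes excessive.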

  3. A general population-genetic model for the production by population structure of spurious genotype-phenotype associations in discrete, admixed or spatially distributed populations.

    PubMed

    Rosenberg, Noah A; Nordborg, Magnus

    2006-07-01

    In linkage disequilibrium mapping of genetic variants causally associated with phenotypes, spurious associations can potentially be generated by any of a variety of types of population structure. However, mathematical theory of the production of spurious associations has largely been restricted to population structure models that involve the sampling of individuals from a collection of discrete subpopulations. Here, we introduce a general model of spurious association in structured populations, appropriate whether the population structure involves discrete groups, admixture among such groups, or continuous variation across space. Under the assumptions of the model, we find that a single common principle--applicable to both the discrete and admixed settings as well as to spatial populations--gives a necessary and sufficient condition for the occurrence of spurious associations. Using a mathematical connection between the discrete and admixed cases, we show that in admixed populations, spurious associations are less severe than in corresponding mixtures of discrete subpopulations, especially when the variance of admixture across individuals is small. This observation, together with the results of simulations that examine the relative influences of various model parameters, has important implications for the design and analysis of genetic association studies in structured populations.

  4. The effect of spatial discretization upon traveling wave body forcing of a turbulent wall-bounded flow

    NASA Astrophysics Data System (ADS)

    You, Soyoung; Goldstein, David

    2015-11-01

    DNS is employed to simulate turbulent channel flow subject to a traveling wave body force field near the wall. The regions in which forces are applied are made progressively more discrete in a sequence of simulations to explore the boundaries between the effects of discrete flow actuators and spatially continuum actuation. The continuum body force field is designed to correspond to the "optimal" resolvent mode of McKeon and Sharma (2010), which has the L2 norm of σ1. That is, the normalized harmonic forcing that gives the largest disturbance energy is the first singular mode with the gain of σ1. 2D and 3D resolvent modes are examined at a modest Reτ of 180. For code validation, nominal flow simulations without discretized forcing are compared to previous work by Sharma and Goldstein (2014), in which we find that as we increase the forcing amplitude there is a decrease in the mean velocity and an increase in turbulent kinetic energy. The same force field is then sampled into isolated sub-domains to emulate the effect of discrete physical actuators. Several cases will be presented to explore the dependencies between the level of discretization and the turbulent flow behavior.

  5. Cross-shore and Vertical Distributions of Invertebrate Larvae Using Autonomous Sampling Coupled with Genetic Analysis

    NASA Astrophysics Data System (ADS)

    Govindarajan, A.; Pineda, J.; Purcell, M.; Tradd, K.; Packard, G.; Girard, A.; Dennett, M.; Breier, J. A., Jr.

    2016-02-01

    We present a new method to estimate the distribution of invertebrate larvae relative to environmental variables such as temperature, salinity, and circulation. A large volume in situ filtering system developed for discrete biogeochemical sampling in the deep-sea (the Suspended Particulate Rosette "SUPR" multisampler) was mounted to the autonomous underwater vehicle REMUS 600 for coastal larval and environmental sampling. We describe the results of SUPR-REMUS deployments conducted in Buzzards Bay, Massachusetts (2014) and west of Martha's Vineyard, Massachusetts (2015). We collected discrete samples cross-shore and from surface, middle, and bottom layers of the water column. Samples were preserved for DNA analysis. Our Buzzards Bay deployment targeted barnacle larvae, which are abundant in late winter and early spring. For these samples, we used morphological analysis and DNA barcodes generated by Sanger sequencing to obtain stage and species-specific cross-shore and vertical distributions. We targeted bivalve larvae in our 2015 deployments, and genetic analysis of larvae from these samples is underway. For these samples, we are comparing species barcode data derived from traditional Sanger sequencing of individuals to those obtained from next generation sequencing (NGS) of bulk plankton samples. Our results demonstrate the utility of autonomous sampling combined with DNA barcoding for studying larval distributions and transport dynamics.

  6. Why the long hours? Job demands and social exchange dynamics.

    PubMed

    Genin, Emilie; Haines, Victor Y; Pelletier, David; Rousseau, Vincent; Marchand, Alain

    2016-11-22

    This study investigates the determinants of long working hours from the perspectives of the demand-control model [Karasek, 1979] and social exchange theory [Blau, 1964; Gouldner, 1960]. These two theoretical perspectives are tested to understand why individuals work longer (or shorter) hours. The hypotheses are tested with a representative sample of 1,604 employed Canadians. In line with Karasek's model, the results support that high job demands are positively associated with longer work hours. The social exchange perspective would predict a positive association between skill discretion and work hours; this hypothesis was supported for individuals with a higher education degree. Finally, the results support a positive association between active jobs and longer work hours. Our research suggests that job demands and social exchange dynamics need to be considered together in the explanation of longer (or shorter) work hours.

  7. Discretization provides a conceptually simple tool to build expression networks.

    PubMed

    Vass, J Keith; Higham, Desmond J; Mudaliar, Manikhandan A V; Mao, Xuerong; Crowther, Daniel J

    2011-04-18

    Biomarker identification, using network methods, depends on finding regular co-expression patterns; the overall connectivity is of greater importance than any single relationship. A second requirement is a simple algorithm for ranking patients on how relevant a gene-set is. For both of these requirements discretized data helps to first identify gene cliques, and then to stratify patients. We explore a biologically intuitive discretization technique which codes genes as up- or down-regulated, with values close to the mean set as unchanged; this allows a richer description of relationships between genes than can be achieved by positive and negative correlation. We find a close agreement between our results and the template gene-interactions used to build synthetic microarray-like data by SynTReN, which synthesizes "microarray" data using known relationships that are successfully identified by our method. We are able to split positive co-regulation into up-together and down-together, and negative co-regulation is considered as directed up-down relationships. In some cases these exist in only one direction with real data, but not with the synthetic data. We illustrate our approach using two studies on white blood cells and derived immortalized cell lines and compare the approach with standard correlation-based computations. No attempt is made to distinguish possible causal links, as the search for biomarkers would be crippled by losing highly significant co-expression relationships. This contrasts with approaches like ARACNE and IRIS. The method is illustrated with an analysis of gene expression for energy metabolism pathways.
For each discovered relationship we are able to identify the samples on which it is based in the discretized sample-gene matrix, along with a simplified view of the patterns of gene expression; this helps to dissect the gene-sample relationships relevant to a research topic, identifying sets of co-regulated and anti-regulated genes and the samples or patients in which each relationship occurs.
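A sketch of the up/down/unchanged coding the abstract describes. The one-standard-deviation band for "close to the mean" and the helper names are assumptions for illustration; the paper does not spell out its exact threshold rule here.

```python
import numpy as np

def discretize_expression(X, k=1.0):
    """Code each gene as up (+1), down (-1) or unchanged (0) relative to its
    own mean; values within k standard deviations of the mean count as
    unchanged. X: (n_samples, n_genes) expression matrix."""
    mu = X.mean(axis=0)
    sd = X.std(axis=0)
    D = np.zeros(X.shape, dtype=int)
    D[X > mu + k * sd] = 1
    D[X < mu - k * sd] = -1
    return D

def up_together(D, i, j):
    """Number of samples in which genes i and j are both up-regulated;
    analogous counts give down-together and directed up-down relationships."""
    return int(np.sum((D[:, i] == 1) & (D[:, j] == 1)))
```

The discretized matrix directly supports the paper's two uses: counting co-regulation patterns between gene pairs, and reading off which samples each discovered relationship is based on.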

  8. A homogenization-based quasi-discrete method for the fracture of heterogeneous materials

    NASA Astrophysics Data System (ADS)

    Berke, P. Z.; Peerlings, R. H. J.; Massart, T. J.; Geers, M. G. D.

    2014-05-01

    The understanding and the prediction of the failure behaviour of materials with pronounced microstructural effects is of crucial importance. This paper presents a novel computational methodology for the handling of fracture on the basis of the microscale behaviour. The basic principles presented here allow the incorporation of an adaptive discretization scheme of the structure as a function of the evolution of strain localization in the underlying microstructure. The proposed quasi-discrete methodology bridges two scales: the scale of the material microstructure, modelled with a continuum type description; and the structural scale, where a discrete description of the material is adopted. The damaging material at the structural scale is divided into unit volumes, called cells, which are represented as a discrete network of points. The scale transition is inspired by computational homogenization techniques; however it does not rely on classical averaging theorems. The structural discrete equilibrium problem is formulated in terms of the underlying fine scale computations. Particular boundary conditions are developed on the scale of the material microstructure to address damage localization problems. The performance of this quasi-discrete method with the enhanced boundary conditions is assessed using different computational test cases. The predictions of the quasi-discrete scheme agree well with reference solutions obtained through direct numerical simulations, both in terms of crack patterns and load versus displacement responses.

  9. ROLES OF OPIOID RECEPTOR SUBTYPES IN MEDIATING ALCOHOL SEEKING INDUCED BY DISCRETE CUES AND CONTEXT

    PubMed Central

    Marinelli, Peter W.; Funk, Douglas; Harding, Stephen; Li, Zhaoxia; Juzytsch, Walter; Lê, A.D.

    2009-01-01

    The aim of this study was to assess the effects of selective blockade of the delta (DOP) or mu (MOP) opioid receptors on alcohol seeking induced by discrete cues and context. In Experiment 1, rats were trained to self-administer alcohol in an environment with distinct sensory properties. After extinction in a different context with separate sensory properties, rats were tested for context-induced renewal in the original context following treatment with the DOP receptor antagonist naltrindole (0-15 mg/kg, IP) or the MOP receptor antagonist CTOP (0-3 µg/kg, ICV). In a separate set of experiments, reinstatement was tested with the presentation of a discrete light+tone cue previously associated with alcohol delivery, following extinction without the cue. In Experiment 2, the effects of naltrindole (0-5 mg/kg, IP) or CTOP (0-3 µg/kg, ICV) were assessed. For context-induced renewal, 7.5 mg/kg naltrindole reduced responding without affecting locomotor activity. Both doses of CTOP attenuated responding in the first 15 min of the renewal test session; however, total responses did not differ at the end of the session. For discrete cue-induced reinstatement, 1 and 5 mg/kg naltrindole attenuated responding, but CTOP had no effect. We conclude that while DOP receptors mediate alcohol seeking induced by discrete cues and context, MOP receptors may play a modest role only in context-induced renewal. These findings point to a differential involvement of opioid receptor subtypes in the effects of different kinds of conditioned stimuli on alcohol seeking, and support a more prominent role for DOP receptors. PMID:19686472

  10. An investigation of potential applications of OP-SAPS: Operational Sampled Analog Processors

    NASA Technical Reports Server (NTRS)

    Parrish, E. A.; Mcvey, E. S.

    1977-01-01

    The application of OP-SAPs (operational sampled analog processors) in pattern recognition systems is summarized. Areas investigated include: (1) human face recognition; (2) a high-speed programmable transversal filter system; (3) discrete word (speech) recognition; and (4) a resolution enhancement system.

  11. Study of the influence of the parameters of an experiment on the simulation of pole figures of polycrystalline materials using electron microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antonova, A. O., E-mail: aoantonova@mail.ru; Savyolova, T. I.

    2016-05-15

    A two-dimensional mathematical model of a polycrystalline sample and an experiment on electron backscattering diffraction (EBSD) is considered. The measurement parameters are taken to be the scanning step and the threshold grain-boundary angle. Discrete pole figures for materials with hexagonal symmetry have been calculated based on the results of the model experiment. Discrete and smoothed (by the kernel method) pole figures of the model sample and the samples in the model experiment are compared using the χ² homogeneity criterion, an estimate of the pole figure maximum and its coordinate, a deviation of the pole figures of the model in the experiment from the sample in the space of L₁ measurable functions, and the RP-criterion for estimating the pole figure errors. It is shown that the problem of calculating pole figures is ill-posed and that their determination with respect to the measurement parameters is not reliable.
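The χ² homogeneity comparison of two discrete pole figures can be sketched for binned counts on a common grid. The unequal-totals two-histogram statistic below is a standard textbook form and an assumption here, not necessarily the exact criterion used in the paper.

```python
import numpy as np

def chi2_homogeneity(n1, n2):
    """Two-histogram chi-square homogeneity statistic for binned counts on a
    common grid (e.g. two discrete pole figures). Handles unequal totals;
    bins empty in both histograms are skipped."""
    n1 = np.asarray(n1, dtype=float)
    n2 = np.asarray(n2, dtype=float)
    N1, N2 = n1.sum(), n2.sum()
    keep = (n1 + n2) > 0
    a = np.sqrt(N2 / N1) * n1[keep]   # rescaling makes equal shapes cancel exactly
    b = np.sqrt(N1 / N2) * n2[keep]
    return float(np.sum((a - b) ** 2 / (n1[keep] + n2[keep])))
```

Two histograms with the same shape give a statistic of zero; the statistic grows as the binned distributions diverge, which is the sense in which the paper compares model and experimental pole figures.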

  12. Water quality monitoring and data collection in the Mississippi sound

    USGS Publications Warehouse

    Runner, Michael S.; Creswell, R.

    2002-01-01

    The United States Geological Survey and the Mississippi Department of Marine Resources are collecting data on the quality of the water in the Mississippi Sound of the Gulf of Mexico, and streamflow data for its tributaries. The U.S. Geological Survey is collecting continuous water-level data, continuous and discrete water-temperature data, and continuous and discrete specific-conductance data, as well as chloride and salinity samples, at two locations in the Mississippi Sound and three Corps of Engineers tidal gages. Continuous-discharge data are also being collected at two additional stations on tributaries. The Mississippi Department of Marine Resources collects water samples at 169 locations in the Gulf of Mexico. Between 1,800 and 2,000 samples are collected annually and analyzed for turbidity and fecal coliform bacteria. The continuous data are made available in real time through the internet and are being used in conjunction with streamflow data, weather data, and sampling data for the monitoring and management of the oyster reefs, the shrimp fishery, and other marine species and their habitats.

  13. Effects of Data Sampling on Graphical Depictions of Learning

    ERIC Educational Resources Information Center

    Carey, Mary-Katherine; Bourret, Jason C.

    2014-01-01

    Continuous and discontinuous data-collection methods were compared in the context of discrete-trial programming. Archival data sets were analyzed using trial sampling (1st 5 trials, 1st 3 trials, and 1st trial only) and session sampling (every other session, every 3rd session, and every 5th session). Results showed that trial sampling…

  14. Surface-water, water-quality, and meteorological data for the Cambridge, Massachusetts, drinking-water source area, water years 2007-08

    USGS Publications Warehouse

    Smith, Kirk P.

    2011-01-01

    Water samples were collected in nearly all of the subbasins in the Cambridge drinking-water source area and from Fresh Pond during the study period. Discrete water samples were collected during base-flow conditions with an antecedent dry period of at least 3 days. Composite sampl

  15. Spreadsheet Simulation of the Law of Large Numbers

    ERIC Educational Resources Information Center

    Boger, George

    2005-01-01

    If larger and larger samples are successively drawn from a population and a running average calculated after each sample has been drawn, the sequence of averages will converge to the mean, [mu], of the population. This remarkable fact, known as the law of large numbers, holds true if samples are drawn from a population of discrete or continuous…
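    The running-average experiment described above is easy to reproduce outside a spreadsheet. Below is a minimal Python sketch (the function name and the fair-die population are illustrative, not from the article):

```python
import random

def running_averages(population, n_draws, seed=1):
    """Draw samples with replacement and record the running average after each draw."""
    rng = random.Random(seed)
    total = 0.0
    averages = []
    for i in range(1, n_draws + 1):
        total += rng.choice(population)
        averages.append(total / i)
    return averages

# Discrete population (a fair die): mean mu = 3.5.
avgs = running_averages([1, 2, 3, 4, 5, 6], 100_000)
# Early averages fluctuate; the running average drifts toward mu as the sample grows.
print(avgs[9], avgs[-1])
```

    The same loop works for a continuous population by replacing `rng.choice(...)` with, say, `rng.uniform(a, b)`.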

  16. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    Based on an analysis of a cosine light field with a known analytic expression and of the pseudo-inverse method, the object is illuminated by a preset light field described by a deterministic discrete Fourier transform measurement matrix, and the object image is reconstructed with the pseudo-inverse. The analytic expression of this computational ghost imaging algorithm based on a discrete Fourier transform measurement matrix (FGI) is derived theoretically and compared with compressive computational ghost imaging based on a random measurement matrix; the reconstruction process and the reconstruction error are analyzed, and simulations verify the theoretical analysis. When the number of sampled measurements is close to the number of object pixels, the rank of the discrete Fourier transform matrix equals that of the random measurement matrix, the PSNRs of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampled measurements decreases, the PSNR of the FGI reconstruction decreases slowly, whereas the PSNRs of the PGI and CGI reconstructions decrease sharply. The reconstruction time of the FGI algorithm is shorter than that of the other algorithms and does not depend on the number of sampled measurements. The FGI algorithm can also effectively filter out random white noise through a low-pass filter, giving it a stronger denoising capability than the CGI algorithm. Overall, the FGI algorithm improves both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
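    The measure-then-invert step described above can be sketched in a few lines of NumPy. This is an illustrative toy (16 pixels, full sampling, so the pseudo-inverse reconstruction is exact), not the authors' implementation:

```python
import numpy as np

n = 16  # number of object "pixels"
# Discrete Fourier transform measurement matrix: row j is the j-th
# structured illumination pattern (full sampling, so F is invertible).
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)

rng = np.random.default_rng(0)
x = rng.random(n)            # unknown (real-valued) object
y = F @ x                    # one bucket measurement per illumination pattern

# Pseudo-inverse reconstruction; with full sampling this recovers x exactly
# up to machine precision.
x_rec = np.linalg.pinv(F) @ y
print(np.max(np.abs(x_rec.real - x)))
```

    With fewer measurement rows than pixels, `np.linalg.pinv` instead returns the minimum-norm least-squares estimate, which is where the reconstruction-error behavior discussed above comes in.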

  17. Time-Domain Evaluation of Fractional Order Controllers’ Direct Discretization Methods

    NASA Astrophysics Data System (ADS)

    Ma, Chengbin; Hori, Yoichi

    Fractional Order Control (FOC), in which the controlled systems and/or controllers are described by fractional order differential equations, has been applied to various control problems. Although FOC's theoretical superiority is not difficult to appreciate, its realization remains problematic: because fractional order systems have infinite dimension, a proper finite difference approximation is needed to realize the designed fractional order controllers. In this paper, the existing direct discretization methods are evaluated by their convergence and by time-domain comparison with a baseline case. A proposed sampling-time scaling property is used to calculate the baseline case with full memory length; this novel discretization method is based on the classical trapezoidal rule but with a scaled sampling time. Comparative studies show that its good performance and simple algorithm make the Short Memory Principle method the most practical of these approaches. FOC research is still at an early stage, but its applications in modeling and its robustness against nonlinearities are promising. In parallel with the development of FOC theory, applying FOC to concrete control problems is also crucially important.
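    A standard way to realize such controllers is the Grünwald-Letnikov difference with a truncated history, which is the essence of the Short Memory Principle mentioned above. The sketch below is a generic illustration under that assumption, not the paper's algorithm; the function name and parameters are ours:

```python
import numpy as np

def gl_fractional_derivative(f, t, alpha, h=1e-3, memory=1000):
    """Grunwald-Letnikov approximation of the order-alpha derivative of f at t,
    truncated to the most recent `memory` samples (short memory principle)."""
    n = min(memory, int(t / h))
    c = np.empty(n + 1)
    c[0] = 1.0
    for k in range(1, n + 1):            # GL binomial coefficients, recursively
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    samples = f(t - h * np.arange(n + 1))  # most recent history of f
    return (c @ samples) / h**alpha

# Sanity check: for alpha = 1 the coefficients collapse to [1, -1, 0, ...],
# i.e. a backward difference, so the derivative of t**2 at t = 1 is close to 2.
print(gl_fractional_derivative(lambda t: t**2, 1.0, 1.0))
# For alpha = 0.5, the half-derivative of f(t) = t is 2*sqrt(t/pi).
print(gl_fractional_derivative(lambda t: t, 1.0, 0.5))
```

    Shrinking `memory` trades accuracy for a fixed per-step cost, which is exactly the practical appeal of the Short Memory Principle.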

  18. Concurrent Tumor Segmentation and Registration with Uncertainty-based Sparse non-Uniform Graphs

    PubMed Central

    Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos

    2014-01-01

    In this paper, we present a graph-based concurrent brain tumor segmentation and atlas-to-diseased-patient registration framework. Both the segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed on the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as their important memory requirements, content-driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered by the min-marginal energies. State-of-the-art results on a substantial low-grade glioma database demonstrate the potential of our method, while our proposed approach shows maintained performance and strongly reduced complexity of the model. PMID:24717540

  19. Sparse dictionary for synthetic transmit aperture medical ultrasound imaging.

    PubMed

    Wang, Ping; Jiang, Jin-Yang; Li, Na; Luo, Han-Wu; Li, Fang; Cui, Shi-Gang

    2017-07-01

    It is possible to recover a signal below the Nyquist sampling limit using a compressive sensing technique in ultrasound imaging. However, the reconstruction enabled by common sparse transform approaches does not achieve satisfactory results. Considering the ultrasound echo signal's features of attenuation, repetition, and superposition, a sparse dictionary with the emission pulse signal is proposed. Sparse coefficients in the proposed dictionary have high sparsity. Images reconstructed with this dictionary were compared with those obtained with the three other common transforms, namely, discrete Fourier transform, discrete cosine transform, and discrete wavelet transform. The performance of the proposed dictionary was analyzed via a simulation and experimental data. The mean absolute error (MAE) was used to quantify the quality of the reconstructions. Experimental results indicate that the MAE associated with the proposed dictionary was always the smallest, the reconstruction time required was the shortest, and the lateral resolution and contrast of the reconstructed images were also the closest to the original images. The proposed sparse dictionary performed better than the other three sparse transforms. With the same sampling rate, the proposed dictionary achieved excellent reconstruction quality.

  20. Discrete element method (DEM) simulations of stratified sampling during solid dosage form manufacturing.

    PubMed

    Hancock, Bruno C; Ketterhagen, William R

    2011-10-14

    Discrete element model (DEM) simulations of the discharge of powders from hoppers under gravity were analyzed to provide estimates of dosage form content uniformity during the manufacture of solid dosage forms (tablets and capsules). For a system that exhibits moderate segregation the effects of sample size, number, and location within the batch were determined. The various sampling approaches were compared to current best-practices for sampling described in the Product Quality Research Institute (PQRI) Blend Uniformity Working Group (BUWG) guidelines. Sampling uniformly across the discharge process gave the most accurate results with respect to identifying segregation trends. Sigmoidal sampling (as recommended in the PQRI BUWG guidelines) tended to overestimate potential segregation issues, whereas truncated sampling (common in industrial practice) tended to underestimate them. The size of the sample had a major effect on the absolute potency RSD. The number of sampling locations (10 vs. 20) had very little effect on the trends in the data, and the number of samples analyzed at each location (1 vs. 3 vs. 7) had only a small effect for the sampling conditions examined. The results of this work provide greater understanding of the effect of different sampling approaches on the measured content uniformity of real dosage forms, and can help to guide the choice of appropriate sampling protocols. Copyright © 2011 Elsevier B.V. All rights reserved.
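    The over/underestimation effect described above can be illustrated with a toy (non-DEM) potency profile: a linear segregation trend sampled uniformly across the discharge versus sampled only from a truncated middle portion of the run. All numbers here are hypothetical:

```python
import numpy as np

# Hypothetical potency profile across hopper discharge: the batch segregates,
# so potency drifts from 95% to 105% of label claim over the discharge.
discharge = np.linspace(0.0, 1.0, 1000)    # fraction of batch discharged
potency = 95.0 + 10.0 * discharge          # % label claim (illustrative trend)

def sample_at(fractions):
    """Pick the potency seen at the given discharge fractions."""
    idx = (np.asarray(fractions) * (len(discharge) - 1)).astype(int)
    return potency[idx]

uniform = sample_at(np.linspace(0.05, 0.95, 10))    # spread across discharge
truncated = sample_at(np.linspace(0.30, 0.70, 10))  # middle of the run only

# Uniform sampling sees (almost) the full segregation trend; truncated
# sampling underestimates its span.
print(np.ptp(uniform), np.ptp(truncated))
```

    The same comparison with a front-loaded sigmoidal sampling plan would overweight the extremes of the trend, mirroring the overestimation the abstract attributes to sigmoidal sampling.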

  1. Direct Discrete Method for Neutronic Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vosoughi, Naser; Akbar Salehi, Ali; Shahriari, Majid

    The objective of this paper is to introduce a new direct method for neutronic calculations. This method, named the Direct Discrete Method, is simpler than the neutron transport equation and more compatible with the physical meaning of problems. It is based on the physics of the problem: by meshing the desired geometry, writing the balance equation for each mesh interval, and accounting for the coupling between adjacent mesh intervals, it produces the final series of discrete equations directly, without deriving the neutron transport differential equation and without the mandatory passage through that differential-equation bridge. We have produced the neutron discrete equations for a cylindrical geometry with two boundary conditions in one energy group. The correctness of the results of this method is verified against MCNP-4B code calculations. (authors)

  2. Discrete Roughness Transition for Hypersonic Flight Vehicles

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Horvath, Thomas J.

    2007-01-01

    The importance of discrete roughness and the correlations developed to predict the onset of boundary layer transition on hypersonic flight vehicles are discussed. The paper is organized by hypersonic vehicle application, characterized in a general sense by the conditions at the edge of the boundary layer: slender vehicles with hypersonic edge conditions, moderately blunt vehicles with supersonic edge conditions, and blunt vehicles with subsonic edge conditions. This paper is intended as a review of recent discrete roughness transition work completed at NASA Langley Research Center in support of agency flight test programs. First, a review is provided of discrete roughness wind tunnel data and the resulting correlations that were developed. Then, results obtained from flight vehicles, in particular the recently flown Hyper-X and Shuttle missions, are discussed and compared to the ground-based correlations.

  3. Analyzing Large Gene Expression and Methylation Data Profiles Using StatBicRM: Statistical Biclustering-Based Rule Mining

    PubMed Central

    Maulik, Ujjwal; Mallik, Saurav; Mukhopadhyay, Anirban; Bandyopadhyay, Sanghamitra

    2015-01-01

    Microarray and beadchip are two of the most efficient techniques for measuring gene expression and methylation data in bioinformatics. Biclustering deals with the simultaneous clustering of genes and samples. In this article, we propose a computational rule mining framework, StatBicRM (i.e., statistical biclustering-based rule mining), to identify special types of rules and potential biomarkers from biological datasets using integrated statistical and binary inclusion-maximal biclustering techniques. First, a novel statistical strategy is used to eliminate insignificant, low-significance, and redundant genes in such a way that the significance level respects the data distribution (viz., normal or non-normal). The data are then discretized and post-discretized, consecutively. Thereafter, the biclustering technique is applied to identify maximal frequent closed homogeneous itemsets, and the corresponding special types of rules are extracted from the selected itemsets. Our proposed rule mining method performs better than other rule mining algorithms because it generates maximal frequent closed homogeneous itemsets instead of frequent itemsets; it thus saves elapsed time and can work on big datasets. Pathway and Gene Ontology analyses are conducted on the genes of the evolved rules using the DAVID database, and frequency analysis of the genes appearing in the evolved rules is performed to determine potential biomarkers. Furthermore, we classify the data to assess how accurately the evolved rules describe the remaining test (unknown) data, and we compare the average classification accuracy and other related factors with other rule-based classifiers; statistical significance tests verify the statistical relevance of the comparative results. Here, each of the other rule mining methods or rule-based classifiers starts from the same post-discretized data matrix. Finally, we also include an integrated analysis of gene expression and methylation to determine the epigenetic effect (viz., the effect of methylation) on gene expression level. PMID:25830807

  4. Analyzing large gene expression and methylation data profiles using StatBicRM: statistical biclustering-based rule mining.

    PubMed

    Maulik, Ujjwal; Mallik, Saurav; Mukhopadhyay, Anirban; Bandyopadhyay, Sanghamitra

    2015-01-01

    Microarray and beadchip are two of the most efficient techniques for measuring gene expression and methylation data in bioinformatics. Biclustering deals with the simultaneous clustering of genes and samples. In this article, we propose a computational rule mining framework, StatBicRM (i.e., statistical biclustering-based rule mining), to identify special types of rules and potential biomarkers from biological datasets using integrated statistical and binary inclusion-maximal biclustering techniques. First, a novel statistical strategy is used to eliminate insignificant, low-significance, and redundant genes in such a way that the significance level respects the data distribution (viz., normal or non-normal). The data are then discretized and post-discretized, consecutively. Thereafter, the biclustering technique is applied to identify maximal frequent closed homogeneous itemsets, and the corresponding special types of rules are extracted from the selected itemsets. Our proposed rule mining method performs better than other rule mining algorithms because it generates maximal frequent closed homogeneous itemsets instead of frequent itemsets; it thus saves elapsed time and can work on big datasets. Pathway and Gene Ontology analyses are conducted on the genes of the evolved rules using the DAVID database, and frequency analysis of the genes appearing in the evolved rules is performed to determine potential biomarkers. Furthermore, we classify the data to assess how accurately the evolved rules describe the remaining test (unknown) data, and we compare the average classification accuracy and other related factors with other rule-based classifiers; statistical significance tests verify the statistical relevance of the comparative results. Here, each of the other rule mining methods or rule-based classifiers starts from the same post-discretized data matrix. Finally, we also include an integrated analysis of gene expression and methylation to determine the epigenetic effect (viz., the effect of methylation) on gene expression level.

  5. Improving immunization of programmable logic controllers using weighted median filters.

    PubMed

    Paredes, José L; Díaz, Dhionel

    2005-04-01

    This paper addresses the problem of improving the immunity of programmable logic controllers (PLCs) to electromagnetic interference with impulsive characteristics. A filtering structure based on weighted median filters is proposed that requires no additional hardware and can be implemented in legacy PLCs. The filtering operation is implemented in the binary domain and removes impulsive noise present on the discrete input, thus adding robustness to the PLC. By modifying the sampling clock structure, two variants of the filter are obtained. Both structures exploit the cyclic nature of the PLC to form an N-sample observation window of the discrete input, so that a status change is determined by a filter output that takes all N samples into account, preventing a single impulse from affecting PLC functionality. A comparative study of the different filters' performances, based on statistical analysis, is presented.
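    A minimal sketch of the idea, assuming equal weights (so the weighted median of binary samples reduces to a majority vote over the N-sample window); the function name and window length are illustrative, not from the paper:

```python
def filtered_input(samples, n=5):
    """Majority vote (binary weighted median with equal weights) over the last
    n samples of a discrete PLC input; returns the filtered bit per scan cycle."""
    out = []
    window = [samples[0]] * n          # assume steady state before the first scan
    for bit in samples:
        window = window[1:] + [bit]    # one new sample per PLC scan cycle
        out.append(1 if 2 * sum(window) > n else 0)
    return out

# A single noise impulse on an otherwise low input is rejected...
print(filtered_input([0, 0, 0, 1, 0, 0, 0, 0]))
# ...while a sustained change of state still propagates (after a short delay).
print(filtered_input([0, 0, 0, 1, 1, 1, 1, 1]))
```

    Unequal weights would simply bias the vote, e.g. weighting recent samples more heavily to shorten the detection delay of a genuine state change.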

  6. Research study on stabilization and control: Modern sampled-data control theory. Continuous and discrete describing function analysis of the LST system. [with emphasis on the control moment gyroscope control loop

    NASA Technical Reports Server (NTRS)

    Kuo, B. C.; Singh, G.

    1974-01-01

    The dynamics of the Large Space Telescope (LST) control system were studied in order to arrive at a simplified model for computer simulation without loss of accuracy. The frictional nonlinearity of the Control Moment Gyroscope (CMG) Control Loop was analyzed in a model to obtain data for the following: (1) a continuous describing function for the gimbal friction nonlinearity; (2) a describing function of the CMG nonlinearity using an analytical torque equation; and (3) the discrete describing function and function plots for CMG functional linearity. Preliminary computer simulations are shown for the simplified LST system, first without, and then with analytical torque expressions. Transfer functions of the sampled-data LST system are also described. A final computer simulation is presented which uses elements of the simplified sampled-data LST system with analytical CMG frictional torque expressions.

  7. Networked event-triggered control: an introduction and research trends

    NASA Astrophysics Data System (ADS)

    Mahmoud, Magdi S.; Sabih, Muhammad

    2014-11-01

    A physical system can be studied as either a continuous-time or a discrete-time system, depending on the control objectives. Discrete-time control systems can be further classified into two categories based on the sampling: (1) time-triggered control systems and (2) event-triggered control systems. Time-triggered systems sample the states and compute the controls at every sampling instant in a periodic fashion, even when the states and the computed control change little. This leads to unnecessary data transmission and computation, and hence to inefficiency; in networked systems, the periodic transmission of measurement and control signals likewise causes unnecessary network traffic. Event-triggered systems, on the other hand, have the potential to reduce the communication burden in addition to reducing the computation of control signals. This paper provides an up-to-date survey of event-triggered methods for control systems and highlights potential research directions.
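    The transmission-saving argument can be illustrated with a toy first-order loop in which the state is transmitted only when it has drifted more than a threshold from the last transmitted value. All gains, thresholds, and dynamics below are hypothetical:

```python
# First-order plant x' = -x + u, integrated with step h; the controller
# holds u = -K * x_last, where x_last is the last *transmitted* state.
h, K, delta, steps = 0.01, 1.0, 0.05, 1000
x, x_last = 1.0, 1.0
transmissions = 0
for _ in range(steps):
    if abs(x - x_last) > delta:     # event condition: state drifted enough
        x_last = x                  # transmit the fresh measurement
        transmissions += 1
    u = -K * x_last
    x += h * (-x + u)               # forward-Euler step of the plant

# A time-triggered loop would transmit at every one of the 1000 steps;
# the event-triggered loop transmits only when the threshold is crossed.
print(transmissions, "of", steps)
```

    Tightening `delta` recovers time-triggered behavior; loosening it trades control performance for fewer transmissions, which is the central design question surveyed in the paper.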

  8. Essentialist beliefs, sexual identity uncertainty, internalized homonegativity and psychological wellbeing in gay men.

    PubMed

    Morandini, James S; Blaszczynski, Alexander; Ross, Michael W; Costa, Daniel S J; Dar-Nimrod, Ilan

    2015-07-01

    The present study examined essentialist beliefs about sexual orientation and their implications for sexual identity uncertainty, internalized homonegativity and psychological wellbeing in a sample of gay men. A combination of targeted sampling and snowball strategies was used to recruit 639 gay-identifying men for a cross-sectional online survey. Participants completed a questionnaire assessing sexual orientation beliefs, sexual identity uncertainty, internalized homonegativity, and psychological wellbeing outcomes. Structural equation modeling was used to test whether essentialist beliefs were associated with psychological wellbeing indirectly via their effect on sexual identity uncertainty and internalized homonegativity. A unique pattern of direct and indirect effects was observed in which facets of essentialism predicted sexual identity uncertainty, internalized homonegativity and psychological wellbeing. Of note, viewing sexual orientation as immutable/biologically based and as existing in discrete categories was associated with less sexual identity uncertainty. On the other hand, these beliefs had divergent relationships with internalized homonegativity, with immutability/biological beliefs associated with lower, and discreteness beliefs associated with greater, internalized homonegativity. Of interest, although sexual identity uncertainty was associated with poorer psychological wellbeing via its contribution to internalized homophobia, there was no direct relationship between identity uncertainty and psychological wellbeing. Findings indicate that essentializing sexual orientation has mixed implications for sexual identity uncertainty, internalized homonegativity and wellbeing in gay men. Those undertaking educational and clinical interventions with gay men should be aware of the benefits and caveats of essentialist theories of homosexuality for this population.

  9. Event-driven contrastive divergence for spiking neuromorphic systems.

    PubMed

    Neftci, Emre; Das, Srinjoy; Pedroni, Bruno; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2013-01-01

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons that is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.

  10. Event-driven contrastive divergence for spiking neuromorphic systems

    PubMed Central

    Neftci, Emre; Das, Srinjoy; Pedroni, Bruno; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2014-01-01

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons that is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality. PMID:24574952

  11. Investigating catalyst coated membrane equilibration time for polymer electrolyte membrane fuel cell manufacturing

    NASA Astrophysics Data System (ADS)

    Cote, Philippe

    Mercedes-Benz Canada Inc., Fuel Cell Division, manufactures polymer electrolyte membrane fuel cell stacks for use in vehicles. The manufacturing line is being optimized for efficiency and quality control, in order to uphold the high standards of Mercedes-Benz Inc. vehicles. In an operating polymer electrolyte membrane fuel cell, the catalyst coated membrane facilitates the electrochemical reaction that generates electricity. This research examines the equilibration of catalyst coated membrane rolls to controlled temperature and humidity conditions, before they are used in the manufacturing of polymer electrolyte membrane fuel cells. Equilibration involves allowing the water content in the catalyst coated membrane to stabilize at the controlled conditions, in order to reduce mechanical stress in the material for better manufacturability. Initial equilibration measurements were conducted on discrete catalyst coated membrane samples using novel electronic conductivity measurements of the catalyst layer, and compared to ionic conductivity measurements of the membrane. Electronic conductivity measurements are easier to implement in the manufacturing environment than the more complex ionic conductivity measurements. When testing discrete catalyst coated membrane samples in an environmental chamber, the equilibration trends for the measured ionic and electronic conductivity signals were similar enough to permit us to adapt the electronic conductivity measurements for catalyst coated membrane in roll form. Equilibration measurements of catalyst coated membrane rolls were optimized to achieve a robust and repeatable procedure which could be used in the manufacturing environment at Mercedes-Benz Canada Inc., Fuel Cell Division.

  12. Probabilistic Round Trip Contamination Analysis of a Mars Sample Acquisition and Handling Process Using Markovian Decompositions

    NASA Technical Reports Server (NTRS)

    Hudson, Nicolas; Lin, Ying; Barengoltz, Jack

    2010-01-01

    A method for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission is developed. A scenario is analyzed in which multiple core samples are acquired using a rotary percussive coring tool deployed from an arm on a MER-class rover. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, and it makes the analysis tractable by breaking the process down into small analyzable steps.
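    The Markov-chain bookkeeping described above can be sketched as follows. The components, transfer probabilities, and initial counts below are invented for illustration and are not the mission analysis:

```python
import numpy as np

# Components: 0 = coring tool, 1 = rover arm, 2 = sample tube, 3 = environment.
# T[i, j] = probability that a VEM on component i ends the step on component j;
# each row sums to 1 (stay, transfer, or be lost to the absorbing
# "environment" state).
T = np.array([
    [0.90, 0.05, 0.04, 0.01],   # coring tool
    [0.02, 0.95, 0.01, 0.02],   # rover arm
    [0.00, 0.00, 0.99, 0.01],   # sample tube
    [0.00, 0.00, 0.00, 1.00],   # environment (absorbing)
])

e = np.array([10.0, 5.0, 0.0, 0.0])  # expected VEM count per component at t = 0

for _ in range(3):                   # three discrete sampling steps
    e = e @ T                        # Markov-chain propagation of expectations

print("expected VEMs in the sample tube after 3 steps:", e[2])
```

    Release probabilities would enter as an additional per-step source vector added to `e` before each propagation; because expectation is linear, the chain still evaluates in one matrix product per step.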

  13. Benchmarks for single-phase flow in fractured porous media

    NASA Astrophysics Data System (ADS)

    Flemisch, Bernd; Berre, Inga; Boon, Wietse; Fumagalli, Alessio; Schwenck, Nicolas; Scotti, Anna; Stefansson, Ivar; Tatomir, Alexandru

    2018-01-01

    This paper presents several test cases intended to be benchmarks for numerical schemes for single-phase fluid flow in fractured porous media. A number of solution strategies are compared, including a vertex and two cell-centred finite volume methods, a non-conforming embedded discrete fracture model, a primal and a dual extended finite element formulation, and a mortar discrete fracture model. The proposed benchmarks test the schemes by increasing the difficulties in terms of network geometry, e.g. intersecting fractures, and physical parameters, e.g. low and high fracture-matrix permeability ratio as well as heterogeneous fracture permeabilities. For each problem, the results presented are the number of unknowns, the approximation errors in the porous matrix and in the fractures with respect to a reference solution, and the sparsity and condition number of the discretized linear system. All data and meshes used in this study are publicly available for further comparisons.

  14. Relationship between job demands and psychological outcomes among nurses: Does skill discretion matter?

    PubMed

    Viotti, Sara; Converso, Daniela

    2016-01-01

    The aim of the present study was to assess both the direct and indirect effects (i.e., interacting with various job demands) of skill discretion on various psychological outcomes (i.e., emotional exhaustion, intention to leave, affective well-being, and job satisfaction). Data were collected by a self-reported questionnaire in 3 hospitals in Italy. The sample consisted of 522 nurses. Moderated hierarchical regression analyses were employed. The findings highlighted the direct effect of skill discretion in reducing emotional exhaustion and intention to leave and in sustaining affective well-being and job satisfaction. As regards interaction effects, the analyses indicated that skill discretion moderates the negative effect of disproportionate patient expectations on all the considered psychological outcomes. On the other hand, skill discretion was found to moderate the effect of cognitive demands on turnover intention, as well as the effect of quantitative demands on emotional exhaustion and job satisfaction, only in conditions of low job demands. The study revealed some interesting findings, suggesting that skill discretion is not a resource in the pure sense, but that it also has some characteristics of a job demand. The study has relevant practical implications. In particular, from a job design point of view, the present study suggests that job demands and skill discretion should be balanced carefully in order to sustain job well-being and worker retention. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  15. Multisource Data Classification Using A Hybrid Semi-supervised Learning Scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju; Bhaduri, Budhendra L; Shekhar, Shashi

    2009-01-01

    In many practical situations, thematic classes cannot be discriminated by spectral measurements alone. Often one needs additional features such as population density, road density, wetlands, elevation, and soil type, which are discrete attributes, whereas remote sensing image features are continuous attributes. Finding a suitable statistical model and estimating its parameters is a challenging task in multisource (e.g., discrete and continuous attribute) data classification. In this paper we present a semi-supervised learning method that assumes the samples were generated by a mixture model in which each component may be either a continuous or a discrete distribution. Overall classification accuracy of the proposed method is improved by 12% in our initial experiments.
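    The key modeling idea, a per-component likelihood that multiplies a Gaussian term for continuous attributes by a categorical term for discrete attributes, can be sketched as a toy two-class posterior. All class names, features, and parameters below are invented for illustration; this is not the paper's estimator:

```python
import numpy as np

# Per class: a Gaussian over a continuous feature (e.g. a spectral band) and a
# categorical distribution over a discrete feature (e.g. soil type 0-2).
classes = {
    "urban":  {"mu": 0.2, "sigma": 0.05, "cat": [0.7, 0.2, 0.1], "prior": 0.5},
    "forest": {"mu": 0.6, "sigma": 0.10, "cat": [0.1, 0.3, 0.6], "prior": 0.5},
}

def posterior(x_cont, x_disc):
    """Class posterior for one sample with a continuous and a discrete feature.
    The common 1/sqrt(2*pi) Gaussian factor cancels in the normalization."""
    scores = {}
    for name, p in classes.items():
        gauss = np.exp(-0.5 * ((x_cont - p["mu"]) / p["sigma"]) ** 2) / p["sigma"]
        scores[name] = p["prior"] * gauss * p["cat"][x_disc]
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

print(posterior(0.22, 0))   # dominated by "urban"
print(posterior(0.58, 2))   # dominated by "forest"
```

    In the semi-supervised setting, unlabeled samples would update these per-class parameters through EM; the mixed Gaussian-categorical likelihood above is the piece that lets discrete and continuous attributes share one model.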

  16. An improved switching converter model. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters in the continuous and discontinuous modes were performed using averaging and discrete sampling techniques. A model was developed by combining these two techniques. This model, the discrete average model, accurately predicts the envelope of the output voltage and is easy to implement in circuit and state variable forms. The proposed model is shown to be dependent on the type of duty cycle control. The proper selection of the power stage model, between average and discrete average, is largely a function of the error processor in the feedback loop. The accuracy of measurement data taken by a conventional technique is affected by the conditions under which the data are collected.

  17. Application of time series discretization using evolutionary programming for classification of precancerous cervical lesions.

    PubMed

    Acosta-Mesa, Héctor-Gabriel; Rechy-Ramírez, Fernando; Mezura-Montes, Efrén; Cruz-Ramírez, Nicandro; Hernández Jiménez, Rodolfo

    2014-06-01

    In this work, we present a novel application of time series discretization using evolutionary programming for the classification of precancerous cervical lesions. The approach optimizes the number of intervals in which the length and amplitude of the time series should be compressed, preserving the important information for classification purposes. Using evolutionary programming, the search for a good discretization scheme is guided by a cost function which considers three criteria: the entropy regarding the classification, the complexity measured as the number of different strings needed to represent the complete data set, and the compression rate assessed as the length of the discrete representation. This discretization approach is evaluated using a time series data based on temporal patterns observed during a classical test used in cervical cancer detection; the classification accuracy reached by our method is compared with the well-known times series discretization algorithm SAX and the dimensionality reduction method PCA. Statistical analysis of the classification accuracy shows that the discrete representation is as efficient as the complete raw representation for the present application, reducing the dimensionality of the time series length by 97%. This representation is also very competitive in terms of classification accuracy when compared with similar approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
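
    The two-axis compression the abstract describes (shortening the series, then quantizing its amplitude into intervals) can be illustrated with piecewise aggregate approximation followed by equal-width binning. This is a hand-tuned sketch, not the evolutionary-programming search: the segment and bin counts that the paper optimizes are simply fixed here, and all names are illustrative.

```python
# Minimal time-series discretization sketch: compress length with
# piecewise aggregate approximation (PAA), then quantize amplitude
# into equal-width bins labeled 'a', 'b', 'c', ...

def paa(series, n_segments):
    """Average the series over n_segments roughly equal windows."""
    n = len(series)
    out = []
    for i in range(n_segments):
        lo = i * n // n_segments
        hi = (i + 1) * n // n_segments
        seg = series[lo:hi]
        out.append(sum(seg) / len(seg))
    return out

def discretize(series, n_segments, n_bins):
    """Map a numeric series to a short string of bin labels."""
    compressed = paa(series, n_segments)
    lo, hi = min(compressed), max(compressed)
    width = (hi - lo) / n_bins or 1.0
    labels = []
    for v in compressed:
        b = min(int((v - lo) / width), n_bins - 1)
        labels.append(chr(ord('a') + b))
    return ''.join(labels)
```

    In the paper, a cost function over classification entropy, string complexity, and compression rate would score candidate (segment count, bin count) schemes, with an evolutionary loop mutating and selecting them.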

  18. A Stochastic Diffusion Process for the Dirichlet Distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-03-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
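
    The univariate (beta-invariant) case mentioned above can be illustrated with a small Euler-Maruyama ensemble. The drift/diffusion form below, dX = (b/2)(S - X) dt + sqrt(kappa X (1 - X)) dW, is an assumed illustrative form chosen so that the stationary density is Beta(bS/kappa, b(1-S)/kappa); with b=2, S=0.5, kappa=1 the invariant is Beta(1,1), i.e. uniform on [0,1]. The clamping guard is a numerical convenience, not part of the exact process, whose diffusion vanishes at the boundaries.

```python
# Euler-Maruyama sketch of a beta-invariant diffusion (univariate
# analogue of the Dirichlet process discussed above). Parameters and
# the drift/diffusion form are illustrative assumptions.
import math
import random

def simulate_ensemble(n=1000, steps=600, dt=0.01,
                      b=2.0, S=0.5, kappa=1.0, seed=1):
    rng = random.Random(seed)
    xs = [0.5] * n                          # start every sample mid-interval
    for _ in range(steps):
        for i in range(n):
            x = xs[i]
            drift = 0.5 * b * (S - x) * dt
            diff = math.sqrt(max(kappa * x * (1.0 - x), 0.0) * dt)
            x = x + drift + diff * rng.gauss(0.0, 1.0)
            xs[i] = min(max(x, 0.0), 1.0)   # crude guard: keep x in [0, 1]
    return xs
```

    Evolving the ensemble long relative to the drift's relaxation time and then histogramming `xs` is the same style of Monte Carlo check the abstract describes for the full Dirichlet case.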

  19. Demonstration/Validation of Incremental Sampling at Two Diverse Military Ranges and Development of an Incremental Sampling Tool

    DTIC Science & Technology

    2010-06-01

    Sampling (MIS)? • Technique of combining many increments of soil from a number of points within exposure area • Developed by Enviro Stat (Trademarked...Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous...into a set of decision (exposure) units • One or several discrete or small-scale composite soil samples collected to represent each decision unit

  20. Adjoint-Based Methodology for Time-Dependent Optimization

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2008-01-01

    This paper presents a discrete adjoint method for a broad class of time-dependent optimization problems. The time-dependent adjoint equations are derived in terms of the discrete residual of an arbitrary finite volume scheme which approximates unsteady conservation law equations. Although only the 2-D unsteady Euler equations are considered in the present analysis, this time-dependent adjoint method is applicable to the 3-D unsteady Reynolds-averaged Navier-Stokes equations with minor modifications. The discrete adjoint operators involving the derivatives of the discrete residual and the cost functional with respect to the flow variables are computed using a complex-variable approach, which provides discrete consistency and drastically reduces the implementation and debugging cycle. The implementation of the time-dependent adjoint method is validated by comparing the sensitivity derivative with that obtained by forward mode differentiation. Our numerical results show that O(10) optimization iterations of the steepest descent method are needed to reduce the objective functional by 3-6 orders of magnitude for test problems considered.
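
    The complex-variable approach referred to above is commonly realized as complex-step differentiation, which a few lines make concrete. The function below is a generic illustration (not the paper's flow solver): perturbing the input by ih and taking the imaginary part avoids subtractive cancellation, so the step size can be taken extremely small.

```python
# Complex-step differentiation sketch: df/dx ~ Im(f(x + ih)) / h,
# with no subtraction of nearly equal quantities, so h can be tiny.
import cmath

def complex_step_derivative(f, x, h=1e-30):
    return (f(x + 1j * h)).imag / h

# Illustrative function: f(x) = x*sin(x), with f'(x) = sin(x) + x*cos(x)
f = lambda z: z * cmath.sin(z)
x0 = 0.7
approx = complex_step_derivative(f, x0)
exact = cmath.sin(x0).real + x0 * cmath.cos(x0).real
```

    The same trick applied to a discrete residual yields derivatives that are consistent with the discretization to machine precision, which is why it suits adjoint verification.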

  1. Discrete sequence prediction and its applications

    NASA Technical Reports Server (NTRS)

    Laird, Philip

    1992-01-01

    Learning from experience to predict sequences of discrete symbols is a fundamental problem in machine learning with many applications. We apply sequence prediction using a simple and practical sequence-prediction algorithm, called TDAG. The TDAG algorithm is first tested by comparing its performance with some common data compression algorithms. Then it is adapted to the detailed requirements of dynamic program optimization, with excellent results.
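
    TDAG itself grows a tree of variable-length contexts with pruning heuristics; as a rough, simplified stand-in, a fixed-order context predictor shows the underlying counting-and-prediction idea. The class and its names are illustrative, not the TDAG algorithm.

```python
# Fixed-order context predictor sketch: count which symbol follows
# each length-k context, then predict the most frequent successor.
from collections import Counter, defaultdict

class ContextPredictor:
    def __init__(self, order=2):
        self.order = order
        self.counts = defaultdict(Counter)

    def train(self, seq):
        for i in range(self.order, len(seq)):
            ctx = tuple(seq[i - self.order:i])
            self.counts[ctx][seq[i]] += 1

    def predict(self, recent):
        ctx = tuple(recent[-self.order:])
        hist = self.counts.get(ctx)
        if not hist:
            return None                      # unseen context
        return hist.most_common(1)[0][0]
```

    In the program-optimization application, predictions like these would be used to prefetch or reorder likely-next operations.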

  2. The application of the analog signal to discrete time interval converter to the signal conditioner power supplies

    NASA Technical Reports Server (NTRS)

    Schoenfeld, A. D.; Yu, Y.

    1973-01-01

    The Analog Signal to Discrete Time Interval Converter microminiaturized module was utilized to control the signal conditioner power supplies. The multi-loop control provides outstanding static and dynamic performance characteristics, exceeding those generally associated with single-loop regulators. Eight converter boards, each containing three independent dc-to-dc converters, were built, tested, and delivered.

  3. Evaluating the Whitening and Microstructural Effects of a Novel Whitening Strip on Porcelain and Composite Dental Materials

    PubMed Central

    Takesh, Thair; Sargsyan, Anik; Lee, Matthew; Anbarani, Afarin; Ho, Jessica; Wilder-Smith, Petra

    2017-01-01

    Aims The aim of this project was to evaluate the effects of 2 different whitening strips on the color, microstructure and roughness of tea-stained porcelain and composite surfaces. Methods 54 porcelain and 72 composite chips served as samples for timed application of over-the-counter (OTC) test or control dental whitening strips. Chips were divided randomly into three groups of 18 porcelain and 24 composite chips each. Of these groups, 1 porcelain and 1 composite set served as controls. The remaining 2 groups were randomized to treatment with either Oral Essentials® Whitening Strips or Crest® 3D White Whitestrips™. Sample surface structure was examined by light microscopy, profilometry and Scanning Electron Microscopy (SEM). Additionally, a reflectance spectrophotometer was used to assess color changes in the porcelain and composite samples over 24 hours of whitening. Data points were analyzed at each time point using ANOVA. Results In the light microscopy and SEM images, no discrete physical defects were observed in any of the samples at any time points. However, high-resolution SEM images showed an appearance of increased surface roughness in all composite samples. Using profilometry, significantly increased post-whitening roughness was documented in the composite samples exposed to the control bleaching strips. Composite samples underwent a significant and equivalent shift in color following exposure to Crest® 3D White Whitestrips™ and Oral Essentials® Whitening Strips. Conclusions A novel commercial tooth whitening strip demonstrated a bleaching effect comparable to that of a widely used OTC whitening strip. Neither whitening strip caused physical defects in the sample surfaces. However, the control strip caused roughening of the composite samples whereas the test strip did not. PMID:29226023

  4. Technology Development Risk Assessment for Space Transportation Systems

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan L.; Godsell, Aga M.; Go, Susie

    2006-01-01

    A new approach for assessing development risk associated with technology development projects is presented. The method represents technology evolution in terms of sector-specific discrete development stages. A Monte Carlo simulation is used to generate development probability distributions based on statistical models of the discrete transitions. Development risk is derived from the resulting probability distributions and specific program requirements. Two sample cases are discussed to illustrate the approach, a single rocket engine development and a three-technology space transportation portfolio.
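
    A toy version of the described approach might model each technology as a set of discrete stages, each of which passes a review cycle with some probability, and then Monte Carlo-sample the number of cycles to completion. The per-cycle, per-stage success model and all parameters below are assumptions for illustration, not the paper's sector-specific transition statistics.

```python
# Monte Carlo sketch of staged technology maturation: count review
# cycles until every remaining stage has succeeded. Stage success
# probabilities are illustrative.
import random

def cycles_to_complete(stage_probs, rng):
    cycles, remaining = 0, list(stage_probs)
    while remaining:
        cycles += 1
        # each remaining stage independently passes this cycle with prob p
        remaining = [p for p in remaining if rng.random() >= p]
    return cycles

def completion_distribution(stage_probs, trials=10000, seed=0):
    rng = random.Random(seed)
    return [cycles_to_complete(stage_probs, rng) for _ in range(trials)]
```

    Development risk would then be read off the sampled distribution, e.g. the fraction of trials exceeding a program deadline.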

  5. THYME: Toolkit for Hybrid Modeling of Electric Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James Joseph; Perumalla, Kalyan

    2011-01-01

    THYME is an object oriented library for building models of wide area control and communications in electric power systems. This software is designed as a module to be used with existing open source simulators for discrete event systems in general and communication systems in particular. THYME consists of a typical model for simulating electro-mechanical transients (e.g., as are used in dynamic stability studies), data handling objects to work with CDF and PTI formatted power flow data, and sample models of discrete sensors and controllers.

  6. Asymptotic analysis of discrete schemes for non-equilibrium radiation diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Xia, E-mail: cui_xia@iapcm.ac.cn; Yuan, Guang-wei; Shen, Zhi-jun

    Motivated by providing well-behaved fully discrete schemes in practice, this paper extends the asymptotic analysis on time integration methods for non-equilibrium radiation diffusion in [2] to space discretizations. Therein, studies were carried out on a two-temperature model with Larsen's flux-limited diffusion operator, and both the implicitly balanced (IB) and linearly implicit (LI) methods were shown to be asymptotic-preserving. In this paper, we focus on asymptotic analysis for space discrete schemes in dimensions one and two. First, in construction of the schemes, in contrast to traditional first-order approximations, asymmetric second-order accurate spatial approximations are devised for flux-limiters on the boundary, and discrete schemes with second-order accuracy on the global spatial domain are acquired consequently. Then, by employing formal asymptotic analysis, the first-order asymptotic-preserving property for these schemes, and furthermore for the fully discrete schemes, is shown. Finally, with the help of manufactured solutions, numerical tests are performed, which demonstrate quantitatively that the fully discrete schemes with IB time evolution indeed have the accuracy and asymptotic convergence the theory predicts, and hence are well qualified for both non-equilibrium and equilibrium radiation diffusion. - Highlights: • Provide AP fully discrete schemes for non-equilibrium radiation diffusion. • Propose second-order accurate schemes by an asymmetric approach for the boundary flux-limiter. • Show first-order AP property of spatially and fully discrete schemes with IB evolution. • Devise subtle artificial solutions; verify accuracy and AP property quantitatively. • Ideas can be generalized to 3-dimensional problems and higher-order implicit schemes.

  7. Use of market segmentation to identify untapped consumer needs in vision correction surgery for future growth.

    PubMed

    Loarie, Thomas M; Applegate, David; Kuenne, Christopher B; Choi, Lawrence J; Horowitz, Diane P

    2003-01-01

    Market segmentation analysis identifies discrete segments of the population whose beliefs are consistent with exhibited behaviors such as purchase choice. This study applies market segmentation analysis to low myopes (-1 to -3 D with less than 1 D cylinder) in their consideration and choice of a refractive surgery procedure to discover opportunities within the market. A quantitative survey based on focus group research was sent to a demographically balanced sample of myopes using contact lenses and/or glasses. A variable reduction process followed by a clustering analysis was used to discover discrete belief-based segments. The resulting segments were validated both analytically and through in-market testing. Discontented individuals who wear contact lenses are the primary target for vision correction surgery. However, 81% of the target group is apprehensive about laser in situ keratomileusis (LASIK). They are nervous about the procedure and strongly desire reversibility and exchangeability. There exists a large untapped opportunity for vision correction surgery within the low myope population. Market segmentation analysis helped determine how to best meet this opportunity through repositioning existing procedures or developing new vision correction technology, and could also be applied to identify opportunities in other vision correction populations.

  8. Flow system for optical activity detection of vegetable extracts employing molecular exclusion continuous chromatographic detection

    NASA Astrophysics Data System (ADS)

    Fajer, V.; Rodríguez, C.; Naranjo, S.; Mesa, G.; Mora, W.; Arista, E.; Cepero, T.; Fernández, H.

    2006-02-01

    The combination of molecular exclusion chromatography and laser polarimetric detection has produced a carbohydrate separation and quantification system for plant fluids of industrial value, making possible the evaluation of the quality of sugarcane juices, agave juices, and many other plant extracts. Previous papers described a system in which liquid chromatography separation and polarimetric detection, using a LASERPOL 101M polarimeter with a He-Ne light source, allowed the collection and quantification of discrete samples for analytical purposes. In this paper, the authors introduce a new, improved system which accomplishes polarimetric measurements in continuous flow. Chromatograms of several carbohydrate standard solutions were obtained as useful references to study the juice quality of several sugarcane varieties under different physiological conditions. Results from the discrete and continuous-flow systems were compared in order to validate the new system. An application of the system to the diagnosis of leaf scald is described. A computer program allowing on-line display of the chromatograms, digital storage, maxima detection, zone integration, and other capabilities makes this system very competitive.

  9. Feature extraction using extrema sampling of discrete derivatives for spike sorting in implantable upper-limb neural prostheses.

    PubMed

    Zamani, Majid; Demosthenous, Andreas

    2014-07-01

    Next generation neural interfaces for upper-limb (and other) prostheses aim to develop implantable interfaces for one or more nerves, each interface having many neural signal channels that work reliably in the stump without harming the nerves. To achieve real-time multi-channel processing it is important to integrate spike sorting on-chip to overcome limitations in transmission bandwidth. This requires computationally efficient algorithms for feature extraction and clustering suitable for low-power hardware implementation. This paper describes a new feature extraction method for real-time spike sorting based on extrema analysis (namely positive peaks and negative peaks) of spike shapes and their discrete derivatives at different frequency bands. Employing simulation across different datasets, the accuracy and computational complexity of the proposed method are assessed and compared with other methods. The average classification accuracy of the proposed method in conjunction with online sorting (O-Sort) is 91.6%, outperforming all the other methods tested with the O-Sort clustering algorithm. The proposed method offers a better tradeoff between classification error and computational complexity, making it a particularly strong choice for on-chip spike sorting.
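
    The core feature computation, extrema of discrete derivatives of the spike shape, reduces to a few lines. The sketch below omits the multi-band filtering stage and is an illustration of the feature pair rather than the paper's full pipeline.

```python
# Feature sketch: first discrete (finite) derivative of a spike
# waveform and its positive/negative extrema.

def discrete_derivative(x):
    """First-difference of a sampled waveform."""
    return [b - a for a, b in zip(x, x[1:])]

def extrema_features(x):
    """(positive peak, negative peak) of the first derivative."""
    d = discrete_derivative(x)
    return max(d), min(d)
```

    A clustering stage (O-Sort in the paper) would then group spikes by these low-dimensional features instead of the full waveform.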

  10. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics — Monte Carlo Canonical Propagation Algorithm

    PubMed Central

    Weare, Jonathan; Dinner, Aaron R.; Roux, Benoît

    2016-01-01

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826

  11. RRI-GBT MULTI-BAND RECEIVER: MOTIVATION, DESIGN, AND DEVELOPMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maan, Yogesh; Deshpande, Avinash A.; Chandrashekar, Vinutha

    2013-01-15

    We report the design and development of a self-contained multi-band receiver (MBR) system, intended for use with a single large aperture to facilitate sensitive and high time-resolution observations simultaneously in 10 discrete frequency bands sampling a wide spectral span (100-1500 MHz) in a nearly log-periodic fashion. The development of this system was primarily motivated by the need for tomographic studies of pulsar polar emission regions. Although the system design is optimized for the primary goal, it is also suited for several other interesting astronomical investigations. The system consists of a dual-polarization multi-band feed (with discrete responses corresponding to the 10 bands pre-selected as relatively radio frequency interference free), a common wide-band radio frequency front-end, and independent back-end receiver chains for the 10 individual sub-bands. The raw voltage time sequences corresponding to 16 MHz bandwidth each for the two linear polarization channels and the 10 bands are recorded at the Nyquist rate simultaneously. We present the preliminary results from the tests and pulsar observations carried out with the Robert C. Byrd Green Bank Telescope using this receiver. The system performance implied by these results and possible improvements are also briefly discussed.

  12. Numerical Integration Techniques for Curved-Element Discretizations of Molecule–Solvent Interfaces

    PubMed Central

    Bardhan, Jaydeep P.; Altman, Michael D.; Willis, David J.; Lippow, Shaun M.; Tidor, Bruce; White, Jacob K.

    2012-01-01

    Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, we have developed methods to model several important surface formulations using exact surface discretizations. Following and refining Zauhar’s work (J. Comp.-Aid. Mol. Des. 9:149-159, 1995), we define two classes of curved elements that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. We then present numerical integration techniques that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, we present a set of calculations that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces.
The extra accuracy is attributed to the exact representation of the solute–solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that our methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online at http://web.mit.edu/tidor. PMID:17627358

  13. The graphic cell method: a new look at digitizing geologic maps

    USGS Publications Warehouse

    Hanley, J.T.

    1982-01-01

    The graphic cell method is an alternative method of digitizing areal geologic information. It involves a discrete-point sampling scheme in which the computer establishes a matrix of cells over the map. Each cell, in its entirety, is assigned the identity or value of the geologic information recognized at its center. Cell size may be changed to suit the needs of the user. The computer program resolves the matrix and identifies potential errors such as multiple assignments. Input includes the digitized boundaries of each geologic formation. This method should eliminate a primary bottleneck in the creation and testing of geomathematical models in such disciplines as resource appraisal. © 1982.
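
    The cell-assignment step can be sketched as center-point sampling over a grid. The `classify` callback below stands in for a point-in-formation test against the digitized boundaries and is purely illustrative.

```python
# Center-point cell sampling sketch: lay a grid over a map region and
# assign each cell the unit found at its center.

def rasterize(classify, x0, y0, width, height, nx, ny):
    dx, dy = width / nx, height / ny
    grid = []
    for j in range(ny):
        row = []
        for i in range(nx):
            cx = x0 + (i + 0.5) * dx    # cell center coordinates
            cy = y0 + (j + 0.5) * dy
            row.append(classify(cx, cy))
        grid.append(row)
    return grid
```

    Shrinking the cell size trades storage for fidelity to the digitized boundaries, which is the user-controlled knob the abstract mentions.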

  14. Analysis of survival data from telemetry projects

    USGS Publications Warehouse

    Bunck, C.M.; Winterstein, S.R.; Pollock, K.H.

    1985-01-01

    Telemetry techniques can be used to study the survival rates of animal populations and are particularly suitable for species or settings for which band recovery models are not. Statistical methods for estimating survival rates and parameters of survival distributions from observations of radio-tagged animals will be described. These methods have been applied to medical and engineering studies and to the study of nest success. Estimates and tests based on discrete models, originally introduced by Mayfield, and on continuous models, both parametric and nonparametric, will be described. Generalizations, including staggered entry of subjects into the study and identification of mortality factors will be considered. Additional discussion topics will include sample size considerations, relocation frequency for subjects, and use of covariates.
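
    The Mayfield-style discrete estimate mentioned above reduces to exposure-day bookkeeping: daily survival is one minus deaths per exposure-day, and interval survival is that rate raised to the interval length. The numbers below are illustrative, not telemetry data.

```python
# Mayfield-style discrete survival sketch.

def mayfield_daily_survival(deaths, exposure_days):
    """Estimated daily survival rate: 1 - deaths / exposure-days."""
    return 1.0 - deaths / exposure_days

def interval_survival(daily_rate, days):
    """Probability of surviving an interval of the given length."""
    return daily_rate ** days
```

    The continuous parametric and nonparametric methods the abstract mentions generalize this by modeling the survival function directly rather than assuming a constant daily rate.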

  15. Couple stresses and the fracture of rock.

    PubMed

    Atkinson, Colin; Coman, Ciprian D; Aldazabal, Javier

    2015-03-28

    An assessment is made here of the role played by the micropolar continuum theory on the cracked Brazilian disc test used for determining rock fracture toughness. By analytically solving the corresponding mixed boundary-value problems and employing singular-perturbation arguments, we provide closed-form expressions for the energy release rate and the corresponding stress-intensity factors for both mode I and mode II loading. These theoretical results are augmented by a set of fracture toughness experiments on both sandstone and marble rocks. It is further shown that the morphology of the fracturing process in our centrally pre-cracked circular samples correlates very well with discrete element simulations. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  16. Rule Learning in Autism: The Role of Reward Type and Social Context

    PubMed Central

    Jones, E. J. H.; Webb, S. J.; Estes, A.; Dawson, G.

    2013-01-01

    Learning abstract rules is central to social and cognitive development. Across two experiments, we used Delayed Non-Matching to Sample tasks to characterize the longitudinal development and nature of rule-learning impairments in children with Autism Spectrum Disorder (ASD). Results showed that children with ASD consistently experienced more difficulty learning an abstract rule from a discrete physical reward than children with developmental delay (DD). Rule learning was facilitated by the provision of more concrete reinforcement, suggesting an underlying difficulty in forming conceptual connections. Learning abstract rules about social stimuli remained challenging through late childhood, indicating the importance of testing executive functions in both social and non-social contexts. PMID:23311315

  17. Physical and chemical characterization of actinides in soil from Johnston Atoll

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, S.F.; Bates, J.K.; Buck, E.C.

    1997-02-01

    Characterization of the actinide content of a sample of contaminated coral soil from Johnston Atoll, the site of three non-nuclear destructs of nuclear warhead-carrying THOR missiles in 1962, revealed that >99% of the total actinide content is associated with discrete bomb fragments. After removal of these fragments, there was an inverse correlation between actinide content and soil particle size in particles from 43 to 0.4 µm diameter. Detailed analyses of this remaining soil revealed no discrete actinide phase in these soil particles, despite measurable actinide content. Observations indicate that exposure to the environment has caused the conversion of relatively insoluble actinide oxides to the more soluble actinyl oxides and actinyl carbonate coordinated complexes. This process has led to dissolution of actinides from discrete particles and migration to the surrounding soil surfaces, resulting in a dispersion greater than would be expected by physical transport of discrete particles alone. 26 refs., 4 figs., 1 tab.

  18. DRME: Count-based differential RNA methylation analysis at small sample size scenario.

    PubMed

    Liu, Lian; Zhang, Shao-Wu; Gao, Fan; Zhang, Yixin; Huang, Yufei; Chen, Runsheng; Meng, Jia

    2016-04-15

    Differential methylation, which concerns differences in the degree of epigenetic regulation via methylation between two conditions, has been formulated with a beta or beta-binomial distribution to address the within-group biological variability in sequencing data. However, a beta or beta-binomial model is usually difficult to infer in small-sample scenarios with discrete read counts in sequencing data. On the other hand, as an emerging research field, RNA methylation has drawn more and more attention recently, and the differential analysis of RNA methylation is significantly different from that of DNA methylation due to the impact of transcriptional regulation. We developed DRME to better address the differential RNA methylation problem. The proposed model can effectively describe within-group biological variability in small-sample scenarios and handles the impact of transcriptional regulation on RNA methylation. We tested the newly developed DRME algorithm on simulated data and 4 MeRIP-Seq case-control studies and compared it with Fisher's exact test. It is in principle widely applicable to several other RNA-related data types as well, including RNA bisulfite sequencing and PAR-CLIP. The code, together with a MeRIP-Seq dataset, is available online (https://github.com/lzcyzm/DRME) for evaluation and reproduction of the figures shown in this article. Copyright © 2016 Elsevier Inc. All rights reserved.
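
    The Fisher's exact test used as the comparison baseline can be computed directly from hypergeometric probabilities. Below is a one-sided (left-tail) version for a 2x2 table, written as a generic sketch rather than the DRME implementation.

```python
# One-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]:
# sum the hypergeometric probabilities of tables at least as extreme
# (a at least as small) as the observed one, holding margins fixed.
from math import comb

def fisher_exact_left(a, b, c, d):
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    p = 0.0
    for k in range(0, a + 1):
        # comb returns 0 when the second argument exceeds the first,
        # so impossible tables contribute nothing.
        p += comb(row1, k) * comb(n - row1, col1 - k) / denom
    return p
```

    For a methylation contrast, a/b and c/d would be methylated/unmethylated read counts in the two conditions; the exact test ignores the within-group variability DRME is designed to model.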

  19. Classifier-Guided Sampling for Complex Energy System Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backlund, Peter B.; Eddy, John P.

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.

  20. A novel method for correcting scanline-observational bias of discontinuity orientation

    PubMed Central

    Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong

    2016-01-01

    Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249

  1. COED Transactions, Vol. X, No. 10, October 1978. Simulation of a Sampled-Data System on a Hybrid Computer.

    ERIC Educational Resources Information Center

    Mitchell, Eugene E., Ed.

    The simulation of a sampled-data system is described that uses a full parallel hybrid computer. The sampled data system simulated illustrates the proportional-integral-derivative (PID) discrete control of a continuous second-order process representing a stirred-tank. The stirred-tank is simulated using continuous analog components, while PID…
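
    A minimal discrete PID loop of the kind described, here driving a generic second-order process with Euler integration standing in for the hybrid computer's analog components, might look like the following. All gains and plant parameters are invented for illustration.

```python
# discrete PID control of a second-order process (Euler integration);
# gains and plant parameters are illustrative only
dt = 0.01
Kp, Ki, Kd = 2.0, 1.0, 0.1
setpoint = 1.0

wn, zeta = 1.0, 0.7        # natural frequency and damping of the process
y, ydot = 0.0, 0.0         # process output and its derivative
integral = 0.0
prev_error = setpoint - y  # avoids a derivative kick on the first sample

for step in range(5000):   # 50 s of simulated time
    error = setpoint - y
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = Kp * error + Ki * integral + Kd * derivative  # discrete PID law
    prev_error = error
    # second-order process: y'' = wn^2 (u - y) - 2 zeta wn y'
    yddot = wn ** 2 * (u - y) - 2.0 * zeta * wn * ydot
    ydot += yddot * dt
    y += ydot * dt
```

    The integral term drives the steady-state error to zero, so after 50 simulated seconds the output has settled at the setpoint.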

  2. Narrowband Interference Suppression in Spread Spectrum Communication Systems

    DTIC Science & Technology

    1995-12-01

    receiver input. As stated earlier, these waveforms must be sampled to obtain the discrete time sequences. The sampling theorem states: A bandlimited...From the FFT chips, the data is passed to a Plessey PDSP16330 Pythagoras Processor. The 16330 is a high-speed digital CMOS IC that converts real and
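
    The sampling-theorem point in this excerpt can be made concrete with a short sketch: a 5 Hz sinusoid sampled well above the Nyquist rate is recoverable, while sampling the same signal at 8 Hz produces a sequence indistinguishable from a 3 Hz alias. The frequencies are arbitrary choices for illustration.

```python
import math

f_signal = 5.0   # Hz; the bandlimited "signal" is a pure 5 Hz sinusoid
fs_ok = 50.0     # comfortably above the Nyquist rate 2 * f_signal = 10 Hz
fs_alias = 8.0   # below the Nyquist rate: 5 Hz folds to |5 - 8| = 3 Hz

def sample(fs, n):
    return [math.sin(2 * math.pi * f_signal * k / fs) for k in range(n)]

ok = sample(fs_ok, 200)
aliased = sample(fs_alias, 200)

# the under-sampled sequence matches a 3 Hz sinusoid (with a sign flip)
# sampled at the same 8 Hz rate -- the two are indistinguishable
alias_expected = [math.sin(2 * math.pi * 3.0 * k / 8.0) for k in range(200)]
max_diff = max(abs(a + b) for a, b in zip(aliased, alias_expected))
```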

  3. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    USDA-ARS?s Scientific Manuscript database

    Cumulative nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. This study used an agroecosystems simulation model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2...

  4. Teachers' Emotional Labour, Discrete Emotions, and Classroom Management Self-Efficacy

    ERIC Educational Resources Information Center

    Lee, Mikyoung; van Vlack, Stephen

    2018-01-01

    Extending research on teachers' emotions beyond general educational contexts and Western samples, we examined how teachers' emotions correlated with their emotional labour strategies and classroom management self-efficacy with an East-Asian sample in an English teaching context (127 Korean English teachers). Surface acting (emotional expressions…

  5. ESTIMATION OF TOTAL DISSOLVED NITRATE LOAD IN NATURAL STREAM FLOWS USING AN IN-STREAM MONITOR

    EPA Science Inventory

    Estuaries respond rapidly to rain events and the nutrients carried by inflowing rivers such that discrete samples at weekly or monthly intervals are inadequate to catch the maxima and minima in nutrient variability. To acquire data with sufficient sampling frequency to realistica...

  6. A Short-Segment Fourier Transform Methodology

    DTIC Science & Technology

    2009-03-01

    defined sampling of the continuous-valued discrete-time Fourier transform, superresolution in the frequency domain and allowance of Dirac delta functions associated with pure sinusoidal input data components.

  7. Real-time PCR strategy for the identification of Trypanosoma cruzi discrete typing units directly in chronically infected human blood.

    PubMed

    Muñoz-San Martín, Catalina; Apt, Werner; Zulantay, Inés

    2017-04-01

    The protozoan Trypanosoma cruzi is the causative agent of Chagas disease, a major public health problem in Latin America. This parasite has a complex population structure comprising six or seven major evolutionary lineages (discrete typing units, or DTUs), TcI-TcVI and TcBat, some of which have apparently resulted from ancient hybridization events. Because of the significant biological differences between these lineages, strain characterization methods have been essential to study T. cruzi in its different vectors and hosts. However, available methods can be laborious and costly, or limited in resolution or sensitivity. In this study, a new genotyping strategy using real-time PCR to identify each of the six DTUs in clinical blood samples has been developed and evaluated. Two nuclear (SL-IR and 18S rDNA) and two mitochondrial genes (COII and ND1) were selected to develop original primers. The method was evaluated with eight genomic DNAs of T. cruzi populations belonging to the six DTUs, one genomic DNA of Trypanosoma rangeli, and 53 blood samples from individuals with chronic Chagas disease. The assays had an analytical sensitivity of 1-25 fg of DNA per reaction tube, depending on the DTU analyzed. Selectivity trials with 20 fg/μL of genomic DNA identified each DTU, excluding non-target DTUs in every test. The method was able to characterize 67.9% of the chronically infected clinical samples, with TcII detected most frequently, followed by TcI. With the proposed genotyping methodology, each DTU was established with high sensitivity after a single real-time PCR assay. This novel protocol reduces carryover contamination, enables detection of each DTU independently and, in the future, quantification of each DTU in clinical blood samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. A test of geographic assignment using isotope tracers in feathers of known origin

    USGS Publications Warehouse

    Wunder, Michael B.; Kester, C.L.; Knopf, F.L.; Rye, R.O.

    2005-01-01

    We used feathers of known origin collected from across the breeding range of a migratory shorebird to test the use of isotope tracers for assigning breeding origins. We analyzed δD, δ13C, and δ15N in feathers from 75 mountain plover (Charadrius montanus) chicks sampled in 2001 and from 119 chicks sampled in 2002. We estimated parameters for continuous-response inverse regression models and for discrete-response Bayesian probability models from data for each year independently. We evaluated model predictions with both the training data and by using the alternate year as an independent test dataset. Our results provide weak support for modeling latitude and isotope values as monotonic functions of one another, especially when data are pooled over known sources of variation such as sample year or location. We were unable to make even qualitative statements, such as north versus south, about the likely origin of birds using both δD and δ13C in inverse regression models; results were no better than random assignment. Probability models provided better results and a more natural framework for the problem. Correct assignment rates were highest when considering all three isotopes in the probability framework, but the use of even a single isotope was better than random assignment. The method appears relatively robust to temporal effects and is most sensitive to the isotope discrimination gradients over which samples are taken. We offer that the problem of using isotope tracers to infer geographic origin is best framed as one of assignment, rather than prediction.
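
    The discrete-response Bayesian assignment the authors favor can be sketched in a few lines: compute the likelihood of an observed isotope value under each candidate region and normalize to a posterior. The region means and standard deviations below are invented, not the study's values, and a single isotope is used where the paper combines three.

```python
import math

# hypothetical per-region feather deltaD means and standard deviations
regions = {
    "north": (-120.0, 8.0),
    "central": (-100.0, 8.0),
    "south": (-80.0, 8.0),
}

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def assign(delta_d, prior=None):
    # discrete-response Bayesian assignment: posterior over candidate regions
    prior = prior or {r: 1.0 / len(regions) for r in regions}
    scores = {r: normal_pdf(delta_d, mu, sd) * prior[r] for r, (mu, sd) in regions.items()}
    total = sum(scores.values())
    return {r: s / total for r, s in scores.items()}

post = assign(-115.0)           # a feather with deltaD = -115 per mil
best = max(post, key=post.get)  # most probable origin
```

    Framing the problem this way yields an assignment with an explicit probability for every candidate region, rather than a point prediction of latitude.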

  9. Prompting children to reason proportionally: Processing discrete units as continuous amounts.

    PubMed

    Boyer, Ty W; Levine, Susan C

    2015-05-01

    Recent studies reveal that children can solve proportional reasoning problems presented with continuous amounts that enable intuitive strategies by around 6 years of age but have difficulties with problems presented with discrete units that tend to elicit explicit count-and-match strategies until at least 10 years of age. The current study tests whether performance on discrete unit problems might be improved by prompting intuitive reasoning with continuous-format problems. Participants were kindergarten, second-grade, and fourth-grade students (N = 194) assigned to either an experimental condition, where they were given continuous amount proportion problems before discrete unit proportion problems, or a control condition, where they were given all discrete unit problems. Results of a three-way mixed-model analysis of variance examining school grade, experimental condition, and block of trials indicated that fourth-grade students in the experimental condition outperformed those in the control condition on discrete unit problems in the second half of the experiment, but kindergarten and second-grade students did not differ by condition. This suggests that older children can be prompted to use intuitive strategies to reason proportionally. (c) 2015 APA, all rights reserved.

  10. Separate representations of dynamics in rhythmic and discrete movements: evidence from motor learning

    PubMed Central

    Ingram, James N.; Wolpert, Daniel M.

    2011-01-01

    Rhythmic and discrete arm movements occur ubiquitously in everyday life, and there is a debate as to whether these two classes of movements arise from the same or different underlying neural mechanisms. Here we examine interference in a motor-learning paradigm to test whether rhythmic and discrete movements employ at least partially separate neural representations. Subjects were required to make circular movements of their right hand while they were exposed to a velocity-dependent force field that perturbed the circularity of the movement path. The direction of the force-field perturbation reversed at the end of each block of 20 revolutions. When subjects made only rhythmic or only discrete circular movements, interference was observed when switching between the two opposing force fields. However, when subjects alternated between blocks of rhythmic and discrete movements, such that each was uniquely associated with one of the perturbation directions, interference was significantly reduced. Only in this case did subjects learn to corepresent the two opposing perturbations, suggesting that different neural resources were employed for the two movement types. Our results provide further evidence that rhythmic and discrete movements employ at least partially separate control mechanisms in the motor system. PMID:21273324

  11. Preferences of older patients regarding hip fracture rehabilitation service configuration: A feasibility discrete choice experiment.

    PubMed

    Charles, Joanna M; Roberts, Jessica L; Din, Nafees Ud; Williams, Nefyn H; Yeo, Seow Tien; Edwards, Rhiannon T

    2018-05-14

    As part of a wider feasibility study, the feasibility of gaining older patients' views for hip fracture rehabilitation services was tested using a discrete choice experiment in a UK context. Discrete choice experiment is a method used for eliciting individuals' preferences about goods and services. The discrete choice experiment was administered to 41 participants who had experienced hip fracture (mean age 79.3 years; standard deviation (SD) 7.5 years), recruited from a larger feasibility study exploring a new multidisciplinary rehabilitation for hip fracture. Attributes and levels for this discrete choice experiment were identified from a systematic review and focus groups. The questionnaire was administered at the 3-month follow-up. Participants indicated a significant preference for a fully-qualified physiotherapist or occupational therapist to deliver the rehabilitation sessions (β = 0.605, 95% confidence interval (CI) 0.462-0.879), and for their rehabilitation session to last less than 90 min (β = -0.192, 95% CI -0.381 to -0.051). The design of the discrete choice experiment using attributes associated with service configuration could have the potential to inform service implementation, and assist rehabilitation service design that incorporates the preferences of patients.

  12. Discrete post-processing of total cloud cover ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Haiden, Thomas; Pappenberger, Florian

    2017-04-01

    This contribution presents an approach to post-process ensemble forecasts for the discrete and bounded weather variable of total cloud cover. Two methods for discrete statistical post-processing of ensemble predictions are tested. The first approach is based on multinomial logistic regression, the second involves a proportional odds logistic regression model. Applying them to total cloud cover raw ensemble forecasts from the European Centre for Medium-Range Weather Forecasts improves forecast skill significantly. Based on station-wise post-processing of raw ensemble total cloud cover forecasts for a global set of 3330 stations over the period from 2007 to early 2014, the more parsimonious proportional odds logistic regression model proved to slightly outperform the multinomial logistic regression model. Reference Hemri, S., Haiden, T., & Pappenberger, F. (2016). Discrete post-processing of total cloud cover ensemble forecasts. Monthly Weather Review 144, 2565-2577.
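
    The form of the proportional odds model, a single shared slope with ordered cutpoints, can be sketched directly. The cutpoints, the coefficient, and the use of five categories are all invented for illustration; they are not the fitted values from the study.

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

# proportional odds model: P(Y <= j | x) = logistic(theta_j - beta * x),
# with one shared slope beta across all ordered thresholds
thetas = [-2.0, -0.5, 0.5, 2.0]  # assumed cutpoints (5 ordered categories)
beta = 1.2                        # assumed coefficient on the predictor

def category_probs(x):
    cum = [logistic(t - beta * x) for t in thetas] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

p = category_probs(0.8)  # e.g. an ensemble-mean cloud fraction of 0.8
```

    Parsimony is visible in the parameter count: a multinomial logistic model would estimate a separate slope per category, whereas the ordered model above shares one slope and only adds cutpoints.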

  13. Calculating Preference Weights for the Labor and Delivery Index: A Discrete Choice Experiment on Women's Birth Experiences.

    PubMed

    Gärtner, Fania R; de Bekker-Grob, Esther W; Stiggelbout, Anne M; Rijnders, Marlies E; Freeman, Liv M; Middeldorp, Johanna M; Bloemenkamp, Kitty W M; de Miranda, Esteriek; van den Akker-van Marle, M Elske

    2015-09-01

    The aim of this study was to calculate preference weights for the Labor and Delivery Index (LADY-X) to make it suitable as a utility measure for perinatal care studies. In an online discrete choice experiment, 18 pairs of hypothetical scenarios were presented to respondents, from which they had to choose a preferred option. The scenarios describe the birth experience in terms of the seven LADY-X attributes. A D-efficient discrete choice experiment design with priors based on a small sample (N = 110) was applied. Two samples were gathered, women who had recently given birth and subjects from the general population. Both samples were analyzed separately using a panel mixed logit (MMNL) model. Using the panel mixed multinomial logit (MMNL) model results and accounting for preference heterogeneity, we calculated the average preference weights for LADY-X attribute levels. These were transformed to represent a utility score between 0 and 1, with 0 representing the worst and 1 representing the best birth experience. In total, 1097 women who had recently given birth and 367 subjects from the general population participated. Greater value was placed on differences between bottom and middle attribute levels than on differences between middle and top levels. The attributes that resulted in larger utility increases than the other attributes were "feeling of safety" in the sample of women who had recently given birth and "feeling of safety" and "availability of professionals" in the general population sample. By using the derived preference weights, LADY-X has the potential to be used as a utility measure for perinatal (cost-) effectiveness studies. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  14. Characterizing decision-making and reward processing in bipolar disorder: A cluster analysis.

    PubMed

    Jiménez, E; Solé, B; Arias, B; Mitjans, M; Varo, C; Reinares, M; Bonnín, C M; Salagre, E; Ruíz, V; Torres, I; Tomioka, Y; Sáiz, P A; García-Portilla, M P; Burón, P; Bobes, J; Martínez-Arán, A; Torrent, C; Vieta, E; Benabarre, A

    2018-05-25

    The presence of abnormalities in emotional decision-making and reward processing among bipolar patients (BP) is well documented. These disturbances are not limited to acute phases and are common even during remission. In recent years, the existence of discrete cognitive profiles in this psychiatric population has been replicated. However, the emotional decision-making and reward processing domains have barely been studied. Therefore, our aim was to explore the existence of different profiles on the aforementioned cognitive dimensions in BP. The sample consisted of 126 euthymic BP. Main sociodemographic, clinical, functioning, and neurocognitive variables were gathered. A hierarchical-clustering technique was used to identify discrete neurocognitive profiles based on the performance in the Iowa Gambling Task. Afterward, the resulting clusters were compared using ANOVA or Chi-squared tests, as appropriate. Evidence for the existence of three different profiles was provided. Cluster 1 was mainly characterized by poor decision ability. Cluster 2 presented the lowest sensitivity to punishment. Finally, cluster 3 presented the best decision-making ability and the highest levels of punishment sensitivity. Comparison between the three clusters indicated that cluster 2 was the most functionally impaired group. The poorest outcomes in attention, executive function domains, and social cognition were also observed within the same group. In conclusion, similarly to that observed in "cold cognitive" domains, our results suggest the existence of three discrete cognitive profiles concerning emotional decision making and reward processing. Amongst all the indexes explored, low punishment sensitivity emerges as a potential correlate of poorer cognitive and functional outcomes in bipolar disorder. Copyright © 2018 Elsevier B.V. and ECNP. All rights reserved.

  15. Combined use of field and laboratory testing to predict preferred flow paths in a heterogeneous aquifer.

    PubMed

    Gierczak, R F D; Devlin, J F; Rudolph, D L

    2006-01-05

    Elevated nitrate concentrations within a municipal water supply aquifer led to pilot testing of a field-scale, in situ denitrification technology based on carbon substrate injections. In advance of the pilot test, detailed characterization of the site was undertaken. The aquifer consisted of complex, discontinuous and interstratified silt, sand and gravel units, similar to other well studied aquifers of glaciofluvial origin, 15-40 m deep. Laboratory and field tests, including a conservative tracer test, a pumping test, a borehole flowmeter test, grain-size analysis of drill cuttings and core material, and permeameter testing performed on core samples, were performed on the most productive depth range (27-40 m), and the results were compared. The velocity profiles derived from the tracer tests served as the basis for comparison with other methods. The spatial variation in K, based on grain-size analysis, using the Hazen method, were poorly correlated with the breakthrough data. Trends in relative hydraulic conductivity (K/K(avg)) from permeameter testing compared somewhat better. However, the trends in transient drawdown with depth, measured in multilevel sampling points, corresponded particularly well with those of solute mass flux. Estimates of absolute K, based on standard pumping test analysis of the multilevel drawdown data, were inversely correlated with the tracer test data. The inverse nature of the correlation was attributed to assumptions in the transient drawdown packages that were inconsistent with the variable diffusivities encountered at the scale of the measurements. Collectively, the data showed that despite a relatively low variability in K within the aquifer under study (within a factor of 3), water and solute mass fluxes were concentrated in discrete intervals that could be targeted for later bioremediation.
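
    The Hazen grain-size estimate mentioned above reduces to a one-line formula, K ≈ C·d10², with C commonly quoted near 100 when d10 is in centimetres and K is in cm/s. The coefficient and example grain size below are illustrative, not values from the study.

```python
def hazen_k(d10_cm, c=100.0):
    # Hazen approximation: K [cm/s] ~ C * d10^2 with d10 in cm;
    # C near 100 is commonly quoted for clean sands (site-specific in practice)
    return c * d10_cm ** 2

k = hazen_k(0.02)  # medium sand with d10 = 0.02 cm -> 0.04 cm/s
```

    The quadratic dependence on d10 is one reason grain-size estimates correlate poorly with tracer data at this site: small errors in the effective grain size are squared into the K estimate.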

  16. 40 CFR 53.30 - General provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... more information about the site. Any such pre-test approval of a test site by the EPA shall indicate... Methods and Reference Methods § 53.30 General provisions. (a) Determination of comparability. The test... discretion of the Administrator. (b) Selection of test sites. (1) Each test site shall be in an area which...

  17. 40 CFR 53.30 - General provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... more information about the site. Any such pre-test approval of a test site by the EPA shall indicate... Methods and Reference Methods § 53.30 General provisions. (a) Determination of comparability. The test... discretion of the Administrator. (b) Selection of test sites. (1) Each test site shall be in an area which...

  18. 40 CFR 53.30 - General provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... more information about the site. Any such pre-test approval of a test site by the EPA shall indicate... Methods and Reference Methods § 53.30 General provisions. (a) Determination of comparability. The test... discretion of the Administrator. (b) Selection of test sites. (1) Each test site shall be in an area which...

  19. 40 CFR 53.30 - General provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... more information about the site. Any such pre-test approval of a test site by the EPA shall indicate... Methods and Reference Methods § 53.30 General provisions. (a) Determination of comparability. The test... discretion of the Administrator. (b) Selection of test sites. (1) Each test site shall be in an area which...

  20. 40 CFR 53.30 - General provisions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... more information about the site. Any such pre-test approval of a test site by the EPA shall indicate... Methods and Reference Methods § 53.30 General provisions. (a) Determination of comparability. The test... discretion of the Administrator. (b) Selection of test sites. (1) Each test site shall be in an area which...

  1. High-Performance Acousto-Ultrasonic Scan System Being Developed

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Martin, Richard E.; Cosgriff, Laura M.; Gyekenyesi, Andrew L.; Kautz, Harold E.

    2003-01-01

    Acousto-ultrasonic (AU) interrogation is a single-sided nondestructive evaluation (NDE) technique employing separated sending and receiving transducers. It is used for assessing the microstructural condition and distributed damage state of the material between the transducers. AU is complementary to more traditional NDE methods, such as ultrasonic c-scan, x-ray radiography, and thermographic inspection, which tend to be used primarily for discrete flaw detection. Throughout its history, AU has been used to inspect polymer matrix composites, metal matrix composites, ceramic matrix composites, and even monolithic metallic materials. The development of a high-performance automated AU scan system for characterizing within-sample microstructural and property homogeneity is currently in a prototype stage at NASA. This year, essential AU technology was reviewed. In addition, the basic hardware and software configuration for the scanner was developed, and preliminary results with the system were described. Mechanical and environmental loads applied to composite materials can cause distributed damage (as well as discrete defects) that plays a significant role in the degradation of physical properties. Such damage includes fiber/matrix debonding (interface failure), matrix microcracking, and fiber fracture and buckling. Investigations at the NASA Glenn Research Center have shown that traditional NDE scan inspection methods such as ultrasonic c-scan, x-ray imaging, and thermographic imaging tend to be more suited to discrete defect detection rather than the characterization of accumulated distributed micro-damage in composites. Since AU is focused on assessing the distributed micro-damage state of the material in between the sending and receiving transducers, it has proven to be quite suitable for assessing the relative composite material state.
One major success story at Glenn with AU measurements has been the correlation between the ultrasonic decay rate obtained during AU inspection and the mechanical modulus (stiffness) seen during fatigue experiments with silicon carbide/silicon carbide (SiC/SiC) ceramic matrix composite samples. As shown in the figure, ultrasonic decay increased as the modulus decreased for the ceramic matrix composite tensile fatigue samples. The likely microstructural reason for the decrease in modulus (and increase in ultrasonic decay) is the matrix microcracking that commonly occurs during fatigue testing of these materials. Ultrasonic decay has shown the capability to track the pattern of transverse cracking and fiber breakage in these composites.

  3. Simultaneous contrast: evidence from licking microstructure and cross-solution comparisons.

    PubMed

    Dwyer, Dominic M; Lydall, Emma S; Hayward, Andrew J

    2011-04-01

    The microstructure of rats' licking responses was analyzed to investigate both "classic" simultaneous contrast (e.g., Flaherty & Largen, 1975) and a novel discrete-trial contrast procedure where access to an 8% test solution of sucrose was preceded by a sample of either 2%, 8%, or 32% sucrose (Experiments 1 and 2, respectively). Consumption of a given concentration of sucrose was higher when consumed alongside a low rather than high concentration comparison solution (positive contrast) and consumption of a given concentration of sucrose was lower when consumed alongside a high rather than a low concentration comparison solution (negative contrast). Furthermore, positive contrast increased the size of lick clusters while negative contrast decreased the size of lick clusters. Lick cluster size has a positive monotonic relationship with the concentration of palatable solutions and so positive and negative contrasts produced changes in lick cluster size that were analogous to raising or lowering the concentration of the test solution respectively. Experiment 3 utilized the discrete-trial procedure and compared contrast between two solutions of the same type (sucrose-sucrose or maltodextrin-maltodextrin) or contrast across solutions (sucrose-maltodextrin or maltodextrin-sucrose). Contrast effects on consumption were present, but reduced in size, in the cross-solution conditions. Moreover, lick cluster sizes were not affected at all by cross-solution contrasts as they were by same-solution contrasts. These results are consistent with the idea that simultaneous contrast effects depend, at least partially, on sensory mechanisms.

  4. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification.
The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by either the row totals or the column totals from the original classification error matrices. In hypothesis testing, when the results of tests of multiple sample cases prove to be significant, some form of statistical test must be used to separate any results that differ significantly from the others. In the past, many analyses of the data in this error matrix were made by comparing the relative magnitudes of the percentage of correct classifications, for either individual categories, the entire map, or both. More rigorous analyses have used data transformations and (or) two-way classification analysis of variance. A more sophisticated approach would be to analyze the entire classification error matrices using the methods of discrete multivariate analysis or of multivariate analysis of variance.
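
    The error-matrix bookkeeping described above can be sketched with a small invented matrix: rows as interpretation, columns as verification, commission errors taken from row remainders and omission errors from column remainders. The counts are arbitrary example data.

```python
# classification error matrix: rows = interpretation, columns = verification
matrix = [
    [45, 3, 2],
    [4, 38, 6],
    [1, 5, 40],
]

n_classes = len(matrix)
total = sum(sum(row) for row in matrix)
correct = sum(matrix[i][i] for i in range(n_classes))
overall_accuracy = correct / total

# commission errors: off-diagonal share of each row (mapped, but wrong)
commission = [1 - matrix[i][i] / sum(matrix[i]) for i in range(n_classes)]
# omission errors: off-diagonal share of each column (reference class missed)
col_totals = [sum(matrix[i][j] for i in range(n_classes)) for j in range(n_classes)]
omission = [1 - matrix[j][j] / col_totals[j] for j in range(n_classes)]
```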

  5. A novel discrete PSO algorithm for solving job shop scheduling problem to minimize makespan

    NASA Astrophysics Data System (ADS)

    Rameshkumar, K.; Rajendran, C.

    2018-02-01

    In this work, a discrete version of the PSO algorithm is proposed to minimize the makespan of a job shop. A novel schedule builder is utilized to generate active schedules. The discrete PSO is tested using well-known benchmark problems available in the literature. The solutions produced by the proposed algorithm are compared with the best-known solutions published in the literature, as well as with a hybrid particle swarm algorithm and a variable neighborhood search PSO algorithm. The solution construction methodology adopted in this study is found to be effective in producing good-quality solutions for the various benchmark job-shop scheduling problems.

  6. Non-holonomic integrators

    NASA Astrophysics Data System (ADS)

    Cortés, J.; Martínez, S.

    2001-09-01

    We introduce a discretization of the Lagrange-d'Alembert principle for Lagrangian systems with non-holonomic constraints, which allows us to construct numerical integrators that approximate the continuous flow. We study the geometric invariance properties of the discrete flow which provide an explanation for the good performance of the proposed method. This is tested on two examples: a non-holonomic particle with a quadratic potential and a mobile robot with fixed orientation.

  7. Development and Application of Methods for Estimating Operating Characteristics of Discrete Test Item Responses without Assuming any Mathematical Form.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    In latent trait theory the latent space, or space of the hypothetical construct, is usually represented by some unidimensional or multi-dimensional continuum of real numbers. Like the latent space, the item response can either be treated as a discrete variable or as a continuous variable. Latent trait theory relates the item response to the latent…

  8. DEKFIS user's guide: Discrete Extended Kalman Filter/Smoother program for aircraft and rotorcraft data consistency

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program DEKFIS (discrete extended Kalman filter/smoother), formulated for aircraft and helicopter state estimation and data consistency, is described. DEKFIS is set up to pre-process raw test data by removing biases, correcting scale factor errors and providing consistency with the aircraft inertial kinematic equations. The program implements an extended Kalman filter/smoother using the Friedland-Duffy formulation.
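The filtering idea DEKFIS builds on can be illustrated in its simplest form. The sketch below is not DEKFIS code: it is a scalar discrete Kalman filter estimating a constant sensor bias from noisy measurements (the measurement values and noise variance are invented), the kind of bias-removal step the program performs within the full extended filter/smoother.

```python
# Scalar discrete Kalman filter for a constant state x = b,
# observed as z_k = b + v_k with measurement noise variance r.
def kalman_constant(measurements, r, p0=1.0, x0=0.0):
    x, p = x0, p0
    for z in measurements:
        # Predict step: the state is constant with no process noise,
        # so the prior x and covariance p carry over unchanged.
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # measurement update with the innovation
        p = (1 - k) * p        # updated error covariance
    return x, p

zs = [2.1, 1.9, 2.05, 1.95, 2.0]   # hypothetical biased readings
est, var = kalman_constant(zs, r=0.01)
print(round(est, 2))   # 2.0
```

With a diffuse prior (p0 much larger than r), the estimate converges to essentially the sample mean of the measurements, as expected for a constant state.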

  9. Among-character rate variation distributions in phylogenetic analysis of discrete morphological characters.

    PubMed

    Harrison, Luke B; Larsson, Hans C E

    2015-03-01

    Likelihood-based methods are commonplace in phylogenetic systematics. Although much effort has been directed toward likelihood-based models for molecular data, comparatively less work has addressed models for discrete morphological character (DMC) data. Among-character rate variation (ACRV) may confound phylogenetic analysis, but there have been few analyses of the magnitude and distribution of rate heterogeneity among DMCs. Using 76 data sets covering a range of plants and invertebrate and vertebrate animals, we used a modified version of MrBayes to test equal, gamma-distributed and lognormally distributed models of ACRV, integrating across phylogenetic uncertainty using Bayesian model selection. We found that in approximately 80% of data sets, unequal-rates models outperformed equal-rates models, especially among larger data sets. Moreover, although most data sets were equivocal, more data sets favored the lognormal rate distribution relative to the gamma rate distribution, lending some support to more complex character correlations than in molecular data. Parsimony estimation of the underlying rate distributions in several data sets suggests that the lognormal distribution is preferred when there are many slowly evolving characters and fewer quickly evolving characters. The commonly adopted four-rate-category discrete approximation used for molecular data was found to be sufficient to approximate a gamma rate distribution with discrete characters. However, among the two data sets tested that favored a lognormal rate distribution, the continuous distribution was better approximated with at least eight discrete rate categories. Although the effect of rate model on the estimation of topology was difficult to assess across all data sets, it appeared relatively minor between the unequal-rates models for the one data set examined carefully.
As in molecular analyses, we argue that researchers should test and adopt the most appropriate model of rate variation for the data set in question. As discrete characters are increasingly used in more sophisticated likelihood-based phylogenetic analyses, it is important that these studies be built on the most appropriate and carefully selected underlying models of evolution. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
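The discrete-category approximation discussed above (four categories for a gamma, at least eight for a lognormal) can be sketched for the lognormal case. This is an illustrative construction, not the authors' MrBayes modification: each of the K categories is represented by the median of its quantile bin, and the rates are rescaled so the mean relative rate is 1.

```python
# K-category discrete approximation of a lognormal ACRV distribution,
# using quantile-bin medians via the standard normal inverse CDF.
from math import exp
from statistics import NormalDist

def discrete_lognormal_rates(sigma, k):
    nd = NormalDist(0.0, sigma)
    # median of bin i is the (2i+1)/(2k) quantile of the lognormal
    rates = [exp(nd.inv_cdf((2 * i + 1) / (2 * k))) for i in range(k)]
    mean = sum(rates) / k
    return [r / mean for r in rates]   # normalize mean relative rate to 1

for k in (4, 8):   # coarse vs. finer discretization
    print(k, [round(r, 3) for r in discrete_lognormal_rates(sigma=1.0, k=k)])
```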

  10. Evaluation of borehole geophysical logs and hydraulic tests, phase III, at AIW Frank/Mid-County Mustang Superfund Site, Chester County, Pennsylvania

    USGS Publications Warehouse

    Sloto, Ronald A.

    2001-01-01

    Borehole geophysical logs, heatpulse-flowmeter measurements, and aquifer-isolation tests were used to characterize the ground-water-flow system at the AIW Frank/Mid-County Mustang Superfund Site. The site is underlain by fractured carbonate rocks. Caliper, natural-gamma, single-point-resistance, fluid-resistivity, and fluid-temperature logs were run in six wells, and an acoustic borehole televiewer and borehole deviation log was run in one well. The direction and rate of borehole-fluid movement was measured with a high-resolution heatpulse flowmeter for both nonpumping and pumping conditions in four wells. The heatpulse-flowmeter measurements showed flow within the borehole during nonpumping conditions in three of the four wells tested. Flow rates up to 1.4 gallons per minute were measured. Flow was upward in one well and both upward and downward in two wells. Aquifer-isolation (packer) tests were conducted in four wells to determine depth-discrete specific capacity values, to obtain depth-discrete water samples, and to determine the effect of pumping an individual fracture or fracture zone in one well on water levels in nearby wells. Water-level data collected during aquifer-isolation tests were consistent with and confirmed interpretations of borehole geophysical logs and heatpulse-flowmeter measurements. Seven of the 13 fractures identified as water-producing or water-receiving zones by borehole geophysical methods produced water at a rate equal to or greater than 7.5 gallons per minute when isolated and pumped. The specific capacities of isolated fractures range over three orders of magnitude, from 0.005 to 7.1 gallons per minute per foot. Vertical distribution of specific capacity between land surface and 298 feet below land surface is not related to depth. The four highest specific capacities, in descending order, are at depths of 174-198, 90-92, 118-119, and 34-37 feet below land surface.

  11. Corrective Action Investigation Plan for Corrective Action Unit 428: Area 3 Septic Waste Systems 1 and 5, Tonopah Test Range, Nevada, Revision 0, March 1999

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DOE/NV

    1999-03-26

    The Corrective Action Investigation Plan for Corrective Action Unit 428, Area 3 Septic Waste Systems 1 and 5, has been developed in accordance with the Federal Facility Agreement and Consent Order that was agreed to by the U.S. Department of Energy, Nevada Operations Office; the State of Nevada Division of Environmental Protection; and the U.S. Department of Defense. Corrective Action Unit 428 consists of Corrective Action Sites 03-05-002-SW01 and 03-05-002-SW05, respectively known as Area 3 Septic Waste System 1 and Septic Waste System 5. This Corrective Action Investigation Plan is used in combination with the Work Plan for Leachfield Corrective Action Units: Nevada Test Site and Tonopah Test Range, Nevada, Rev. 1 (DOE/NV, 1998c). The Leachfield Work Plan was developed to streamline investigations at leachfield Corrective Action Units by incorporating management, technical, quality assurance, health and safety, public involvement, field sampling, and waste management information common to a set of Corrective Action Units with similar site histories and characteristics into a single document that can be referenced. This Corrective Action Investigation Plan provides investigative details specific to Corrective Action Unit 428. A system of leachfields and associated collection systems was used for wastewater disposal at Area 3 of the Tonopah Test Range until a consolidated sewer system was installed in 1990 to replace the discrete septic waste systems. Operations within various buildings at Area 3 generated sanitary and industrial wastewaters potentially contaminated with contaminants of potential concern, which were disposed of in septic tanks and leachfields. Corrective Action Unit 428 is composed of two leachfield systems in the northern portion of Area 3. 
Based on site history collected to support the Data Quality Objectives process, contaminants of potential concern for the site include oil/diesel-range total petroleum hydrocarbons, and Resource Conservation and Recovery Act characteristic volatile organic compounds, semivolatile organic compounds, and metals. A limited number of samples from four of the septic tanks will be analyzed for gamma-emitting radionuclides and isotopic uranium, with additional analyses if radiological field-screening levels are exceeded. Additional samples will be analyzed for geotechnical and hydrological properties, and a bioassessment may be performed. The technical approach for investigating this Corrective Action Unit consists of the following activities: (1) Perform video surveys of the discharge and outfall lines. (2) Collect samples of material in the septic tanks. (3) Conduct exploratory trenching to locate and inspect subsurface components. (4) Collect subsurface soil samples in areas of the collection system, including the septic tanks and the outfall end of distribution boxes. (5) Collect subsurface soil samples underlying the leachfield distribution pipes via trenching. (6) Collect surface and near-surface samples near potential locations of the Acid Sewer Outfall if the Septic Waste System 5 Leachfield cannot be located. (7) Field screen samples for volatile organic compounds, total petroleum hydrocarbons, and radiological activity. (8) Drill boreholes and collect subsurface soil samples if required. (9) Analyze samples for total volatile organic compounds, total semivolatile organic compounds, total Resource Conservation and Recovery Act metals, and total petroleum hydrocarbons (oil/diesel-range organics). A limited number of samples from particular septic tanks will be analyzed for gamma-emitting radionuclides and isotopic uranium, with additional analyses if radiological field-screening levels are exceeded. 
(10) Collect samples from native soils beneath the distribution system and analyze for geotechnical/hydrologic parameters. (11) Collect and analyze bioassessment samples at the discretion of the Site Supervisor if total petroleum hydrocarbons exceed field-screening levels.

  12. On wavelet analysis of auditory evoked potentials.

    PubMed

    Bradley, A P; Wilson, W J

    2004-05-01

    To determine a preferred wavelet transform (WT) procedure for multi-resolution analysis (MRA) of auditory evoked potentials (AEP). A number of WT algorithms, mother wavelets, and pre-processing techniques were examined by way of critical theoretical discussion followed by experimental testing of key points using real and simulated auditory brain-stem response (ABR) waveforms. Conclusions from these examinations were then tested on a normative ABR dataset. The results of the various experiments are reported in detail. Optimal AEP WT MRA is most likely to occur when an over-sampled discrete wavelet transformation (DWT) is used, utilising a smooth (regularity ≥ 3) and symmetrical (linear phase) mother wavelet, and a reflection boundary extension policy. This study demonstrates the practical importance of, and explains how to minimize potential artefacts due to, four inter-related issues relevant to AEP WT MRA, namely shift variance, phase distortion, reconstruction smoothness, and boundary artefacts.
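The reflection boundary extension policy recommended above can be illustrated directly. The sketch below is not the paper's code: it mirrors the signal at both ends (without repeating the edge samples) and then applies one level of an undecimated, i.e. over-sampled, Haar analysis. Haar is used only for brevity; the paper favours smoother, symmetric mother wavelets.

```python
# Reflection ("symmetric") boundary extension followed by one level of an
# undecimated Haar decomposition, so the output keeps the input length.
def reflect_pad(x, n):
    # mirror n samples at each end without duplicating the edge samples
    return x[n:0:-1] + x + x[-2:-n - 2:-1]

def undecimated_haar_level(x):
    p = reflect_pad(x, 1)
    approx = [(p[i] + p[i + 1]) / 2 for i in range(len(x))]   # smooth part
    detail = [(p[i + 1] - p[i]) / 2 for i in range(len(x))]   # difference part
    return approx, detail

print(reflect_pad([1, 2, 3, 4], 2))   # [3, 2, 1, 2, 3, 4, 3, 2]
```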

  13. A powerful and flexible approach to the analysis of RNA sequence count data.

    PubMed

    Zhou, Yi-Hui; Xia, Kai; Wright, Fred A

    2011-10-01

    A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean-variance relationships provides a flexible testing regimen that 'borrows' information across genes, while easily incorporating design effects and additional covariates. We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data, and (ii) an extension of an expression mean-variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternate methods to handle RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq. Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu. Supplementary data are available at Bioinformatics online.
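The overdispersion that motivates the beta-binomial model can be seen in its mean-variance relationship. This is a minimal illustration, not BBSeq itself, and the values of n, p, and rho are invented: the beta-binomial variance exceeds the binomial variance whenever the overdispersion parameter rho is positive.

```python
# Beta-binomial variance: Var(Y) = n*p*(1-p)*(1 + (n-1)*rho),
# which reduces to the binomial variance n*p*(1-p) when rho = 0.
def beta_binomial_variance(n, p, rho):
    return n * p * (1 - p) * (1 + (n - 1) * rho)

n, p = 100, 0.3
print(round(beta_binomial_variance(n, p, 0.0), 2))    # 21.0  (binomial limit)
print(round(beta_binomial_variance(n, p, 0.05), 2))   # 124.95
```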

  14. Evaluating the effectiveness of Washington state repeated job search services on the employment rate of prime-age female welfare recipients☆

    PubMed Central

    Hsiao, Cheng; Shen, Yan; Wang, Boqing; Weeks, Greg

    2014-01-01

    This paper uses an unbalanced panel dataset to evaluate how repeated job search services (JSS) and personal characteristics affect the employment rate of the prime-age female welfare recipients in the State of Washington. We propose a transition probability model to take into account issues of sample attrition, sample refreshment and duration dependence. We also generalize Honoré and Kyriazidou’s [Honoré, B.E., Kyriazidou, E., 2000. Panel data discrete choice models with lagged dependent variables. Econometrica 68 (4), 839–874] conditional maximum likelihood estimator to allow for the presence of individual-specific effects. A limited information test is suggested to test for selection issues in non-experimental data. The specification tests indicate that the (conditional on the set of the confounding variables considered) assumptions of no selection due to unobservables and/or no unobserved individual-specific effects are not violated. Our findings indicate that the first job search service does have positive and significant impacts on the employment rate. However, providing repeated JSS to the same client has no significant impact. Further, we find that there are significant experience-enhancing effects. These findings suggest that providing one job search services training to individuals may have a lasting impact on raising their employment rates. PMID:26052178

  15. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    PubMed

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
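The continuization step and the two kernels compared in Study II can be sketched as follows. This is an illustrative evaluation at a single point, not the KE machinery itself; the score distribution and bandwidth are made-up values. Note the Epanechnikov kernel's compact support, which is what changes the behaviour near the score boundaries.

```python
# Kernel continuization of a discrete score distribution: the continuous
# density at x is a kernel-weighted sum over the discrete score points.
from math import exp, pi, sqrt

def gaussian_k(u):
    return exp(-0.5 * u * u) / sqrt(2 * pi)

def epanechnikov_k(u):
    return 0.75 * (1 - u * u) if abs(u) < 1 else 0.0   # compact support

def kde_at(x, scores, probs, h, kernel):
    return sum(p * kernel((x - s) / h) for s, p in zip(scores, probs)) / h

scores = [0, 1, 2, 3, 4]
probs = [0.1, 0.2, 0.4, 0.2, 0.1]   # discrete score distribution
print(round(kde_at(2.0, scores, probs, h=0.8, kernel=gaussian_k), 4))
```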

  16. Evaluating the effectiveness of Washington state repeated job search services on the employment rate of prime-age female welfare recipients.

    PubMed

    Hsiao, Cheng; Shen, Yan; Wang, Boqing; Weeks, Greg

    2008-07-01

    This paper uses an unbalanced panel dataset to evaluate how repeated job search services (JSS) and personal characteristics affect the employment rate of the prime-age female welfare recipients in the State of Washington. We propose a transition probability model to take into account issues of sample attrition, sample refreshment and duration dependence. We also generalize Honoré and Kyriazidou's [Honoré, B.E., Kyriazidou, E., 2000. Panel data discrete choice models with lagged dependent variables. Econometrica 68 (4), 839-874] conditional maximum likelihood estimator to allow for the presence of individual-specific effects. A limited information test is suggested to test for selection issues in non-experimental data. The specification tests indicate that the (conditional on the set of the confounding variables considered) assumptions of no selection due to unobservables and/or no unobserved individual-specific effects are not violated. Our findings indicate that the first job search service does have positive and significant impacts on the employment rate. However, providing repeated JSS to the same client has no significant impact. Further, we find that there are significant experience-enhancing effects. These findings suggest that providing one job search services training to individuals may have a lasting impact on raising their employment rates.
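The basic object behind the transition probability model above can be sketched without the authors' estimator. The toy panel below is invented: it simply tabulates first-order employment transitions (1 = employed), making the lagged-state dependence visible as two different conditional probabilities.

```python
# Empirical first-order transition probabilities from panel histories:
# P(employed this period | employment state last period).
def transition_probs(histories):
    counts = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for h in histories:
        for prev, cur in zip(h, h[1:]):
            counts[(prev, cur)] += 1
    probs = {}
    for a in (0, 1):
        total = counts[(a, 0)] + counts[(a, 1)]
        probs[a] = counts[(a, 1)] / total if total else float("nan")
    return probs

panel = [[0, 0, 1, 1], [0, 1, 1, 1], [1, 1, 0, 0], [0, 0, 0, 1]]
p = transition_probs(panel)
print(round(p[0], 3), round(p[1], 3))   # 0.429 0.8
```

A state-dependence gap like p[1] > p[0] is exactly what a dynamic discrete choice model with lagged dependent variables must disentangle from unobserved individual effects.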

  17. Model documentation for relations between continuous real-time and discrete water-quality constituents in the North Fork Ninnescah River upstream from Cheney Reservoir, south-central Kansas, 1999--2009

    USGS Publications Warehouse

    Stone, Mandy L.; Graham, Jennifer L.; Gatotho, Jackline W.

    2013-01-01

    Cheney Reservoir in south-central Kansas is one of the primary sources of water for the city of Wichita. The North Fork Ninnescah River is the largest contributing tributary to Cheney Reservoir. The U.S. Geological Survey has operated a continuous real-time water-quality monitoring station since 1998 on the North Fork Ninnescah River. Continuously measured water-quality physical properties include streamflow, specific conductance, pH, water temperature, dissolved oxygen, and turbidity. Discrete water-quality samples were collected during 1999 through 2009 and analyzed for sediment, nutrients, bacteria, and other water-quality constituents. Regression models were developed to establish relations between discretely sampled constituent concentrations and continuously measured physical properties to estimate concentrations of those constituents of interest that are not easily measured in real time because of limitations in sensor technology and fiscal constraints. Regression models were published in 2006 that were based on a different dataset collected during 1997 through 2003. This report updates those models using discrete and continuous data collected during January 1999 through December 2009. Models also were developed for five new constituents, including additional nutrient species and indicator bacteria. The water-quality information in this report is important to the city of Wichita because it allows the concentrations of many potential pollutants of interest, including nutrients and sediment, to be estimated in real time and characterized over conditions and time scales that would not be possible otherwise.
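The surrogate-regression idea above can be sketched with a simple fit. All values below are invented for illustration (they are not from the report): ordinary least squares on log-transformed variables is a common form for relating a continuously measured property such as turbidity to a discretely sampled constituent such as suspended-sediment concentration.

```python
# Ordinary least squares of log10(constituent) on log10(turbidity),
# then back-transformation to predict a concentration in real time.
from math import log10

def ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b   # intercept, slope

turbidity = [5, 12, 40, 150, 400]   # FNU (hypothetical sensor readings)
ssc = [8, 18, 55, 210, 520]         # mg/L (hypothetical discrete samples)
a, b = ols([log10(t) for t in turbidity], [log10(c) for c in ssc])
est = 10 ** (a + b * log10(100))    # predicted concentration at 100 FNU
print(round(b, 2), round(est, 1))
```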

  18. Energy Dissipation in Calico Hills Tuff due to Pore Collapse

    NASA Astrophysics Data System (ADS)

    Lockner, D. A.; Morrow, C. A.

    2008-12-01

    Laboratory tests indicate that the weakest portions of the Calico Hills tuff formation are at or near yield stress under in situ conditions and that the energy expended during incremental loading can be more than 90 percent irrecoverable. The Calico Hills tuff underlies the Yucca Mountain waste repository site at a depth of 400 to 500 m within the unsaturated zone. The formation is highly variable in the degree of both vitrification and zeolitization. Since 1980, a number of boreholes have penetrated this formation to provide site characterization for the YM repository. In the past, standard strength measurements were conducted on core samples from the drillholes. However, a significant sampling bias occurred in that tests were preferentially conducted on highly vitrified, higher-strength samples. In fact, the most recent holes were drilled with a dry coring technique that would pulverize the weakest layers, leaving none of this material for testing. We have re-examined Calico Hills samples preserved at the YM Core Facility and selected the least vitrified examples (some cores exceeded 50 percent porosity) for mechanical testing. Three basic tests were performed: (i) hydrostatic crushing tests (to 350 MPa), (ii) standard triaxial deformation tests at constant effective confining pressure (to 70 MPa), and (iii) plane strain tests with initial conditions similar to in situ stresses. In all cases, constant pore pressure of 10 MPa was maintained using argon gas as a pore fluid and pore volume loss was monitored during deformation. The strongest samples typically failed along discrete fractures in agreement with standard Mohr-Coulomb failure. The weaker, high porosity samples, however, would fail by pure pore collapse or by a combined shear-induced compaction mechanism similar to failure mechanisms described for porous sandstones and carbonates. 
In the plane-strain experiments, energy dissipation due to pore collapse was determined for eventual input into dynamic wave calculations. These calculations will simulate ground accelerations at the YM repository due to propagation of high-amplitude compressional waves generated by scenario earthquakes. As an example, in one typical test on a sample with 43 percent starting porosity, an axial stress increase of 25 MPa resulted from 6 percent shortening and energy dissipation (due to grain crushing and pore collapse) of approximately 1.5×10⁶ J/m³. Under proper conditions, this dissipation mechanism could represent a significant absorption of radiated seismic energy and the possible shielding of the repository from extreme ground shaking.
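Energy density figures of this kind come from the area under the axial stress-strain curve. The sketch below uses an invented loading path (the stress values are not from the experiment) and the trapezoidal rule to show the computation; the dissipated part is the fraction of this work not recovered on unloading.

```python
# Strain-energy density (J/m^3) as the area under a stress-strain curve,
# approximated with the trapezoidal rule.
def strain_energy_density(strains, stresses_pa):
    return sum((stresses_pa[i] + stresses_pa[i + 1]) / 2 *
               (strains[i + 1] - strains[i])
               for i in range(len(strains) - 1))

# Hypothetical path: axial stress climbing from 40 to 65 MPa over 6% shortening.
strains = [0.00, 0.02, 0.04, 0.06]
stresses = [40e6, 50e6, 58e6, 65e6]
print(f"{strain_energy_density(strains, stresses):.2e} J/m^3")
```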

  19. Pavlovian-conditioned alcohol-seeking behavior in rats is invigorated by the interaction between discrete and contextual alcohol cues: implications for relapse

    PubMed Central

    Remedios, Jessica; Woods, Catherine; Tardif, Catherine; Janak, Patricia H; Chaudhri, Nadia

    2014-01-01

    Introduction: Drug craving can be independently stimulated by cues that are directly associated with drug intake (discrete drug cues), as well as by environmental contexts in which drug use occurs (contextual drug cues). We tested the hypothesis that the context in which a discrete alcohol-predictive cue is experienced can influence how robustly that cue stimulates alcohol-seeking behavior. Methods: Male Long-Evans rats received Pavlovian discrimination training (PDT) sessions in which one conditioned stimulus (CS+; 16 trials/session) was paired with ethanol (0.2 mL/CS+) and a second stimulus (CS−; 16 trials/session) was not. PDT occurred in a specific context, and entries into a fluid port where ethanol was delivered were measured during each CS. Next, rats were acclimated to an alternate (nonalcohol) context where cues and ethanol were withheld. Responses to the nonextinguished CS+ and CS− were then tested without ethanol in the alcohol-associated PDT context, the nonalcohol context, or a third, novel context. Results: Across PDT the CS+ elicited more port entries than the CS−, indicative of Pavlovian discrimination learning. At test, the CS+ elicited more port entries than the CS− in all three contexts; however, alcohol seeking driven by the CS+ was more robust in the alcohol-associated context. In a separate experiment, extinguishing the context-alcohol association did not influence subsequent CS+ responding but reduced alcohol seeking during non-CS+ intervals during a spontaneous recovery test. Conclusion: These results indicate that alcohol-seeking behavior driven by a discrete Pavlovian alcohol cue is strongly invigorated by an alcohol context, and suggest that contexts may function as excitatory Pavlovian conditioned stimuli that directly trigger alcohol-seeking behavior. PMID:24683519

  20. Determining Criteria and Weights for Prioritizing Health Technologies Based on the Preferences of the General Population: A New Zealand Pilot Study.

    PubMed

    Sullivan, Trudy; Hansen, Paul

    2017-04-01

    The use of multicriteria decision analysis for health technology prioritization depends on decision-making criteria and on weights that reflect their relative importance. We report on a methodology for determining criteria and weights that was developed and piloted in New Zealand and enables extensive participation by members of the general population. Stimulated by a preliminary ranking exercise that involved prioritizing 14 diverse technologies, six focus groups discussed what matters to people when thinking about technologies that should be funded. These discussions informed the specification of criteria related to technologies' benefits for use in a discrete choice survey designed to generate weights for each individual participant as well as mean weights. A random sample of 3218 adults was invited to participate. To check test-retest reliability, a subsample completed the survey twice. Cluster analysis was performed to identify participants with similar patterns of weights. Six benefits-related criteria were distilled from the focus group discussions and included in the discrete choice survey, which was completed by 322 adults (10% response rate). Most participants (85%) found the survey easy to understand, and the survey exhibited test-retest reliability. The cluster analysis revealed that participant weights are related more to idiosyncratic personal preferences than to demographic and background characteristics. The methodology enables extensive participation by members of the general population, for whom it is both acceptable and reliable. Generating weights for each participant allows the heterogeneity of individual preferences, and the extent to which they are related to demographic and background characteristics, to be tested. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  1. Influenza testing trends in sentinel surveillance general practices in Victoria 2007 to 2014.

    PubMed

    Cowie, Genevieve A; Cowie, Benjamin C; Fielding, James E

    2017-03-31

    The Victorian Sentinel Practice Influenza Network conducts syndromic surveillance for influenza-like illness (ILI), with testing for laboratory confirmation of a proportion of cases at the discretion of general practitioners. The aim of this study was to evaluate the consistency of sentinel general practitioners' swabbing practice within and between influenza seasons. Aggregated, weekly, non-identified data for May to October each year from 2007 to 2014 were used to calculate the proportion of patients presenting with ILI (defined as cough, fever and fatigue), the proportion of ILI patients swabbed and the proportion of swabs positive for influenza. Data on the proportion of consultations for ILI and the proportion of ILI patients swabbed were aggregated into time-period quintiles for each year. Analysis of variance was used to compare ILI patients swabbed for each aggregated time-period quintile over all 8 years. Spearman's correlation and Bland-Altman analyses were used to measure association and agreement, respectively, between ILI proportions of consultations and swabs positive for influenza in time-period quintiles within each year. Data were aggregated by year for the rest of the analyses. Between 2007 and 2014 there was a slight decrease in the proportion of positive tests, and the proportion of consultations for ILI was generally a good proxy for influenza test positivity. There was consistency in testing within and between seasons, despite an overall testing increase between 2007 and 2014. There was no evidence of temporal sampling bias in these data despite testing not being performed on a systematic basis. This sampling regimen could also be considered in other similar surveillance systems.
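The Bland-Altman agreement analysis mentioned above can be sketched briefly. The paired proportions below are made-up illustrative values, not the study's data: the bias is the mean of the paired differences, and the limits of agreement are the bias plus or minus 1.96 standard deviations of the differences.

```python
# Bland-Altman bias and 95% limits of agreement for two paired measures.
from statistics import mean, stdev

def bland_altman(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    s = stdev(diffs)
    return bias, bias - 1.96 * s, bias + 1.96 * s

ili = [0.021, 0.034, 0.055, 0.040, 0.028]   # ILI proportion of consultations
pos = [0.018, 0.030, 0.060, 0.044, 0.025]   # proportion of swabs positive
bias, lo, hi = bland_altman(ili, pos)
print(round(bias, 4))
```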

  2. Concurrent tumor segmentation and registration with uncertainty-based sparse non-uniform graphs.

    PubMed

    Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos

    2014-05-01

    In this paper, we present a graph-based concurrent brain tumor segmentation and atlas to diseased patient registration framework. Both segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed to the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as important memory requirements, content driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered by the min marginal energies. State of the art results on a substantial low-grade glioma database demonstrate the potential of our method, while our proposed approach shows maintained performance and strongly reduced complexity of the model. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Escalate shamefully, de-escalate angrily or gratefully: the influence of discrete emotions on escalation of commitment.

    PubMed

    Dang, Junhua; Xiao, Shanshan; Liljedahl, Sophie

    2014-08-01

    Decision makers often tend to escalate their commitment when faced with a dilemma of whether to continue a losing course of action. Researchers recently began to investigate the influence of discrete emotions on this decision tendency. However, this work has mainly focused on negative emotions and rarely considered positive emotions, to say nothing of comparing the effects of both of them simultaneously. The current study addresses this need by presenting the results of three experiments that examined the effects of four emotions of both positive and negative valences in escalation situations. Experiment 1 investigated the relationships of three trait emotions (hope, shame, and anger) and escalation of commitment. Experiments 2 and 3 examined the effects of three induced emotions (anger, shame, and gratitude) on escalation of commitment in a student sample and an employee sample, respectively. The results revealed that the effects of discrete emotions in escalation situations are mainly due to their associated differences on the appraisal dimension of responsibility that is related to escalation situations rather than their valence. The theoretical and practical implications are discussed. © 2014 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  4. Evaluation of surface waters associated with animal feeding operations for estrogenic chemicals and activity

    USDA-ARS?s Scientific Manuscript database

    Estrogens and estrogenic activity (EA) were evaluated in surface waters associated with animal feeding operations. Water was sampled at 19 sites in 12 states using discrete (n=41) and POCIS (n=19) sampling methods. Estrogenic chemicals measured in unfiltered water by GC/MS2 included: estrone (E1),17...

  5. Eigenvalue sensitivity of sampled time systems operating in closed loop

    NASA Astrophysics Data System (ADS)

    Bernal, Dionisio

    2018-05-01

    The use of feedback to create closed-loop eigenstructures with high sensitivity has received some attention in the Structural Health Monitoring field. Although practical implementation is necessarily digital, and thus in sampled time, work thus far has centered on the continuous-time framework, both in design and in checking performance. It is shown in this paper that the performance in discrete time, at typical sampling rates, can differ notably from that anticipated in the continuous-time formulation, and that discrepancies can be particularly large in the real part of the eigenvalue sensitivities; one consequence is a significant error in the (linear) estimate of the level of damage at which closed-loop stability is lost. As one anticipates, explicit consideration of the sampling rate poses no special difficulties in the closed-loop eigenstructure design, and the relevant expressions are developed in the paper, including a formula for the efficient evaluation of the derivative of the matrix exponential based on the theory of complex perturbations. The paper presents an easily reproduced numerical example showing the level of error that can result when the discrete-time implementation of the controller is not considered.
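
    The paper's eigenstructure design is not reproduced here, but the complex-perturbation idea for differentiating a matrix exponential can be illustrated on a toy system: for a real matrix function A(p), the imaginary part of exp(A(p + ih)Δt)/h recovers d exp(A(p)Δt)/dp with no subtractive cancellation. The 2×2 "closed-loop" matrix and damage parameter below are hypothetical, not taken from the paper.

```python
def mmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(M, terms=30):
    """exp(M) for a small-norm 2x2 matrix via its Taylor series."""
    S = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at the identity
    T = [[1.0, 0.0], [0.0, 1.0]]   # current term M^n / n!
    for n in range(1, terms):
        T = [[t / n for t in row] for row in mmul(T, M)]
        S = [[S[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return S

def a_dt(p, dt=0.1):
    """Hypothetical closed-loop state matrix scaled by the sampling
    period: an oscillator whose stiffness degrades with damage p."""
    return [[0.0, dt], [-(4.0 + p) * dt, -0.4 * dt]]

p, h = 0.3, 1e-20
# Complex perturbation: Im(exp(A(p + ih) dt)) / h approximates the
# derivative to machine precision, since nothing is subtracted.
Fc = expm(a_dt(p + 1j * h))
d_cs = [[Fc[i][j].imag / h for j in range(2)] for i in range(2)]

# Central finite difference for comparison.
d = 1e-6
Fp, Fm = expm(a_dt(p + d)), expm(a_dt(p - d))
d_fd = [[(Fp[i][j] - Fm[i][j]) / (2 * d) for j in range(2)] for i in range(2)]
```

The two estimates agree to roughly finite-difference accuracy, but the complex-step version keeps working when the step h is made arbitrarily small.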

  6. A novel recursive Fourier transform for nonuniform sampled signals: application to heart rate variability spectrum estimation.

    PubMed

    Holland, Alexander; Aboy, Mateo

    2009-07-01

    We present a novel method to iteratively calculate discrete Fourier transforms for discrete-time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT-based spectrum estimation with Lomb-Scargle transform (LST)-based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT achieves estimation performance comparable to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimation.
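
    The abstract does not give the recursion itself, so the following is only a minimal sketch of the underlying idea: each new sample (t, x), taken at an arbitrary time, adds its phasor contribution to all N frequency bins, an O(N) update with no interpolation onto a uniform grid. The sample times, test signal, and frequency grid are hypothetical.

```python
import cmath
import math

def make_rft(freqs):
    """Running Fourier coefficients over arbitrary sample times:
    each new (t, x) sample updates all N bins in O(N)."""
    coeffs = [0j] * len(freqs)

    def update(t, x):
        for k, f in enumerate(freqs):
            coeffs[k] += x * cmath.exp(-2j * math.pi * f * t)
        return coeffs

    return update

# Nonuniformly spaced samples (think beat-to-beat times) of a 2 Hz tone.
times = [0.00, 0.11, 0.26, 0.38, 0.52, 0.61, 0.77, 0.90]
update = make_rft([1.0, 2.0, 3.0])
for t in times:
    spectrum = update(t, math.cos(2 * math.pi * 2.0 * t))
power = [abs(c) ** 2 for c in spectrum]   # the 2 Hz bin dominates
```

Because each update only adds terms, the spectrum can be refreshed after every heartbeat without recomputing the full transform.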

  7. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molzahn, Daniel K.

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  8. A fast numerical method for the valuation of American lookback put options

    NASA Astrophysics Data System (ADS)

    Song, Haiming; Zhang, Qi; Zhang, Ran

    2015-10-01

    A fast and efficient numerical method is proposed and analyzed for the valuation of American lookback options. The American lookback option pricing problem is essentially a two-dimensional unbounded nonlinear parabolic problem. We reformulate it into a two-dimensional parabolic linear complementarity problem (LCP) on an unbounded domain. The numeraire transformation and a domain truncation technique are employed to convert the two-dimensional unbounded LCP into a one-dimensional bounded one. Furthermore, the variational inequality (VI) form corresponding to the one-dimensional bounded LCP is derived through a careful analysis. The resulting bounded VI is discretized by a finite element method. The stability of the semi-discrete solution and the symmetric positive definiteness of the fully discrete matrix are established for the bounded VI. The discretized VI is solved by a projection and contraction method. Numerical experiments are conducted to test the performance of the proposed method.

  9. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE PAGES

    Molzahn, Daniel K.

    2017-03-15

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  10. Analysis of a mesoscale infiltration and water seepage test in unsaturated fractured rock: Spatial variabilities and discrete fracture patterns

    USGS Publications Warehouse

    Zhou, Q.; Salve, R.; Liu, H.-H.; Wang, J.S.Y.; Hudson, D.

    2006-01-01

    A mesoscale (21 m in flow distance) infiltration and seepage test was recently conducted in a deep, unsaturated fractured rock system at the crossover point of two underground tunnels. Water was released from a 3 m × 4 m infiltration plot on the floor of an alcove in the upper tunnel, and seepage was collected from the ceiling of a niche in the lower tunnel. Significant temporal and (particularly) spatial variabilities were observed in both measured infiltration and seepage rates. To analyze the test results, a three-dimensional unsaturated flow model was used. A column-based scheme was developed to capture the heterogeneous hydraulic properties reflected by the observed spatial variabilities. Fracture permeability and the van Genuchten α parameter [van Genuchten, M.T., 1980. A closed-form equation for predicting the hydraulic conductivity of unsaturated soils. Soil Sci. Soc. Am. J. 44, 892-898] were calibrated for each rock column in the upper and lower hydrogeologic units in the test bed. The calibrated fracture properties for the infiltration and seepage zone enabled a good match between simulated and measured (spatially varying) seepage rates. The numerical model was also able to capture the general trend of the highly transient seepage processes through a discrete fracture network. The calibrated properties and measured infiltration/seepage rates were further compared with mapped discrete fracture patterns at the top and bottom boundaries. The measured infiltration rates and calibrated fracture permeability of the upper unit were found to be partially controlled by the fracture patterns on the infiltration plot (as indicated by their positive correlations with fracture density). However, no correlation could be established between measured seepage rates and the density of fractures mapped on the niche ceiling. This lack of correlation indicates the complexity of (preferential) unsaturated flow within the discrete fracture network. It also indicates that continuum-based modeling of unsaturated flow in fractured rock at the mesoscale or larger is not necessarily conditional explicitly on discrete fracture patterns. © 2006 Elsevier B.V. All rights reserved.
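
    The cited van Genuchten (1980) relation has a well-known closed form (combined here with the standard Mualem conductivity model); a minimal sketch, with hypothetical α and n values rather than the calibrated ones from this study:

```python
def van_genuchten(h, alpha, n):
    """Closed-form van Genuchten (1980) effective saturation and
    Mualem relative hydraulic conductivity for pressure head h
    (h < 0 in the unsaturated zone)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m)      # effective saturation
    kr = se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2
    return se, kr

# Illustrative parameters only: alpha = 2.0 1/m, n = 2.0, h = -1.0 m.
se, kr = van_genuchten(-1.0, alpha=2.0, n=2.0)
```

Relative conductivity drops much faster than saturation as the medium dries, which is why calibrating α per rock column matters for matching transient seepage.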

  11. Development of dynamic Bayesian models for web application test management

    NASA Astrophysics Data System (ADS)

    Azarnova, T. V.; Polukhin, P. V.; Bondarenko, Yu V.; Kashirina, I. L.

    2018-03-01

    The mathematical apparatus of dynamic Bayesian networks is an effective and technically proven tool for modeling complex stochastic dynamic processes. According to the results of the research, the mathematical models and methods of dynamic Bayesian networks provide high coverage of the stochastic tasks associated with error testing in multiuser software products operated in a dynamically changing environment. Formalized representation of the discrete test process as a dynamic Bayesian model allows us to organize the logical connections between individual test assets across multiple time slices. This approach makes it possible to present testing as a discrete process with defined structural components responsible for the generation of test assets. Dynamic Bayesian network-based models allow us to combine, in one management area, individual units and testing components that have different functionalities and directly influence each other in the process of comprehensive testing of various groups of computer bugs. The proposed models support a consistent approach to formalizing test principles and procedures, methods used to treat situational error signs, and methods used to produce analytical conclusions based on test results.

  12. Developing a shale heterogeneity index to predict fracture response in the Mancos Shale

    NASA Astrophysics Data System (ADS)

    DeReuil, Aubry; Birgenheier, Lauren; McLennan, John

    2017-04-01

    The interplay between sedimentary heterogeneity and fracture propagation in mudstone is crucial to assess the potential of low permeability rocks as unconventional reservoirs. Previous experimental research has demonstrated a relationship between heterogeneity and fracture of brittle rocks, as discontinuities in a rock mass influence micromechanical processes such as microcracking and strain localization, which evolve into macroscopic fractures. Though numerous studies have observed heterogeneity influencing fracture development, fundamental understanding of the entire fracture process and the physical controls on this process is still lacking. This is partly due to difficulties in quantifying heterogeneity in fine-grained rocks. Our study tests the hypothesis that there is a correlation between sedimentary heterogeneity and the manner in which mudstone is fractured. An extensive range of heterogeneity related to complex sedimentology is represented by various samples from cored intervals of the Mancos Shale. Samples were categorized via facies analysis consisting of: visual core description, XRF and XRD analysis, SEM and thin section microscopy, and reservoir quality analysis that tested porosity, permeability, water saturation, and TOC. Systematic indirect tensile testing on a broad variety of facies has been performed, and uniaxial and triaxial compression testing is underway. A novel tool based on analytically derived and statistically proven relationships between sedimentary geologic and geomechanical heterogeneity is the ultimate result, referred to as the shale heterogeneity index. Preliminary conclusions from development of the shale heterogeneity index reveal that samples with compositionally distinct bedding withstand loading at higher stress values, while texturally and compositionally homogeneous, bedded samples fail at lower stress values. 
    The highest tensile strength occurs in cemented, Ca-enriched samples; samples of medial to high strength have approximately equivalent proportions of Al-Ca-Si compositions, while Al-rich samples have consistently low strength. Moisture-preserved samples fail at stresses approximately 5 MPa lower, on average, than dry samples of similar facies. Additionally, moisture-preserved samples fail in a step-like pattern when tested perpendicular to bedding. Tensile fractures are halted at heterogeneities and propagate parallel to bedding planes before developing a through-going failure plane, as opposed to the discrete, continuous fractures that crosscut dry samples. This result suggests that sedimentary heterogeneity plays a greater role in fracture propagation in moisture-preserved samples, which are more indicative of in-situ reservoir conditions. Stress-strain curves will be further analyzed, including estimation of an energy-release term based on the post-failure response and an estimate of crack volume measured on the physical fracture surface.

  13. A discrete mesoscopic particle model of the mechanics of a multi-constituent arterial wall.

    PubMed

    Witthoft, Alexandra; Yazdani, Alireza; Peng, Zhangli; Bellini, Chiara; Humphrey, Jay D; Karniadakis, George Em

    2016-01-01

    Blood vessels have unique properties that allow them to function together within a complex, self-regulating network. The contractile capacity of the wall combined with complex mechanical properties of the extracellular matrix enables vessels to adapt to changes in haemodynamic loading. Homogenized phenomenological and multi-constituent, structurally motivated continuum models have successfully captured these mechanical properties, but truly describing intricate microstructural details of the arterial wall may require a discrete framework. Such an approach would facilitate modelling interactions between or the separation of layers of the wall and would offer the advantage of seamless integration with discrete models of complex blood flow. We present a discrete particle model of a multi-constituent, nonlinearly elastic, anisotropic arterial wall, which we develop using the dissipative particle dynamics method. Mimicking basic features of the microstructure of the arterial wall, the model comprises an elastin matrix having isotropic nonlinear elastic properties plus anisotropic fibre reinforcement that represents the stiffer collagen fibres of the wall. These collagen fibres are distributed evenly and are oriented in four directions, symmetric to the vessel axis. Experimental results from biaxial mechanical tests of an artery are used for model validation, and a delamination test is simulated to demonstrate the new capabilities of the model. © 2016 The Author(s).

  14. Hydraulic tomography of discrete networks of conduits and fractures in a karstic aquifer by using a deterministic inversion algorithm

    NASA Astrophysics Data System (ADS)

    Fischer, P.; Jardani, A.; Lecoq, N.

    2018-02-01

    In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to simulate water flows numerically in a model, and a deterministic optimization algorithm to invert a set of observed piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and the geometry and equivalent transmissivity of the conduits) that are considered unknown. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties until it converges to a global network geometry in a solution model able to reproduce the set of data. An uncertainty analysis of this result can be performed from maps of the posterior uncertainties on the network geometry or on the property values. The method has been successfully tested on three theoretical, simplified study cases with hydraulic response data generated from hypothetical karstic models of increasing complexity in network geometry and matrix heterogeneity.

  15. Successive and discrete spaced conditioning in active avoidance learning in young and aged zebrafish.

    PubMed

    Yang, Peng; Kajiwara, Riki; Tonoki, Ayako; Itoh, Motoyuki

    2018-05-01

    We designed an automated device to study the active avoidance learning abilities of zebrafish. Open-source tools were used for device control, statistical computing, and graphic outputs of data. Using the system, we developed active avoidance tests to examine the effects of trial spacing and aging on learning. Seven-month-old fish showed stronger avoidance behavior, as measured by a color preference index, with discrete spaced training than with successive spaced training. Fifteen-month-old fish showed a similar trend, but with reduced cognitive abilities compared with 7-month-old fish. Further, in 7-month-old fish, an increase in learning ability across trials was observed with discrete, but not successive, spaced training. In contrast, 15-month-old fish did not show an increase in learning ability across trials. These data therefore suggest that discrete spacing is more effective for learning than successive spacing in the zebrafish active avoidance paradigm, and that time course analysis of active avoidance using discrete spaced training is useful for detecting age-related learning impairment. Copyright © 2017 Elsevier Ireland Ltd and Japan Neuroscience Society. All rights reserved.

  16. Stability analysis of implicit time discretizations for the Compton-scattering Fokker-Planck equation

    NASA Astrophysics Data System (ADS)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.

    2009-09-01

    The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.

  17. Optimal Digital Controller Design for a Servo Motor Taking Account of Intersample Behavior

    NASA Astrophysics Data System (ADS)

    Akiyoshi, Tatsuro; Imai, Jun; Funabiki, Shigeyuki

    A continuous-time plant under a discretized continuous-time controller does not remain stable if the sampling rate falls below a certain level. Thus far, high-functioning electronic control has relied on costly hardware to implement discretized continuous-time controllers, while low-cost hardware generally cannot sample fast enough. This technical note presents results comparing performance indices with and without intersample behavior, and offers an answer to the question of how a low-specification device can control a plant effectively. We consider a machine simulating the wafer-handling robots used at semiconductor factories, an electromechanical system driven by a direct drive motor. We illustrate controller design for the robot with and without intersample behavior, along with simulations and experimental results using these controllers. Taking intersample behavior into account proves effective in improving control performance and permits a relatively long sampling period. By designing the controller via a performance index that includes intersample behavior, we can cope with situations where a sufficiently short sampling period cannot be employed, and the freedom of controller design is widened, especially in the choice of sampling period.
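
    The note's robot and controller are beyond the scope of an abstract, but the opening stability point, that a sampled continuous-time controller fails below a certain sampling rate, can be sketched with a hypothetical integrator plant under a sampled proportional controller (this toy system is not the paper's):

```python
def simulate(k, T, steps=200):
    """Integrator plant dx/dt = u under a sampled P controller u = -k*x,
    held constant between samples (zero-order hold). The exact
    discretization is x[n+1] = (1 - k*T) * x[n], which is stable
    iff |1 - k*T| < 1, i.e. the sampling period satisfies T < 2/k."""
    x = 1.0
    for _ in range(steps):
        x = (1.0 - k * T) * x
    return abs(x)

fast = simulate(k=1.0, T=0.1)   # well below the 2/k limit: decays
slow = simulate(k=1.0, T=2.5)   # above the limit: diverges
```

The continuous-time closed loop dx/dt = -kx is stable for every k > 0, so the instability here is purely an artifact of sampling too slowly, which is the regime the note's intersample-aware design targets.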

  18. Hybrid Modeling for Testing Intelligent Software for Lunar-Mars Closed Life Support

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Nicholson, Leonard S. (Technical Monitor)

    1999-01-01

    Intelligent software is being developed for closed life support systems with biological components, for human exploration of the Moon and Mars. The intelligent software functions include planning/scheduling, reactive discrete control and sequencing, management of continuous control, and fault detection, diagnosis, and management of failures and errors. Four types of modeling information have been essential to system modeling and simulation to develop and test the software and to provide operational model-based what-if analyses: discrete component operational and failure modes; continuous dynamic performance within component modes, modeled qualitatively or quantitatively; configuration of flows and power among components in the system; and operations activities and scenarios. CONFIG, a multi-purpose discrete event simulation tool that integrates all four types of models for use throughout the engineering and operations life cycle, has been used to model components and systems involved in the production and transfer of oxygen and carbon dioxide in a plant-growth chamber and between that chamber and a habitation chamber with physicochemical systems for gas processing.

  19. Periodicity and chaos from switched flow systems - Contrasting examples of discretely controlled continuous systems

    NASA Technical Reports Server (NTRS)

    Chase, Christopher; Serrano, Joseph; Ramadge, Peter J.

    1993-01-01

    We analyze two examples of the discrete control of a continuous variable system. These examples exhibit what may be regarded as the two extremes of complexity of the closed-loop behavior: one is eventually periodic, the other is chaotic. Our examples are derived from sampled deterministic flow models. These are of interest in their own right but have also been used as models for certain aspects of manufacturing systems. In each case, we give a precise characterization of the closed-loop behavior.

  20. A computational study of the discretization error in the solution of the Spencer-Lewis equation by doubling applied to the upwind finite-difference approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, P.; Seth, D.L.; Ray, A.K.

    A detailed and systematic study of the nature of the discretization error associated with the upwind finite-difference method is presented. A basic model problem has been identified and based upon the results for this problem, a basic hypothesis regarding the accuracy of the computational solution of the Spencer-Lewis equation is formulated. The basic hypothesis is then tested under various systematic single complexifications of the basic model problem. The results of these tests provide the framework of the refined hypothesis presented in the concluding comments. 27 refs., 3 figs., 14 tabs.

  1. Effects of automobile steering characteristics on driver/vehicle system performance in discrete maneuvers

    NASA Technical Reports Server (NTRS)

    Klein, R. H.; Mcruer, D. T.

    1975-01-01

    A series of discrete maneuver tasks were used to evaluate the effects of steering gain and directional mode dynamic parameters on driver/vehicle responses. The importance and ranking of these parameters were evaluated through changes in subjective driver ratings and performance measures obtained from transient maneuvers such as a double lane change, an emergency lane change, and an unexpected obstacle. The unexpected obstacle maneuver proved more sensitive to individual driver differences than to vehicle differences. Results were based on full scale tests with an experienced test driver evaluating many different dynamic configurations plus seventeen ordinary drivers evaluating six key configurations.

  2. Electromigration Mechanism of Failure in Flip-Chip Solder Joints Based on Discrete Void Formation.

    PubMed

    Chang, Yuan-Wei; Cheng, Yin; Helfen, Lukas; Xu, Feng; Tian, Tian; Scheel, Mario; Di Michiel, Marco; Chen, Chih; Tu, King-Ning; Baumbach, Tilo

    2017-12-20

    In this investigation, SnAgCu and SN100C solders were electromigration (EM) tested, and a 3D laminography imaging technique was employed for in-situ observation of the microstructure evolution during testing. We found that discrete voids nucleate, grow, and coalesce along the intermetallic compound/solder interface during EM testing. A systematic analysis yields quantitative information on the number, volume, and growth rate of voids, and the EM parameter DZ*. We observe that fast intrinsic diffusion in SnAgCu solder causes void growth and coalescence, while in SN100C solder this coalescence was not significant. To deduce the current density distribution, finite-element models were constructed on the basis of the laminography images. The discrete voids do not change the global current density distribution, but they induce local current crowding around the voids; this local current crowding enhances lateral void growth and coalescence. The correlation between the current density and the probability of void formation indicates that a threshold current density exists for the activation of void formation: there is a significant increase in the probability of void formation when the current density exceeds half of the maximum value.

  3. OCT Amplitude and Speckle Statistics of Discrete Random Media.

    PubMed

    Almasian, Mitra; van Leeuwen, Ton G; Faber, Dirk J

    2017-11-01

    Speckle, amplitude fluctuations in optical coherence tomography (OCT) images, contains information on sub-resolution structural properties of the imaged sample. Speckle statistics could therefore be utilized in the characterization of biological tissues. However, a rigorous theoretical framework relating OCT speckle statistics to structural tissue properties has yet to be developed. As a first step, we present a theoretical description of OCT speckle, relating the OCT amplitude variance to size and organization for samples of discrete random media (DRM). Starting the calculations from the size and organization of the scattering particles, we analytically find expressions for the OCT amplitude mean, amplitude variance, the backscattering coefficient and the scattering coefficient. We assume fully developed speckle and verify the validity of this assumption by experiments on controlled samples of silica microspheres suspended in water. We show that the OCT amplitude variance is sensitive to sub-resolution changes in size and organization of the scattering particles. Experimentally determined and theoretically calculated optical properties are compared and in good agreement.
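
    The paper's derivation is not reproduced in the abstract, but the fully developed speckle assumption it verifies can be sketched numerically: the coherent sum of many unit phasors with independent uniformly random phases has a Rayleigh-distributed amplitude, whose contrast (std/mean) approaches √((4−π)/π) ≈ 0.52. The sample counts below are arbitrary.

```python
import cmath
import math
import random

random.seed(0)

def speckle_amplitude(n_scatterers):
    """Coherent sum of unit-amplitude phasors with uniformly random
    phases -- the standard model of fully developed speckle."""
    field = sum(cmath.exp(1j * random.uniform(0, 2 * math.pi))
                for _ in range(n_scatterers))
    return abs(field)

amps = [speckle_amplitude(200) for _ in range(5000)]
mean = sum(amps) / len(amps)
var = sum((a - mean) ** 2 for a in amps) / len(amps)
contrast = math.sqrt(var) / mean
# Rayleigh statistics predict a contrast of sqrt((4 - pi)/pi) ~ 0.52.
```

Deviations of the measured contrast from this fully developed value are one signature of sub-resolution structure, which is the sensitivity the paper exploits.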

  4. Stabilization and discontinuity-capturing parameters for space-time flow computations with finite element and isogeometric discretizations

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; Otoguro, Yuto

    2018-04-01

    Stabilized methods, which have been very common in flow computations for many years, typically involve stabilization parameters, and discontinuity-capturing (DC) parameters if the method is supplemented with a DC term. Various well-performing stabilization and DC parameters have been introduced for stabilized space-time (ST) computational methods in the context of the advection-diffusion equation and the Navier-Stokes equations of incompressible and compressible flows. These parameters were all originally intended for finite element discretization but are quite often used for isogeometric discretization as well. The stabilization and DC parameters we present here for ST computations are in the context of the advection-diffusion equation and the Navier-Stokes equations of incompressible flows, target isogeometric discretization, and are also applicable to finite element discretization. The parameters are based on a direction-dependent element length expression. The expression is the outcome of an easy-to-understand derivation, whose key components are mapping the direction vector from the physical ST element to the parent ST element, accounting for the discretization spacing along each of the parametric coordinates, and mapping what we have in the parent element back to the physical element. The test computations we present for pure-advection cases show that the proposed parameters result in good solution profiles.

  5. Metal speciation and toxicity of Tamar Estuary water to larvae of the Pacific oyster, Crassostrea gigas.

    PubMed

    Money, Cathryn; Braungardt, Charlotte B; Jha, Awadhesh N; Worsfold, Paul J; Achterberg, Eric P

    2011-07-01

    As part of the PREDICT Tamar Workshop, the toxicity of estuarine waters in the Tamar Estuary (southwest England) was assessed by integration of metal speciation determination with bioassays. High temporal resolution metal speciation analysis was undertaken in situ by deployment of a Voltammetric In situ Profiling (VIP) system. The VIP detects Cd (cadmium), Pb (lead) and Cu (copper) species smaller than 4 nm in size; this fraction is termed 'dynamic' and considered biologically available. Cadmium was mainly present in the dynamic form and constituted between 56% and 100% of the total dissolved concentration, which was determined subsequently in the laboratory in filtered discrete samples. In contrast, the dynamic Pb and Cu fractions were less important, with a much larger proportion of these metals associated with organic ligands and/or colloids (45-90% Pb and 46-85% Cu), which probably reduced the toxicological impact of these elements in this system. Static toxicity tests, based on the response of Crassostrea gigas larvae exposed to discrete water samples, showed a high level of toxicity (up to 100% abnormal development) at two stations in the Tamar, particularly during periods of the tidal cycle when the influence of more pristine coastal water was at its lowest. Competitive ligand-exchange Cu titrations showed that natural organic ligands reduced the free cupric ion concentration to levels that were unlikely to have been the sole cause of the observed toxicity. Nonetheless, it is probable that the combined effect of the metals determined in this work contributed significantly to the bioassay response. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Development of gradient descent adaptive algorithms to remove common mode artifact for improvement of cardiovascular signal quality.

    PubMed

    Ciaccio, Edward J; Micheli-Tzanakou, Evangelia

    2007-07-01

    Common-mode noise degrades cardiovascular signal quality and diminishes measurement accuracy. Filtering to remove noise components in the frequency domain often distorts the signal. Two adaptive noise canceling (ANC) algorithms were tested to adjust weighted reference signals for optimal subtraction from a primary signal. Update of the weight w was based upon the gradient term of the steepest descent equation, where the error ε is the difference between the primary and weighted reference signals. The gradient ∇ was estimated from Δε² and Δw without using a variable Δw in the denominator, which can cause instability. The Parallel Comparison (PC) algorithm computed Δε² using fixed finite differences ±Δw in parallel during each discrete time k. The ALOPEX algorithm computed Δε² × Δw from time k to k + 1 to estimate ∇, with a random number added to account for Δε² · Δw → 0 near the optimal weighting. Using simulated data, both algorithms stably converged to the optimal weighting within 50-2000 discrete sample points k, even with an SNR of 1:8 and weights initialized far from the optimal. Using a sharply pulsatile cardiac electrogram signal with added noise so that the SNR was 1:5, both algorithms exhibited stable convergence within 100 ms (100 sample points). Fourier spectral analysis revealed minimal distortion when comparing the signal without added noise to the ANC-restored signal. ANC algorithms based upon difference calculations can rapidly and stably converge to the optimal weighting in simulated and real cardiovascular data. Signal quality is restored with minimal distortion, increasing the accuracy of biophysical measurement.
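
    A minimal sketch of this kind of finite-difference ANC update, loosely in the spirit of the Parallel Comparison algorithm: the squared error is evaluated at w ± Δw in parallel at each sample, and the weight follows the steepest-descent direction. The signal model, step sizes, and variable names here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
noise = rng.standard_normal(n)                      # common-mode reference
signal = np.sin(2.0 * np.pi * np.arange(n) / 100.0) # stand-in cardiac component
primary = signal + 0.8 * noise                      # optimal weight is 0.8

w, dw, mu = 0.0, 0.01, 0.02
history = []
for k in range(n):
    # parallel comparison: squared error at w + dw and w - dw
    e_plus = (primary[k] - (w + dw) * noise[k]) ** 2
    e_minus = (primary[k] - (w - dw) * noise[k]) ** 2
    grad = (e_plus - e_minus) / (2.0 * dw)  # finite-difference gradient of e^2
    w -= mu * grad                          # steepest-descent weight update
    history.append(w)

w_hat = float(np.mean(history[-1000:]))     # settles near the optimal 0.8
```

    Because the error surface is quadratic in w, the fixed ±Δw finite difference recovers the gradient without a variable Δw in any denominator, which is the stability point the abstract emphasizes.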

  7. Multivariate localization methods for ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-12-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments in which simulated observations are assimilated into the bivariate Lorenz 95 model.
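
    The Schur-product localization described above can be sketched in a few lines; a Gaussian taper stands in for the compactly supported Gaspari-Cohn function that is common in practice, and the state size, ensemble size, and names are illustrative assumptions:

```python
import numpy as np

def schur_localize(P, dist, L):
    """Taper a sample covariance with a distance-dependent correlation.

    A Gaussian taper is used for brevity; Gaspari-Cohn is the usual
    compactly supported choice in operational EnKF systems.
    """
    rho = np.exp(-0.5 * (dist / L) ** 2)   # discretized correlation function
    return P * rho                          # element-wise (Schur) product

rng = np.random.default_rng(1)
n, m = 40, 10                               # state dimension, ensemble size
ensemble = rng.standard_normal((n, m))
P = np.cov(ensemble)                        # noisy ensemble-based covariance
idx = np.arange(n)
dist = np.abs(idx[:, None] - idx[None, :])  # distances on a 1-D grid
P_loc = schur_localize(P, dist, L=3.0)
```

    The taper leaves the diagonal (the variances) untouched while damping the spurious long-range covariances that a 10-member ensemble cannot estimate reliably.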

  8. Detection and quantification of anionic detergent (lissapol) in milk using attenuated total reflectance-Fourier Transform Infrared spectroscopy.

    PubMed

    Jaiswal, Pranita; Jha, Shyam Narayan; Kaur, Jaspreet; Borah, Anjan

    2017-04-15

    Adulteration of milk to gain economic benefit is rampant. Addition of detergent to milk can cause food poisoning and other complications. Fourier Transform Infrared spectroscopy was evaluated as a rapid method for detection and quantification of anionic detergent (lissapol) in milk. Spectra of pure and artificially adulterated milk (0.2-2.0% detergent) samples revealed clear differences in the wavenumber range of 4000-500 cm⁻¹. The apparent variations observed in the regions of 1600-995 and 3040-2851 cm⁻¹ correspond to absorption frequencies of common constituents of detergent (linear alkyl benzene sulphonate). Principal component analysis showed discrete clustering of samples based on the level of detergent (p ≤ 0.05) in milk. The classification efficiency for test samples was recorded to be >93% using the Soft Independent Modelling of Class Analogy approach. The maximum coefficient of determination for prediction of detergent was 0.94 for calibration and 0.93 for validation, using partial least squares regression in the wavenumber combination of 1086-1056, 1343-1333, 1507-1456, and 3040-2851 cm⁻¹. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Distribution of peri-implant stresses with a countertorque device.

    PubMed

    Sendyk, Claudio Luiz; Lopez, Thais Torralbo; de Araujo, Cleudmar Amaral; Sendyk, Wilson Roberto; Goncalvez, Valdir Ferreira

    2013-01-01

    To verify the effectiveness of a countertorque device in dental implants in redistributing stress to the bone-implant interface during tightening of the abutment screw. Two prismatic photoelastic samples containing implants were made, one with a 3.75-mm-diameter implant and the other with a 5.0-mm-diameter implant (both implants had an external-hexagon interface), and the respective abutments were attached (CeraOne). The samples were placed in a support and submitted to torques of 10, 20, 32, and 45 Ncm with an electronic torque meter. The torque application was repeated 10 times on each sample (n = 10) with and without a countertorque device. Photoelastic fringe patterns were recorded, and a photographic register of each test was selected for analysis. The fringe patterns were analyzed at discrete points near the implants' external arch. In both implants analyzed, a reduction of the stress gradient through the implant was observed when the countertorque device was used. The countertorque device used in this study proved to be effective in reducing the stresses generated in the peri-implant bone tissue during torque application.

  10. Determination of total polyphenol index in wines employing a voltammetric electronic tongue.

    PubMed

    Cetó, Xavier; Gutiérrez, Juan Manuel; Gutiérrez, Manuel; Céspedes, Francisco; Capdevila, Josefina; Mínguez, Santiago; Jiménez-Jorquera, Cecilia; del Valle, Manel

    2012-06-30

    This work reports the application of a voltammetric electronic tongue system (ET) made from an array of modified graphite-epoxy composites plus a gold microelectrode to the qualitative and quantitative analysis of polyphenols found in wine. Wine samples were analyzed using cyclic voltammetry without any sample pretreatment. The obtained responses were preprocessed employing the discrete wavelet transform (DWT) in order to compress and extract significant features from the voltammetric signals, and the resulting approximation coefficients fed a multivariate calibration method (artificial neural network, ANN, or partial least squares, PLS) which accomplished the quantification of total polyphenol content. Results for the external test subset were compared with reference values obtained with the Folin-Ciocalteu (FC) method and the UV absorbance polyphenol index (I(280)), with highly significant correlation coefficients of 0.979 and 0.963, respectively, in the range from 50 to 2400 mg L(-1) gallic acid equivalents. In a separate experiment, qualitative discrimination of different polyphenols found in wine was also assessed by principal component analysis (PCA). Copyright © 2012 Elsevier B.V. All rights reserved.
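
    As a rough illustration of the DWT compression step, the following sketch keeps only the Haar approximation coefficients of a stand-in signal; the wavelet choice, decomposition depth, and variable names are assumptions for illustration, not the authors' settings:

```python
import numpy as np

def haar_approximation(x, levels):
    """Keep only the approximation coefficients of a Haar DWT:
    repeated pairwise averaging with a 1/sqrt(2) normalization."""
    a = np.asarray(x, float)
    for _ in range(levels):
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    return a

voltammogram = np.sin(np.linspace(0.0, 2.0 * np.pi, 256))  # stand-in signal
features = haar_approximation(voltammogram, 4)             # 256 -> 16 inputs
```

    For a smooth voltammetric curve the discarded detail coefficients carry little energy, so the 16 approximation coefficients retain most of the signal while shrinking the input that a downstream ANN or PLS model has to handle.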

  11. Developing a discrete choice experiment in Malawi: eliciting preferences for breast cancer early detection services.

    PubMed

    Kohler, Racquel E; Lee, Clara N; Gopal, Satish; Reeve, Bryce B; Weiner, Bryan J; Wheeler, Stephanie B

    2015-01-01

    In Malawi, routine breast cancer screening is not available and little is known about women's preferences regarding early detection services. Discrete choice experiments are increasingly used to reveal preferences about new health services; however, selecting appropriate attributes that describe a new health service is imperative to ensure validity of the choice experiment. To identify important factors that are relevant to Malawian women's preferences for breast cancer detection services and to select attributes and levels for a discrete choice experiment in a setting where both breast cancer early detection and choice experiments are rare. We reviewed the literature to establish an initial list of potential attributes and levels for a discrete choice experiment and conducted qualitative interviews with health workers and community women to explore relevant local factors affecting decisions to use cancer detection services. We tested the design through cognitive interviews and refined the levels, descriptions, and designs. Themes that emerged from interviews provided critical information about breast cancer detection services, specifically, that breast cancer interventions should be integrated into other health services because asymptomatic screening may not be practical as an individual service. Based on participants' responses, the final attributes of the choice experiment included travel time, health encounter, health worker type and sex, and breast cancer early detection strategy. Cognitive testing confirmed the acceptability of the final attributes, comprehension of choice tasks, and women's abilities to make trade-offs. Applying a discrete choice experiment for breast cancer early detection was feasible with appropriate tailoring for a low-income, low-literacy African setting.

  12. Violent patients: what Italian psychiatrists feel and how this could change their patient care.

    PubMed

    Catanesi, Roberto; Carabellese, Felice; Candelli, Chiara; Valerio, Antonia; Martinelli, Domenico

    2010-06-01

    The study takes a detailed look at psychiatric patients' violence towards their psychiatrists. It takes into consideration the views and opinions of Italian psychiatrists, whether they have experienced violent behaviour first hand and, if so, which type of aggression and whether this caused them to modify their behaviour towards the patient and his or her treatment. A multiple-choice questionnaire was sent to all members of the Italian Society of Psychiatry, with 1,202 psychiatrists responding (20.23% of the sample). The data were evaluated using SPSS, with chi-square tests for discrete and continuous variables and t-tests for independent samples (significance p < .05). Almost all psychiatrists (90.9%) have experienced verbal aggression; 72% have been threatened with dangerous objects and 64.58% have suffered physical aggression. Experiences of physical aggression result in a 50% increase in the probability of modifying one's therapeutic behaviour. Significant differences emerge between the psychiatrists according to differences in age and career experience. Psychiatrists state that they do not consider themselves adequately prepared to deal with the violence of patients, and almost all felt the need for specific training in how to manage such violence.

  13. Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.

    PubMed

    Beentjes, Casper H L; Baker, Ruth E

    2018-05-25

    Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from the typically slow O(1/√N) convergence rate as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely τ-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
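
    The core idea, replacing the pseudo-random input stream with a low-discrepancy one, can be illustrated on a plain quadrature toy problem with a hand-rolled van der Corput sequence; this is a generic sketch, not the paper's τ-leaping experiment:

```python
import numpy as np

def van_der_corput(n, base=2):
    """Radical-inverse (van der Corput) low-discrepancy points in [0, 1)."""
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i
        while k > 0:
            f /= base
            x += f * (k % base)   # reflect the digits of i about the point
            k //= base
        seq[i] = x
    return seq

n = 1024
exact = 1.0 / 3.0                 # integral of x^2 over [0, 1]
qmc_err = abs(np.mean(van_der_corput(n) ** 2) - exact)
mc_err = abs(np.mean(np.random.default_rng(2).random(n) ** 2) - exact)
```

    For this smooth integrand the low-discrepancy estimate decays roughly like 1/N, well below the O(1/√N) statistical error of plain Monte Carlo at the same sample count.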

  14. Tape Cassette Bacteria Detection System

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The design, fabrication, and testing of an automatic bacteria detection system with a zero-g capability and based on the filter-capsule approach is described. This system is intended for monitoring the sterility of regenerated water in a spacecraft. The principle of detection is based on measuring the increase in chemiluminescence produced by the action of bacterial porphyrins (i.e., catalase, cytochromes, etc.) on a luminol-hydrogen peroxide mixture. Since viable as well as nonviable organisms initiate this luminescence, viable organisms are detected by comparing the signal of an incubated water sample with an unincubated control. Higher signals for the former indicate the presence of viable organisms. System features include disposable sealed sterile capsules, each containing a filter membrane, for processing discrete water samples and a tape transport for moving these capsules through a processing sequence which involves sample concentration, nutrient addition, incubation, a 4 Molar Urea wash and reaction with luminol-hydrogen peroxide in front of a photomultiplier tube. Liquids are introduced by means of a syringe needle which pierces a rubber septum contained in the wall of the capsule. Detection thresholds obtained with this unit towards E. coli and S. marcescens assuming a 400 ml water sample are indicated.

  15. Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing

    NASA Astrophysics Data System (ADS)

    Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.

    2018-04-01

    We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance sampling technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly, yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free energy, and the discrete-valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.
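
    A toy sketch of the general weight-learning idea: a Wang-Landau-style penalty is added to whichever set of a two-set partition the walker currently occupies, pushing it out of metastable states. The double-well energy, learning rate, and proposal scale are arbitrary illustrative choices, and neither the partial-biasing fraction nor the authors' specific updating strategy is reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)
energy = lambda x: 4.0 * (x * x - 1.0) ** 2   # toy double-well target
part = lambda x: 0 if x < 0.0 else 1          # two-set partition (collective var.)

theta = np.zeros(2)          # running log-weight estimates per set
x, gamma = -1.0, 0.1         # current state, learning rate
visits = np.zeros(2, dtype=int)
for _ in range(20000):
    y = x + rng.normal(0.0, 0.5)              # random-walk proposal
    # Metropolis ratio for the biased target pi(x) * exp(-theta[S(x)])
    log_a = (energy(x) + theta[part(x)]) - (energy(y) + theta[part(y)])
    if np.log(rng.random()) < log_a:
        x = y
    theta[part(x)] += gamma                   # penalize the visited set
    theta -= theta.mean()                     # fix the additive gauge
    visits[part(x)] += 1
```

    As the penalty on the occupied well accumulates, transitions between the wells become frequent, so both sets end up visited in roughly comparable proportions despite the energy barrier.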

  16. Stability of Dynamical Systems with Discontinuous Motions:

    NASA Astrophysics Data System (ADS)

    Michel, Anthony N.; Hou, Ling

    In this paper we present a stability theory for discontinuous dynamical systems (DDS): continuous-time systems whose motions are not necessarily continuous with respect to time. We show that this theory is not only applicable in the analysis of DDS, but also in the analysis of continuous dynamical systems (continuous-time systems whose motions are continuous with respect to time), discrete-time dynamical systems (systems whose motions are defined at discrete points in time) and hybrid dynamical systems (HDS) (systems whose descriptions involve simultaneously continuous-time and discrete-time). We show that the stability results for DDS are in general less conservative than the corresponding well-known classical Lyapunov results for continuous dynamical systems and discrete-time dynamical systems. Although the DDS stability results are applicable to general dynamical systems defined on metric spaces (divorced from any kind of description by differential equations, or any other kinds of equations), we confine ourselves to finite-dimensional dynamical systems defined by ordinary differential equations and difference equations, to make this paper as widely accessible as possible. We present only sample results, namely, results for uniform asymptotic stability in the large.

  17. Development of Discrete Compaction Bands in Two Porous Sandstones

    NASA Astrophysics Data System (ADS)

    Tembe, S.; Baud, P.; Wong, T.

    2003-12-01

    Compaction band formation has been documented by recent field and laboratory studies as a localized failure mode occurring in porous sandstones. The coupling of compaction and localization may significantly alter the stress field and strain partitioning, and such bands may act as barriers within reservoirs. Two end-members of this failure mode that develop subperpendicular to the maximum principal stress have been identified: numerous discrete compaction bands with a thickness of only several grains, or a few diffuse bands that are significantly thicker. Much of what is known about discrete compaction bands derives from laboratory experiments performed on the relatively homogeneous Bentheim sandstone with 23% porosity. In this study we observe similar compaction localization behavior in the Diemelstadt sandstone, which has an initial porosity of 24.4% and a modal composition of 68% quartz, 26% feldspar, 4% oxides, and 2% micas. CT scans of the Diemelstadt sandstone indicate bedding corresponding to low-porosity laminae. Saturated samples cored perpendicular to bedding were deformed at room temperature under drained conditions at a constant pore pressure of 10 MPa and a confining pressure range of 20-175 MPa. Acoustic emission activity and pore volume change were recorded continuously. Samples were deformed to axial strains of 1-4% and recovered from the triaxial cell for microstructural analysis. The mechanical data map the transition in failure mode from brittle faulting to compactive cataclastic flow. The brittle regime occurred at effective pressures up to 40 MPa, associated with failure by conjugate shear bands. At an effective pressure range of 60-175 MPa, strain hardening and shear-enhanced compaction were accompanied by the development of discrete compaction bands, manifested by episodic surges of acoustic emission.
Preliminary microstructural observations of the failed samples suggest that bedding influenced the band orientations, which vary between 75-90° relative to the maximum principal stress. Our study demonstrates that despite their different mineralogy, the failure modes and the development of compaction localization are similar in the Diemelstadt and Bentheim sandstones.

  18. Ensemble-type numerical uncertainty information from single model integrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size as those of a stochastic physics ensemble.

  19. Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions

    DOE PAGES

    Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.

    2017-01-09

    We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl. [to appear]). This technique is designed to approximate expectations with respect to probability laws associated to a discretization; for instance, in the context of inverse problems, one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort required to achieve a given error can be less than that of independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state-spaces. The assumptions are based upon multiplicative drift conditions as in Kontoyiannis and Meyn (Electron. J. Probab. 10 [2005]: 61-123). The assumptions are verified for an example.

  1. Monte Carlo algorithms for Brownian phylogenetic models.

    PubMed

    Horvilleur, Benjamin; Lartillot, Nicolas

    2014-11-01

    Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. The program is freely available at www.phylobayes.org. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
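
    Sampling a fine-grained discretization of a Brownian trajectory between two fixed node values amounts to drawing a Brownian bridge, point by point, along the branch. A generic univariate sketch (not the authors' multivariate implementation) follows:

```python
import numpy as np

def brownian_bridge(rng, x0, x1, times, sigma=1.0):
    """Sequentially sample a discretized Brownian path on `times`,
    conditioned on the endpoint values x0 (at times[0]) and x1 (at times[-1])."""
    T = times[-1]
    path = [x0]
    x = x0
    for s, u in zip(times[:-1], times[1:]):
        # conditional law of the bridge at time u, given x at time s
        mean = x + (u - s) / (T - s) * (x1 - x)
        var = sigma ** 2 * (u - s) * (T - u) / (T - s)
        x = rng.normal(mean, np.sqrt(var))
        path.append(x)
    return np.asarray(path)

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 101)          # fine-grained branch discretization
traj = brownian_bridge(rng, 0.0, 2.0, grid)
```

    With sigma set to 0 the conditional variance vanishes and the bridge collapses to linear interpolation between the node values, which is a convenient sanity check on the conditional mean.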

  2. Differential porosimetry and permeametry for random porous media.

    PubMed

    Hilfer, R; Lemmer, A

    2015-07-01

    Accurate determination of geometrical and physical properties of natural porous materials is notoriously difficult. Continuum multiscale modeling has provided carefully calibrated realistic microstructure models of reservoir rocks with floating point accuracy. Previous measurements using synthetic microcomputed tomography (μ-CT) were based on extrapolation of resolution-dependent properties for discrete digitized approximations of the continuum microstructure. This paper reports continuum measurements of volume and specific surface with full floating point precision. It also corrects an incomplete description of rotations in earlier publications. More importantly, the methods of differential permeametry and differential porosimetry are introduced as precision tools. The continuum microstructure chosen to exemplify the methods is a homogeneous, carefully calibrated and characterized model for Fontainebleau sandstone. The sample has been publicly available since 2010 on the worldwide web as a benchmark for methodical studies of correlated random media. High-precision porosimetry gives the volume and internal surface area of the sample with floating point accuracy. Continuum results with floating point precision are compared to discrete approximations. Differential porosities and differential surface area densities allow geometrical fluctuations to be discriminated from discretization effects and numerical noise. Differential porosimetry and Fourier analysis reveal subtle periodic correlations. The findings uncover small oscillatory correlations with a period of roughly 850 μm, thus implying that the sample is not strictly stationary. The correlations are attributed to the deposition algorithm that was used to ensure the grain overlap constraint. Differential permeabilities are introduced and studied. Differential porosities and permeabilities provide scale-dependent information on geometry fluctuations, thereby allowing quantitative error estimates.
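
    The kind of Fourier analysis used to reveal such a periodic correlation can be sketched generically: build a toy profile carrying a small oscillation of period 850 μm and read the dominant period off its spectrum. The sampling step, profile length, and amplitudes below are invented for illustration and do not come from the paper:

```python
import numpy as np

dx = 10.0                                    # assumed sampling step, micrometers
x = np.arange(0.0, 40000.0, dx)              # 40 mm toy profile
profile = 0.15 + 0.002 * np.sin(2.0 * np.pi * x / 850.0)  # weak 850 um oscillation

spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
freqs = np.fft.rfftfreq(x.size, d=dx)        # cycles per micrometer
period = 1.0 / freqs[np.argmax(spectrum)]    # dominant period, micrometers
```

    Subtracting the mean removes the DC bin, so the spectral peak sits at the oscillation frequency and the recovered period lands on the FFT bin nearest 850 μm.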

  3. Pore-level numerical analysis of the infrared surface temperature of metallic foam

    NASA Astrophysics Data System (ADS)

    Li, Yang; Xia, Xin-Lin; Sun, Chuang; Tan, He-Ping; Wang, Jing

    2017-10-01

    Open-cell metallic foams are increasingly used in various thermal systems. The temperature distributions are significant for the comprehensive understanding of these foam-based engineering applications. This study aims to numerically investigate the modeling of the infrared surface temperature (IRST) of open-cell metallic foam measured by an infrared camera placed above the sample. Two typical approaches based on Backward Monte Carlo simulation are developed to estimate the IRSTs: the first one, the discrete-scale approach (DSA), uses a realistic discrete representation of the foam structure obtained from a computed tomography reconstruction, while the second one, the continuous-scale approach (CSA), assumes that the foam sample behaves like a continuous homogeneous semi-transparent medium. The radiative properties employed in the CSA are directly determined by a ray-tracing process inside the discrete foam representation. The IRSTs for different material properties (material emissivity, specularity parameter) are computed by the two approaches. The results show that local IRSTs can vary according to the local compositions of the foam surface (void and solid). The temperature difference between void and solid areas is gradually attenuated with increasing material emissivity. In addition, the annular void space near the foam surface behaves like a black cavity for thermal radiation, an effect produced by the many neighboring skeletons. For most of the cases studied, the mean IRSTs computed by the DSA and CSA are close to each other, except when the material emissivity is very low and the sample temperature is extremely high.

  4. Development of a Rolling Dynamic Deflectometer for Continuous Deflection Testing of Pavements

    DOT National Transportation Integrated Search

    1998-05-01

    A rolling dynamic deflectometer (RDD) was developed as a nondestructive method for determining continuous deflection profiles of pavements. Unlike other commonly used pavement testing methods, the RDD performs continuous rather than discrete measurements.

  5. The Emotions of Socialization-Related Learning: Understanding Workplace Adaptation as a Learning Process.

    ERIC Educational Resources Information Center

    Reio, Thomas G., Jr.

    The influence of selected discrete emotions on socialization-related learning and perception of workplace adaptation was examined in an exploratory study. Data were collected from 233 service workers in 4 small and medium-sized companies in metropolitan Washington, D.C. The sample members' average age was 32.5 years, and the sample's racial makeup…

  6. Completion summary for borehole USGS 136 near the Advanced Test Reactor Complex, Idaho National Laboratory, Idaho

    USGS Publications Warehouse

    Twining, Brian V.; Bartholomay, Roy C.; Hodges, Mary K.V.

    2012-01-01

    In 2011, the U.S. Geological Survey, in cooperation with the U.S. Department of Energy, cored and completed borehole USGS 136 for stratigraphic framework analyses and long-term groundwater monitoring of the eastern Snake River Plain aquifer at the Idaho National Laboratory. The borehole was initially cored to a depth of 1,048 feet (ft) below land surface (BLS) to collect core, open-borehole water samples, and geophysical data. After these data were collected, borehole USGS 136 was cemented and backfilled between 560 and 1,048 ft BLS. The final construction of borehole USGS 136 required that the borehole be reamed to allow for installation of 6-inch (in.) diameter carbon-steel casing and 5-in. diameter stainless-steel screen; the screened monitoring interval was completed between 500 and 551 ft BLS. A dedicated pump and water-level access line were placed to allow for aquifer testing, for collecting periodic water samples, and for measuring water levels. Geophysical and borehole video logs were collected after coring and after the completion of the monitor well. Geophysical logs were examined in conjunction with the borehole core to describe borehole lithology and to identify primary flow paths for groundwater, which occur in intervals of fractured and vesicular basalt. A single-well aquifer test was used to define hydraulic characteristics for borehole USGS 136 in the eastern Snake River Plain aquifer. Specific capacity, transmissivity, and hydraulic conductivity from the aquifer test were at least 975 gallons per minute per foot, 1.4 × 10⁵ feet squared per day (ft²/d), and 254 feet per day, respectively. The amount of measurable drawdown during the aquifer test was about 0.02 ft.
The transmissivity for borehole USGS 136 was in the range of values determined from previous aquifer tests conducted in other wells near the Advanced Test Reactor Complex: 9.5 × 10^3 to 1.9 × 10^5 ft^2/d. Water samples were analyzed for cations, anions, metals, nutrients, total organic carbon, volatile organic compounds, stable isotopes, and radionuclides. Water samples from borehole USGS 136 indicated that concentrations of tritium, sulfate, and chromium were affected by wastewater disposal practices at the Advanced Test Reactor Complex. Depth-discrete groundwater samples were collected in the open borehole USGS 136 near 965, 710, and 573 ft BLS using a thief sampler; on the basis of selected constituents, deeper groundwater samples showed no influence from wastewater disposal at the Advanced Test Reactor Complex.
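
The reported aquifer-test quantities can be tied together with a couple of one-line formulas. A minimal Python sketch follows; the pumping rate is an assumption chosen only to reproduce the reported specific capacity, since the rate itself is not stated in this summary.

```python
# Hedged sketch: back-of-envelope single-well aquifer-test quantities for
# borehole USGS 136. The pumping rate is an assumption, not a reported value.

def specific_capacity(q_gpm, drawdown_ft):
    """Specific capacity in gallons per minute per foot of drawdown."""
    return q_gpm / drawdown_ft

def hydraulic_conductivity(transmissivity_ft2_per_day, thickness_ft):
    """K = T / b for a homogeneous aquifer of saturated thickness b."""
    return transmissivity_ft2_per_day / thickness_ft

# Reported drawdown was about 0.02 ft; an assumed rate of 19.5 gal/min
# reproduces the reported specific capacity of at least 975 (gal/min)/ft.
sc = specific_capacity(19.5, 0.02)

# The reported T = 1.4e5 ft^2/d and K = 254 ft/d imply an effective
# aquifer thickness of roughly T / K, on the order of 550 ft.
b_effective = 1.4e5 / 254.0
```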

  7. The isolation of spatial patterning modes in a mathematical model of juxtacrine cell signalling.

    PubMed

    O'Dea, R D; King, J R

    2013-06-01

    Juxtacrine signalling mechanisms are known to be crucial in tissue and organ development, leading to spatial patterns in gene expression. We investigate the patterning behaviour of a discrete model of juxtacrine cell signalling due to Owen & Sherratt (1998, Mathematical modelling of juxtacrine cell signalling. Math. Biosci., 153, 125-150) in which ligand molecules, unoccupied receptors and bound ligand-receptor complexes are modelled. Feedback between the ligand and receptor production and the level of bound receptors is incorporated. By isolating two parameters associated with the feedback strength and employing numerical simulation, linear stability and bifurcation analysis, the pattern-forming behaviour of the model is analysed under regimes corresponding to lateral inhibition and induction. Linear analysis of this model fails to capture the patterning behaviour exhibited in numerical simulations. Via bifurcation analysis, we show that since the majority of periodic patterns fold subcritically from the homogeneous steady state, a wide variety of stable patterns exists at a given parameter set, providing an explanation for this failure. The dominant pattern is isolated via numerical simulation. Additionally, by sampling patterns of non-integer wavelength on a discrete mesh, we highlight a disparity between the continuous and discrete representations of signalling mechanisms: in the continuous case, patterns of arbitrary wavelength are possible, while sampling such patterns on a discrete mesh leads to longer wavelength harmonics being selected where the wavelength is rational; in the irrational case, the resulting aperiodic patterns exhibit 'local periodicity', being constructed from distorted stable shorter wavelength patterns. This feature is consistent with experimentally observed patterns, which typically display approximate short-range periodicity with defects.

  8. An electronic circuit for sensing malfunctions in test instrumentation

    NASA Technical Reports Server (NTRS)

    Miller, W. M., Jr.

    1969-01-01

Monitoring device differentiates between malfunctions occurring in the system undergoing test and malfunctions within the test instrumentation itself. Electronic circuits in the monitor use transistors to commutate silicon controlled rectifiers by removing the drive voltage; display circuits are then used to monitor multiple discrete lines.

  9. A New On-the-Fly Sampling Method for Incoherent Inelastic Thermal Neutron Scattering Data in MCNP6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlou, Andrew Theodore; Brown, Forrest B.; Ji, Wei

    2014-09-02

At thermal energies, the scattering of neutrons in a system is complicated by the comparable velocities of the neutron and target, resulting in competing upscattering and downscattering events. The neutron wavelength is also similar in size to the target's interatomic spacing, making the scattering process a quantum mechanical problem. Because of the complicated nature of scattering at low energies, the thermal data files in ACE format used in continuous-energy Monte Carlo codes are quite large (on the order of megabytes for a single temperature and material). In this paper, a new storage and sampling method is introduced that is orders of magnitude smaller in size and is used to sample scattering parameters at any temperature on-the-fly. In addition to the reduction in storage, the need to pre-generate thermal scattering data tables at fine temperatures has been eliminated. This is advantageous for multiphysics simulations which may involve temperatures not known in advance. A new module was written for MCNP6 that bypasses the current S(α,β) table lookup in favor of the new format. The new on-the-fly sampling method was tested for graphite in two benchmark problems at ten temperatures: 1) an eigenvalue test with a fuel compact of uranium oxycarbide fuel homogenized into a graphite matrix, and 2) a surface current test with a "broomstick" problem with a monoenergetic point source. The largest eigenvalue difference was 152 pcm for T = 1200 K. For the temperatures and incident energies chosen for the broomstick problem, the secondary neutron spectrum showed good agreement with the traditional S(α,β) sampling method. These preliminary results show that sampling thermal scattering data on-the-fly is a viable option to eliminate both the storage burden of keeping thermal data at discrete temperatures and the need to know temperatures before simulation runtime.

  10. The influence of gene expression profiling on decisional conflict in decision making for early-stage breast cancer chemotherapy.

    PubMed

    MacDonald, Karen V; Bombard, Yvonne; Deal, Ken; Trudeau, Maureen; Leighl, Natasha; Marshall, Deborah A

    2016-07-01

Women with early-stage breast cancer, of whom only 15% will experience a recurrence, are often conflicted or uncertain about taking chemotherapy. Gene expression profiling (GEP) of tumours informs risk prediction, potentially affecting treatment decisions. We examined whether receiving a GEP test score reduces decisional conflict in chemotherapy treatment decision making. A general population sample of 200 women completed the decisional conflict scale (DCS) at baseline (no GEP test score scenario) and after (scenario with GEP test score added) completing a discrete choice experiment survey for early-stage breast cancer chemotherapy. We scaled the 16-item DCS total scores and subscores from 0 to 100 and calculated means, standard deviations and change in scores, with significance (p < 0.05) based on matched pairs t-tests. We identified five respondent subgroups based on preferred treatment option; almost 40% did not change their chemotherapy decision after receiving GEP testing information. Total score and all subscores (uncertainty, informed, values clarity, support, and effective decision) decreased significantly in the respondent subgroup who were unsure about taking chemotherapy initially but changed to no chemotherapy (n = 33). In the subgroup of respondents (n = 25) who chose chemotherapy initially but changed to unsure, effective decision subscore increased significantly. In the overall sample, changes in total and all subscores were non-significant. GEP testing adds value for women initially unsure about chemotherapy treatment with a decrease in decisional conflict. However, for women who are confident about their treatment decisions, GEP testing may not add value. Decisions to request GEP testing should be personalised based on patient preferences. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Determination of Parachute Joint Factors using Seam and Joint Testing

    NASA Technical Reports Server (NTRS)

    Mollmann, Catherine

    2015-01-01

This paper details the methodology for determining the joint factor for all parachute components. This method has been successfully implemented on the Capsule Parachute Assembly System (CPAS) for the NASA Orion crew module for use in determining the margin of safety for each component under peak loads. Also discussed are concepts behind the joint factor and what drives the loss of material strength at joints. The joint factor is defined as a "loss in joint strength...relative to the basic material strength" that occurs when "textiles are connected to each other or to metals." During the CPAS engineering development phase, a conservative joint factor of 0.80 was assumed for each parachute component. In order to refine this factor and eliminate excess conservatism, a seam and joint testing program was implemented as part of the structural validation. This method split each of the parachute structural joints into discrete tensile tests designed to duplicate the loading of each joint. Breaking strength data collected from destructive pull testing was then used to calculate the joint factor in the form of an efficiency. Joint efficiency is the percentage of the base material strength that remains after degradation due to sewing or interaction with other components; it is used interchangeably with joint factor in this paper. Parachute materials vary in type (mainly cord, tape, webbing, and cloth), which requires different test fixtures and joint sample construction methods. This paper defines guidelines for designing and testing samples based on materials and test goals. Using the test methodology and analysis approach detailed in this paper, the minimum joint factor for each parachute component can be formulated. The joint factors can then be used to calculate the design factor and margin of safety for that component, a critical part of the design verification process.
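
The efficiency arithmetic described above is simple enough to sketch. The following Python fragment is a hypothetical illustration of the joint-factor bookkeeping; the margin-of-safety formula follows common structural practice, and every number is invented for the example.

```python
# Hypothetical illustration of joint-factor (joint-efficiency) bookkeeping.
# All strengths, loads, and factors below are invented, not CPAS values.

def joint_efficiency(joint_breaking_strength, base_material_strength):
    """Fraction of base material strength remaining after joint degradation."""
    return joint_breaking_strength / base_material_strength

def margin_of_safety(joint_breaking_strength, design_factor, limit_load):
    """MS = allowable / (DF * limit) - 1; positive means the joint passes."""
    return joint_breaking_strength / (design_factor * limit_load) - 1.0

# A sewn webbing joint that breaks at 850 lbf against 1000 lbf base
# material has efficiency 0.85, better than the conservative 0.80
# assumed during the engineering development phase.
eff = joint_efficiency(850.0, 1000.0)
ms = margin_of_safety(850.0, design_factor=1.6, limit_load=400.0)
```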

  12. Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Chang, K. C.

    2005-05-01

Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution under any time constraint. Several simulation methods are currently available: logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, then we propose an improved importance sampling algorithm called the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution for a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from the previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
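
For context, the baseline likelihood-weighting method that LGIS is compared against can be sketched for a minimal hybrid network with one discrete parent and one continuous Gaussian child. This is an illustration of plain LW, not the authors' LGIS algorithm; the network and all parameters are invented.

```python
# Hedged sketch of plain likelihood weighting on a tiny hybrid network
# D -> X with discrete D and continuous X ~ Normal(mu[D], sigma).
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def likelihood_weighting(x_obs, prior, mu, sigma, n=50_000, seed=0):
    """Estimate P(D = d | X = x_obs): sample D from its prior and weight
    each sample by the likelihood of the continuous evidence."""
    rng = random.Random(seed)
    states, probs = zip(*prior.items())
    weights = {d: 0.0 for d in prior}
    for _ in range(n):
        d = rng.choices(states, probs)[0]
        weights[d] += normal_pdf(x_obs, mu[d], sigma)
    total = sum(weights.values())
    return {d: w / total for d, w in weights.items()}

post = likelihood_weighting(x_obs=1.8, prior={0: 0.5, 1: 0.5},
                            mu={0: 0.0, 1: 2.0}, sigma=1.0)
# Evidence near mu[1] should make state 1 much more probable.
```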

  13. Implementation Strategies for Large-Scale Transport Simulations Using Time Domain Particle Tracking

    NASA Astrophysics Data System (ADS)

    Painter, S.; Cvetkovic, V.; Mancillas, J.; Selroos, J.

    2008-12-01

    Time domain particle tracking is an emerging alternative to the conventional random walk particle tracking algorithm. With time domain particle tracking, particles are moved from node to node on one-dimensional pathways defined by streamlines of the groundwater flow field or by discrete subsurface features. The time to complete each deterministic segment is sampled from residence time distributions that include the effects of advection, longitudinal dispersion, a variety of kinetically controlled retention (sorption) processes, linear transformation, and temporal changes in groundwater velocities and sorption parameters. The simulation results in a set of arrival times at a monitoring location that can be post-processed with a kernel method to construct mass discharge (breakthrough) versus time. Implementation strategies differ for discrete flow (fractured media) systems and continuous porous media systems. The implementation strategy also depends on the scale at which hydraulic property heterogeneity is represented in the supporting flow model. For flow models that explicitly represent discrete features (e.g., discrete fracture networks), the sampling of residence times along segments is conceptually straightforward. For continuous porous media, such sampling needs to be related to the Lagrangian velocity field. Analytical or semi-analytical methods may be used to approximate the Lagrangian segment velocity distributions in aquifers with low-to-moderate variability, thereby capturing transport effects of subgrid velocity variability. If variability in hydraulic properties is large, however, Lagrangian velocity distributions are difficult to characterize and numerical simulations are required; in particular, numerical simulations are likely to be required for estimating the velocity integral scale as a basis for advective segment distributions. Aquifers with evolving heterogeneity scales present additional challenges. 
Large-scale simulations of radionuclide transport at two potential repository sites for high-level radioactive waste will be used to demonstrate the potential of the method. The simulations considered approximately 1000 source locations, multiple radionuclides with contrasting sorption properties, and abrupt changes in groundwater velocity associated with future glacial scenarios. Transport pathways linking the source locations to the accessible environment were extracted from discrete feature flow models that include detailed representations of the repository construction (tunnels, shafts, and emplacement boreholes) embedded in stochastically generated fracture networks. Acknowledgment: The authors are grateful to the SwRI Advisory Committee for Research, the Swedish Nuclear Fuel and Waste Management Company, and Posiva Oy for financial support.
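
The core sampling loop of time domain particle tracking is easy to sketch: each particle's arrival time is the sum of independently sampled segment residence times, and a kernel turns the ensemble of arrivals into a breakthrough curve, as described above. In this hedged Python illustration, lognormal segment distributions and a Gaussian kernel are assumptions; the real method samples distributions that also encode dispersion, sorption, and transient velocities.

```python
# Hedged sketch of time domain particle tracking along a fixed pathway.
# Lognormal residence-time distributions are illustrative assumptions.
import math
import random

def sample_arrival_time(rng, segment_params):
    """Sum one lognormal residence time per pathway segment."""
    return sum(rng.lognormvariate(mu, sigma) for mu, sigma in segment_params)

def breakthrough(arrivals, t, bandwidth):
    """Kernel-smoothed arrival-time density (relative mass discharge) at t."""
    norm = 1.0 / (len(arrivals) * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((t - a) / bandwidth) ** 2) for a in arrivals)

rng = random.Random(1)
segments = [(1.0, 0.3), (0.5, 0.2), (1.5, 0.4)]   # (mu, sigma) per segment
arrivals = [sample_arrival_time(rng, segments) for _ in range(2000)]
mean_arrival = sum(arrivals) / len(arrivals)
```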

  14. Laser based water equilibration method for d18O determination of water samples

    NASA Astrophysics Data System (ADS)

    Mandic, Magda; Smajgl, Danijela; Stoebener, Nils

    2017-04-01

Determination of d18O by the water equilibration method, using mass spectrometers equipped with an equilibration unit or a Gas Bench, has been established for many years. The development of laser spectrometers now extends the methods and possibilities for applying different technologies, both in the laboratory and in the field. The Thermo Scientific™ Delta Ray™ Isotope Ratio Infrared Spectrometer (IRIS) analyzer with the Universal Reference Interface (URI) Connect and Teledyne Cetac ASX-7100 offers high precision and sample throughput. It employs optical spectroscopy for continuous measurement of isotope ratio values and concentration of carbon dioxide in ambient air, and also for analysis of discrete samples from vials, syringes, bags, or other user-provided sample containers. Test measurements confirming the precision and accuracy of the method for determining d18O in water samples were made in the Thermo Fisher application laboratory with three laboratory standards, namely ANST, Ocean II, and HBW. All laboratory standards were previously calibrated against the international reference materials VSMOW2 and SLAP2 to assure the accuracy of the isotopic values of the water. With the method presented in this work, the achieved repeatability and accuracy are 0.16‰ and 0.71‰, respectively, which fulfills the requirements of the regulatory method for wine and must after equilibration with CO2.

  15. Simulation-based Testing of Control Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozmen, Ozgur; Nutaro, James J.; Sanyal, Jibonananda

It is impossible to adequately test complex software by examining its operation in a physical prototype of the system monitored. Adequate test coverage can require millions of test cases, and the cost of equipment prototypes combined with the real-time constraints of testing with them makes it infeasible to sample more than a small number of these tests. Model-based testing seeks to avoid this problem by allowing for large numbers of relatively inexpensive virtual prototypes that operate in simulation time at a speed limited only by the available computing resources. In this report, we describe how a computer system emulator can be used as part of a model-based testing environment; specifically, we show that a complete software stack, including operating system and application software, can be deployed within a simulated environment, and that these simulations can proceed as fast as possible. To illustrate this approach to model-based testing, we describe how it is being used to test several building control systems that act to coordinate air conditioning loads for the purpose of reducing peak demand. These tests involve the use of ADEVS (A Discrete Event System Simulator) and QEMU (Quick Emulator) to host the operational software within the simulation, and a building model developed with the MODELICA programming language using the Buildings Library and packaged as an FMU (Functional Mock-up Unit) that serves as the virtual test environment.

  16. Effect of diatomic molecular properties on binary laser pulse optimizations of quantum gate operations.

    PubMed

    Zaari, Ryan R; Brown, Alex

    2011-07-28

    The importance of the ro-vibrational state energies on the ability to produce high fidelity binary shaped laser pulses for quantum logic gates is investigated. The single frequency 2-qubit ACNOT(1) and double frequency 2-qubit NOT(2) quantum gates are used as test cases to examine this behaviour. A range of diatomics is sampled. The laser pulses are optimized using a genetic algorithm for binary (two amplitude and two phase parameter) variation on a discretized frequency spectrum. The resulting trends in the fidelities were attributed to the intrinsic molecular properties and not the choice of method: a discretized frequency spectrum with genetic algorithm optimization. This is verified by using other common laser pulse optimization methods (including iterative optimal control theory), which result in the same qualitative trends in fidelity. The results differ from other studies that used vibrational state energies only. Moreover, appropriate choice of diatomic (relative ro-vibrational state arrangement) is critical for producing high fidelity optimized quantum logic gates. It is also suggested that global phase alignment imposes a significant restriction on obtaining high fidelity regions within the parameter search space. Overall, this indicates a complexity in the ability to provide appropriate binary laser pulse control of diatomics for molecular quantum computing. © 2011 American Institute of Physics

  17. Deleting 'irrational' responses from discrete choice experiments: a case of investigating or imposing preferences?

    PubMed

    Lancsar, Emily; Louviere, Jordan

    2006-08-01

    Investigation of the 'rationality' of responses to discrete choice experiments (DCEs) has been a theme of research in health economics. Responses have been deleted from DCEs where they have been deemed by researchers to (a) be 'irrational', defined by such studies as failing tests for non-satiation, or (b) represent lexicographic preferences. This paper outlines a number of reasons why deleting responses from DCEs may be inappropriate after first reviewing the theory underpinning rationality, highlighting that the importance placed on rationality depends on the approach to consumer theory to which one ascribes. The aim of this paper is not to suggest that all preferences elicited via DCEs are rational. Instead, it is to suggest a number of reasons why it may not be the case that all preferences labelled as 'irrational' are indeed so. Hence, deleting responses may result in the removal of valid preferences; induce sample selection bias; and reduce the statistical efficiency and power of the estimated choice models. Further, evidence suggests random utility theory may be able to cope with such preferences. Finally, we discuss a number of implications for the design, implementation and interpretation of DCEs and recommend caution regarding the deletion of preferences from stated preference experiments. Copyright 2006 John Wiley & Sons, Ltd.

  18. The shift-invariant discrete wavelet transform and application to speech waveform analysis.

    PubMed

    Enders, Jörg; Geng, Weihua; Li, Peijun; Frazier, Michael W; Scholl, David J

    2005-04-01

The discrete wavelet transform may be used as a signal-processing tool for visualization and analysis of nonstationary, time-sampled waveforms. The highly desirable property of shift invariance can be obtained at the cost of a moderate increase in computational complexity, and accepting a least-squares inverse (pseudoinverse) in place of a true inverse. A new algorithm for the pseudoinverse of the shift-invariant transform that is easier to implement in array-oriented scripting languages than existing algorithms is presented, together with self-contained proofs. Representing only one of the many and varied potential applications, a recorded speech waveform illustrates the benefits of shift invariance with pseudoinvertibility. Visualization shows the glottal modulation of vowel formants and frication noise, revealing secondary glottal pulses and other waveform irregularities. Additionally, performing sound waveform editing operations (i.e., cutting and pasting sections) on the shift-invariant wavelet representation automatically produces quiet, click-free section boundaries in the resulting sound. The capabilities of this wavelet-domain editing technique are demonstrated by changing the rate of a recorded spoken word. Individual pitch periods are repeated to obtain a half-speed result, and alternate individual pitch periods are removed to obtain a double-speed result. The original pitch and formant frequencies are preserved. In informal listening tests, the results are clear and understandable.
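
A minimal numpy sketch of the idea: one level of an undecimated (shift-invariant) Haar transform, reconstructed through the Moore-Penrose pseudoinverse as the abstract describes. The Haar filters and circular boundary handling are assumptions; the paper's algorithm is more general.

```python
# One level of an undecimated Haar transform as a stacked circulant
# operator, with least-squares reconstruction via the pseudoinverse.
import numpy as np

def haar_swt_matrix(n):
    """Stack circulant lowpass and highpass Haar operators into a 2n x n matrix."""
    low = np.zeros((n, n))
    high = np.zeros((n, n))
    for i in range(n):
        j = (i + 1) % n          # circular (periodic) boundary handling
        low[i, i] = 0.5
        low[i, j] = 0.5
        high[i, i] = 0.5
        high[i, j] = -0.5
    return np.vstack([low, high])

x = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 4.0, 0.0, 2.0])
W = haar_swt_matrix(len(x))
coeffs = W @ x                       # redundant transform: 2n coefficients
x_rec = np.linalg.pinv(W) @ coeffs   # least-squares (pseudo)inverse
```

Because the stacked operator has full column rank, the pseudoinverse reconstructs the signal exactly, and circularly shifting the input simply shifts each coefficient band, which is the shift-invariance property.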

  19. Path integral approach to the Wigner representation of canonical density operators for discrete systems coupled to harmonic baths.

    PubMed

    Montoya-Castillo, Andrés; Reichman, David R

    2017-01-14

We derive a semi-analytical form for the Wigner transform for the canonical density operator of a discrete system coupled to a harmonic bath based on the path integral expansion of the Boltzmann factor. The introduction of this simple and controllable approach allows for the exact rendering of the canonical distribution and permits systematic convergence of static properties with respect to the number of path integral steps. In addition, the expressions derived here provide an exact and facile interface with quasi- and semi-classical dynamical methods, which enables the direct calculation of equilibrium time correlation functions within a wide array of approaches. We demonstrate that the present method represents a practical path for the calculation of thermodynamic data for the spin-boson and related systems. We illustrate the power of the present approach by detailing the improvement of the quality of Ehrenfest theory for the correlation function C_zz(t) = Re⟨σ_z(0)σ_z(t)⟩ for the spin-boson model with systematic convergence to the exact sampling function. Importantly, the numerically exact nature of the scheme presented here and its compatibility with semiclassical methods allows for the systematic testing of commonly used approximations for the Wigner-transformed canonical density.

  20. Delineating Facies Spatial Distribution by Integrating Ensemble Data Assimilation and Indicator Geostatistics with Level Set Transformation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Glenn Edward; Song, Xuehang; Ye, Ming

A new approach is developed to delineate the spatial distribution of discrete facies (geological units that have unique distributions of hydraulic, physical, and/or chemical properties) conditioned not only on direct data (measurements directly related to facies properties, e.g., grain size distribution obtained from borehole samples) but also on indirect data (observations indirectly related to facies distribution, e.g., hydraulic head and tracer concentration). Our method integrates for the first time ensemble data assimilation with traditional transition probability-based geostatistics. The concept of a level set is introduced to build a shape parameterization that allows transformation between discrete facies indicators and continuous random variables. The spatial structure of different facies is simulated by indicator models using conditioning points selected adaptively during the iterative process of data assimilation. To evaluate the new method, a two-dimensional semi-synthetic example is designed to estimate the spatial distribution and permeability of two distinct facies from transient head data induced by pumping tests. The example demonstrates that our new method adequately captures the spatial pattern of facies distribution by imposing spatial continuity through conditioning points. The new method also reproduces the overall response in the hydraulic head field with better accuracy compared to data assimilation with no constraints on spatial continuity of facies.
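
The level-set transformation at the heart of the method can be illustrated in a few lines: a continuous field is thresholded so that its sign encodes the facies indicator, which lets assimilation updates act on continuous variables. In this hedged sketch, thresholding at zero and the uncorrelated random field are illustrative assumptions.

```python
# Hedged sketch of a level-set transformation between a continuous field
# phi and a two-facies indicator; phi here is a stand-in random field.
import numpy as np

def facies_from_level_set(phi):
    """Facies 1 where the level-set field is positive, facies 0 elsewhere."""
    return (phi > 0.0).astype(int)

rng = np.random.default_rng(0)
phi = rng.standard_normal((4, 4))   # stand-in for a smoothed continuous field
facies = facies_from_level_set(phi)
```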

  1. Job decision latitude, job demands, and cardiovascular disease: a prospective study of Swedish men.

    PubMed Central

    Karasek, R; Baker, D; Marxer, F; Ahlbom, A; Theorell, T

    1981-01-01

The association between specific job characteristics and subsequent cardiovascular disease was tested using a large random sample of the male working Swedish population. The prospective development of coronary heart disease (CHD) symptoms and signs was analyzed using a multivariate logistic regression technique. Additionally, a case-controlled study was used to analyze all cardiovascular-cerebrovascular (CHD-CVD) deaths during a six-year follow-up. The indicator of CHD symptoms and signs was validated in a six-year prospective study of CHD deaths (standardized mortality ratio 5.0; p less than or equal to .001). A hectic and psychologically demanding job increases the risk of developing CHD symptoms and signs (standardized odds ratio 1.29, p less than 0.25) and premature CHD-CVD death (relative risk 4.0, p less than .01). Low decision latitude, expressed as low intellectual discretion and low personal schedule freedom, is also associated with increased risk of cardiovascular disease. Low intellectual discretion predicts the development of CHD symptoms and signs (SOR 1.44, p less than .01), while low personal schedule freedom among the majority of workers with the minimum statutory education increases the risk of CHD-CVD death (RR 6.6, p less than .0002). The associations exist after controlling for age, education, smoking, and overweight. PMID:7246835

  2. Job decision latitude, job demands, and cardiovascular disease: a prospective study of Swedish men.

    PubMed

    Karasek, R; Baker, D; Marxer, F; Ahlbom, A; Theorell, T

    1981-07-01

The association between specific job characteristics and subsequent cardiovascular disease was tested using a large random sample of the male working Swedish population. The prospective development of coronary heart disease (CHD) symptoms and signs was analyzed using a multivariate logistic regression technique. Additionally, a case-controlled study was used to analyze all cardiovascular-cerebrovascular (CHD-CVD) deaths during a six-year follow-up. The indicator of CHD symptoms and signs was validated in a six-year prospective study of CHD deaths (standardized mortality ratio 5.0; p less than or equal to .001). A hectic and psychologically demanding job increases the risk of developing CHD symptoms and signs (standardized odds ratio 1.29, p less than 0.25) and premature CHD-CVD death (relative risk 4.0, p less than .01). Low decision latitude, expressed as low intellectual discretion and low personal schedule freedom, is also associated with increased risk of cardiovascular disease. Low intellectual discretion predicts the development of CHD symptoms and signs (SOR 1.44, p less than .01), while low personal schedule freedom among the majority of workers with the minimum statutory education increases the risk of CHD-CVD death (RR 6.6, p less than .0002). The associations exist after controlling for age, education, smoking, and overweight.

  3. A mixture-energy-consistent six-equation two-phase numerical model for fluids with interfaces, cavitation and evaporation waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelanti, Marica, E-mail: marica.pelanti@ensta-paristech.fr; Shyue, Keh-Ming, E-mail: shyue@ntu.edu.tw

    2014-02-15

    We model liquid–gas flows with cavitation by a variant of the six-equation single-velocity two-phase model with stiff mechanical relaxation of Saurel–Petitpas–Berry (Saurel et al., 2009) [9]. In our approach we employ phasic total energy equations instead of the phasic internal energy equations of the classical six-equation system. This alternative formulation allows us to easily design a simple numerical method that ensures consistency with mixture total energy conservation at the discrete level and agreement of the relaxed pressure at equilibrium with the correct mixture equation of state. Temperature and Gibbs free energy exchange terms are included in the equations as relaxationmore » terms to model heat and mass transfer and hence liquid–vapor transition. The algorithm uses a high-resolution wave propagation method for the numerical approximation of the homogeneous hyperbolic portion of the model. In two dimensions a fully-discretized scheme based on a hybrid HLLC/Roe Riemann solver is employed. Thermo-chemical terms are handled numerically via a stiff relaxation solver that forces thermodynamic equilibrium at liquid–vapor interfaces under metastable conditions. We present numerical results of sample tests in one and two space dimensions that show the ability of the proposed model to describe cavitation mechanisms and evaporation wave dynamics.« less

  4. Examining school-based bullying interventions using multilevel discrete time hazard modeling.

    PubMed

    Ayers, Stephanie L; Wagaman, M Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E C

    2012-10-01

    Although schools have been trying to address bullying by utilizing different approaches that stop or reduce the incidence of bullying, little remains known about what specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS) with the final analytic sample consisting of 1,221 students in grades K - 12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier Failure Functions and Multi-level discrete time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only Parent-Teacher Conference (AOR = 0.65, p < .01) and Loss of Privileges (AOR = 0.71, p < .10) were significant in reducing the rate of the reoccurrence of bullying and aggressive behaviors. By using a social-ecological framework, schools can develop strategies that deter the reoccurrence of bullying by identifying key factors that enhance a sense of connection between the students' mesosystems as well as utilizing disciplinary strategies that take into consideration student's microsystem roles.
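
The discrete-time hazard and Kaplan-Meier failure-function quantities behind this analysis can be sketched directly. The referral counts per period below are invented, and the sketch ignores censoring and covariates, which the paper's multilevel models handle properly.

```python
# Hedged sketch of discrete-time hazard and Kaplan-Meier failure-function
# estimates; event counts are invented for illustration.

def discrete_hazards(event_counts, n_at_risk_start):
    """Per-period hazard: events in the period over students still at risk."""
    hazards, at_risk = [], n_at_risk_start
    for events in event_counts:
        hazards.append(events / at_risk)
        at_risk -= events          # no censoring assumed within the window
    return hazards

def failure_function(hazards):
    """Cumulative probability of a second referral by the end of each period."""
    surv, failure = 1.0, []
    for h in hazards:
        surv *= 1.0 - h
        failure.append(1.0 - surv)
    return failure

# 1,221 students at risk (the study's analytic sample size), with invented
# counts of second referrals in three successive periods.
haz = discrete_hazards([60, 40, 25], 1221)
fail = failure_function(haz)
```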

  5. Examining School-Based Bullying Interventions Using Multilevel Discrete Time Hazard Modeling

    PubMed Central

    Wagaman, M. Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E. C.

    2014-01-01

    Although schools have been trying to address bullying by utilizing different approaches that stop or reduce the incidence of bullying, little remains known about what specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS) with the final analytic sample consisting of 1,221 students in grades K – 12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier Failure Functions and Multi-level discrete time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only Parent-Teacher Conference (AOR=0.65, p<.01) and Loss of Privileges (AOR=0.71, p<.10) were significant in reducing the rate of the reoccurrence of bullying and aggressive behaviors. By using a social-ecological framework, schools can develop strategies that deter the reoccurrence of bullying by identifying key factors that enhance a sense of connection between the students’ mesosystems as well as utilizing disciplinary strategies that take into consideration student’s microsystem roles. PMID:22878779

  6. The shift-invariant discrete wavelet transform and application to speech waveform analysis

    NASA Astrophysics Data System (ADS)

    Enders, Jörg; Geng, Weihua; Li, Peijun; Frazier, Michael W.; Scholl, David J.

    2005-04-01

    The discrete wavelet transform may be used as a signal-processing tool for visualization and analysis of nonstationary, time-sampled waveforms. The highly desirable property of shift invariance can be obtained at the cost of a moderate increase in computational complexity, and accepting a least-squares inverse (pseudoinverse) in place of a true inverse. A new algorithm for the pseudoinverse of the shift-invariant transform that is easier to implement in array-oriented scripting languages than existing algorithms is presented together with self-contained proofs. Representing only one of the many and varied potential applications, a recorded speech waveform illustrates the benefits of shift invariance with pseudoinvertibility. Visualization shows the glottal modulation of vowel formants and frication noise, revealing secondary glottal pulses and other waveform irregularities. Additionally, performing sound waveform editing operations (i.e., cutting and pasting sections) on the shift-invariant wavelet representation automatically produces quiet, click-free section boundaries in the resulting sound. The capabilities of this wavelet-domain editing technique are demonstrated by changing the rate of a recorded spoken word. Individual pitch periods are repeated to obtain a half-speed result, and alternate individual pitch periods are removed to obtain a double-speed result. The original pitch and formant frequencies are preserved. In informal listening tests, the results are clear and understandable.
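    The two properties this abstract pairs together, shift invariance and a least-squares inverse of a redundant transform, can be illustrated with a one-level undecimated Haar transform (a toy sketch with circular boundary handling; this is not the authors' algorithm, and the function names are mine):

```python
from math import sqrt

def swt_haar(x):
    """One level of the undecimated (shift-invariant) Haar transform:
    no downsampling, so shifting the input simply shifts the coefficients."""
    n = len(x)
    approx = [(x[k] + x[(k + 1) % n]) / sqrt(2) for k in range(n)]
    detail = [(x[k] - x[(k + 1) % n]) / sqrt(2) for k in range(n)]
    return approx, detail

def iswt_haar(approx, detail):
    """Pseudoinverse-style reconstruction: each sample x[k] appears in two
    coefficient pairs, so average the two redundant reconstructions of it."""
    n = len(approx)
    rec1 = [(approx[k] + detail[k]) / sqrt(2) for k in range(n)]          # x[k] from pair k
    rec2 = [(approx[k - 1] - detail[k - 1]) / sqrt(2) for k in range(n)]  # x[k] from pair k-1
    return [(a + b) / 2 for a, b in zip(rec1, rec2)]
```

    For this redundant Haar pair the two reconstructions agree exactly, so the averaging recovers the input perfectly; for longer wavelet filters the same averaging idea yields the least-squares (pseudoinverse) reconstruction the abstract refers to.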

  7. Simple methodology to directly genotype Trypanosoma cruzi discrete typing units in single and mixed infections from human blood samples.

    PubMed

    Bontempi, Iván A; Bizai, María L; Ortiz, Sylvia; Manattini, Silvia; Fabbro, Diana; Solari, Aldo; Diez, Cristina

    2016-09-01

    Different DNA markers to genotype Trypanosoma cruzi are now available. However, due to the low quantity of parasites present in biological samples, DNA markers with high copy number, like kinetoplast minicircles, are needed. The aim of this study was to extend a DNA assay called minicircle lineage specific-PCR (MLS-PCR), previously developed to genotype the T. cruzi DTUs TcV and TcVI, in order to genotype DTUs TcI and TcII and to improve TcVI detection. We screened kinetoplast minicircle hypervariable sequences from cloned PCR products from reference strains belonging to the mentioned DTUs using specific kDNA probes. With the four highly specific sequences selected, we designed primers to be used in the MLS-PCR to directly genotype T. cruzi from biological samples. High specificity and sensitivity were obtained when we evaluated the new approach for TcI, TcII, TcV and TcVI genotyping in twenty-two T. cruzi reference strains. Afterward, we compared it with hybridization tests using specific kDNA probes on 32 blood samples from chronic chagasic patients from northeastern Argentina. With both tests we were able to genotype 94% of the samples, and the concordance between them was very good (kappa = 0.855). The most frequent T. cruzi DTUs detected were TcV and TcVI, followed by TcII and, at much lower frequency, TcI. A single T. cruzi DTU was detected in 18 samples, while more than one was detected in the remainder, with TcV plus TcVI the most frequent association. A high percentage of mixed detections was obtained with both assays, and their impact is discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
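    The concordance figure quoted above (kappa = 0.855) is a Cohen's kappa, which corrects raw agreement between two assays for agreement expected by chance. A minimal sketch of the computation (the toy labels below are mine, not the study's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for agreement between two raters/assays on the same items:
    kappa = (p_o - p_e) / (1 - p_e), with p_o the observed agreement and
    p_e the agreement expected by chance from each rater's label frequencies."""
    n = len(a)
    labels = set(a) | set(b)
    p_o = sum(x == y for x, y in zip(a, b)) / n                        # observed agreement
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)     # chance agreement
    return (p_o - p_e) / (1 - p_e)
```

    Values near 1 indicate near-perfect agreement, while 0 means no agreement beyond chance; 0.855 is conventionally read as "very good", as the abstract does.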

  8. Molecular Diagnosis of Chagas Disease in Colombia: Parasitic Loads and Discrete Typing Units in Patients from Acute and Chronic Phases

    PubMed Central

    Hernández, Carolina; Cucunubá, Zulma; Flórez, Carolina; Olivera, Mario; Valencia, Carlos; Zambrano, Pilar; León, Cielo; Ramírez, Juan David

    2016-01-01

    Background The diagnosis of Chagas disease is complex due to the dynamics of parasitemia across the clinical phases of the disease. Molecular tests have been considered promising because they detect the parasite in all clinical phases. Trypanosoma cruzi presents significant genetic variability and is classified into six Discrete Typing Units (DTUs), TcI–TcVI, with genotypes emerging within TcI such as TcIDom and TcI Sylvatic. The objective of this study was to determine the operating characteristics of molecular tests (conventional and real-time PCR) for the detection of T. cruzi DNA, parasitic loads and DTUs in a large cohort of Colombian patients from acute and chronic phases. Methodology/Principal Findings Samples were obtained from 708 patients in all clinical phases. Standard diagnosis (direct and serological tests) and molecular tests (conventional PCR and quantitative PCR targeting the nuclear satellite DNA region) were performed. Genotyping was performed by PCR using the intergenic region of the mini-exon gene and the 24Sα, 18S and A10 regions. The operating characteristics showed that the performance of qPCR was higher than that of cPCR. Likewise, the performance of qPCR was significantly higher in the acute phase than in the chronic phase. The median parasitic loads detected were 4.69 and 1.33 parasite equivalents/mL for the acute and chronic phases, respectively. The main DTU identified was TcI (74.2%). The TcIDom genotype was significantly more frequent in the chronic phase than in the acute phase (82.1% vs. 16.6%). The median parasitic load for TcIDom was significantly higher than for TcI Sylvatic in the chronic phase (2.58 vs. 0.75 parasite equivalents/mL). Conclusions/Significance Molecular tests are a precise tool to complement the standard diagnosis of Chagas disease, specifically in the acute phase, where they show high discriminative power. However, it is necessary to improve the sensitivity of molecular tests in the chronic phase.
    The frequency and parasitemia of the TcIDom genotype in chronic patients highlight its possible relationship to the chronicity of the disease. PMID:27648938

  9. 40 CFR 1033.520 - Alternative ramped modal cycles.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Following the completion of the third test phase of the applicable ramped modal cycle, conduct the post... POLLUTION CONTROLS CONTROL OF EMISSIONS FROM LOCOMOTIVES Test Procedures § 1033.520 Alternative ramped modal... locomotive notch settings. Ramped modal cycles combine multiple test modes of a discrete-mode steady-state...

  10. The Effects of Written Comments on Student Performance.

    ERIC Educational Resources Information Center

    Leauby, Bruce A.; Atkinson, Maryanne

    1989-01-01

    Three accounting teachers gave two tests and a comprehensive final to 417 undergraduates using one of three treatments: no comments written on test paper, comments at teacher's discretion, or standard comments. The type of comment did not affect subsequent test performance, but did significantly affect performance on final exam, especially for…

  11. 40 CFR 61.13 - Emission tests and waiver of emission tests.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... claim of force majeure, the owner or operator shall notify the Administrator, in writing as soon as... provide to the Administrator a written description of the force majeure event and a rationale for... performance test deadline is solely within the discretion of the Administrator. The Administrator will notify...

  12. 40 CFR 61.13 - Emission tests and waiver of emission tests.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... claim of force majeure, the owner or operator shall notify the Administrator, in writing as soon as... provide to the Administrator a written description of the force majeure event and a rationale for... performance test deadline is solely within the discretion of the Administrator. The Administrator will notify...

  13. 40 CFR 61.13 - Emission tests and waiver of emission tests.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... claim of force majeure, the owner or operator shall notify the Administrator, in writing as soon as... provide to the Administrator a written description of the force majeure event and a rationale for... performance test deadline is solely within the discretion of the Administrator. The Administrator will notify...

  14. 40 CFR 61.13 - Emission tests and waiver of emission tests.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... claim of force majeure, the owner or operator shall notify the Administrator, in writing as soon as... provide to the Administrator a written description of the force majeure event and a rationale for... performance test deadline is solely within the discretion of the Administrator. The Administrator will notify...

  15. Personality in the cockroach Diploptera punctata: Evidence for stability across developmental stages despite age effects on boldness.

    PubMed

    Stanley, Christina R; Mettke-Hofmann, Claudia; Preziosi, Richard F

    2017-01-01

    Despite a recent surge in the popularity of animal personality studies and their wide-ranging associations with various aspects of behavioural ecology, our understanding of the development of personality over ontogeny remains poorly understood. Stability over time is a central tenet of personality; ecological pressures experienced by an individual at different life stages may, however, vary considerably, which may have a significant effect on behavioural traits. Invertebrates often go through numerous discrete developmental stages and therefore provide a useful model for such research. Here we test for both differential consistency and age effects upon behavioural traits in the gregarious cockroach Diploptera punctata by testing the same behavioural traits in both juveniles and adults. In our sample, we find consistency in boldness, exploration and sociality within adults whilst only boldness was consistent in juveniles. Both boldness and exploration measures, representative of risk-taking behaviour, show significant consistency across discrete juvenile and adult stages. Age effects are, however, apparent in our data; juveniles are significantly bolder than adults, most likely due to differences in the ecological requirements of these life stages. Size also affects risk-taking behaviour since smaller adults are both bolder and more highly explorative. Whilst a behavioural syndrome linking boldness and exploration is evident in nymphs, this disappears by the adult stage, where links between other behavioural traits become apparent. Our results therefore indicate that differential consistency in personality can be maintained across life stages despite age effects on its magnitude, with links between some personality traits changing over ontogeny, demonstrating plasticity in behavioural syndromes.

  16. Personality in the cockroach Diploptera punctata: Evidence for stability across developmental stages despite age effects on boldness

    PubMed Central

    Mettke-Hofmann, Claudia; Preziosi, Richard F.

    2017-01-01

    Despite a recent surge in the popularity of animal personality studies and their wide-ranging associations with various aspects of behavioural ecology, our understanding of the development of personality over ontogeny remains poorly understood. Stability over time is a central tenet of personality; ecological pressures experienced by an individual at different life stages may, however, vary considerably, which may have a significant effect on behavioural traits. Invertebrates often go through numerous discrete developmental stages and therefore provide a useful model for such research. Here we test for both differential consistency and age effects upon behavioural traits in the gregarious cockroach Diploptera punctata by testing the same behavioural traits in both juveniles and adults. In our sample, we find consistency in boldness, exploration and sociality within adults whilst only boldness was consistent in juveniles. Both boldness and exploration measures, representative of risk-taking behaviour, show significant consistency across discrete juvenile and adult stages. Age effects are, however, apparent in our data; juveniles are significantly bolder than adults, most likely due to differences in the ecological requirements of these life stages. Size also affects risk-taking behaviour since smaller adults are both bolder and more highly explorative. Whilst a behavioural syndrome linking boldness and exploration is evident in nymphs, this disappears by the adult stage, where links between other behavioural traits become apparent. Our results therefore indicate that differential consistency in personality can be maintained across life stages despite age effects on its magnitude, with links between some personality traits changing over ontogeny, demonstrating plasticity in behavioural syndromes. PMID:28489864

  17. Microwave reflectometer ionization sensor

    NASA Technical Reports Server (NTRS)

    Seals, Joseph; Fordham, Jeffrey A.; Pauley, Robert G.; Simonutti, Mario D.

    1993-01-01

    The development of the Microwave Reflectometer Ionization Sensor (MRIS) Instrument for use on the Aeroassist Flight Experiment (AFE) spacecraft is described. The instrument contract was terminated, due to cancellation of the AFE program, subsequent to testing of an engineering development model. The MRIS, a four-frequency reflectometer, was designed for the detection and location of critical electron density levels in spacecraft reentry plasmas. The instrument would sample the relative magnitude and phase of reflected signals at discrete frequency steps across 4 GHz bandwidths centered at four frequencies: 20, 44, 95, and 140 GHz. The sampled data would be stored for later processing to calculate the distance from the spacecraft surface to the critical electron densities versus time. Four stepped PM CW transmitter receivers were located behind the thermal protection system of the spacecraft with horn antennas radiating and receiving through an insulating tile. Techniques were developed to deal with interference, including multiple reflections and resonance effects, resulting from the antenna configuration and operating environment.

  18. Semiautomated Method for Microbiological Vitamin Assays

    PubMed Central

    Berg, T. M.; Behagel, H. A.

    1972-01-01

    A semiautomated method for microbiological vitamin assays is described, which includes separate automated systems for the preparation of the cultures and for the measurement of turbidity. In the dilution and dosage unit based on the continuous-flow principle, vitamin samples were diluted to two different dose levels at a rate of 40 per hr, mixed with the inoculated test broth, and dispensed into culture tubes. After incubation, racks with culture tubes were placed on the sampler of an automatic turbidimeter. This unit, based on the discrete-sample system, measured the turbidity and printed the extinction values at a rate of 300 per hr. Calculations were computerized and the results, including statistical data, are presented in an easily readable form. The automated method is in routine use for the assays of thiamine, riboflavine, pyridoxine, cyanocobalamin, calcium pantothenate, nicotinic acid, pantothenol, and folic acid. Identical vitamin solutions assayed on different days gave variation coefficients for the various vitamin assays of less than 10%. PMID:4553802

  19. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects.

    PubMed

    Ho, Andrew D; Yu, Carol C

    2015-06-01

    Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological practice. In this article, the authors extend these previous analyses to state-level educational test score distributions that are an increasingly common target of high-stakes analysis and interpretation. Among 504 scale-score and raw-score distributions from state testing programs from recent years, nonnormal distributions are common and are often associated with particular state programs. The authors explain how scaling procedures from item response theory lead to nonnormal distributions as well as unusual patterns of discreteness. The authors recommend that distributional descriptive statistics be calculated routinely to inform model selection for large-scale test score data, and they illustrate consequences of nonnormality using sensitivity studies that compare baseline results to those from normalized score scales.
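    As a concrete companion to the distributional descriptive statistics the authors recommend (a minimal sketch; the particular formula choices here, population moments and excess kurtosis, are mine):

```python
def skew_kurtosis(scores):
    """Moment-based sample skewness and excess kurtosis.
    A normal distribution gives approximately (0, 0); negative excess
    kurtosis flags the flattened, ceiling-affected shapes the article
    documents in state test score distributions."""
    n = len(scores)
    mean = sum(scores) / n
    m2 = sum((s - mean) ** 2 for s in scores) / n   # variance (population form)
    m3 = sum((s - mean) ** 3 for s in scores) / n   # third central moment
    m4 = sum((s - mean) ** 4 for s in scores) / n   # fourth central moment
    skew = m3 / m2 ** 1.5
    excess_kurt = m4 / m2 ** 2 - 3.0
    return skew, excess_kurt
```

    For the symmetric toy sample [1, 2, 3, 4, 5] this returns skewness 0 and excess kurtosis −1.3, the platykurtic signature of a flat, discrete score scale.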

  20. Choice-Based Conjoint Analysis: Classification vs. Discrete Choice Models

    NASA Astrophysics Data System (ADS)

    Giesen, Joachim; Mueller, Klaus; Taneva, Bilyana; Zolliker, Peter

    Conjoint analysis is a family of techniques that originated in psychology and later became popular in market research. The main objective of conjoint analysis is to measure an individual's or a population's preferences on a class of options that can be described by parameters and their levels. We consider preference data obtained in choice-based conjoint analysis studies, where one observes test persons' choices on small subsets of the options. There are many ways to analyze choice-based conjoint analysis data. Here we discuss the intuition behind a classification based approach, and compare this approach to one based on statistical assumptions (discrete choice models) and to a regression approach. Our comparison on real and synthetic data indicates that the classification approach outperforms the discrete choice models.
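    The discrete choice models compared above typically reduce to multinomial logit choice probabilities: each option in the presented subset gets a utility, and the probability of choosing it is a softmax over those utilities. A minimal numerically-stable sketch (the utilities below are hypothetical, not from the study):

```python
from math import exp

def choice_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j) over the
    options shown in one choice task."""
    m = max(utilities)                  # subtract the max for numerical stability
    w = [exp(v - m) for v in utilities]
    z = sum(w)
    return [wi / z for wi in w]
```

    Fitting such a model to choice-task data estimates the part-worth utilities of the attribute levels; the classification approach discussed in the paper instead learns a separating function over option pairs without this parametric probability assumption.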
